--- author: - | Alfredo Iorio[^1]\ Faculty of Mathematics and Physics\ Charles University in Prague\ V Holešovičkách 2, 18000 Prague 8, Czech Republic\ alfredo.iorio@mff.cuni.cz title: | Graphene and Black Holes:\ novel materials to reach the unreachable --- Physics is an experimental science. Nonetheless, in the last decades, it has proved very difficult (if not impossible) to reconcile theoretical investigations of the fundamental laws of Nature with the necessary experimental tests. This divarication between theory and experiment is the central problem of contemporary fundamental physics, which, to date, is still unable to furnish a consistent quantum theory of gravity, or to obtain experimental evidence of milestone theories like Supersymmetry (see, e.g., the whole Issue [@thooftstring] on the status of String Theory). A widespread view of this problem is that experimental observations of these types of phenomena can only be achieved at energies out of the reach of our laboratories (e.g., the Planck energy $10^{19}$ GeV vs the $10^3$ GeV reachable in our laboratories). In our view, due to the unprecedented behaviors of certain novel materials, indirect tests should nowadays be considered a viable alternative to direct observations. This field of research is not a novelty. It usually goes under the generic name of “analogue gravity” [@volovik1], although, perhaps, calling it the “bottom-up” approach does it more justice: analogue experimental settings on the bottom, fundamental theories of Nature on the top. Nonetheless, for various reasons, it has been seen as little more than a curiosity: an amusing and mysterious series of coincidences that, to our knowledge, were never taken as tests of those aspects of the fundamental theories that they reproduce.
Nor are the mysterious and amusing coincidences of the “top-bottom” approach of the AdS/CFT correspondence (where theoretical constructions of the fundamental world are used to describe experimental results at our energy scale [@maldacena]) taken as experimental tests of the fundamental theories. As is well known, with graphene [@geimnovoselovFIRST] we have a quantum relativistic-like massless Dirac field available on a nearly perfectly 2-dimensional sheet of carbon atoms (see [@pacoreview2009] for a review). Recent work shows the emergence of gravity-like phenomena on graphene [@iorio]. More precisely, the Hawking effect can take place on graphene membranes shaped as Beltrami pseudospheres of suitably large size, hence even (2+1)-dimensional black-hole scenarios [@btz] are in sight. The Hawking effect here manifests itself through a finite-temperature electronic local density of states. For reviews see [@reviewslectures]. The predictions of the Hawking effect on graphene are based on the possibility of obtaining very specific shapes, e.g., the Beltrami pseudosphere, that should recreate, for the pseudoparticles of graphene, conditions related to those of a spacetime with a horizon. What we are first trying [@ComingTrumpet] is to obtain a clear picture of what happens to $N$ classical particles, interacting via a simple potential, e.g., a Lennard-Jones potential, and constrained on the Beltrami. This will furnish important pieces of information on the actual structure of the membrane. In fact, it is well known from similar work with the sphere (which goes under the name of the “generalized Thomson problem”; see, e.g., [@bowick]) that defects form in increasing numbers, and that their spatial arrangements are highly non-trivial and follow patterns related to the spontaneous breaking of the appropriate symmetry group [@siddharthamorse].
Once the coordinates of the $N$ points are found in this way, we need to simulate the behavior of carbon atoms arranged in that fashion; hence we essentially change the potential to the appropriate one and perform a Density Functional type of computation. The number of atoms we can describe this way is of the order of $10^3$, highly demanding in computer time, but still too small to reach the horizon. Nonetheless, the results obtained will be important to refine various details of the theory. We need to go further, towards a large radius of curvature $r$, for which the Hawking effect should be visible. Any serious attempt to understand Quantum Gravity has to start from the Hawking effect. That is why black holes are at the crossroads of many of the speculations about the physics at the Planck scale. From [@iorio], a goal that seems in sight is the realization of reliable set-ups where graphene well reproduces the black-hole thermodynamics scenarios, with the analogue gravity of the appropriate kind emerging from the description of graphene’s membrane. The lattice structure, the possibility to move through energy regimes where discrete and continuum descriptions coexist, and the unique features of matter fields whose relativistic structure is induced by the spacetime itself, are all issues related to Quantum Gravity [@worldcrystal] that can be explored with graphene. Many other tantalizing fundamental questions can be addressed with graphene. To mention only two: there are results [@mauriciosusy] that point towards the use of graphene for alternative realizations of Supersymmetry, and there are models of the Early Universe, based on (2+1)-dimensional gravity [@vanderbij], where graphene might also play a role. We are lucky that these “wonders” are predicted to happen on a material that is, in its own right, enormously interesting for applications. Hence there is expertise worldwide on how to manage a variety of cases.
Nonetheless, the standard agenda of a material scientist is of a different kind than testing fundamental laws of Nature. Therefore, let us conclude by invoking the necessity of a dedicated laboratory, where condensed matter, and other low-energy systems, are experimentally studied with the primary goal of reproducing phenomena of the fundamental kind. For the reasons outlined above, graphene is a very promising material for this purpose. Acknowledgement {#acknowledgement .unnumbered} =============== The author warmly thanks the LISC laboratories of FBK and ECT\*, Trento, Italy, for the kind hospitality, and acknowledges the Czech Science Foundation (GAČR), Contract No. 14-07983S, for support. [99]{} AA.VV., Found. Phys. [**43**]{} (2013) Issue 1. G.E. Volovik, The Universe in a Helium Droplet, Clarendon Press (Oxford) 2003; M. Novello, M. Visser, G.E. Volovik, Artificial Black Holes, World Scientific (Singapore) 2002; C. Barceló, S. Liberati, M. Visser, [*Analogue Gravity*]{}, Liv. Rev. Rel. [**14**]{} (2011) 3. J. Maldacena, [*The Gauge/Gravity Duality*]{}, in G. T. Horowitz (Ed.), Black Holes in Higher Dimensions, Cambridge Univ. Press (Cambridge) 2012 (arXiv:1106.6073). K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, A. A. Firsov, Science [**306**]{} (2004) 666. A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, A. K. Geim, Rev. Mod. Phys. [**81**]{} (2009) 109. A. Iorio, Ann. Phys. [**326**]{} (2011) 1334; Eur. Phys. J. Plus [**127**]{} (2012) 156; J. Phys.: Conf. Series [**442**]{} (2013) 012056; A. Iorio, G. Lambiase, Phys. Lett. B [**716**]{} (2012) 334; Phys. Rev. D [**90**]{} (2014) 025006. M. Bañados, C. Teitelboim, J. Zanelli, Phys. Rev. Lett. [**69**]{} (1992) 1849. A. Iorio, [*Curved spacetimes and curved graphene: a status report of the Weyl-symmetry approach*]{}, Int. J. Mod. Phys. 
D (2015), to appear; [*Weyl Symmetry in Graphene*]{}, Lecture Notes at the Workshop “Modeling Graphene-like Systems,” (University College Dublin, Ireland, May 2014), available at http://mathsci.ucd.ie/$\sim$miguel/GRAPHENEWORKSHOP2014/AlfredoIorio/ R. Gabbrielli, A. Iorio, N. Pugno, S. Simonucci, S. Taioli, in progress. M. J. Bowick, D. R. Nelson, A. Travesset, Phys. Rev. B [**62**]{} (2000) 8738. A. Iorio, S. Sen, Phys. Rev. B [**74**]{} (2006) 052102. R. Loll, [*Discrete approaches to quantum gravity in four dimensions*]{}, Liv. Rev. Rel. [**1**]{} (1998), 13; G. ’t Hooft, Int. J. Mod. Phys. A [**24**]{} (2009) 3243. P. D. Alvarez, M. Valenzuela, J. Zanelli, J. High Energy Phys. [**1204**]{} (2012) 058; P. D. Alvarez, P. Pais, J. Zanelli, Phys. Lett. B [**735**]{} (2014) 314. J. J. van der Bij, Phys. Rev. D [**76**]{} (2007) 121702. [^1]: E-mail: alfredo.iorio@mff.cuni.cz
--- abstract: 'We describe randomized algorithms for computing the dominant eigenmodes of the Generalized Hermitian Eigenvalue Problem (GHEP) $Ax=\lambda Bx$, with $A$ Hermitian and $B$ Hermitian and positive definite. The algorithms we describe only require forming operations $Ax$, $Bx$ and $B^{-1}x$ and avoid forming square-roots of $B$ (or operations of the form, $B^{1/2}x$ or $B^{-1/2}x$). We provide a convergence analysis and a posteriori error bounds that build upon the work of [@halko2011finding; @liberty2007randomized; @martinsson2011randomized] (which have been derived for the case $B=I$). Additionally, we derive some new results that provide insight into the accuracy of the eigenvalue calculations. The error analysis shows that the randomized algorithm is most accurate when the generalized singular values of $B^{-1}A$ decay rapidly. A randomized algorithm for the Generalized Singular Value Decomposition (GSVD) is also provided. Finally, we demonstrate the performance of our algorithm on computing the Karhunen-Loève expansion, which is a computationally intensive GHEP problem with rapidly decaying eigenvalues.' address: 'Institute for Computational and Mathematical Engineering, Huang Building 475 Via Ortega, Stanford University, California-94305 ' author: - 'Arvind K. Saibaba, Peter K. Kitanidis and Jonghyun Harry Lee' bibliography: - 'randomized.bib' title: 'Randomized algorithms for Generalized Hermitian Eigenvalue Problems with application to computing Karhunen-Loève expansion' --- Introduction ============ Consider the Generalized Hermitian Eigenvalue Problem (GHEP) $$\label{eqn:ghep} Ax = \lambda Bx$$ where, $B$ is Hermitian positive definite and $A$ is Hermitian. The analysis is also relevant if $B$ is not positive definite. 
In that case, if some combination $(\alpha A + \beta B)$ is positive definite, then the transformed problem $ Ax = \theta (\alpha A + \beta B)x $ has eigenvalues $\theta_i = \lambda_i/(\alpha \lambda_i + \beta)$ and the same eigenvectors as Equation . We can transform the GHEP into a Hermitian Eigenvalue Problem (HEP), which is of the form $Mx = \lambda x$ for a Hermitian matrix $M$. Since $B$ is positive definite, it has a Cholesky decomposition $B=LL^*$. Defining $y = L^*x$ and multiplying both sides of Equation  by $L^{-1}$, we have $$\label{eqn:hep} L^{-1}AL^{-*}L^{*}x = \lambda L^*x \qquad \Rightarrow \qquad L^{-1}AL^{-*}y = \lambda y$$ which is a HEP; hence, any algorithm for HEPs can be used to solve GHEPs. However, computing the Cholesky decomposition is not computationally feasible for many large matrices. It should be noted that this type of transformation  can be derived for any definition of the square root of a matrix. Although several algorithms exist for computing the square root of a matrix, or for performing matrix-vector products (henceforth called matvecs) $B^{1/2}x$ or $B^{-1/2}x$, their application to large-scale problems is not always efficient. Another transformation, $B^{-1}Ax = \lambda x$, turns the problem into a regular eigenvalue problem. Even though $A$ and $B$ are Hermitian, in general $B^{-1}A$ will not be Hermitian. We will focus our attention on problems for which the Cholesky decomposition (or any other square root, for that matter) is too expensive to compute explicitly. The key idea that we will exploit in this paper is the fact that, while $B^{-1}A$ is not Hermitian, it is Hermitian with respect to another inner product, the $B$-inner product, which we will define shortly. This property has previously been exploited by Krylov subspace based eigensolvers [@grimes1994shifted; @saad1992numerical]. 
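As a concrete check of the Cholesky-based transformation, the eigenvalues of $L^{-1}AL^{-*}$ coincide with the generalized eigenvalues of the pencil $(A,B)$. A small dense sketch (randomly generated matrices for illustration; standard `scipy.linalg` routines):

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(0)
n = 8
# Random Hermitian A and Hermitian positive definite B
X = rng.standard_normal((n, n))
A = (X + X.T) / 2
Z = rng.standard_normal((n, n))
B = Z @ Z.T + n * np.eye(n)

# Direct generalized eigenvalues of A x = lambda B x
lam_gen = eigh(A, B, eigvals_only=True)

# Transform to a standard HEP: B = L L^*, M = L^{-1} A L^{-*}
L = cholesky(B, lower=True)
M = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True).T
lam_hep = eigh(M, eigvals_only=True)

assert np.allclose(lam_gen, lam_hep)
```

For the small dense case the transformation is harmless; the point of the paper is precisely that forming $L$ is not affordable at scale.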
An added advantage of using $B$-inner products is that the resulting eigenvectors are $B$-orthonormal. There are several methods for solving the GHEP . These include approaches based on power and inverse iteration methods, Lanczos based methods and the Jacobi-Davidson method. For a good review of this material, please refer to [@bai1987templates chapter 5] and [@saad1992numerical]. A good survey of existing software for sparse eigenvalue problems, including the GHEP, is available at [@str-6]. Randomized algorithms have been developed for approximately computing a low-rank decomposition when the singular values decay rapidly (for a comprehensive review, see [@halko2011finding]). After computing the approximate low-rank decomposition, an additional post-processing step can be performed to compute the approximate singular value decomposition. For Hermitian operators, this post-processing can be modified to obtain an approximate eigenvalue decomposition as well. The randomized SVD algorithm can be applied to the matrix $C{\stackrel{\text{def}}{=}}B^{-1}A$ to obtain an approximate singular value decomposition. However, applying the algorithm directly to the matrix $C$ will result in singular vectors that are orthonormal but not $B$-orthonormal. A symmetric low-rank decomposition is highly desirable in many applications. As a result, we would like to develop square-root-free variants of the randomized SVD algorithm to compute the dominant eigenmodes of the GHEP. The algorithms described in this paper are useful when it is necessary to quickly compute an approximation to the largest eigenmodes. The only requirement is the availability of fast ways to compute $Ax$, $Bx$ and $B^{-1}x$; computations of the form $B^{1/2}x$ and $B^{-1/2}x$ are avoided. As a result, the algorithms are applicable in very general settings. The randomized algorithms are often faster, are quite robust, and are accompanied by theoretical guarantees. 
The error analysis suggests that the algorithms are most accurate when the (generalized) singular values of $B^{-1}A$ decay rapidly. Moreover, the low-rank decompositions can be produced to any user-defined tolerance, which allows the user to trade off computational cost against accuracy. While it is certainly true that, under the same settings, Krylov subspace methods are often more accurate, especially for systems of the form  with rapidly decaying eigenvalues, randomized schemes are numerically robust and allow freedom in exploiting gains from parallelism and block matrix-vector products. As a result, randomized algorithms are well suited to computationally intensive problems and modern computing environments. For example, when efficient block methods to compute $Ax$, $Bx$ or $B^{-1}x$ exist, they can be used to significantly speed up the calculations. Finally, Krylov subspace methods must often be accompanied by sophisticated algorithms to monitor restarts, orthogonality and loss of precision. Randomized algorithms, on the other hand, are straightforward to implement in very few lines of code that are transparent to read. To summarize, one must weigh the trade-offs between randomized algorithms, which are low-cost, easy to implement and robust, and Krylov subspace based methods, which are capable of higher accuracy but are often much more expensive. A further discussion of the suitability of randomized algorithms to high performance computing is available in [@halko2011finding; @bui2012extreme]. In addition to describing the randomized algorithms, a rigorous error analysis is also provided that closely follows the proof techniques developed in [@liberty2007randomized; @halko2011finding]. Furthermore, we provide computable a posteriori error bounds on 1) the approximate low-rank representation, and 2) the error between the true and the approximate eigenvalues (and eigenvectors) as a function of the low-rank representation error. 
To the best of our knowledge, the latter result is not available even for the case $B=I$. We also provide a randomized algorithm for the Generalized Singular Value Decomposition (GSVD). We demonstrate the performance of our algorithms on a challenging application: computing the dominant eigenmodes of the Karhunen-Loève expansion. Algorithms {#sec:randomized} ===========

Algorithm \[alg:randsvd\] (randomized SVD).
Input: a matrix $A \in \mathbb{C}^{m\times n}$ and a Gaussian random matrix $\Omega \in \mathbb{R}^{n\times (k+p)}$. Here $k$ is the desired rank, and $p$ is an oversampling factor.
1. Compute $Y = A\Omega$, and compute the QR factorization $Y=QR$.
2. Form $ B = Q^*A $.
3. Compute the SVD of the small matrix $B = \tilde{U}\Sigma V^*$.
4. Form the orthonormal matrix $U = Q\tilde{U}$.
Output: $U$, $\Sigma$ and $V$ that satisfy $A \approx U\Sigma V^*$.

Let us first review the randomized SVD algorithm described in [@halko2011finding] to compute a rank-$k$ decomposition of any matrix $A\in \mathbb{C}^{m\times n}$ that has rapidly decaying singular values. The algorithm proceeds by computing a matrix $Q$ whose columns form a basis for the approximate range of $A$. This is accomplished by forming matvecs of $A$ with random vectors drawn from an i.i.d. Gaussian distribution. The matrix $Q$ satisfies the bound ${\lVert (I-QQ^*)A \rVert_2} \leq \varepsilon$. This is summarized in Algorithm \[alg:randsvd\]. If $A$ is Hermitian, and we have found a $Q$ that satisfies ${\lVert (I-QQ^*)A \rVert_2} \leq \varepsilon$, then it can be shown that ${\lVert A-QQ^*AQQ^* \rVert_2} \leq 2\varepsilon$. With this observation, an additional step can be performed to compute a Hermitian eigenvalue decomposition. The smaller matrix $T=Q^*AQ$ is formed, its eigendecomposition $S\Lambda S^*$ is computed, and $A \approx U \Lambda U^*$, where $U=QS$. This is the two-pass version of the algorithm to compute the largest eigenvalues and corresponding eigenvectors. 
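The randomized SVD of Algorithm \[alg:randsvd\] is short enough to state in full. A dense `numpy` sketch (the intermediate projected matrix is named `Bsmall` here only to avoid clashing with the pencil matrix $B$):

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Rank-k approximate SVD of A via a Gaussian sketch."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # basis for the approximate range of A
    Bsmall = Q.conj().T @ A                   # (k+p) x n projected matrix
    Ut, s, Vh = np.linalg.svd(Bsmall, full_matrices=False)
    U = Q @ Ut                                # lift back to m dimensions
    return U[:, :k], s[:k], Vh[:k, :]

# Example: a matrix of exact rank 8 is recovered to machine precision
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 80))
U, s, Vh = randomized_svd(A, k=8)
assert np.allclose(A, (U * s) @ Vh)
```

For matrices whose singular values merely decay (rather than vanish) beyond index $k$, the same code applies but the reconstruction error is governed by the discarded tail, as quantified in [@halko2011finding].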
A second round of matrix-vector products involving $A$ (to compute $T=Q^*AQ$) can be avoided by using the information contained in $Y$, $Q$ and $\Omega$. This is known as a single pass algorithm, and is summarized in Algorithm \[alg:randhep\]. For further details regarding the aforementioned algorithms, the reader is referred to [@halko2011finding].

Algorithm \[alg:randhep\] (randomized Hermitian eigendecomposition).
Input: a Hermitian matrix $A \in \mathbb{C}^{n\times n}$ and a Gaussian random matrix $\Omega \in \mathbb{R}^{n\times (k+p)}$. Here $k$ is the desired rank, and $p$ is an oversampling factor.
1. Compute $Y = A\Omega$, and compute the QR factorization $Y=QR$.
2. Form $ T = Q^*AQ $ (two-pass) or $T \approx (Q^*Y)(Q^*\Omega)^{-1} $ (single-pass).
3. Compute the EVD of the small matrix $T = S\Lambda S^*$.
4. Form the orthonormal matrix $U = QS$.
Output: $U$ and $\Lambda$ that satisfy $A \approx U\Lambda U^*$.

The main difference in our algorithms is that we replace the inner product with a $B$-inner product; as a result, we no longer maintain an orthonormal basis $Q$ but a $B$-orthonormal basis. Here we summarize some basic results about $B$-inner products and the resulting vector and matrix norms. The $B$-inner product is defined as $\langle x,y \rangle_B {\stackrel{\text{def}}{=}}y^*Bx$ and the $B$-norm as ${\lVert x \rVert_B} {\stackrel{\text{def}}{=}}\sqrt{x^*B x}$. It satisfies the following inequality (see, for example, [@meerbergen1998theoretical]) $$\label{eqn:Bnorm2normineq} \frac{{\lVert x \rVert_2}^2 }{{\lVert B^{-1} \rVert_2}} \quad \leq \quad {\lVert x \rVert_B}^2 \quad \leq \quad {\lVert x \rVert_2}^2 {\lVert B \rVert_2}$$ Let us define the matrix $C {\stackrel{\text{def}}{=}}B^{-1}A$. It can be verified that $C$ is self-adjoint with respect to the $B$-inner product, i.e. $\langle Cx,y\rangle_B = \langle x,Cy \rangle_B$. The $B$-norm of a matrix is defined as the induced vector norm ${\lVert M \rVert_B}= \max_{{\lVert x \rVert_B} = 1} {\lVert Mx \rVert_B}$. We will make use of this fact to derive randomized algorithms for the GHEP that produce a Hermitian low-rank decomposition. 
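The self-adjointness of $C = B^{-1}A$ in the $B$-inner product is easy to confirm numerically: $\langle Cx,y\rangle_B = y^*BB^{-1}Ax = y^*Ax = \langle x,Cy\rangle_B$. A small sketch with randomly generated matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = rng.standard_normal((n, n)); A = (X + X.T) / 2            # Hermitian A
Z = rng.standard_normal((n, n)); B = Z @ Z.T + n * np.eye(n)  # HPD B
C = np.linalg.solve(B, A)                                     # C = B^{-1} A

def B_inner(x, y):
    """<x, y>_B = y^* B x, as defined in the text."""
    return y.conj().T @ B @ x

x, y = rng.standard_normal(n), rng.standard_normal(n)
assert not np.allclose(C, C.conj().T)                     # C itself is not Hermitian
assert np.isclose(B_inner(C @ x, y), B_inner(x, C @ y))   # but it is B-self-adjoint
```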
It can be verified that for any matrix $M$, making the transformation $y = B^{1/2}x$, we have that $$\label{eqn:prop2} {\lVert M \rVert_B} = \max_{x} \frac{{\lVert Mx \rVert_B}}{{\lVert x \rVert_B}} = \max_{y} \frac{{\lVert B^{1/2}MB^{-1/2}y \rVert_2} }{{\lVert y \rVert_2}} = {\lVert B^{1/2}MB^{-1/2} \rVert_2} $$ For the error analysis, we will need a generalized notion of singular values, defined as follows $$\label{eqn:gensingular} \sigma_{B} (M) = \left\{\mu \left| \mu \text{ are the stationary points of } \frac{{\lVert Mx \rVert_B}}{{\lVert x \rVert_2}} \right.\right\}$$ This definition is similar to [@van1976generalizing definition 3] with $S=B$ and $T=I$. It results in a decomposition of the form $$M = U\Sigma_B V^* \qquad U^*BU=I \qquad V^*V=I$$ where $\Sigma_B = \text{diag}\{\sigma_{B,1},\dots,\sigma_{B,n}\}$ are the generalized singular values. They carry a subscript to distinguish them from singular values defined in the regular sense. The existence of this decomposition is guaranteed by [@van1976generalizing Theorem 3]. Approximating the range of $C$ ------------------------------ The key step of the algorithms that follow involves the following result: we can compute a matrix $Q \in \mathbb{C}^{n\times (k+p)}$, which is $B$-orthonormal, i.e. $Q^*BQ = I$, such that $$\label{eqn:projectionapproximation} {\lVert (I-QQ^*B)C \rVert_B} \leq \varepsilon$$ where the range of $Q$ approximates the range of $C$, and $p$ is an oversampling factor. We define the projection matrix $P_B {\stackrel{\text{def}}{=}}QQ^*B$ and observe that ${\lVert P_B \rVert_B} = 1$. The reason we choose the $B$-norm ${\lVert \cdot \rVert_B}$ is that $C$ is self-adjoint with respect to the $B$-inner product. This is implemented as follows: we draw the random matrix $\Omega$ from a standard Gaussian distribution and form $Y=B^{-1}A\Omega$. Then we construct a matrix $Q$ that forms a basis for the range of $Y$ and is $B$-orthonormal. 
This is obtained via a QR decomposition in the $B$-inner product. The cost of computing this basis is dominated by the cost of forming the matvecs with $B^{-1}$ and $A$ and of computing the QR decomposition. A practical way to estimate the error in the approximation  and the average behavior of this error $\varepsilon$ is provided in Section \[sec:aposteriori\]. The computational costs of this algorithm are discussed in Section \[sec:compcosts\]. Several algorithms exist for the QR decomposition with the standard inner product $\langle x,y\rangle = y^*x$, such as Gram-Schmidt (both classical and modified), Householder transformations and Givens rotations. However, the use of the $B$-inner product precludes the use of Householder transformations and Givens rotations. The use of modified Gram-Schmidt for QR decomposition with a weighted inner product has been discussed before (for example, see [@grimes1994shifted]). It is well known that modified Gram-Schmidt is more stable than the classical Gram-Schmidt method, even for the case $B=I$. Hence, we only consider the modified Gram-Schmidt approach. However, even though the computation of $R$ is extremely accurate, $Q$ is not always orthonormal (or $B$-orthonormal) due to the accumulation of round-off errors. We consider two alternative algorithms: modified Gram-Schmidt with re-orthogonalization, denoted by MGS-R, a new algorithm considered in this paper, and ‘PreCholQR’ [@lowery2014stability]. To ensure $B$-orthogonality up to machine precision, we extend the algorithm proposed in [@gander1980algorithms Section 9.3], which uses the standard inner product, to use the $B$-inner product. The algorithm proposed in [@gander1980algorithms] was an extension of the re-orthogonalization procedure proposed by Rutishauser. 
It maintains a factorization that is more accurate than MGS by accumulating the changes in $R$ due to the re-orthogonalization process, and unlike standard MGS it is also designed to work even when the matrix is rank-deficient. The extension to the $B$-inner product can be accomplished readily by changing the definition of the inner products, and is summarized in Algorithm \[alg:mgs\]. Numerical examples in Section \[sec:accuracy\] indicate that modified Gram-Schmidt with re-orthogonalization is superior because it explicitly enforces orthogonality.

Algorithm \[alg:mgs\] (MGS-R: modified Gram-Schmidt with re-orthogonalization in the $W$-inner product).
Input: $Y = [y_1,\dots,y_n]$ and a positive definite matrix $W$.
1. Set $Q := [y_1,\dots,y_n]$ and $R := \text{zeros}(n,n)$.
2. For each column $k$: compute $\hat{q}_k = Wq_k$ and $t := \sqrt{\hat{q}_k^*q_k}$; set flag $= 1$ and $c = 0$. While the flag is set: increment $c = c+1$; for $i = 1,\dots,k-1$ compute $s = \hat{q}_i^*q_k$, and update $r_{i,k} \mathrel{+}= s$ and $q_k \mathrel{-}= sq_i$; recompute $\hat{q}_k := Wq_k$ and $tt = \sqrt{\hat{q}_k^*q_k}$; if the norm has dropped appreciably, set flag $= 1$ and $t=tt$ (re-orthogonalize again), otherwise set flag $=0$; if $tt$ is negligible, set $tt = 0$ (rank-deficient column).
3. Set $r_{kk} = tt$; if $tt \neq 0$, set $tt = 1/tt$ and rescale $q_k = q_ktt$ and $\hat{q}_k = \hat{q}_ktt$.
Output: $Q \in \mathbb{C}^{m\times n}$, $WQ \in \mathbb{C}^{m\times n}$ and $R \in \mathbb{C}^{n\times n}$.

We also consider the ‘CholQR’ and ‘PreCholQR’ algorithms described and analyzed in [@lowery2014stability]. In particular, ‘PreCholQR’ has an additional cost due to a thin QR decomposition but has better stability properties. Given a matrix $Y \in \mathbb{C}^{m\times n}$, it outputs matrices $Q$ and $R$ such that $Y=QR$ and $Q^*WQ = I$. The algorithms and the relevant matrices are summarized in Algorithms \[alg:cholqr\] and \[alg:precholqr\]. In particular, accounting for round-off error, the resulting decompositions for PreCholQR satisfy $$\begin{aligned} {\lVert Y-QR \rVert_2} \quad \leq& \quad cmn^2u{\lVert Q \rVert_2}{\lVert U \rVert_2}{\lVert S \rVert_2} \\ {\lVert Q^*WQ - I \rVert_2} \quad \leq & \quad c'mn^2u {\lVert Q \rVert_2}^2{\lVert B \rVert_2} + {{\cal{O}}}(u^2)\end{aligned}$$ where $c$ and $c'$ denote constants and $u$ denotes machine precision. 
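A minimal dense sketch of Gram-Schmidt in the $W$-inner product may clarify the structure. This simplified version performs one fixed re-orthogonalization sweep instead of the adaptive test of Algorithm \[alg:mgs\], and assumes a real, full-column-rank $Y$:

```python
import numpy as np

def mgs_w(Y, W):
    """W-orthogonalize the columns of Y: Y = Q R with Q^* W Q = I.
    Simplified sketch: one unconditional re-orthogonalization sweep,
    full column rank assumed (no rank-deficiency handling)."""
    m, n = Y.shape
    Q = Y.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        for _ in range(2):                 # "twice is enough" re-orthogonalization
            for i in range(k):
                s = Q[:, i] @ (W @ Q[:, k])   # W-inner product <q_k, q_i>_W
                R[i, k] += s                   # accumulate changes in R
                Q[:, k] -= s * Q[:, i]
        t = np.sqrt(Q[:, k] @ (W @ Q[:, k]))
        R[k, k] = t
        Q[:, k] /= t
    return Q, R

rng = np.random.default_rng(3)
m, n = 20, 5
Y = rng.standard_normal((m, n))
Z = rng.standard_normal((m, m)); W = Z @ Z.T + m * np.eye(m)
Q, R = mgs_w(Y, W)
assert np.allclose(Q.T @ W @ Q, np.eye(n))   # W-orthonormal
assert np.allclose(Q @ R, Y)                 # valid factorization
```

Unlike Algorithm \[alg:mgs\], this sketch recomputes $Wq$ rather than caching and returning $WQ$; the cached version is what keeps the matvec count with $W$ low.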
Numerical experiments involving the stability are performed in Section \[sec:accuracy\].

Algorithm \[alg:cholqr\] (CholQR).
Input: $Y \in \mathbb{C}^{m\times n}$ and a positive definite $W \in \mathbb{C}^{m\times m}$.
1. $Z=WY$
2. $C = Y^*Z$
3. $R = \text{chol}(C)$
4. $Q = YR^{-1}$, $WQ = ZR^{-1}$
Output: $Q \in \mathbb{C}^{m\times n}$, $WQ \in \mathbb{C}^{m\times n}$ and $R \in \mathbb{C}^{n\times n}$.

Algorithm \[alg:precholqr\] (PreCholQR).
Input: $Y \in \mathbb{C}^{m\times n}$ and a positive definite $W \in \mathbb{C}^{m\times m}$.
1. $[Z, S] = \text{qr}(Y)$ (thin QR)
2. $[Q, WQ, U] = \text{CholQR}(Z, W)$
3. $R = US$
Output: $Q \in \mathbb{C}^{m\times n}$, $WQ \in \mathbb{C}^{m\times n}$ and $R \in \mathbb{C}^{n\times n}$.

Two pass algorithm ------------------ In this section, we derive a symmetric low-rank decomposition for the GHEP in Equation  that uses two sets of matrix-vector products involving the matrix $A$. This algorithm will be called a two pass algorithm. In Section \[sec:singlepass\] we will derive an algorithm that uses only one set of matrix-vector products. The single pass algorithm has a smaller computational cost but is less accurate. Let us assume that a $Q \in \mathbb{C}^{n\times (k+p)}$ exists such that ${\lVert (I-P_B)C \rVert_B} \leq \varepsilon$ and is relatively easy to compute. Then, we can derive the following error bound, which provides the approximation error of a symmetric low-rank decomposition, $$\begin{aligned} \label{eqn:symmapprox} {\lVert (C-P_BCP_B) \rVert_B} \leq & \quad {\lVert C - P_BC \rVert_B} + {\lVert P_BC - P_BCP_B \rVert_B} \\ \nonumber \leq & \quad \varepsilon + {\lVert P_B \rVert_B} {\lVert C-CP_B \rVert_B} \\ \nonumber \leq & \quad 2\varepsilon\end{aligned}$$ This inequality relies on the facts that ${\lVert P_B \rVert_B} = 1$ and $ {\lVert C(I-P_B) \rVert_B} = {\lVert (I - P_B)C \rVert_B}$. From , we have the following low-rank decomposition $$\label{eqn:symmdecomp} C \approx P_B C P_B \quad \Rightarrow \quad A \approx (BQ) (Q^*AQ) (BQ)^* = (BQ)T(BQ)^*$$ where $T{\stackrel{\text{def}}{=}}Q^*AQ$. 
From this point, the eigenvalues of the system  can be approximately computed as the eigenvalues of the matrix $T$, and the $B$-orthogonal eigenvectors $U$ can be computed as the product of $Q$ with the eigenvectors of $T$. The algorithm is summarized in Algorithm \[alg:doublepass\]. Algorithm \[alg:doublepass\] starts by constructing an $n\times (k+p)$ Gaussian random matrix with i.i.d. entries drawn from a normal distribution with zero mean and unit variance. Here $p$ is an oversampling factor, chosen to lower the error in the eigenvalue calculations. Typically, $p$ is chosen to be less than $20$, following the arguments in [@halko2011finding; @liberty2007randomized]. The improvement in the approximation error with increasing $p$ is verified in both theory and experiment (see Sections \[sec:aposteriori\] and \[sec:kle\]). We then form matvecs with $C$ to construct $Y$. Next, we $B$-orthonormalize the columns of $Y$, using modified Gram-Schmidt with $B$-inner products (Algorithm \[alg:mgs\]). Then, we form the $(k+p)\times (k+p)$ matrix $T = Q^*AQ$, which requires a second round of matvecs with $A$. In Section \[sec:singlepass\], we will describe an algorithm that avoids this second round of matvecs with $A$. We then compute the eigenvalue decomposition of this smaller matrix $T$, and use it to construct the approximate generalized eigendecomposition of the matrix $C$. It can be verified that $U^*BU = I$.

Algorithm \[alg:doublepass\] (two-pass randomized GHEP).
Input: matrices $A,B \in \mathbb{C}^{n\times n}$ and a Gaussian random matrix $\Omega \in \mathbb{R}^{n\times (k+p)}$. Here $k$ is the desired rank, and $p\sim 20$ is an oversampling factor.
1. Compute $Y = C\Omega$, where $C{\stackrel{\text{def}}{=}}B^{-1}A$.
2. Form the QR-factorization $Y =QR$ such that $Q^*BQ = I$.
3. Form $ T {\stackrel{\text{def}}{=}}Q^*AQ$ and compute the eigenvalue decomposition $T = S\Lambda S^*$. Keep the $k$ largest eigenmodes as $S = S(:,1:k)$ and $\Lambda = \Lambda(1:k,1:k)$. 
The columns of $S$ are orthonormal.
Output: matrices $U \in \mathbb{C}^{n\times k}$ and $\Lambda\in \mathbb{R}^{k\times k}$ that satisfy $$A \approx B U \Lambda (BU)^*\qquad\text{with}\qquad U = QS \quad \text{and}\quad U^*BU = I$$

Single Pass algorithm {#sec:singlepass} --------------------- Algorithm \[alg:doublepass\] requires forming two sets of matvecs $Ax$, for a total of $2(k+p)$ matvecs. In some applications, matrix-vector products with $A$ can be expensive and must be used economically. It is possible to use the information already available in the matrices $Q$, $Y$ and $\Omega$ to avoid a second round of matvecs with $A$. This is called a single pass algorithm, following the convention in [@halko2011finding]. In order to derive such an algorithm, we make the following observation. First, we define $\bar{Y} {\stackrel{\text{def}}{=}}A\Omega$, so that $$\Omega^*\bar{Y} = \Omega^*A\Omega \approx (\Omega^*BQ)\underbrace{Q^*AQ}_{{\stackrel{\text{def}}{=}}T}(Q^*B\Omega)$$ using the relation in . Therefore, we can compute $T \approx (\Omega^*BQ)^{-1}(\Omega^*\bar{Y}) (Q^*B\Omega)^{-1}$, avoiding additional matvecs with $A$. At first glance, it appears that we need a second round of matvecs with $B$ to form $F {\stackrel{\text{def}}{=}}Q^*B\Omega$. However, this is not the case, since by using Algorithm \[alg:mgs\] we have both $Q$ and $BQ$. Therefore, forming $F$ only requires an additional ${{\cal{O}}}((k+p)^3)$ operations. We summarize the single pass algorithm in Algorithm \[alg:singlepass\]. Although this method is computationally advantageous, an additional error is incurred in computing $T$, which can be understood using Theorem \[thm:singlepasserr\] in Section \[sec:aposteriori\].

Algorithm \[alg:singlepass\] (single-pass randomized GHEP).
Input: matrices $A,B \in \mathbb{C}^{n\times n}$ and a Gaussian random matrix $\Omega \in \mathbb{R}^{n\times (k+p)}$. Here $k$ is the desired rank, and $p\sim 20$ is an oversampling factor. 
1. Compute $\bar{Y} = A\Omega$ and $Y = B^{-1}A\Omega$.
2. Compute $Y =QR$ such that $Q^*BQ = I$.
3. Form $ \tilde{T} = (\Omega^*BQ)^{-1}(\Omega^*\bar{Y}) (Q^*B\Omega)^{-1}$.
4. Compute the eigenvalue decomposition $\tilde{T} = S\Lambda S^*$. Keep the $k$ largest eigenmodes as $S = S(:,1:k)$ and $\Lambda = \Lambda(1:k,1:k)$. The columns of $S$ are orthonormal.
Output: matrices $U \in \mathbb{C}^{n\times k}$ and $\Lambda\in \mathbb{R}^{k\times k}$ that satisfy $$A \approx (BU) \Lambda (BU)^*\qquad\text{with}\qquad U = QS$$

We note that a different (but similar) approximation was proposed in [@halko2011finding], starting with $$Q^*\bar{Y} = Q^*A\Omega \approx (Q^*BQ)\underbrace{Q^*AQ}_{{\stackrel{\text{def}}{=}}T}(Q^*B\Omega)$$ where we have used the relation in  that $A \approx (BQ)T(BQ)^*$. However, we have not pursued this approach here. Nyström method -------------- Yet another alternative was proposed in [@halko2011finding] to construct a low-rank approximation to $A$, given a matrix $Q$ with orthonormal columns that approximates the range of $A$. The Nyström method builds a more sophisticated rank-$k$ approximation, namely $A \approx AQ(Q^*AQ)^{-1}Q^*A$. It can be verified that this approximation can be used without modification even for the case $B \neq I$. However, to convert the low-rank approximation $A \approx AQ(Q^*AQ)^{-1}Q^*A$ to the form $A \approx BU \Lambda (BU)^*$, we have to deviate slightly. First, we use a Cholesky factorization $T = LL^*$. Next, we construct $M {\stackrel{\text{def}}{=}}AQL^{-*}$. Then, we use Algorithm \[alg:mgs\] with input matrices $Y=M$ and $W = B^{-1}$ to get $Q_MR_M = M$ such that $Q_M^*B^{-1}Q_M = I$ and $\hat{Q}_M^*B\hat{Q}_M = I$. We then compute the SVD $R_M = U_M\Sigma_MV_M^*$. Finally, we construct the low-rank factorization $A \approx BU\Lambda (BU)^*$ with $U = \hat{Q}_M U_M$ and $\Lambda = \Sigma_M^2$. The algorithm is summarized in Algorithm \[alg:nystrom\]. 
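The chain of factorizations in the Nyström conversion can be sketched densely. In this sketch both the $B$- and $B^{-1}$-orthonormalizations are done CholQR-style rather than with Algorithm \[alg:mgs\], and $T$ is assumed positive definite (no pseudo-inverse handling):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def nystrom_ghep(A, B, k, p=10, rng=None):
    """Nystrom-style approximation A ~ (B U) Lambda (B U)^* for the pencil (A, B).
    Sketch only: CholQR-style orthonormalizations, T assumed positive definite."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    Omega = rng.standard_normal((n, k + p))
    Y = np.linalg.solve(B, A @ Omega)                   # Y = B^{-1} A Omega
    # B-orthonormalize: Y = Q R with Q^* B Q = I
    R = cholesky(Y.T @ (B @ Y), lower=False)
    Q = np.linalg.solve(R.T, Y.T).T
    T = Q.T @ (A @ Q)
    L = cholesky(T, lower=True)                         # T = L L^*
    M = solve_triangular(L, (A @ Q).T, lower=True).T    # M = A Q L^{-*}
    # B^{-1}-orthonormalize: M = Q_M R_M with Q_M^* B^{-1} Q_M = I
    Rm = cholesky(M.T @ np.linalg.solve(B, M), lower=False)
    Qm = np.linalg.solve(Rm.T, M.T).T
    Qm_hat = np.linalg.solve(B, Qm)                     # hat{Q}_M = B^{-1} Q_M
    Um, sm, _ = np.linalg.svd(Rm)                       # R_M = U_M Sigma_M V_M^*
    return sm[:k] ** 2, Qm_hat @ Um[:, :k]              # Lambda = Sigma_M^2, U = hat{Q}_M U_M
```

Since $M M^* = AQ(Q^*AQ)^{-1}Q^*A$ and $M = Q_M R_M$, the returned factors satisfy $A \approx (BU)\Lambda(BU)^*$ with $U^*BU = I$, matching the derivation above.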
For numerical stability, if $T$ is rank-deficient or ill-conditioned, its inverse can be replaced with the pseudo-inverse and the algorithm proceeds similarly. The computational cost of the Nyström algorithm is the same as that of the two-pass algorithm, with an additional round of matvecs with $B^{-1}$ and an overall additional cost of ${{\cal{O}}}(k+p)^2n$. Theoretical and empirical results for the Nyström method suggest that it is often a better approximation than the two-pass algorithm. The reason is that the Nyström method essentially performs (for free) an additional step of the randomized power iteration described in [@halko2011finding Algorithm 4.3]. The inputs are matrices $A, B \in \mathbb{R}^{n\times n}$ and a Gaussian random matrix $\Omega \in \mathbb{R}^{n\times (k+p)}$, where $k$ is the desired rank and $p\sim 20$ is an oversampling factor.

1. Compute $Y = B^{-1}A\Omega$.
2. Compute $Y = QR$ such that $Q^*BQ = I$ using modified Gram-Schmidt (see Algorithm \[alg:mgs\]).
3. Form $T = Q^*AQ$ and compute the Cholesky factorization $T = LL^*$.
4. Form $M = AQL^{-*}$.
5. Use Algorithm \[alg:mgs\] with $W = B^{-1}$ to get $Q_MR_M = M$ such that $Q_M^*B^{-1}Q_M = I$ and $\hat{Q}_M^*B\hat{Q}_M = I$.
6. Compute the SVD $R_M = U_M\Sigma_MV_M^*$. Keep the $k$ largest modes: $U_M = U_M(:,1:k)$ and $\Lambda = \Sigma_M(1:k,1:k)^2$.

The output is matrices $U \in \mathbb{R}^{n\times k}$ and $\Lambda\in \mathbb{R}^{k\times k}$ that satisfy $$A \approx (BU) \Lambda (BU)^*\qquad\text{with}\qquad U = \hat{Q}_MU_M$$

Summary of computational costs {#sec:compcosts}
------------------------------

We now briefly discuss the costs associated with the various algorithms described so far. The cost of the two-pass algorithm is $2(k+p)$ matvecs with $A$, $(k+p)$ matvecs with $B$, $(k+p)$ solves $B^{-1}x$, and an additional ${{\cal{O}}}(k+p)^2n$ operations for forming the approximate eigenvalues and eigenvectors.
The $B$-orthogonalization is accomplished using Algorithm \[alg:mgs\], which only uses one set of $(k+p)$ matvecs with $B$ (assuming no re-orthogonalization); in return we get two sets of vectors $Q$ and $\hat{Q}$, which satisfy $Q^*BQ = I$ and $\hat{Q}^*B^{-1}\hat{Q} = I$. The modified Gram-Schmidt also requires ${{\cal{O}}}(k+p)^2n$ operations for computing inner products. The single pass algorithm, on the other hand, only uses one set of matvecs with $A$. The comparison of the costs between the algorithms is summarized in Table \[tab:compcosts\]. It should be noted that if re-orthogonalization occurs in the modified Gram-Schmidt algorithm, then the number of matvecs involving $B$ and $B^{-1}$ could be higher. Under certain circumstances, the algorithms described can be further accelerated. We provide a few examples:

- It is sometimes advantageous to apply a matrix to $k+p$ vectors simultaneously rather than execute $k+p$ matvecs consecutively. For example, out-of-core finite-element codes are more efficient when they are programmed to exploit the presence of a block of the matrix $A$ in fast memory, as much as possible [@saad1992numerical].

- Computing $A\Omega$ and $B^{-1}A\Omega$ can be trivially parallelized. Since this is often the chief bottleneck, considerable gains might be obtained by parallelism.

It should be noted that the gains from using randomized techniques in comparison to classical methods (such as Krylov subspace methods) are not because they have a smaller computational cost, but because they allow us to reorganize our calculations so that we can fully exploit matrix properties and the computer architecture [@halko2011finding].
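For concreteness, the two-pass procedure (Algorithm \[alg:doublepass\]) whose cost is tabulated below can be sketched in NumPy as follows; the $B$-orthonormalization here is a Cholesky-based stand-in for Algorithm \[alg:mgs\], used only for illustration:

```python
import numpy as np

def two_pass(A, B, k, p=10, rng=None):
    # Sketch of the two-pass randomized solver for A x = lam B x:
    # 2(k+p) matvecs with A and (k+p) solves with B.
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[0]
    Omega = rng.standard_normal((n, k + p))
    Y = np.linalg.solve(B, A @ Omega)        # first round of matvecs with A
    Q0, _ = np.linalg.qr(Y)                  # plain QR, then B-normalize:
    C = np.linalg.cholesky(Q0.T @ (B @ Q0))
    Q = Q0 @ np.linalg.inv(C).T              # Q^T B Q = I
    T = Q.T @ (A @ Q)                        # second round of matvecs with A
    lam, S = np.linalg.eigh(T)
    idx = np.argsort(lam)[::-1][:k]          # keep the k largest eigenpairs
    return Q @ S[:, idx], lam[idx]           # U = Q S, with U^T B U = I
```

The approximate pencil eigenpairs satisfy $A U \approx B U \,\mathrm{diag}(\lambda)$, so the quality of the sketch can be checked against a dense generalized eigensolver.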
  Method                                      $Ax$       $Bx$      $B^{-1}x$   Scalar work
  ------------------------------------------ ---------- --------- ----------- -----------------------
  Two Pass Algorithm \[alg:doublepass\]      $2(k+p)$   $(k+p)$   $(k+p)$     ${{\cal{O}}}(k+p)^2n$
  Single Pass Algorithm \[alg:singlepass\]   $(k+p)$    $(k+p)$   $(k+p)$     ${{\cal{O}}}(k+p)^2n$
  Nyström Algorithm \[alg:nystrom\]          $2(k+p)$   $(k+p)$   $2(k+p)$    ${{\cal{O}}}(k+p)^2n$

  : Summary of computational costs (assuming no re-orthogonalization in the modified Gram-Schmidt algorithm)[]{data-label="tab:compcosts"}

Generalized Singular Value Decomposition
========================================

The generalized SVD (GSVD) is often used in the context of inverse problems and deblurring. It is useful both as an analytical tool and for computing minimum-norm solutions of regularized weighted least-squares problems. Two different generalizations of the SVD have been discussed in [@van1976generalizing]. Here we consider the second definition [@van1976generalizing definition 3]: with $A\in \mathbb{C}^{m\times n}$ and two positive definite matrices $S\in\mathbb{C}^{m\times m}$ and $T\in\mathbb{C}^{n\times n}$, $$\label{eqn:gensingular} \sigma_{S,T} (A) = \left\{\mu \left| \mu \text{ are the stationary points of } \frac{{\lVert Ax \rVert_{S}}}{{\lVert x \rVert_{T}}} \right.\right\}$$ This results in a decomposition of the form $$U^{-1}AV = \Sigma_{S,T} \qquad U^*SU=I \qquad V^*TV=I$$ where $\Sigma_{S,T} = \text{diag}\{\sigma_{S,T,1},\dots,\sigma_{S,T,n}\}$ contains the generalized singular values. A simple modification of the algorithms for the GHEP yields an algorithm for the GSVD as defined above. We first compute $Y_1=A\Omega_1$ and $Y_2 = A^*\Omega_2$. We then $S$-orthonormalize $Y_1$ and $T$-orthonormalize $Y_2$ using Algorithm \[alg:precholqr\], so that $Y_1 = Q_1R_1$ with $Q_1^*SQ_1 = I$ and $Y_2 = Q_2R_2$ with $Q_2^*TQ_2 = I$.
We have the following error bounds: ${\lVert (I-Q_1Q_1^*S)A \rVert_2} \leq \varepsilon_S$ and ${\lVert (I-Q_2Q_2^*T)A^* \rVert_2} \leq \varepsilon_T$. Error bounds of the type derived in Proposition \[prop:random\] can be established in this case as well. It can be shown that $${\lVert A-Q_1Q_1^*SATQ_2Q_2^* \rVert_2} \leq \varepsilon_S + \varepsilon_T{\lVert Q_1Q_1^*S \rVert_2}$$ Based on the above approximate low-rank representation, we form $F = Q_1^*SATQ_2$ and compute its SVD $F= \tilde{U}\Sigma \tilde{V}^*$. Then, the approximate GSVD can be computed as $$A \approx U \Sigma V^*\qquad U = Q_1\tilde{U} \qquad V = Q_2\tilde{V}$$ and the matrices $U$ and $V$ satisfy the relations $U^*SU = I$ and $V^*TV = I$. The GSVD is more popularly defined in the following form [@van1976generalizing; @golub2012matrix]: given two matrices $A\in \mathbb{C}^{m_A\times n}$ and $B \in \mathbb{C}^{m_B\times n}$, with $m_A \leq n$, the GSVD is given by $$A = UCX^* \quad B = VSX^*$$ where $U \in \mathbb{C}^{m_A\times m_A}$ and $V \in \mathbb{C}^{m_B\times m_B}$ are unitary matrices, $X\in \mathbb{C}^{n\times n}$ is a square matrix, and $C,S$ are diagonal matrices with non-negative entries that satisfy the relation $C^*C + S^*S = I$. The generalized singular values $\sigma(A,B)$ are given by the ratios of the diagonal entries of $C$ and $S$. The relation between the two definitions presented here is that when rank$(B)=n$, the generalized singular values of the matrix pair satisfy $\sigma(A,B) = \sigma_{S,T}$ with $S= I_{m_A}$ and $T= B^*B$.

Convergence and a posteriori error bounds {#sec:aposteriori}
=========================================

The idea of randomized algorithms is to compute matrix-vector products of the matrix $C {\stackrel{\text{def}}{=}}B^{-1}A$ with vectors $\omega_i$ that have i.i.d. entries chosen from the standard normal distribution.
These columns, when appropriately orthonormalized, form an approximate basis for the subspace spanned by the eigenvectors corresponding to the largest eigenvalues. In Section \[sec:randomized\], we assumed that we can compute a $Q\in \mathbb{R}^{n\times (k+p)}$ that satisfies the error bound . In order to estimate the resulting error $\varepsilon$ in the low-rank representation, we use the following result, stated in the form of a proposition.

\[prop:random\] Draw a sequence of random vectors $\omega_i$ that have i.i.d. entries chosen from the standard normal distribution. Let $C {\stackrel{\text{def}}{=}}B^{-1}A$ with $A$ symmetric and $B$ symmetric positive definite. Fix a positive integer $r$ and $\alpha > 1 $. Then $$\label{eqn:randomapprox} {\lVert (I-QQ^*B)C \rVert_B} \leq \alpha \sqrt{\frac{2{\lVert B^{-1} \rVert_2}}{\pi}} \smash{\displaystyle\max_{i = 1,\dots,r}} {\lVert (I-QQ^*B)C\omega_i \rVert_B}$$ holds with probability at least $1-\alpha^{-r}$.

Using the relation in Equation , we have the inequality $${\lVert (I-QQ^*B)C \rVert_B} = {\lVert B^{1/2}(I-QQ^*B)CB^{-1/2} \rVert_2}\leq \sqrt{{\lVert B^{-1} \rVert_2}}{\lVert B^{1/2}(I-QQ^*B)C \rVert_2}$$ Define the matrix $M = B^{1/2}(I-QQ^*B)C$; applying the result from [@halko2011finding lemma 4.1] to the matrix $M$, we arrive at $$\begin{aligned} {\lVert (I-QQ^*B)C \rVert_B} \quad \leq& \quad \alpha \sqrt{\frac{2{\lVert B^{-1} \rVert_2}}{\pi}} \smash{\displaystyle\max_{i = 1,\dots,r}} {\lVert B^{1/2}(I-QQ^*B)C\omega_i \rVert_2} \\ = & \quad \alpha \sqrt{\frac{2{\lVert B^{-1} \rVert_2}}{\pi}} \smash{\displaystyle\max_{i = 1,\dots,r}} {\lVert (I-QQ^*B)C\omega_i \rVert_B}\end{aligned}$$ which holds with probability at least $1-\alpha^{-r}$. In practice, ${\lVert B^{-1} \rVert_2}$ might not be easy to compute. Instead, we propose a crude estimator that is easy to compute. Observe that ${\lVert q_i \rVert_B} = 1$.
Using inequality , we have $$\label{Binvnormapprox} \frac{{\lVert q_i \rVert_2}^2}{{\lVert B^{-1} \rVert_2}} \leq {\lVert q_i \rVert_B}^2 = 1 \quad \Rightarrow \quad \sqrt{{\lVert B^{-1} \rVert_2}} \geq \max_{i=1,\dots,r} {\lVert q_i \rVert_2}$$ The significance of Proposition \[prop:random\] is that we now have an easy-to-compute a posteriori bound on the error, obtained by forming matvecs with $C$. However, as [@halko2011finding] suggests, this is a crude estimate. The cost of this estimator is mostly that of performing matvecs with $A$ and $B^{-1}$. Thus, we can make a guess for the numerical rank of $B^{-1}A$, compute the low-rank approximation $C \approx QQ^*BC$, evaluate the error estimate in Proposition \[prop:random\], and keep adding more samples if this error estimate is too large. Moreover, the matvecs $B^{-1}A\Omega$ performed on random vectors for the error estimator can be re-used, so the error estimator is almost free of cost. A better estimate can be obtained by using power iteration acting on a random vector. The analysis in [@halko2011finding] suggests that if the spectrum of $C$ decays rapidly, then the error in the approximation is quite small. We are now ready to state our main result; we defer the proof to the Appendix.

\[thm:main\] Let $Q$ be computed according to Algorithm \[alg:doublepass\] by choosing a Gaussian random matrix $\Omega \in \mathbb{R}^{n\times r}$ with $r=k+p$. Let $C=U\Sigma_B V^*$ be the singular value decomposition in the generalized sense . We have the inequality $$E{\lVert (I-P_B)C \rVert_B} \leq\sqrt{{\lVert B^{-1} \rVert_2}}\left[ \left( 1+\sqrt{\frac{k}{p-1}}\right)\sigma_{B,k+1} +\frac{e\sqrt{k+p}}{p}\left(\sum_{j=k+1}^n\sigma_{B,j}^2\right)^{1/2} \right]$$ where $\sigma_{B,j}$ for $j=1,\dots,n$ are the generalized singular values given by , and $E[\cdot]$ denotes the expectation.
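The estimation procedure just described can be sketched as follows; the surrogate $\max_i {\lVert q_i \rVert_2}$ replaces $\sqrt{{\lVert B^{-1} \rVert_2}}$, so the output is an estimate rather than a guaranteed bound, and the dense linear algebra is ours for illustration:

```python
import numpy as np

def posterior_estimate(A, B, Q, r=5, alpha=2.0, rng=None):
    # Estimate ||(I - Q Q^T B) C||_B for C = B^{-1} A (Proposition prop:random):
    # draw r Gaussian probes, apply C, project out range(Q), take B-norms.
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[0]
    Omega = rng.standard_normal((n, r))
    W = np.linalg.solve(B, A @ Omega)            # C @ omega_i (re-usable matvecs)
    R = W - Q @ (Q.T @ (B @ W))                  # (I - Q Q^T B) C omega_i
    colnorms_B = np.sqrt(np.sum(R * (B @ R), axis=0))
    surrogate = np.linalg.norm(Q, axis=0).max()  # crude stand-in for sqrt(||B^{-1}||_2)
    return alpha * np.sqrt(2.0 / np.pi) * surrogate * colnorms_B.max()
```

With $\alpha=2$ and $r=5$, the underlying probabilistic bound (before the surrogate is substituted) holds with probability at least $1-2^{-5}\approx 0.97$.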
In addition to the average spectral error, deviation bounds for the spectral error can be derived similarly to [@halko2011finding Theorem 10.8]. The bound suggests that if the singular values (in the generalized sense) decay rapidly, then the error due to the low-rank approximation is, in expectation, small. In light of the analysis in [@halko2011finding], this result is not surprising. The generalized singular value decomposition can be computed using the algorithm described in [@van1976generalizing]; however, this approach requires forming square roots of $B$. We can instead use the following inequality to provide an estimate: $\sigma_{B,k} \leq \sqrt{{\lVert B \rVert_2}}\, \sigma_{k}$. Furthermore, the above error bound suggests that the error is large when ${\lVert B^{-1} \rVert_2}$ is large. In several cases ${\lVert B^{-1} \rVert_2}$ is bounded, for instance in finite elements, where $B$ is the mass matrix and is spectrally equivalent to the identity operator. Otherwise, if some combination $\alpha A + \beta B$ can be found such that its inverse has a small norm, we can instead solve the transformed problem $ Ax = \theta (\alpha A + \beta B)x $. At this point, we have provided both an a priori and an a posteriori measure of the error in the low-rank approximation. However, does a small error in the low-rank approximation imply a small error in the subsequent eigenvalue calculations? To answer this question, we turn to some results from the theory of spectral approximation. We now derive expressions for the error between the computed and true eigenvalues, and for the angle between the true and approximate eigenvectors. A result of this kind is common in the theory of perturbation of eigenvalues of Hermitian matrices, and makes use of the Kato-Temple theorem [@saad1992numerical Theorem 3.8] and [@bai1987templates Section 7.1, chapter 5].
\[prop:apost\] Let $Q$ satisfy the relation ${\lVert (I-P_B)C \rVert_B} \leq \varepsilon$, so that ${\lVert C-P_BCP_B \rVert_B} \leq 2\varepsilon$. Let the eigenpair $(\tilde{\lambda},\tilde{u})$ be an approximation to the eigenvalue problem $Ax = \lambda Bx $ calculated by Algorithms \[alg:doublepass,alg:singlepass,alg:nystrom\]. Then we have the following error bounds $$|\lambda-\tilde{\lambda}| \leq \min\left\{2\varepsilon,\frac{4\varepsilon^2}{\delta}\right\}\qquad \sin\angle_B (u,\tilde{u}) \leq \frac{2\varepsilon}{\delta}$$ where $\delta = \min_{\lambda_i\neq\lambda} |\tilde{\lambda}-\lambda_i|$ is the gap between the approximate eigenvalue $\tilde{\lambda}$ and any other eigenvalue, and $\angle_B(x,y) = \arccos \frac{|\langle x,y\rangle_B|}{{\lVert x \rVert_B}{\lVert y \rVert_B}}$.

We start by defining the residual corresponding to the approximate eigenpair, $r = A\tilde{u}-\tilde{\lambda}B\tilde{u}$, and first show that ${\lVert r \rVert_{B^{-1}}} \leq {2\varepsilon} $. By definition, ${\lVert r \rVert_{B^{-1}}} = {\lVert B^{-1/2}r \rVert_2}$. Plugging in the expression for $r$, we have $${\lVert B^{-1/2}r \rVert_2} = {\lVert B^{1/2}(C\tilde{u}- \tilde{\lambda}\tilde{u}) \rVert_2} = {\lVert C\tilde{u}-\tilde{\lambda}\tilde{u} \rVert_B}$$ Also, in a slight change of notation from Algorithm \[alg:doublepass\], we denote the approximate eigenpairs by $(\tilde{\lambda}_i,\tilde{u}_i)$ for $i=1,\dots,k$ to distinguish them from the exact eigenpair $(\lambda,u)$. We have $T=Q^*AQ = S\tilde{\Lambda}S^*$ and $\tilde{U} = QS$.
We make the following observations: 1) $P_BCP_B = QQ^*AQQ^*B = \tilde{U} \tilde{\Lambda} \tilde{U}^*B $, and 2) since $\tilde{u}$ is a column of the $B$-orthonormal matrix $\tilde{U}$, we have $\tilde{\lambda}\tilde{u} = \tilde{U} \tilde{\Lambda} \tilde{U}^*B \tilde{u}$. Using these observations, $${\lVert C\tilde{u}-\tilde{\lambda}\tilde{u} \rVert_B} = {\lVert C\tilde{u}-\tilde{U}\tilde{\Lambda}\tilde{U}^*B\tilde{u} \rVert_B} \leq {\lVert C-\tilde{U}\tilde{\Lambda}\tilde{U}^*B \rVert_B} = {\lVert C-P_BCP_B \rVert_B} \leq 2\varepsilon$$ Then, from [@bai1987templates Section 7.1, chapter 5], we have the following relations $$|\lambda-\tilde{\lambda}| \leq {\lVert r \rVert_{B^{-1}}} \qquad|\lambda-\tilde{\lambda}| \leq \frac{{\lVert r \rVert_{B^{-1}}}^2}{\delta} \qquad \sin\angle_B (u,\tilde{u}) \leq \frac{{\lVert r \rVert_{B^{-1}}}}{\delta}$$ The proof is completed by plugging in the inequality ${\lVert r \rVert_{B^{-1}}} \leq 2\varepsilon$.

The error in the low-rank representation is not the only factor that controls the error in the eigenvalue calculations. Proposition \[prop:apost\] suggests that the accuracy is also determined by an additional parameter, the *spectral gap* $\delta$, defined as the gap between the approximate eigenvalue $\tilde{\lambda}$ and any other eigenvalue. When the eigenvalues are clustered, the spectral gap is small; the eigenvalue calculations remain accurate as long as the error in the low-rank representation is small, but the resulting eigenvector calculations may be inaccurate, because the parameter $\delta$ appears in the denominator of the bound on the angle between the true and approximate eigenvectors. The following result provides an upper bound for the difference in the eigenvalues computed using the two-pass and single-pass algorithms, as described in Algorithm \[alg:doublepass\] and Algorithm \[alg:singlepass\] respectively.
Numerical results confirm that, typically, the two-pass algorithm is more accurate than the single-pass algorithm.

\[thm:singlepasserr\] Let $\tilde{T}$ be computed using the expression $\tilde{T} = (\Omega^*BQ)^{-1}(\Omega^*\bar{Y})(Q^*B\Omega)^{-1}$. Furthermore, assume that $Q$ satisfies the error bound ${\lVert (I-QQ^*B)C \rVert_B} \leq \varepsilon$. Label the eigenvalues of $T = Q^*AQ$ as $\mu_1,\dots,\mu_{k+p}$ and the eigenvalues of $\tilde{T}$ as $\theta_1,\dots,\theta_{k+p}$. The eigenvalues $\mu_j$ and $\theta_j$ for $j=1,\dots,k+p$ are related by the inequality $$|\mu_j-\theta_j | \leq 2\varepsilon\sqrt{\kappa(B)} \frac{\sigma_\text{max}^2 (\Omega)}{\sigma_\text{min}^2 (F)}$$ where $F {\stackrel{\text{def}}{=}}Q^*B\Omega$ and $\kappa(B) = {\lVert B \rVert_2}{\lVert B^{-1} \rVert_2}$ is the condition number of the matrix $B$.

We start by bounding the error ${\lVert T-\tilde{T} \rVert_2}$, where $T=Q^*AQ$. $$\begin{aligned} {\lVert T-\tilde{T} \rVert_2} \quad = & \quad {\lVert F^{-*}F^* T FF^{-1} -F^{-*}(\Omega^*A\Omega )F^{-1} \rVert_2} \\ \nonumber = & \quad {\lVert F^{-*}\Omega^*BQ(Q^*AQ)Q^*B\Omega F^{-1}- F^{-*}\Omega^*A\Omega F^{-1} \rVert_2} \\ \nonumber \leq & \quad {\lVert A - BQ(Q^*AQ)(BQ)^* \rVert_2}{\lVert \Omega F^{-1} \rVert_2}^2\end{aligned}$$ From the assumption that ${\lVert (I-QQ^*B)C \rVert_B} \leq \varepsilon$ and Equation , we have that ${\lVert A-BQ(Q^*AQ)(BQ)^* \rVert_B} \leq 2\varepsilon$. For a matrix $M$, it can be shown that $$\frac{{\lVert M \rVert_2}}{\sqrt{\kappa(B)}}\quad \leq \quad {\lVert M \rVert_B} \quad \leq \quad \sqrt{\kappa(B)} {\lVert M \rVert_2}$$ As a result, ${\lVert A-BQ(Q^*AQ)(BQ)^* \rVert_2} \leq 2\varepsilon\sqrt{\kappa(B)} $.
Finally, putting it all together, $${\lVert T-\tilde{T} \rVert_2} \quad \leq\quad 2\varepsilon \sqrt{\kappa(B)} \frac{\sigma_\text{max}^2 (\Omega)}{\sigma_\text{min}^2 (F)}$$ Applying the Bauer-Fike theorem [@saad1992numerical Theorem 3.6], and using the fact that the matrix $T$ is symmetric and has orthonormal eigenvectors, we obtain the desired result.

The error bound in Theorem \[thm:singlepasserr\] provides insight into the error made using the single-pass approximation. In particular, the error in the single-pass approximation can significantly degrade the approximation of the eigenvalues. The contributing terms are: 1) the error $\varepsilon$ in the low-rank decomposition, 2) the conditioning of the matrix $B$, and 3) a large $\sigma_\text{max} (\Omega)$ combined with a small $\sigma_\text{min} (F)$. The largest singular value of $\Omega$ is asymptotically $\sqrt{n}$ for $k \ll n$ [@halko2011finding], so the single-pass approximation can be poor when the sizes of the matrices are large.

Karhunen-Loève expansion {#sec:kle}
========================

Motivation and background
-------------------------

The Karhunen-Loève expansion (KLE) [@ghanem1991stochastic] is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to the Fourier series representation of a function. In contrast to a Fourier series, where the coefficients are real numbers and the expansion basis consists of sinusoidal functions, the coefficients in the KLE are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. The random field is characterized by a mean and a covariance function. The KLE requires the computation of eigenpairs of a Fredholm integral eigenvalue problem with the covariance function as the kernel.
Consider the random field $s(\textbf{x})$, with mean $\mu(\textbf{x})$ and covariance $\kappa(\textbf{x},\textbf{y})$, on the bounded domain $\textbf{x}\in{\cal{D}}$. The covariance kernel is assumed to be symmetric and positive definite. The KLE can now be written as $$\label{eqn:kle} s(\textbf{x}) = \mu(\textbf{x}) + \sum_{i=1}^\infty \xi_i\sqrt{\lambda_i}\phi_i(\textbf{x})\quad \text{with}$$ $$\mu({\textbf{x}})=E[s(\textbf{x})], \qquad \xi_i = \frac{1}{\sqrt{\lambda_i}}\int_{ {\cal{D}}} (s(\textbf{x})-\mu(\textbf{x}) )\phi_i(\textbf{x}) d{\textbf{x}}$$ Here, $\xi_i$ are uncorrelated random variables and $(\lambda_i,\phi_i(\textbf{x}))$ are the eigenpairs obtained as the solution of the Fredholm integral equation of the second kind $$\label{eqn:intkle} \int_{\cal{D}} \kappa(\textbf{x},\textbf{y})\phi(\textbf{y})d\textbf{y} = \lambda \phi(\textbf{x})$$ Since the covariance $\kappa(\cdot,\cdot)$ is symmetric and positive definite, the eigenfunctions $\phi_i(\cdot)$ are mutually orthogonal and form a basis for $L^2({\cal{D}})$, and the eigenvalues $\lambda_i$ are real, non-negative and can be arranged in decreasing order $\lambda_1\geq\lambda_2\geq \dots \geq 0$. If the random field is Gaussian, then $\xi_i \sim{\cal{N}}(0,1)$. Further details are provided in [@ghanem2003stochastic]. The eigenpairs $(\lambda_i,\phi_i(\textbf{x}))$ in the KLE can be computed by first discretizing the weak form of Equation  (i.e., performing a Galerkin projection) using piecewise linear basis functions and subsequently solving the resulting linear eigensystem using a generalized eigenvalue solver for symmetric matrices that requires only matrix-vector products involving the discretized operator.
The relevant equations after discretization are $$\label{eqn:klediscrete} M{\Gamma_\text{prior}}M \phi_i = \lambda_i M \phi_i \qquad i = 1,\dots,N$$ where ${\Gamma_\text{prior}}$ is the covariance matrix that arises from the discrete representation of the Gaussian random field corresponding to the covariance kernel $\kappa(\cdot,\cdot)$, $M$ is the mass matrix $M_{ij} = \int_{\cal{D}} v_i v_j d\textbf{x}$, and $v_i$ are the piecewise linear basis functions. The mass matrix is a discrete representation of the continuous identity operator, and hence we expect it to be well-conditioned. We define $A{\stackrel{\text{def}}{=}}M{\Gamma_\text{prior}}M$ and $B{\stackrel{\text{def}}{=}}M$. We also have that $B^{-1}A = {\Gamma_\text{prior}}M$ is symmetric with respect to the $M$-inner product. The KLE is truncated to a finite number of terms $K$, which is typically far fewer than the number of basis functions and independent of it. The number of terms retained in the series depends on the decay of the eigenvalues, which, in turn, depends on the smoothness of the covariance kernel [@schwab2006karhunen]: when the kernel is piecewise smooth, the eigenvalues decay algebraically, and when the kernel is piecewise analytic, the decay is exponential. This GHEP nicely fits the requirements of the randomized algorithms, since it has rapidly decaying eigenvalues.
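To make the discretization concrete, the following sketch assembles the 1-D pencil $(A,B) = (M{\Gamma_\text{prior}}M,\, M)$ with piecewise linear elements; the uniform mesh of $[-1,1]$ and the particular exponential kernel $\exp(-|x-y|/l)$ are our assumptions for illustration:

```python
import numpy as np

def mass_matrix_1d(x):
    # Piecewise-linear FEM mass matrix on a 1-D mesh: the element [x_i, x_{i+1}]
    # of width h contributes h/6 * [[2, 1], [1, 2]] to the global matrix.
    n = len(x)
    h = np.diff(x)
    i = np.arange(n - 1)
    M = np.zeros((n, n))
    M[i, i] += h / 3
    M[i + 1, i + 1] += h / 3
    M[i, i + 1] += h / 6
    M[i + 1, i] += h / 6
    return M

x = np.linspace(-1.0, 1.0, 201)                          # uniform mesh of [-1, 1]
Gamma = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)   # exp(-|x-y|/l), l = 2
M = mass_matrix_1d(x)
A, B = M @ Gamma @ M, M                                  # pencil: A phi = lam B phi
```

Note that $B^{-1}A = {\Gamma_\text{prior}}M$, so applying $C = B^{-1}A$ requires no mass-matrix solve once $\Gamma_\text{prior}$ is applied directly.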
Accuracy of the eigenvalue calculations {#sec:accuracy}
---------------------------------------

We consider three different covariance kernels chosen from the Matérn covariance family, with $d = {\lVert {\textbf{x}}-\textbf{y} \rVert_2}/l$: $$\label{eqn:matern3} \kappa_\nu ({\textbf{x}},\textbf{y}) = \left\{ \begin{array}{ll} \exp(-d) & \quad \nu = 1/2\\ (1+\sqrt{3}d)\exp(-\sqrt{3}d) & \quad \nu = 3/2 \\ (1+\sqrt{5}d + \frac{5}{3}d^2)\exp(-\sqrt{5}d) & \quad \nu = 5/2 \end{array} \right.$$ In the rest of this subsection, we consider the KLE corresponding to the covariance kernels defined in Equation , on the domain $x \in [-1,1]$, with the length scale parameter chosen to be $l=2$. The domain is discretized using $201$ grid points. We deliberately chose a small problem in order to compare the accuracy against the results obtained from direct algorithms. We note that the rate of decay of the eigenvalues is higher for covariance kernels with larger values of $\nu$, thus providing a wide range of eigenvalue decays with which to study the performance of our algorithms.
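To see this dependence on $\nu$ concretely, the sketch below evaluates the three kernels of Equation  on the $201$-point grid and compares the relative size of the $21$st eigenvalue of the kernel matrix (used here as a simple proxy for the Galerkin-discretized operator):

```python
import numpy as np

def matern(x, nu, l=2.0):
    # The three Matern kernels of Equation (matern3) evaluated on a 1-D grid.
    d = np.abs(x[:, None] - x[None, :]) / l
    if nu == 0.5:
        return np.exp(-d)
    if nu == 1.5:
        return (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)
    if nu == 2.5:
        return (1.0 + np.sqrt(5.0) * d + 5.0 / 3.0 * d ** 2) * np.exp(-np.sqrt(5.0) * d)
    raise ValueError("nu must be 0.5, 1.5 or 2.5")

x = np.linspace(-1.0, 1.0, 201)
spectra = {nu: np.linalg.eigvalsh(matern(x, nu))[::-1] for nu in (0.5, 1.5, 2.5)}
ratios = {nu: lam[20] / lam[0] for nu, lam in spectra.items()}
```

The ratio $\lambda_{21}/\lambda_1$ shrinks as $\nu$ grows, reflecting the faster spectral decay of the smoother kernels.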
  Kernel               ${\lVert QR-Y \rVert_2}$   ${\lVert Q^*BQ - I \rVert_2}$   ${\lVert Q^*BY - R \rVert_2}$   ${\lVert YR^{-1}-Q \rVert_2}$
  -------------------- -------------------------- ------------------------------- ------------------------------- ------------------------------
  *MGS*
  $\kappa_{1/2}(r)$    $1.8\times 10^{-15}$       $1.1\times 10^{-11}$            $1.7\times 10^{-11}$            $5.5\times 10^{-11}$
  $\kappa_{3/2}(r)$    $2.3\times 10^{-15}$       $1.3\times 10^{-7}$             $2.4\times 10^{-7}$             $8.8\times 10^{-7}$
  $\kappa_{5/2}(r)$    $2.2\times 10^{-15}$       $6.1\times 10^{-4}$             $1.2\times 10^{-3}$             $5.4\times 10^{-3}$
  *MGS-R*
  $\kappa_{1/2}(r)$    $1.7\times 10^{-15}$       $1.5\times 10^{-15}$            $1.5\times 10^{-15}$            $5.8\times 10^{-11}$
  $\kappa_{3/2}(r)$    $2.1\times 10^{-15}$       $1.1\times 10^{-15}$            $1.0\times 10^{-15}$            $8.2\times 10^{-7}$
  $\kappa_{5/2}(r)$    $2.3\times 10^{-15}$       $1.7\times 10^{-15}$            $1.0\times 10^{-15}$            $5.6\times 10^{-3}$
  *PreCholQR*
  $\kappa_{1/2}(r)$    $1.06\times 10^{-14}$      $1.17\times 10^{-15}$           $9.84\times 10^{-16}$           $1.43\times 10^{-10}$
  $\kappa_{3/2}(r)$    $9.06\times 10^{-15}$      $1.11\times 10^{-15}$           $7.01\times 10^{-16}$           $2.79\times 10^{-06}$
  $\kappa_{5/2}(r)$    $9.78\times 10^{-15}$      $1.15\times 10^{-15}$           $8.78\times 10^{-16}$           $2.8\times 10^{-2}$

  : Comparison of MGS, MGS-R and PreCholQR for computing the QR decomposition with weighted inner products[]{data-label="tab:mgs"}

### Accuracy of QR with weighted inner product:

We compare the algorithms for computing the QR decomposition of $Y = B^{-1}A\Omega$, where $\Omega \in \mathbb{R}^{201\times 100}$ has i.i.d. entries chosen from ${\cal{N}} (0,1)$. For the modified Gram-Schmidt algorithm (MGS), we consider the algorithm in [@grimes1994shifted] without additional re-orthogonalization. For the algorithm with re-orthogonalization (MGS-R), we consider the one proposed in Algorithm \[alg:mgs\]. We compare the following metrics: ${\lVert QR-Y \rVert_2}$, ${\lVert Q^*BQ - I \rVert_2}$, ${\lVert Q^*BY - R \rVert_2}$ and ${\lVert YR^{-1}-Q \rVert_2}$. If these quantities were computed in exact arithmetic, they would all be identically zero; in the presence of round-off errors, however, they are not numerically zero.
We compare the results for the three different covariance kernels defined in , with $A=M{\Gamma_\text{prior}}M$ and $B=M$. The results are summarized in Table \[tab:mgs\]. We clearly see that as $\nu$ increases, the eigenvalues of the KLE decay more rapidly, and as a result $Y$ becomes more and more ill-conditioned. Applying the algorithm MGS results in the quantity ${\lVert QR-Y \rVert_2}$ being satisfied to nearly machine precision; however, the other metrics ${\lVert Q^*BQ - I \rVert_2}$, ${\lVert Q^*BY - R \rVert_2}$ and ${\lVert YR^{-1}-Q \rVert_2}$ degrade as $\nu$ increases. On the other hand, for the re-orthogonalized MGS (MGS-R), the quantities ${\lVert QR-Y \rVert_2}$, ${\lVert Q^*BQ - I \rVert_2}$ and ${\lVert Q^*BY - R \rVert_2}$ are satisfied to nearly machine precision. However, as for MGS, ${\lVert YR^{-1}-Q \rVert_2}$ is larger because $R$ is close to singular. It is clear that while re-orthogonalization has a significant effect on the orthogonality of $Q$, it comes at a higher cost because of the additional re-orthogonalization step. The accuracy of ‘PreCholQR’ is comparable with MGS-R. Unless explicitly mentioned otherwise, we use MGS-R throughout this section for all the numerical experiments.

![Comparison of the error between the true eigenvalues $\lambda_k$ and the approximate eigenvalues $\tilde\lambda_k$ as a function of the oversampling parameter $p$ for each of the covariance kernels defined in Equation : $\kappa_{1/2}$ (black), $\kappa_{3/2}$ (red) and $\kappa_{5/2}$ (blue). The plots correspond to $k=20,40,60$ and $80$. The 2-pass algorithm (solid line) refers to Algorithm \[alg:doublepass\], the 1-pass algorithm (dashed line) refers to Algorithm \[alg:singlepass\] and the Nyström algorithm (dotted line) refers to Algorithm \[alg:nystrom\]. []{data-label="fig:errcovker"}](figs/error_covariance_kernel)

### Effect of oversampling parameter $p$:

We consider the effect of the oversampling parameter $p$ on the accuracy of the low-rank approximation and the computed eigenvalues.
We plot (in Figure \[fig:errcovker\]) the error using the two-pass, single-pass and Nyström algorithms applied to all three covariance kernels defined in Equation \[eqn:matern3\], as a function of the oversampling parameter $p$. For fairness of comparison, i.e., to eliminate the effect of random sampling, we use the same sequence of pseudo-random numbers while computing the low-rank decompositions. We can see that, consistent with Theorem \[thm:main\], increasing $p$ improves the error of the low-rank estimate; moreover, the rate of improvement with increasing oversampling is higher when the singular values decay faster. While the error decreases for all three algorithms, the rate of improvement with increased oversampling is more pronounced in the case of the two-pass and Nyström algorithms. This is because, in the single-pass algorithm, an additional error is introduced while converting the low-rank decomposition $A \approx (BQ)(Q^*AQ)(BQ)^*$ to a generalized eigendecomposition of the form $A \approx (BU)\Lambda(BU)^*$. To gain more insight, we consider the error between the matrix $T$ (which is formed exactly in the two-pass and Nyström algorithms) and its approximation $\tilde{T}$ (which is formed in the single-pass algorithm) as a function of the oversampling parameter $p$ for each of the covariance kernels. The results are displayed in Figure \[fig:errsinglepass\]. The error between $T$ and $\tilde{T}$ decreases with oversampling, although slowly.
![Comparison of the error in the approximation of $T$ (which is formed exactly in the two-pass and Nyström algorithms) by $\tilde{T}$ (which is formed in the single-pass algorithm), measured as $\sum_k|\mu_k-\theta_k|/\sum_k|\mu_k|$ (where $\mu_k$ and $\theta_k$ are defined in Theorem \[thm:singlepasserr\]), as a function of the oversampling parameter $p$ for each of the covariance kernels defined in Equation : $\kappa_{1/2}$ (black), $\kappa_{3/2}$ (red) and $\kappa_{5/2}$ (blue). The plots correspond to $k=20,40,60$ and $80$. []{data-label="fig:errsinglepass"}](figs/error_single_pass)

![Comparison of the actual error in the low-rank representation $f_k$ with the random estimator $e_k$ from Proposition \[prop:random\] and the approximation error from Theorem \[thm:main\]. An oversampling factor of $p=5$ is used, and we also choose $r=5$ for the randomized estimator. Here, we use $\kappa_{5/2}$ defined in Equation .[]{data-label="fig:errsvd"}](figs/errsvd)

### Accuracy of the estimator:

Next, we analyze the performance of the proposed estimator for the error in the low-rank decomposition, ${\lVert (I-QQ^*B)C \rVert_B} \leq \varepsilon$. An oversampling factor of $p=5$ was used. We compare the following quantities:

- $\sqrt{{\lVert B^{-1} \rVert_2}}\sigma_{B,k+1}(C)$, where $\sigma_{B,k+1}(C)$ is the $(k+1)$st generalized singular value of the matrix $C$. This is, roughly speaking, an estimate of the error according to Theorem \[thm:main\].

- The actual error in the low-rank approximation, $f_k = {\lVert (I-Q_kQ_k^*B)C \rVert_B}$.

- The estimator of the error $f_k$ computed using the result in Proposition \[prop:random\], denoted by $e_{5,k}$. We pick $\alpha = 2$ and $r=5$.

Figure \[fig:errsvd\] shows the comparison between the three quantities listed above. We observe that the error in the low-rank approximation $f_k$ is nearly equal to the estimate $\sqrt{{\lVert B^{-1} \rVert_2}}\sigma_{B,k+1}(C)$ that is predicted from theory.
Moreover, the true error is bounded from above by the estimated error $e_k$; hence, the estimator provides a good upper bound for the actual error. Next, we try to answer the following question: how often (statistically speaking) is the estimator of the error close to the true error? To answer this, we generate $1000$ realizations at different values of $k=20,40,60,80$ and compare the true error with the estimated error. The results are presented in Figure \[fig:errdist\]. It can be seen that both the actual and the estimated errors are concentrated about their means.

![Distribution of the true and the estimated error generated for $1000$ samples corresponding to $k=20,40,60,80$ eigenvalues. Here, $f_k$ is the actual error in the low-rank representation and $e_k$ is the random estimator from Proposition \[prop:random\]. We use $\kappa_{3/2}$ defined in Equation . []{data-label="fig:errdist"}](figs/errdist)

### Effect of correlation length $l$:

The rate of decay of the eigenvalues is controlled by the smoothness of the kernel [@schwab2006karhunen]. Additionally, the rate of decay depends on the correlation length $l$, which appears in Equation  through the distance function $d = {\lVert {\textbf{x}}-\textbf{y} \rVert_2} / l$. As has been observed in [@cliffe2011multilevel], for small correlation lengths there is a pre-asymptotic regime before the eigenvalues exhibit a significant decay rate. To demonstrate the effect of the correlation length $l$ on the accuracy of the randomized calculations, we consider the following numerical experiment. The eigenvalues are computed for the KLE using the covariance kernel $\kappa_{\nu=5/2}$ as defined in Equation . The domain for the computations is $[-1,1]$ and the number of grid points is $501$. The true and approximate eigenvalues are displayed in Figure \[fig:corrlength\] for three different correlation lengths, $l=0.01, 0.1, 1$.
Also plotted is the error between the true and approximate eigenvalues, measured as $\sum_k|\lambda_k-\tilde\lambda_k|/\sum_k |\lambda_k|$. From the figure, it can be seen that there is no appreciable decay in the eigenvalues for correlation lengths as small as $0.5\%$ of the domain length. However, for correlation lengths greater than $5\%$ of the domain length, which is the range typically used in practice, the accuracy of the eigenvalue calculations is moderate and improves significantly with increasing correlation length. It should be noted that the randomized algorithms may not be very accurate for extremely small correlation lengths. ![Effect of correlation length on the accuracy of the eigenvalues. We use $\kappa_{5/2}$ as the covariance kernel, as defined in Equation . (left) Comparison between the true eigenvalues (solid line) and approximate eigenvalues (dot-dashed line) computed for different correlation lengths. (right) The error between the true and the approximate eigenvalues, measured as $\sum_k|\lambda_k-\tilde\lambda_k|/\sum_k |\lambda_k|$, as a function of correlation length. []{data-label="fig:corrlength"}](figs/eigdecaycorrlength "fig:") ![Effect of correlation length on the accuracy of the eigenvalues. We use $\kappa_{5/2}$ as the covariance kernel, as defined in Equation . (left) Comparison between the true eigenvalues (solid line) and approximate eigenvalues (dot-dashed line) computed for different correlation lengths. (right) The error between the true and the approximate eigenvalues, measured as $\sum_k|\lambda_k-\tilde\lambda_k|/\sum_k |\lambda_k|$, as a function of correlation length. []{data-label="fig:corrlength"}](figs/eigerrcorrlength "fig:") ### Accuracy of the KL expansion: Thus far, we have established the accuracy of the eigenvalues computed using the randomized approach. However, the accuracy of the KL expansion depends on both the accuracy of the eigenvalues and that of the eigenvectors.
The accuracy of the truncated discrete KL expansion can be quantified using the following theorem. \[thm:kleerror\] Let $(\lambda,\phi)$ be the exact eigenpair of Equation  and let $(\tilde\lambda,\tilde\phi)$ be the approximate eigenpair computed using the randomized algorithms. Assume that $\arcsin (2\varepsilon/\delta) < \pi/2$. Then $$\mathbb{E}\left[\left\lVert \sum_{k=1}^n \xi_k\left( \sqrt{\lambda_k}\phi_k - \sqrt{\tilde\lambda_k} \tilde\phi_k\right) \right\rVert_M^2 \right] \quad \lessapprox \quad n\min\left\{2\varepsilon,\frac{2\varepsilon}{\delta} \right\} + \sum_{k=1}^n \lambda_k \frac{4\varepsilon^2}{\delta^2}$$ Here the expectation $\mathbb{E}[\cdot]$ is with respect to the random variables $\xi_k$. Using the property that $\mathbb{E}[\xi_i\xi_j] = \delta_{ij}$, the expression on the left reduces to $$\mathbb{E}\left[\left\lVert \sum_{k=1}^n \xi_k\left( \sqrt{\lambda_k}\phi_k - \sqrt{\tilde\lambda_k} \tilde\phi_k\right) \right\rVert_M^2 \right] = \sum_{k=1}^n \left\lVert\sqrt{\lambda_k}\phi_k - \sqrt{\tilde\lambda_k} \tilde\phi_k \right\rVert_M^2$$ Next, considering each term in the summation, we have $$\begin{aligned} \left\lVert\sqrt{\lambda_k}\phi_k - \sqrt{\tilde\lambda_k} \tilde\phi_k \right\rVert_M^2 \leq& \quad \left\lVert\sqrt{\lambda_k}\phi_k - \sqrt{\lambda_k} \tilde\phi_k \right\rVert_M^2 + \left\lVert\sqrt{\lambda_k}\tilde\phi_k - \sqrt{\tilde\lambda_k} \tilde\phi_k \right\rVert_M^2 \\ \leq & \quad \lambda_k {\lVert \phi_k - \tilde\phi_k \rVert_{M}}^2 + |\lambda_k - \tilde\lambda_k| {\lVert \tilde\phi_k \rVert_{M}}^2\end{aligned}$$ We have that ${\lVert \tilde\phi_k \rVert_{M}}^2 = 1$ and ${\lVert \phi_k \rVert_{M}}^2 = 1$, and $\angle_M(\phi_k,\tilde\phi_k) = \arccos \langle \phi_k, \tilde\phi_k\rangle_M$.
$$\begin{aligned} {\lVert \phi_k-\tilde\phi_k \rVert_{M}}^2 \quad \leq & \quad {\lVert \phi_k \rVert_{M}}^2 + {\lVert \tilde\phi_k \rVert_{M}}^2 - 2\langle \phi_k, \tilde\phi_k\rangle_M = 2(1-\cos \angle_M(\phi_k,\tilde\phi_k))\\ = & \quad 2 \left( 1 - \sqrt{1 - \sin^2 \angle_M(\phi_k,\tilde\phi_k)} \right) \\ \leq & \quad \left(\frac{2\varepsilon}{\delta}\right)^2 + \mathcal{O}\left(\left(\frac{2\varepsilon}{\delta}\right)^4\right)\end{aligned}$$ Here we have used the result of Proposition \[prop:apost\], which bounds $\sin\angle_M(\phi_k,\tilde\phi_k) \leq 2\varepsilon/\delta$. The proof is completed by plugging the above expression into the summation and using the inequality $|\lambda_k-\tilde\lambda_k| \leq \min\{2\varepsilon,4\varepsilon^2/\delta \}$ from Proposition \[prop:apost\]. ![(left) Accuracy of the eigenvalues, $\lambda_k - \tilde\lambda_k$. (right) Accuracy of the eigenvectors, quantified as $\lambda_k 2(1-\cos\angle_M(\phi_k,\tilde\phi_k))$, which appears in the proof of Theorem \[thm:kleerror\]. []{data-label="fig:kleerror"}](figs/eigvalerror "fig:") ![(left) Accuracy of the eigenvalues, $\lambda_k - \tilde\lambda_k$. (right) Accuracy of the eigenvectors, quantified as $\lambda_k 2(1-\cos\angle_M(\phi_k,\tilde\phi_k))$, which appears in the proof of Theorem \[thm:kleerror\]. []{data-label="fig:kleerror"}](figs/eigvecerror "fig:") Estimating the spectral gap is hard in practice, since the exact eigenvalues are not known. To assess the accuracy of the discretized KL expansion, we consider a 1D KL expansion on the domain $[-1,1]$ discretized using $501$ basis functions. Furthermore, we consider the three different Matérn class covariance kernels described in Equation  and take the correlation length $l = 0.4$.
From the analysis in Theorem \[thm:kleerror\], we have seen that the error is controlled by two factors: the accuracy of the eigenvalues, $|\lambda_k - \tilde\lambda_k|$, and the factor $\lambda_k 2(1-\cos\angle_M(\phi_k,\tilde\phi_k))$; both quantities are plotted in Figure \[fig:kleerror\]. From the figure, it can be seen that the accuracy of both quantities deteriorates with the eigenvalue index $k$ and improves as the parameter $\nu$ increases. Furthermore, both quantities are of roughly the same order of magnitude and therefore contribute similarly to the error in the discretized KL expansion computed using the randomized algorithms described in this paper. Implementation using $\mathcal{H}$-matrices ------------------------------------------- Since the matrix ${\Gamma_\text{prior}}$ is dense, the storage and computational costs of matvecs involving ${\Gamma_\text{prior}}$ scale as ${{\cal{O}}}(N^2)$. In order to mitigate these costs, the ${\cal H}$-matrix approach has previously been used for the efficient representation of covariance matrices arising from Gaussian random fields in [@saibaba2012efficient; @ambikasaran2012large; @saibaba2012application]. Hierarchical matrices [@borm2003introduction] (or ${\mathcal{H}}$-matrices, for short) are efficient data-sparse representations of certain densely populated matrices. The main idea, used repeatedly in this kind of technique, is to split a given matrix into a hierarchy of rectangular blocks and approximate each of the blocks by a low-rank matrix. Hierarchical matrices have been used successfully for the data-sparse representation of matrices arising in the boundary element method and for the approximation of the inverse of a finite element discretization of an elliptic partial differential operator.
Fast algorithms have been developed for this class of matrices, including matrix-vector products, matrix addition, multiplication and factorization in almost linear complexity [@borm2003introduction]. Matrix-vector products involving the dense covariance matrix can be computed in ${{\cal{O}}}(N\log N)$ using the ${\cal H}$-matrix approach, where $N$ is the number of grid points after discretization. The use of ${\cal H}$-matrices for computing the KLE, along with Krylov subspace methods to compute the eigendecomposition, has been discussed in [@khoromskij2009application; @eiermann2007computational]. The specific details of our implementation of the ${\cal{H}}$-matrix approach have already been presented in [@saibaba2012application] and will not be repeated here. The assembly of the finite element matrix corresponding to the mesh is handled using the finite element software FEniCS [@LoggWells2010a]. Since the matrix $M$ is sparse and can easily be factorized, the dominant eigenmodes of the eigenvalue problem  could be computed efficiently by transforming it into a HEP. Instead, we only assume that $Mx$ and $M^{-1}x$ can be formed fast. We use this simple example to demonstrate the accuracy and speedup of the randomized algorithm for the GHEP. We compare the performance of Algorithm \[alg:doublepass\], which is labeled “Two Pass”, Algorithm \[alg:singlepass\], labeled “Single Pass”, and the solution of the GHEP using ARPACK, which is accessed via SciPy and is labeled “ARPACK”. We warn the reader to exercise caution while interpreting the timing values, since the comparison is made across different programming environments (ARPACK is written in Fortran). Furthermore, any comparison with Krylov subspace methods is complicated by the fact that these methods require sophisticated algorithms for monitoring convergence and restarts. The GHEP in Equation  is solved for each of the covariance kernels defined in Equation .
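To make the structure of the compared methods concrete, a minimal Python/NumPy sketch of a two-pass randomized solver for $Ax=\lambda Bx$ follows. This is our illustration, not the paper's implementation: for compactness we $B$-orthonormalize via a Cholesky factor of $B$, which is exactly the factorization the paper's algorithms are designed to avoid.

```python
import numpy as np

def randomized_ghep_two_pass(A, B, k, p=5, rng=None):
    """Two-pass randomized approximation of the k largest eigenpairs of
    A x = lam B x (A Hermitian, B Hermitian positive definite)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    Omega = rng.standard_normal((n, k + p))
    Y = np.linalg.solve(B, A @ Omega)     # sample range of B^{-1}A (pass 1 over A)
    L = np.linalg.cholesky(B)             # illustration only; avoided in practice
    Z, _ = np.linalg.qr(L.T @ Y)
    Q = np.linalg.solve(L.T, Z)           # Q^T B Q = I, range(Q) = range(Y)
    T = Q.T @ A @ Q                       # small projected problem (pass 2 over A)
    lam, S = np.linalg.eigh(T)
    idx = np.argsort(lam)[::-1][:k]
    return lam[idx], Q @ S[:, idx]        # Ritz values and B-orthonormal vectors
```

A single-pass variant would replace the second application of $A$ in forming $T$ by quantities already computed during sampling, trading accuracy for cost.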
For the mesh, we started with a mesh available in the public domain [^1]. Using the FEniCS command ‘refine’ twice, we ended up with a finer mesh with $43872$ nodes corresponding to the irregular domain in Figure \[fig:dolfin\]. The time to compute the eigendecomposition, along with a summary of the number of matrix-vector products taken by each solver, is given in Table \[tab:eigen\]. An oversampling factor $p=5$ is chosen. The eigenvalues are shown in Figure \[fig:kleeigen\]. We observe that even though the single-pass algorithm takes half the time of the two-pass algorithm (fewer matvecs with $A$), its accuracy deteriorates as the number of requested eigenvalues increases. The difference in computational cost between the randomized algorithms and Krylov subspace based eigensolvers would be much larger if the cost of forming $Bx$ or $B^{-1}x$ were higher. In terms of accuracy, we observe that in general the “Nyström” and “Two Pass” algorithms are closer in accuracy to the “ARPACK” solver; in fact, they are often an order of magnitude more accurate than the “Single Pass” algorithm. The accuracy improves when the eigenvalues decay more rapidly, i.e., for covariance kernels $\kappa_\nu$ with increasing $\nu$. Furthermore, we observe that the first few eigenvalues are computed relatively accurately but the accuracy decays towards the tail. This accuracy can be improved by increased oversampling, i.e., by using a higher value of $p$. A summary of the computational costs along with CPU times is provided in Table \[tab:eigen\].
Method | $Ax$ | $Bx$ | $B^{-1}x$ | Time (s) | $\sum_k|\lambda_k-\tilde\lambda_k|/\sum_k|\lambda_k|$
--- | --- | --- | --- | --- | ---
Single Pass | $55$ | $157$ | $60$ | $91.49$ | $3.6\times 10^{-2}$
Two Pass | $110$ | $156$ | $55$ | $186.50$ | $7.0\times 10^{-3}$
Nyström | $110$ | $157$ | $157$ | $188.32$ | $2.4\times 10^{-3}$
ARPACK | $128$ | $256$ | $128$ | $205.37$ | $-$
Single Pass | $55$ | $160$ | $55$ | $95.40$ | $1.0\times 10^{-3}$
Two Pass | $110$ | $162$ | $55$ | $186.68$ | $1.1\times10^{-4}$
Nyström | $110$ | $160$ | $160$ | $177.70$ | $3.5\times 10^{-5}$
ARPACK | $102$ | $202$ | $102$ | $159.07$ | $-$
Single Pass | $55$ | $161$ | $55$ | $86.25$ | $3.39\times 10^{-5}$
Two Pass | $110$ | $162$ | $55$ | $171.72$ | $4.31\times 10^{-6}$
Nyström | $110$ | $162$ | $162$ | $172.64$ | $1.8\times 10^{-6}$
ARPACK | $102$ | $201$ | $102$ | $155.91$ | $-$

Finally, we conclude this section with a discussion of how to choose between randomized algorithms and Krylov subspace methods for computing the dominant eigenmodes of the KLE. As can be seen from Table \[tab:eigen\], the single-pass algorithm generally costs about half as much as either the two-pass algorithm or ARPACK, since the dominant cost is forming matvecs with $A$. Although the two-pass algorithm is, on the whole, more accurate than the single-pass algorithm, in this application it is more expensive than ARPACK. Therefore, if an accurate eigendecomposition is desired, then ARPACK is recommended. In finely discretized problems with complicated geometries in 3D, factorizing or inverting the mass matrix $M$, which is required by both the randomized algorithms and ARPACK, can prove expensive. In such cases, the calculations can be simplified by observing that $B^{-1}A = {\Gamma_\text{prior}}M$; as a result, there is no reason to invert $M$. This can be used to accelerate the randomized algorithms. The same trick can be used by Krylov subspace methods as well.
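The identity behind this trick is immediate to verify numerically: since $A = M{\Gamma_\text{prior}}M$ and $B = M$, the product $B^{-1}Ax$ equals ${\Gamma_\text{prior}}Mx$ and needs no solve with $M$. A small dense Python/NumPy check (toy stand-ins; in practice ${\Gamma_\text{prior}}x$ would be an ${\cal H}$-matrix or FFT matvec and $M$ a sparse FEM mass matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = np.diag(rng.uniform(1.0, 2.0, n))              # toy SPD "mass matrix"
G = rng.standard_normal((n, n))
Gamma = G @ G.T + n * np.eye(n)                    # toy SPD "prior covariance"

x = rng.standard_normal(n)
direct = np.linalg.solve(M, (M @ Gamma @ M) @ x)   # B^{-1} A x via a solve with M
trick = Gamma @ (M @ x)                            # same product, no solve needed
print(np.linalg.norm(direct - trick))
```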
The ultimate choice of algorithm depends heavily on the architecture used, the specific problem and the desired accuracy. Parallel implementation ----------------------- In this section, we consider the parallel performance of the proposed algorithms for a large-scale KL expansion. The domain $x \in [0, 1]^3$ was discretized with uniformly distributed $N = 200^3 = 8,000,000$ grid points. The computations were performed for $k = 120$ eigenmodes with an oversampling factor $p = 8$. Parallel execution times were measured on a Linux workstation equipped with an Intel Xeon E5-2687W running at 3.1 GHz (16 cores) and 128 GB memory. MATLAB was used to test the single-pass (Algorithm \[alg:singlepass\]) and two-pass (Algorithm \[alg:doublepass\]) algorithms. Since the domain under consideration is a rectangular box and the covariance kernel is stationary, the resulting covariance matrix is a recursive block-Toeplitz matrix, and the dense matrix-vector products involving the matrix ${\Gamma_\text{prior}}$ were accelerated using the FFT [@ambikasaran2013fast]. For the QR decomposition with weighted inner products, we use ‘PreCholQR’ (Algorithm \[alg:precholqr\]) instead of ‘MGS-R’ (Algorithm \[alg:mgs\]). The reason for this is that, like Krylov subspace methods, MGS-R uses $W$-inner products in a sequential fashion, whereas the computations in ‘PreCholQR’ can be readily parallelized. The matvecs $B^{-1}Ax = {\Gamma_\text{prior}}Mx$ are further parallelized simply using the MATLAB command ‘parfor’. This convenient parallel implementation underscores the coding efficiency and excellent scalability of the randomized algorithm, since typical Krylov subspace methods have to execute matrix-vector multiplications sequentially. The matrix $M$ was constructed using FEniCS. Up to 16 processes were used for the tests and each test was executed 10 times to compute the average execution time.
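The ‘PreCholQR’ idea can be sketched in a few lines (Python/NumPy; names are ours and Algorithm \[alg:precholqr\] may differ in detail): an ordinary QR factorization acts as a preconditioner, and one small Cholesky factorization corrects for the $W$-inner product.

```python
import numpy as np

def pre_chol_qr(Y, W):
    """Thin QR of Y in the W-inner product: returns Q, R with Y = Q R
    and Q^T W Q = I.  The two heavy steps (one block product with W and
    one ordinary QR) parallelize well, unlike modified Gram-Schmidt
    carried out column-by-column in the W-inner product."""
    Z, R1 = np.linalg.qr(Y)                    # ordinary QR as a preconditioner
    R2 = np.linalg.cholesky(Z.T @ W @ Z).T     # upper-triangular Cholesky factor
    Q = np.linalg.solve(R2.T, Z.T).T           # Q = Z @ inv(R2)
    return Q, R2 @ R1
```

Correctness follows from $Q^T W Q = R_2^{-T}(Z^T W Z)R_2^{-1} = R_2^{-T}R_2^T R_2 R_2^{-1} = I$ and $QR = Z R_2^{-1} R_2 R_1 = Y$.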
![(left) Performance results with 16 processes of the randomized algorithms Two-pass (Algorithm \[alg:doublepass\]) and Single-pass (Algorithm \[alg:singlepass\]) for $k=120$ eigenvalues and oversampling factor $p = 20$. (right) Breakdown of costs of different parts of the algorithm, demonstrating the parts that are scalable. []{data-label="fig:parallel"}](figs/timing_breakup)

Processes | randomized ($N=50^3$) | eigs ($N=50^3$) | randomized ($N=100^3$) | eigs ($N=100^3$)
--- | --- | --- | --- | ---
1 | 20.15 | | 199.64 |
2 | 10.63 | | 102.53 |
4 | 6.99 | 30.98 ^\*^ | 59.05 | 254.56 ^\*^
8 | 4.18 | | 35.86 |
16 | 3.34 | | 25.94 |

Figure \[fig:parallel\] shows the strong scaling of the single-pass and two-pass algorithms. In these experiments, the dominant computational cost arose from the matrix-vector products $B^{-1}A = {\Gamma_\text{prior}}M$, and a simple embarrassingly parallel implementation of this step reduced the overall computational cost significantly without losing scalability. The remaining steps, with smaller computational costs, were executed using built-in MATLAB functions (sparse matrix multiplication $M*x$, ‘chol’ and ‘eig’), which took around a minute on a single core. While one might reduce the computation time further with sophisticated parallelization of the entire algorithm, the cost of these steps becomes negligible for large-scale truncated KL expansion problems. We expect similar scaling on clusters with distributed memory. Finally, we demonstrate significant performance gains over a Krylov subspace method implemented in the same computational environment. Table \[tab:parallel\] shows the comparison of the computational costs of the single-pass algorithm (Algorithm \[alg:singlepass\]) applied to the GHEP $M{\Gamma_\text{prior}}M x = \lambda M x$ with the MATLAB function ‘eigs’ applied to the matrix ${\Gamma_\text{prior}}M$, for $N = 50^3 = 125,000$ and $100^3 = 1,000,000$ with $k = 120$ eigenmodes and oversampling factor $p = 8$.
As can be seen, there is a significant speed-up from the randomized approach even for small problem sizes. It should be noted that the spectrum of the eigenvalue problem ${\Gamma_\text{prior}}Mx = \lambda x $ is identical to that of the GHEP $M{\Gamma_\text{prior}}M x = \lambda M x $; however, the eigenvectors obtained using ‘eigs’ are not $M$-orthonormal. Discussion and conclusions ========================== We have presented several algorithms for computing the dominant eigenmodes of the generalized Hermitian eigenvalue problem $Ax =\lambda Bx$ using a randomized approach. The algorithms avoid the need to factorize $B$ (or to form products with $B^{1/2}$ or its inverse). This is advantageous for certain classes of problems where factorizing $B$ is computationally expensive. Instead, we provide a Hermitian low-rank decomposition using $B$-inner products. We discussed various issues related to computational costs and the factors controlling accuracy through an example application involving the computation of the dominant eigenmodes of the Karhunen-Loève expansion. Of the two algorithms proposed, the single-pass algorithm is faster (on account of using half the number of matvecs with $A$), but the accuracy it provides may not be satisfactory unless the eigenvalues decay very rapidly. We conclude with an additional example application in which we think randomized algorithms may be computationally beneficial. Consider a linear inverse problem of estimating parameters $s\in\mathbb{R}^{n_s}$ from noisy measurements $y \in \mathbb{R}^{n_y}$ with $n_y \ll n_s$.
Using a Bayesian approach to recover the unknowns from the measurements, one often has to solve the following regularized least-squares problem $$\hat{s} = \arg\min_s \quad {\lVert y-Hs \rVert_{{\Gamma_\text{noise}}^{-1}}}^2 + {\lVert s-\mu \rVert_{{\Gamma_\text{prior}}^{-1}}}^2$$ In addition to computing the best estimate $\hat{s}$, we would like to derive an efficient representation for the posterior covariance matrix ${\Gamma_\text{post}}{\stackrel{\text{def}}{=}}(H^T{\Gamma_\text{noise}}^{-1}H + {\Gamma_\text{prior}}^{-1})^{-1}$, since this gives us insight into quantifying the predictive uncertainty. For example, the diagonal of the posterior covariance matrix ${\Gamma_\text{post}}$ is related to the variance of the estimate. As before, ${\Gamma_\text{prior}}$ is approximated as an ${\cal{H}}$-matrix, which can be used to form fast products of the form ${\Gamma_\text{prior}}x$ and ${\Gamma_\text{prior}}^{-1}x$ (using a Krylov subspace method). Forming and storing the posterior covariance matrix entry-wise using the formula $\left ({\Gamma_\text{prior}}^{-1} + H^T{\Gamma_\text{noise}}^{-1}H\right)^{-1}$ is still out of the question. We consider the generalized Hermitian eigenvalue problem $$\label{eqn:postghep} H^T{\Gamma_\text{noise}}^{-1}Hu = \lambda {\Gamma_\text{prior}}^{-1}u$$ Using any of the randomized algorithms described previously, we get the decomposition $$H_\text{data} {\stackrel{\text{def}}{=}}H^T{\Gamma_\text{noise}}^{-1}H \quad \approx \quad {\Gamma_\text{prior}}^{-1}U_k \Lambda_k U_k^T {\Gamma_\text{prior}}^{-1}$$ where the columns of the matrix $U_k$ are the generalized eigenvectors and $\Lambda_k$ is a diagonal matrix whose entries are the generalized eigenvalues.
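Combining this decomposition with the Woodbury identity yields the low-rank update ${\Gamma_\text{post}} \approx {\Gamma_\text{prior}} - U_kD_kU_k^T$ with $D_k = \text{diag}(\lambda_i/(1+\lambda_i))$. A small dense Python/SciPy check of this update (toy sizes and names are ours; in the intended setting these operators would be matrix-free):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
ns, ny, k = 60, 8, 8
H = rng.standard_normal((ny, ns))
Gn = np.eye(ny)                                  # toy noise covariance
S = rng.standard_normal((ns, ns))
Gp = S @ S.T + ns * np.eye(ns)                   # toy prior covariance

Hd = H.T @ np.linalg.solve(Gn, H)                # data-misfit term H^T Gn^{-1} H
lam, U = eigh(Hd, np.linalg.inv(Gp))             # GHEP: Hd u = lam Gp^{-1} u
order = np.argsort(lam)[::-1]
lam, U = lam[order[:k]], U[:, order[:k]]         # keep the k largest eigenpairs
D = np.diag(lam / (1.0 + lam))
Gpost_lr = Gp - U @ D @ U.T                      # low-rank posterior covariance

Gpost = np.linalg.inv(Hd + np.linalg.inv(Gp))    # exact, for comparison only
print(np.linalg.norm(Gpost - Gpost_lr) / np.linalg.norm(Gpost))
```

Here $n_y = k$, so the data-misfit term has rank $k$ and the update is essentially exact; in general the truncation error is governed by $\lambda_{k+1}/(1+\lambda_{k+1})$.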
Plugging this decomposition into the expression for ${\Gamma_\text{post}}$, and applying the Woodbury identity, $${\Gamma_\text{post}}= ( {\Gamma_\text{prior}}^{-1}U_k \Lambda_k U_k^T {\Gamma_\text{prior}}^{-1} + {\Gamma_\text{prior}}^{-1})^{-1} = {\Gamma_\text{prior}}- U_kD_kU_k^T + {{\cal{O}}}\left(\frac{\lambda_{k+1}}{1+\lambda_{k+1}} \right)$$ where $D_k {\stackrel{\text{def}}{=}}\text{diag}\left(\frac{\lambda_i}{1+\lambda_i}\right)$. For several inverse problems the eigenvalues of the eigenproblem  decay rapidly, so that the low-rank approximation can be truncated at small $k$, resulting in an efficient representation of the posterior covariance matrix. We will discuss this application in an upcoming paper [@saibaba2013uncertainty]. Appendix: Error estimation ========================== In this section, we derive a probabilistic error bound for the low-rank approximation described in Theorem \[thm:main\]. The proof closely follows the arguments of [@martinsson2011randomized; @liberty2007randomized] and uses several key results of [@halko2011finding]. First, we derive a deterministic bound for ${\lVert (I-QQ^*B)B^{-1}A \rVert_B}$. It can be shown that there exists a matrix $G$ such that $${\lVert (I-QQ^*B)C \rVert_B} \leq 2{\lVert C - C\Omega G \rVert_B} + 2{\lVert C\Omega G - QRG \rVert_B}$$ The proof of the above inequality follows [@martinsson2011randomized; @liberty2007randomized]. If we choose $Q$ and $R$ such that $C\Omega = QR$, the second term drops out. Such $Q$ and $R$ can be constructed using Algorithm \[alg:mgs\]. Now, we show that for the matrix $C{\stackrel{\text{def}}{=}}B^{-1}A$ and $\Omega$ with i.i.d. entries drawn from a Gaussian distribution with zero mean and unit variance, there exists a matrix $G$ such that $C\Omega G$ is a good approximation to $C$ in the $B$-norm. In fact, we show by construction that such a $G$ exists.
We write the generalized SVD of $C$ as $C = U\begin{pmatrix}\Sigma_{B,1} & \\ & \Sigma_{B,2} \end{pmatrix}V^*$, where $\Sigma_{B,1}$ contains the $k$ largest singular values of $C$ in the generalized sense. For convenience, henceforth we drop the subscript $B$ on the singular values. Then, $$C\Omega G = U \begin{pmatrix}\Sigma_1 & \\ & \Sigma_2 \end{pmatrix} \begin{pmatrix}\Omega_1 \\ \Omega_2\end{pmatrix} G$$ where $V^*\Omega = \begin{pmatrix}\Omega_1 \\ \Omega_2\end{pmatrix}$ is also a Gaussian random matrix, because Gaussian matrices are invariant under rotation. Here $\Omega_1$ is $k\times (k+p)$ and $\Omega_2 $ is $(n-k)\times (k+p)$. Now, we choose $G{\stackrel{\text{def}}{=}}[\Omega_1^ \dagger \quad 0 ]V^*$ so that $$\begin{aligned} C\Omega G & = \quad U \begin{pmatrix}\Sigma_1 & \\ & \Sigma_2 \end{pmatrix} \begin{pmatrix}\Omega_1 \\ \Omega_2\end{pmatrix} G \\ \nonumber & = \quad U \begin{pmatrix}\Sigma_1 & \\ & \Sigma_2 \end{pmatrix} \begin{pmatrix}\Omega_1 \\ \Omega_2\end{pmatrix} [\Omega_1^ \dagger \quad 0 ]V^* \\ \nonumber & = \quad U \begin{pmatrix}\Sigma_1 & \\ & \Sigma_2 \end{pmatrix} \begin{pmatrix} I & 0\\ \Omega_2\Omega_1^\dagger & 0 \end{pmatrix} V^* \quad = \quad U \begin{pmatrix} \Sigma_1 & 0\\\Sigma_2\Omega_2\Omega_1^\dagger & 0 \end{pmatrix} V^* \\ \nonumber\end{aligned}$$ Then, $ C - C\Omega G = U \begin{pmatrix} 0 & 0 \\ -\Sigma_2\Omega_2\Omega_1^\dagger & \Sigma_2 \end{pmatrix} V^*$ and applying matrix-norm inequalities (see Proposition \[prop:random\]), we have $$\begin{aligned} {\lVert C-C\Omega G \rVert_B}^2 \quad \leq & \quad {\lVert B^{-1} \rVert_2}\left\lVert B^{1/2}U \begin{pmatrix} 0 & 0 \\ -\Sigma_2\Omega_2\Omega_1^\dagger & \Sigma_2 \end{pmatrix} V^* \right\rVert_2^2 \\ \nonumber \leq & \quad {\lVert B^{-1} \rVert_2}{\lVert B^{1/2}U \rVert_2}^2{\lVert V^* \rVert_2}^2 \left( {\lVert \Sigma_2 \rVert_2}^2 + {\lVert \Sigma_2\Omega_2\Omega_1^\dagger \rVert_2}^2 \right) \\ \nonumber =& \quad {\lVert B^{-1} \rVert_2}\left({\lVert \Sigma_2 \rVert_2}^2 + {\lVert \Sigma_2\Omega_2\Omega_1^\dagger \rVert_2}^2 \right) \\ \nonumber\end{aligned}$$ Note that ${\lVert \Sigma_2 \rVert_2} = \sigma_{B,k+1}$. Now, applying the result in [@halko2011finding Theorem 10.6], we get the desired result. [^1]: http://fenicsproject.org/download/data.html
--- abstract: | The Web has been around and maturing for 25 years. The popular websites of today have undergone vast changes during this period, with a few being there almost since the beginning and many new ones becoming popular over the years. This makes it worthwhile to take a look at how these sites have evolved and what they might tell us about the future of the Web. We therefore embarked on a longitudinal study spanning almost the whole period of the Web, based on data collected by the Internet Archive starting in 1996, to retrospectively analyze how the popular Web as of now has evolved over the past 18 years. For our study we focused on the German Web, specifically on the top 100 most popular websites in 17 categories. This paper presents a selection of the most interesting findings in terms of *volume*, *size* as well as *age* of the Web. While related work in the field of Web Dynamics has mainly focused on change rates and analyzed datasets spanning less than a year, we looked at the evolution of websites over 18 years. We found that around 70% of the pages we investigated are younger than a year, with an observed exponential growth in age as well as in size up to now. If this growth rate continues, the number of pages from the popular domains will almost double in the next two years. In addition, we give insights into our data set, provided by the Internet Archive, which hosts the largest and most complete Web archive as of today. 
author: - | Helge Holzmann, Wolfgang Nejdl, Avishek Anand\ bibliography: - 'references.bib' subtitle: 'A Study of the Archived German Web over 18 Years[^1]' title: 'The Dawn of Today’s Popular Domains' --- [^1]: This work is partly funded by the European Research Council under ALEXANDRIA (ERC 339233)
--- --- *To my mother,\ my father and my sister.*
--- abstract: 'We consider fractional differentiation operators in various senses and show that strict accretivity is a property common to fractional differentiation operators. We also prove that the sectorial property holds for second-order differential operators with a fractional derivative in the final term, explore the location of the spectrum and the resolvent set, and show that the spectrum is discrete. We prove that there exists a two-sided estimate for the eigenvalues of the real component of second-order operators with a fractional derivative in the final term.' address: 'Maksim V. Kukushkin International Committee “Continental”, Geleznovodsk 357401, Russia' author: - 'Maksim V. Kukushkin' title: Spectral properties of fractional differentiation operators --- [^1] Introduction ============ It is remarkable that the term accretive, as applied to a linear operator $T$ acting in a Hilbert space $H,$ was introduced by Friedrichs in the paper [@firstab_lit:fridrichs1958]; it means that the operator $T$ has the following property: the numerical range $\Theta(T)$ (see [@firstab_lit:kato1966 p.335]) is a subset of the right half-plane, i.e. $${\rm Re}\left( Tu,u\right)_{H}\geq0,\;u\in \mathfrak{D}(T).$$ Adopting the notation of the paper [@firstab_lit:kipriyanov1960], we assume that $\Omega$ is a convex domain of the $n$-dimensional Euclidean space $\mathbb{E}^{n}$, $P$ is a fixed point of the boundary $\partial\Omega,$ and $Q(r,\mathbf{e})$ is an arbitrary point of $\Omega;$ we denote by $\mathbf{e}$ the unit vector pointing from $P$ to $Q,$ and by $r=|P-Q|$ the Euclidean distance between the points $P$ and $Q.$ We use the shorthand notation $T:=P+\mathbf{e}t,\,t\in \mathbb{R}.$ We consider the Lebesgue classes $L_{p}(\Omega),\;1\leq p<\infty $ of complex-valued functions.
For the function $f\in L_{p}(\Omega),$ we have $$\label{1} \int\limits_{\Omega}|f(Q)|^{p}dQ=\int\limits_{\omega}d\chi\int\limits_{0}^{d(\mathbf{e})}|f(Q)|^{p}r^{n-1}dr<\infty,$$ where $d\chi$ is the element of the solid angle of the unit sphere surface (the unit sphere belongs to $\mathbb{E}^{n}$), $\omega$ is the surface of this sphere, and $d:=d(\mathbf{e})$ is the length of the segment of the ray going from the point $P$ in the direction $\mathbf{e}$ within the domain $\Omega.$ Without loss of generality, we consider only those directions of $\mathbf{e}$ for which the inner integral on the right-hand side exists and is finite. It is well known that this is the case for almost all directions. We denote by ${\rm Lip}\, \mu,\;(0<\mu\leq1) $ the set of functions satisfying the Hölder-Lipschitz condition $${\rm Lip}\, \mu:=\left\{\rho(Q):\;|\rho(Q)-\rho(P)|\leq M r^{\mu},\,P,Q\in \bar{\Omega}\right\}.$$ Consider the Kipriyanov fractional differential operator defined in the paper [@firstab_lit:1kipriyanov1960] by the formal expression $$(\mathfrak{D}^{\alpha}f)(Q)=\frac{\alpha}{\Gamma(1-\alpha)}\int\limits_{0}^{r} \frac{[f(Q)-f(T)]}{(r - t)^{\alpha+1}} \left(\frac{t}{r} \right) ^{n-1} dt+ C^{(\alpha)}_{n} f(Q) r ^{ -\alpha}\!,\, P\in\partial\Omega,$$ where $ C^{(\alpha)}_{n} = (n-1)!/\Gamma(n-\alpha). $ In accordance with Theorem 2 of [@firstab_lit:1kipriyanov1960], under the assumptions $$\label{2} lp\leq n,\;0<\alpha<l- \frac{n}{p} +\frac{n}{q}, \,q>p,$$ we have that for sufficiently small $\delta>0$ the following inequality holds $$\label{3} \|\mathfrak{D}^{\alpha}f\|_{L_{q}(\Omega)}\leq \frac{K}{\delta^{\nu}}\|f\|_{L_{p}(\Omega)}+\delta^{1-\nu}\|f\|_{L^{l}_{p}(\Omega)},\, f\in\stackrel{\circ}{W_p ^l} (\Omega),$$ where $$\label{4} \nu=\frac{n}{l}\left(\frac{1}{p}-\frac{1}{q} \right)+\frac{\alpha+\beta}{l}.$$ The constant $K$ does not depend on $\delta$ or $f;$ the point $P\in\partial\Omega ;\;\beta$ is an arbitrarily small fixed positive number.
Further, we assume that $\alpha \in (0,1).$ Using the notation of the paper [@firstab_lit:samko1987], we denote by $I_{a+}^{\alpha}(L_{p} ),\;I_{b-}^{\alpha}(L_{p} ),\;1\leq p\leq\infty$ the left-side and right-side classes of functions representable by the fractional integral on the segment $[a,b],$ respectively. Let $\mathfrak{d}:={\rm diam}\,\Omega ;\;C,C_{i}={\rm const},\,i\in \mathbb{N}_{0}.$ We use the shorthand notation $P\cdot Q=P^{i}Q_{i}=\sum^{n}_{i=1}P_{i}Q_{i}$ for the inner product of the points $P=(P_{1},P_{2},...,P_{n}),\,Q=(Q_{1},Q_{2},...,Q_{n})$ of $\mathbb{E}^{n}.$ Denote by $D_{i}u$ the weak derivative of the function $u$ with respect to the coordinate variable with index $1\leq i\leq n.$ We assume that all functions have a zero extension outside of $\bar{\Omega}.$ Denote by $ \mathrm{D} (L), \mathrm{R} (L)$ the domain and the range of the operator $L,$ respectively. Everywhere below, unless otherwise stated, we use the notation of the papers [@firstab_lit:kipriyanov1960], [@firstab_lit:1kipriyanov1960], [@firstab_lit:samko1987]. Let us define the operators $$(\mathfrak{I}^{\alpha}_{0+}g)(Q ):=\frac{1}{\Gamma(\alpha)} \int\limits^{r}_{0}\frac{g (T)}{( r-t)^{1-\alpha}}\left(\frac{t}{r}\right)^{n-1}dt,\,(\mathfrak{I}^{\alpha}_{d-}g)(Q ):=\frac{1}{\Gamma(\alpha)} \int\limits_{r}^{d }\frac{g (T)}{(t-r)^{1-\alpha}}dt,$$ $$\;g\in L_{p}(\Omega),\;1\leq p\leq\infty.$$ We call these operators the left-side and right-side directional fractional integrals, respectively. We introduce the classes of functions representable by the directional fractional integrals.
$$\label{5} \mathfrak{I}^{\alpha}_{0+}(L_{p} ):=\left\{ u:\,u(Q)=(\mathfrak{I}^{\alpha}_{0+}g)(Q ),\, g\in L_{p}(\Omega),\,1\leq p\leq\infty \right\},$$ $$\label{6} \mathfrak{I } ^{\alpha}_{ d -} (L_{p} ) =\left\{ u:\,u(Q)=(\mathfrak{I}^{\alpha}_{d-}g)(Q ),\;g\in L_{p}(\Omega),\;1\leq p\leq\infty \right\}.$$ Define the operators $\psi^{+}_{\varepsilon },\; \psi^{-}_{\varepsilon }$ depending on the parameter $\varepsilon>0.$ In the left-side case $$\label{7} (\psi^{+}_{ \varepsilon }f)(Q)= \left\{ \begin{aligned} \int\limits_{0}^{r-\varepsilon }\frac{ f (Q)r^{n-1}- f(T)t^{n-1}}{( r-t)^{\alpha +1}r^{n-1}} dt,\;\varepsilon\leq r\leq d ,\\ \frac{ f(Q)}{\alpha} \left(\frac{1}{\varepsilon^{\alpha}}-\frac{1}{ r ^{\alpha} } \right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; 0\leq r <\varepsilon .\\ \end{aligned} \right.$$ In the right-side case $$(\psi^{-}_{ \varepsilon }f)(Q)= \left\{ \begin{aligned} \int\limits_{r+\varepsilon }^{d }\frac{ f (Q)- f(T)}{( t-r)^{\alpha +1}} dt,\;0\leq r\leq d -\varepsilon,\\ \frac{ f(Q)}{\alpha} \left(\frac{1}{\varepsilon^{\alpha}}-\frac{1}{(d -r)^{\alpha} } \right),\;\;\;d -\varepsilon <r \leq d ,\\ \end{aligned} \right.$$ where $\mathrm{D}(\psi^{+}_{ \varepsilon }),\mathrm{D}(\psi^{-}_{ \varepsilon })\subset L_{p}(\Omega).$ Using the definitions of the monograph [@firstab_lit:samko1987 p.181], we consider the following operators.
In the left-side case $$\label{8} ( \mathfrak{D} ^{\alpha}_{0+\!,\,\varepsilon}f)(Q)=\frac{1}{\Gamma(1-\alpha)}f(Q) r ^{-\alpha}+\frac{\alpha}{\Gamma(1-\alpha)}(\psi^{+}_{ \varepsilon }f)(Q).$$ In the right-side case $$( \mathfrak{D }^{\alpha}_{d-\!,\,\varepsilon}f)(Q)=\frac{1}{\Gamma(1-\alpha)}f(Q)(d-r)^{-\alpha}+\frac{\alpha}{\Gamma(1-\alpha)}(\psi^{-}_{ \varepsilon }f)(Q).$$ The left-side and right-side fractional derivatives are understood, respectively, as the following limits with respect to the norm of $L_{p}(\Omega),\,(1\leq p<\infty)$ $$\label{8.1} \mathfrak{D }^{\alpha}_{0+}f=\lim\limits_{\stackrel{\varepsilon\rightarrow 0}{ (L_{p}) }} \mathfrak{D }^{\alpha}_{0+\!,\,\varepsilon} f ,\; \mathfrak{D }^{\alpha}_{d-}f=\lim\limits_{\stackrel{\varepsilon\rightarrow 0}{ (L_{p}) }} \mathfrak{D }^{\alpha}_{d-\!,\,\varepsilon} f .$$ We need auxiliary propositions, which are presented in the next section. Some lemmas and theorems ========================= We have the following theorem on the boundedness of the directional fractional integral operators. \[T1\] The directional fractional integral operators are bounded in $L_{p}(\Omega),$ $1\leq p<\infty,$ and the following estimates hold $$\label{9} \| \mathfrak{I}^{\alpha}_{0 +}u\|_{L_{p}(\Omega)}\leq C\|u \|_{L_{p}(\Omega)},\;\| \mathfrak{I} ^{\alpha}_{d -}u\|_{L_{p}(\Omega)}\leq C\|u \|_{L_{p}(\Omega)},\;C= \mathfrak{d} ^{\alpha}/ \Gamma(\alpha+1) .$$ Let us prove the first estimate; the proof of the second one is completely analogous.
Using the generalized Minkowski inequality, we have $$\| \mathfrak{I}^{\alpha}_{0 +}u\|_{L_{p}(\Omega)}=\frac{1}{\Gamma(\alpha)}\left( \int\limits_{\Omega}\left| \int\limits^{r}_{0}\frac{u (T)}{( r-t)^{1-\alpha}}\left(\frac{t}{r}\right)^{n-1}\!\!\!dt \right|^{p}dQ\right)^{1/p}$$ $$=\frac{1}{\Gamma(\alpha)}\left( \int\limits_{\Omega}\left| \int\limits^{r}_{0}\frac{u (Q-\tau \mathbf{e})}{\tau^{1-\alpha}}\left(\frac{r-\tau}{r}\right)^{n-1}\!\!\!d\tau \right|^{p}dQ\right)^{1/p}$$ $$\leq\frac{1}{\Gamma(\alpha)}\left( \int\limits_{\Omega}\left( \int\limits^{\mathfrak{d}}_{0}\frac{|u (Q-\tau \mathbf{e})|}{\tau^{1-\alpha}}d\tau \right)^{p}dQ\right)^{1/p}$$ $$\leq \frac{1}{\Gamma(\alpha)} \int\limits^{\mathfrak{d}}_{0}\tau^{\alpha-1} d\tau \left( \int\limits_{\Omega} |u (Q-\tau \mathbf{e})|^{p} dQ \right)^{1/p}\!\!\leq \frac{ \mathfrak{d} ^{\alpha}}{\Gamma(\alpha+1)}\, \| u\|_{L_{p}(\Omega)}.$$ \[T2\] Suppose $f\in L_{p}(\Omega)$ and there exists $\lim\limits_{\varepsilon\rightarrow 0}\psi^{+}_{ \varepsilon }f$ or $\lim\limits_{\varepsilon\rightarrow 0}\psi^{-}_{ \varepsilon }f$ with respect to the norm of $L_{p}(\Omega),\,(1\leq p<\infty);$ then $f\in \mathfrak{I} ^{\alpha}_{0 +}(L_{p}) $ or $f\in \mathfrak{I }^{\alpha}_{d -}(L_{p})$ respectively.
Let $f\in L_{p}(\Omega)$ and $\lim\limits_{\stackrel{\varepsilon\rightarrow 0}{ (L_{p}) }}\psi^{+}_{ \varepsilon }f=\psi.$ Consider the function $$(\varphi^{+}_{ \varepsilon}f)(Q)=\frac{1}{\Gamma(1-\alpha)}\left\{\frac{f(Q)}{ r ^{\alpha}}+\alpha (\psi^{+}_{ \varepsilon }f)(Q) \right\}.$$ Taking into account , we can easily prove that $\varphi^{+}_{ \varepsilon }f\in L_{p}(\Omega).$ Obviously, there exists the limit $\varphi^{+}_{ \varepsilon }f\rightarrow \varphi\in L_{p}(\Omega),\,\varepsilon\downarrow 0.$ Taking into account Theorem \[T1\], we can complete the proof, if we show that $$\label{10.01} \mathfrak{I}^{\alpha}_{0+}\varphi^{+}_{ \varepsilon }f \stackrel{L_{p}}{\rightarrow} f,\,\varepsilon\downarrow0.$$ In the case $(\varepsilon\leq r\leq d),$ we have $$(\mathfrak{I}^{\alpha}_{0 +}\varphi^{+}_{ \varepsilon }f)(Q)\frac{\pi r^{n-1}}{\sin\alpha\pi}= \int\limits_{\varepsilon}^{r}\frac{f (P+y\mathbf{e})y ^{n-1-\alpha}}{( r-y)^{1-\alpha} } dy$$ $$+\alpha\int\limits_{\varepsilon}^{r}( r-y)^{\alpha-1} dy\int\limits_{0 }^{y-\varepsilon }\frac{ f (P+y\mathbf{e})y^{n-1}- f(T)t^{n-1}}{( y-t )^{\alpha +1}} dt$$ $$+\frac{1}{\varepsilon^{\alpha}}\int\limits_{0}^{\varepsilon }f (P+y\mathbf{e})( r-y)^{\alpha-1} y^{n-1} dy =I.$$ By direct calculation, we obtain $$\label{10} I= \frac{1}{\varepsilon^{\alpha}}\int\limits_{0}^{r }f (P+y\mathbf{e})( r-y)^{\alpha-1}y^{n-1} dy - \alpha\int\limits_{\varepsilon}^{r}( r-y)^{\alpha-1} dy\int\limits_{0 }^{y-\varepsilon }\frac{ f(T)}{( y-t)^{\alpha +1}}t^{n-1} dt .$$ Changing the variable of integration in the second integral, we have $$\alpha\int\limits_{\varepsilon}^{r}( r-y)^{\alpha-1} dy\int\limits_{0 }^{y-\varepsilon }\frac{ f(T)}{( y-t)^{\alpha +1}}t^{n-1} dt$$ $$=\alpha\int\limits_{0}^{r-\varepsilon}( r-y-\varepsilon)^{\alpha-1} dy\int\limits_{0 }^{y }\frac{ f(T)}{( y+\varepsilon-t)^{\alpha +1}}t^{n-1} dt$$ $$\label{11} =\alpha\int\limits_{0}^{r-\varepsilon}f(T)t^{n-1} dt\int\limits_{t }^{r-\varepsilon }\frac{( 
r-y-\varepsilon)^{\alpha-1} }{( y+\varepsilon-t)^{\alpha +1}}dy $$ $$ =\alpha\int\limits_{0}^{r-\varepsilon}f(T)t^{n-1} dt\int\limits_{t +\varepsilon}^{r } ( r-y )^{\alpha-1} ( y -t)^{-\alpha -1} dy .$$ Applying formula (13.18) [@firstab_lit:samko1987 p.184], we get $$\label{12} \int\limits_{t +\varepsilon}^{r } ( r-y )^{\alpha-1} ( y -t)^{-\alpha -1} dy= \frac{1}{\alpha \varepsilon^{\alpha}}\cdot\frac{(r-t-\varepsilon)^{\alpha}}{ r-t }.$$ Combining relations ,,, using the change of the variable $t=r-\varepsilon\tau, $ we get $$(\mathfrak{I}^{\alpha}_{0 +}\varphi^{+}_{ \varepsilon }f)(Q)\frac{\pi r^{n-1}}{\sin\alpha\pi}$$ $$=\frac{1}{\varepsilon^{\alpha}}\left\{ \int\limits_{0}^{r }f (P+y\mathbf{e})(r-y)^{\alpha-1}y^{n-1} dy- \int\limits_{0}^{r-\varepsilon } \frac{f(T)(r-t-\varepsilon)^{\alpha}}{ r-t } t^{n-1}dt \right\}$$ $$\label{13} =\frac{1}{ \varepsilon^{\alpha}} \int\limits_{0 }^{r } \frac{f(T)\left[(r -t)^{\alpha}-(r-t-\varepsilon)_{+}^{\alpha}\right]}{ r-t }t^{n-1}dt $$ $$ =\int\limits_{0 }^{r/\varepsilon }\frac{\tau^{\alpha}-(\tau-1)_{+}^{\alpha}}{ \tau } f(P+[r-\varepsilon \tau ]\mathbf{e})(r-\varepsilon \tau)^{n-1} d\tau,\;\tau_{+}=\left\{\begin{array}{cc}\tau,\;\tau\geq0;\\[0,25cm] 0,\;\tau<0\,.\end{array}\right. 
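As a side remark, formula (13.18) used above can be verified directly: the integrand has the elementary antiderivative $-\frac{1}{\alpha(r-t)}\bigl(\frac{r-y}{y-t}\bigr)^{\alpha}.$ The following Python sketch (illustrative only; the helper names are ours) compares a quadrature of the left-hand side, with the endpoint singularity removed by $r-y=s^{1/\alpha},$ against the closed form.

```python
import math

def lhs_quadrature(alpha, r, t, eps, m=100000):
    # int_{t+eps}^{r} (r-y)^(alpha-1) (y-t)^(-alpha-1) dy,
    # with the singular factor (r-y)^(alpha-1) absorbed by the
    # substitution r - y = s^(1/alpha), which leaves a smooth integrand.
    upper = (r - t - eps) ** alpha
    h = upper / m
    total = 0.0
    for k in range(m):
        s = (k + 0.5) * h
        y = r - s ** (1.0 / alpha)
        total += (y - t) ** (-alpha - 1.0)
    return total * h / alpha

def rhs_closed_form(alpha, r, t, eps):
    # Right-hand side of formula (13.18):
    #   (r - t - eps)^alpha / (alpha * eps^alpha * (r - t)).
    return (r - t - eps) ** alpha / (alpha * eps ** alpha * (r - t))
```

For instance, with $\alpha=1/2,\ r=1,\ t=0.2,\ \varepsilon=0.1$ the two expressions agree to quadrature accuracy.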
.$$ Consider the auxiliary function $\mathcal{K}$ defined in the paper [@firstab_lit:samko1987 p.105] $$\label{14} \mathcal{K}(t)= \frac{\sin\alpha\pi}{\pi }\cdot\frac{ t_{+}^{\alpha}-(t-1)_{+}^{\alpha}}{ t }\in L_{p}(\mathbb{R}^{1});\; \int\limits_{0 }^{\infty }\mathcal{K}(t)dt=1;\;\mathcal{K}(t)>0.$$ Combining , and taking into account that $f$ has the zero extension outside of $\bar{\Omega},$ we obtain $$\label{15} (\mathfrak{I}^{\alpha}_{0+}\varphi^{+}_{ \varepsilon }f)(Q)-f(Q) = \int\limits_{0 }^{\infty }\mathcal{K}(t) \left\{f(P+[r-\varepsilon t]\mathbf{e})(1-\varepsilon t/r)_{+}^{n-1}-f(P+ r \mathbf{e}) \right\}dt.$$ Consider the case $(0\leq r <\varepsilon).$ Taking into account , we get $$\label{16} (\mathfrak{I}^{\alpha}_{0+}\varphi^{+}_{ \varepsilon }f)(Q)-f (Q) = \frac{\sin\alpha\pi}{\pi\varepsilon^{\alpha}} \int\limits_{0}^{r }\frac{f (T)}{(r-t)^{1-\alpha} } \left(\frac{t}{r} \right)^{n-1} dt-f (Q) $$ $$ =\frac{\sin\alpha\pi}{\pi\varepsilon^{\alpha}} \int\limits_{0}^{r }\frac{f (P+[r-t]\mathbf{e})}{t^{1-\alpha} }\left(\frac{r-t}{r} \right)^{n-1} dt-f (Q).$$ Consider the domains $$\label{17} \Omega_{\varepsilon} :=\{Q\in\Omega,\,d(\mathbf{e})\geq\varepsilon \},\;\tilde{\Omega}_{ \varepsilon }=\Omega\setminus \Omega_{\varepsilon}.$$ In accordance with this definition we can divide the surface $\omega$ into two parts $ \omega_{\varepsilon}$ and $\tilde{\omega}_{ \varepsilon },$ where $ \omega_{\varepsilon}$ is the subset of $\omega$ such that $d(\mathbf{e})\geq\varepsilon$ and $\tilde{\omega}_{ \varepsilon }$ is the subset of $\omega$ such that $d(\mathbf{e}) <\varepsilon.$ Using ,, we get $$\label{18} \|(\mathfrak{I}^{\alpha}_{0+}\varphi^{+}_{ \varepsilon }f) -f\|^{p}_{L_{p}(\Omega)} $$ $$ = \int\limits_{\omega_{\varepsilon}}d\chi\int\limits_{\varepsilon}^{ d} \left|\int\limits_{0 }^{\infty }\mathcal{K}(t)[f(Q- \varepsilon t \mathbf{e})(1-\varepsilon t/r)_{+}^{n-1}-f(Q)]dt\right|^{p}r^{n-1}dr $$ $$ + 
\int\limits_{\omega_{\varepsilon}}d\chi\int\limits_{0}^{\varepsilon } \left| \frac{\sin\alpha\pi}{\pi\varepsilon^{\alpha}}\int\limits_{0}^{r } \frac{f (P+[r-t]\mathbf{e})}{t^{1-\alpha} }\left(\frac{r-t}{r} \right)^{n-1} dt-f (Q)\right|^{p}r^{n-1}dr $$ $$ + \int\limits_{\tilde{\omega}_{\varepsilon} }d\chi\int\limits_{0}^{d } \left| \frac{\sin\alpha\pi}{\pi\varepsilon^{\alpha}}\int\limits_{0}^{r }\frac{f (P+[r-t]\mathbf{e})}{t^{1-\alpha} }\left(\frac{r-t}{r} \right)^{n-1} dt-f (Q) \right|^{p}r^{n-1}dr =I_{1}+I_{2}+I_{3}.$$ Consider $ I_{1};$ using the generalized Minkowski inequality, we get $$I^{\frac{1 }{p}}_{1} \leq \int\limits_{0 }^{\infty }\mathcal{K}(t) \left(\int\limits_{\omega_{\varepsilon} }d\chi\int\limits_{\varepsilon}^{ d} |f(Q- \varepsilon t \mathbf{e})(1-\varepsilon t/r)_{+}^{n-1}-f(Q)|^{p}r^{n-1}dr \right)^{\frac{1 }{p}} dt.$$ We use the following notation $$h(\varepsilon,t):= \mathcal{K}(t)\left(\int\limits_{\omega_{\varepsilon} }d\chi\int\limits_{\varepsilon}^{ d} |f(Q- \varepsilon t \mathbf{e})(1-\varepsilon t/r)_{+}^{n-1}-f(Q)|^{p}r^{n-1}dr \right)^{\frac{1 }{p}}.$$ It can easily be checked that $$\label{19} |h(\varepsilon,t)|\leq 2\mathcal{K}(t) \| f\|_{L_{p}(\Omega)},\;\forall\varepsilon>0;$$ $$|h(\varepsilon,t)|\leq \left(\int\limits_{\omega_{\varepsilon} }d\chi\int\limits_{\varepsilon}^{ d} \left |(1-\varepsilon t/r)_{+}^{n-1}[f(Q- \varepsilon t \mathbf{e})-f(Q)]\right|^{p}r^{n-1}dr \right)^{\frac{1 }{p}} $$ $$+\left(\int\limits_{\omega_{\varepsilon} }d\chi\int\limits_{0}^{ d} \left|f(Q) [1-(1-\varepsilon t/r)_{+}^{n-1}]\right|^{p}r^{n-1}dr \right)^{\frac{1 }{p}}=I_{11}+I_{12}.$$ By virtue of continuity in the mean in $L_{p}(\Omega),$ we have $\forall t>0:\, I_{11}\rightarrow 0,\;\varepsilon\downarrow 0.$ Consider $I_{12}$ and let us define the function $$h_{1}(\varepsilon,t,r):=\left|f(Q) [1-(1-\varepsilon t/r)_{+}^{n-1}]\right|.$$ Obviously, the following relations hold almost everywhere in $\Omega$ $$\forall
t>0,\,h_{1}(\varepsilon,t,r) \leq |f(Q)|,\;h_{1}(\varepsilon,t,r)\rightarrow 0,\;\varepsilon \downarrow0.$$ Applying the Lebesgue dominated convergence theorem, we get $I_{12}\rightarrow 0,\;\varepsilon\downarrow 0.$ This implies that $$\label{20} \forall t>0,\,\lim\limits_{\varepsilon\rightarrow 0} h(\varepsilon,t)=0.$$ Taking into account , and applying the Lebesgue dominated convergence theorem again, we obtain $$I_{1}\rightarrow 0,\;\;\varepsilon\downarrow0 .$$ Consider $I_{2};$ using the Minkowski inequality, we get $$I^{\frac{1 }{p}}_{2} \leq \frac{\sin\alpha\pi}{\pi\varepsilon^{\alpha}}\left( \int\limits_{\omega_{\varepsilon} }d\chi\int\limits_{0}^{\varepsilon } \left| \int\limits_{0}^{r }\frac{f (Q-t\mathbf{e})}{t^{1-\alpha} }\left(\frac{r-t}{r} \right)^{n-1} dt\right|^{p}r^{n-1}dr\right)^{\frac{1 }{p}}$$ $$+\left(\int\limits_{\omega_{\varepsilon} }d\chi\int\limits_{0}^{\varepsilon} \left| f (Q) \right|^{p}r^{n-1}dr \right)^{\frac{1 }{p}}=I_{21} +I_{22}.$$ Applying the generalized Minkowski inequality, we obtain $$I_{21}\frac{\pi}{\sin\alpha\pi}=\frac{1}{\varepsilon^{\alpha}}\left(\int\limits_{\omega_{\varepsilon} }d\chi \int\limits_{0}^{\varepsilon } \left| \int\limits_{0}^{r }\frac{f (Q-t\mathbf{e})}{t^{1-\alpha} }\left(\frac{r-t}{r} \right)^{n-1} \!\!\! dt\right|^{p}r^{n-1}\! dr \right)^{\frac{1 }{p}}$$ $$\leq\frac{1}{\varepsilon^{\alpha}}\left\{\int\limits_{\omega_{\varepsilon} }\!\!\left[\int\limits_{0}^{\varepsilon }\!\!t ^{\alpha-1 }\!\!
\left( \int\limits_{t}^{\varepsilon }\!\!|f (Q -t \mathbf{e})|^{p}\!\left(\frac{r-t}{r} \right)^{\!\!\!(p-1)(n-1)}\!\!\!(r-t)^{n-1} \!dr \right)^{\frac{1 }{p}}\!\!dt\right]^{p}\!\!d\chi \right\}^{\frac{1 }{p}}$$ $$\leq\frac{1}{\varepsilon^{\alpha}}\left\{\int\limits_{\omega_{\varepsilon} }\left[\int\limits_{0}^{\varepsilon } t ^{\alpha-1 } \left( \int\limits_{t}^{\varepsilon } \left|f (P+[r-t]\mathbf{e})\right|^{p} (r-t)^{n-1}dr \right)^{\frac{1 }{p}}\!\!dt\right]^{p}\!\!d\chi \right\}^{\frac{1 }{p}}$$ $$\leq\frac{1}{\varepsilon^{\alpha}}\left\{\int\limits_{\omega_{\varepsilon} }\left[\int\limits_{0}^{\varepsilon } t ^{\alpha-1 } \left( \int\limits_{0}^{\varepsilon } |f (P +r \mathbf{e})|^{p} r^{n-1}dr \right)^{\frac{1 }{p}}\!\!dt\right]^{p}\!\!d\chi \right\}^{\frac{1 }{p}}\!\!=\frac{1}{\alpha}\| f\| _{L_{p}( \Delta_{\varepsilon})},\;$$ $$\Delta_{\varepsilon} :=\{Q\in\Omega_{\varepsilon},\,r<\varepsilon \} .$$ Note that ${\rm mes}\, \Delta_{\varepsilon}\rightarrow 0,\,\varepsilon\downarrow0,$ hence $I_{21},I_{22}\rightarrow 0,\,\varepsilon\downarrow0 .$ It follows that $I_{2 }\rightarrow 0,\,\varepsilon\downarrow0.$ In the same way, we obtain $I_{3 }\rightarrow 0,\,\varepsilon\downarrow 0.$ Since we have proved that $I_{1},I_{2},I_{3}\rightarrow 0,\,\varepsilon\downarrow0,$ relation holds. This completes the proof corresponding to the left-side case. The proof corresponding to the right-side case is completely analogous. \[T3\] Suppose $f=\mathfrak{I}^{\alpha}_{0+} \psi$ or $f= \mathfrak{I} ^{\alpha}_{d -} \psi ,\;\psi\in L_{p}(\Omega),\;1\leq p<\infty;$ then $ \, \mathfrak{D }^{\alpha}_{0+}f =\psi $ or $ \mathfrak{D} ^{\alpha}_{d-}f =\psi $ respectively.
Consider $$r^{n-1}f(Q)-(r-\tau)^{n-1}f(Q-\tau\mathbf{e})$$ $$=\frac{1}{\Gamma(\alpha)} \int\limits_{0}^{ r}\frac{\psi (Q-t\mathbf{e})}{t^{1-\alpha}}\left( r-t \right)^{n-1}dt-\frac{1}{\Gamma(\alpha)} \int\limits_{\tau}^{ r}\frac{\psi (Q-t\mathbf{e})}{(t-\tau)^{1-\alpha}}\left( r-t \right)^{n-1}dt$$ $$=\tau^{\alpha-1} \int\limits_{0}^{ r} \psi (Q-t\mathbf{e}) k\left(\frac{t}{\tau}\right)(r-t)^{n-1}dt,\;k(t)= \frac{1}{\Gamma(\alpha)}\left\{\begin{array}{cc}t^{\alpha-1},\;0<t<1;\\[0,25cm] t^{\alpha-1}-(t-1)^{\alpha-1},\;t>1.\end{array}\right.$$ Hence in the case $(\varepsilon\leq r\leq d),$ we have $$(\psi^{+}_{ \varepsilon }f)(Q)=\int\limits_{ \varepsilon }^{ r }\frac{r^{n-1} f(Q)-(r-\tau)^{n-1}f(Q-\tau\mathbf{e})}{ r^{n-1}\tau ^{\alpha +1}} d\tau$$ $$=\int\limits_{ \varepsilon }^{ r } \tau^{-2} d\tau\int\limits_{0}^{ r} \psi (Q-t\mathbf{e}) k\left(\frac{t}{\tau}\right)\left( 1-t/r \right)^{n-1}dt$$ $$=\int\limits_{ 0 }^{ r }\psi (Q-t\mathbf{e}) \left( 1-t/r \right)^{n-1} dt\int\limits_{\varepsilon}^{ r} k\left(\frac{t}{\tau}\right) \tau^{-2}d \tau$$ $$=\int\limits_{ 0 }^{ r }\psi (Q-t\mathbf{e}) \left( 1-t/r \right)^{n-1}t^{-1} dt\int\limits_{ t/ r }^{t/\varepsilon} k (s )ds.$$ Applying formula (6.12) [@firstab_lit:samko1987 p.106], we get $$(\psi^{+}_{ \varepsilon }f)(Q)\cdot\frac{\alpha}{\Gamma(1-\alpha)}=\int\limits_{ 0 }^{ r }\psi (Q-t\mathbf{e}) \left( 1-t/r \right)^{n-1}\left[ \frac{1}{\varepsilon}\mathcal{K}\left(\frac{t}{\varepsilon}\right) - \frac{1}{ r}\mathcal{K}\left(\frac{t}{ r}\right) \right]dt.$$ Since in accordance with , we have $$\mathcal{K}\left(\frac{t}{ r}\right)=\left[\Gamma(1-\alpha)\Gamma(\alpha) \right]^{-1} \left(\frac{t}{ r}\right)^{\alpha-1}\!\!\!,$$ then $$(\psi^{+}_{ \varepsilon }f)(Q)\cdot\frac{\alpha}{\Gamma(1-\alpha)}=\int\limits_{ 0 }^{ r /\varepsilon }\mathcal{K} ( t ) \psi (Q-\varepsilon t\mathbf{e})\left( 1-\varepsilon t/r \right)^{n-1} dt-\frac{f(Q)}{\Gamma(1-\alpha) r ^{ \alpha}}.$$ Taking into account ,, and that the function 
$\psi(Q)$ has the zero extension outside of $\bar{\Omega},$ we obtain $$( \mathfrak{D} ^{\alpha}_{0+,\varepsilon}f)(Q)-\psi(Q) =\int\limits_{ 0 }^{\infty} \mathcal{K} ( t ) \left[\psi (Q - \varepsilon t \mathbf{e})(1-\varepsilon t/r)_{+}^{n-1}-\psi (Q) \right]dt,\;\varepsilon\leq r\leq d .$$ Consider the case $ ( 0\leq r<\varepsilon).$ In accordance with , we have $$( \mathfrak{D} ^{\alpha}_{0+,\varepsilon}f)(Q)-\psi(Q)=\frac{f(Q)}{\varepsilon^{\alpha}\Gamma(1-\alpha)}-\psi(Q).$$ Using the generalized Minkowski inequality, we get $$\|( \mathfrak{D} ^{\alpha}_{0+,\varepsilon}f)(Q)-\psi(Q)\|_{L_{p}(\Omega)}\leq\!\! \int\limits_{ 0 }^{\infty} \!\! \mathcal{K}(t) \| \psi(Q - \varepsilon t \mathbf{e})(1-\varepsilon t/r)_{+}^{n-1}-\psi (Q)\|_{L_{p}(\Omega)}dt$$ $$+\frac{1}{ \Gamma(1-\alpha)\varepsilon^{\alpha}}\|f\|_{L_{p}(\Delta'_{\varepsilon})}+\|\psi\|_{L_{p}(\Delta'_{\varepsilon})},\;\Delta'_{\varepsilon}=\Delta_{\varepsilon}\cup \tilde{\Omega}_{\varepsilon},$$ here we use the notation introduced in the proof of Theorem \[T2\]. Arguing as above (see Theorem \[T2\]), we see that all three summands of the right side of the previous inequality tend to zero as $\varepsilon\downarrow 0.$ \[T4\] Suppose $\rho\in {\rm Lip}\,\lambda,\;\alpha<\lambda\leq 1,\;f\in H_{0}^{1}(\Omega) ;$ then $\rho f\in \mathfrak{I} ^{\alpha}_{\,0 +}(L_{2} ) \cap \mathfrak{I} ^{\alpha}_{d -}(L_{2} ).$ We provide a proof only for the left-side case; the proof corresponding to the right-side case is completely analogous.
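The limit arguments above rest on the properties of Samko's kernel $\mathcal{K}$ stated in : positivity and unit total mass. As an informal check (outside the proofs; helper names are ours), the Python sketch below evaluates $\int_{0}^{\infty}\mathcal{K}(t)\,dt$: the piece over $(0,1)$ equals $\frac{\sin\alpha\pi}{\pi}\cdot\frac{1}{\alpha}$ exactly, and the tail over $(1,\infty)$ is mapped onto $(0,1)$ by $t=w^{-p},\ p=1/(1-\alpha),$ which produces a bounded integrand.

```python
import math

def K(t, alpha):
    # Samko's kernel: (sin(pi*alpha)/pi) * (t_+^alpha - (t-1)_+^alpha) / t.
    if t <= 0.0:
        return 0.0
    return math.sin(math.pi * alpha) / math.pi * (t ** alpha - max(t - 1.0, 0.0) ** alpha) / t

def K_total_mass(alpha, m=100000):
    # int_0^1 contributes (sin(pi*alpha)/pi) * 1/alpha exactly; the tail
    # int_1^inf is mapped onto (0,1) by t = w^(-p), p = 1/(1-alpha), giving
    # the bounded integrand p * (1 - (1 - w^p)^alpha) / w^p.
    p = 1.0 / (1.0 - alpha)
    h = 1.0 / m
    tail = 0.0
    for k in range(m):
        w = (k + 0.5) * h
        u = w ** p                      # u = 1/t in (0,1)
        tail += (1.0 - (1.0 - u) ** alpha) / u
    tail *= h * p
    return math.sin(math.pi * alpha) / math.pi * (1.0 / alpha + tail)
```

The total mass comes out equal to $1$ for any $\alpha\in(0,1),$ in agreement with the normalization in .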
First, assume that $f\in C_{0}^{\infty}(\Omega).$ Using the notation of the proof of Theorem \[T2\], we have $$\label{21} \|\psi^{+}_{ \varepsilon_1}f - \psi^{+}_{ \varepsilon_2}f \|_{L_{2}(\Omega )}\leq\!\!\|\psi^{+}_{ \varepsilon_1}f - \psi^{+}_{ \varepsilon_2}f \|_{L_{2}( \Omega_{\varepsilon_{1}})}+\|\psi^{+}_{ \varepsilon_1}f - \psi^{+}_{ \varepsilon_2}f \|_{L_{2}(\tilde{\Omega}_{ \varepsilon_{1}})},$$ where $\varepsilon_{1}>\varepsilon_{2}>0.$ We argue as follows $$\|\psi^{+}_{ \varepsilon_1}f - \psi^{+}_{ \varepsilon_2}f \|_{L_{2}(\Omega_{\varepsilon_{1}})}\leq \left(\int\limits_{\omega_{\varepsilon_{1}}}d\chi\int\limits_{\varepsilon_{1}}^{d }\left|\int\limits_{r-\varepsilon_{1}}^{r-\varepsilon_{2}}\frac{(\rho f)(Q)r^{n-1}- (\rho f)(T)t^{n-1}}{r^{n-1}(r-t)^{\alpha +1}} dt \right|^{2}r^{n-1} dr \right)^{\frac{1}{2}}$$ $$+\!\!\left(\int\limits_{\omega_{\varepsilon_{1}}}d\chi\int\limits_{ \varepsilon_{2}}^{ \varepsilon_{1}}\!\!\left|\int\limits_{0}^{r- \varepsilon_{1}}\frac{(\rho f)(Q)r^{n-1} }{r^{n-1}(r-t)^{\alpha +1}} dt-\!\! \int\limits_{ 0}^{r-\varepsilon_{2}}\frac{(\rho f)(Q)r^{n-1}-(\rho f)(T)t^{n-1}}{r^{n-1}(r-t)^{\alpha +1}} dt\right|^{2}r^{n-1} dr \right)^{\frac{1}{2}}$$ $$+\left(\int\limits_{\omega_{\varepsilon_{1}}}d\chi\int\limits_{0}^{\varepsilon_{2} }\left|\int\limits_{ 0}^{r-\varepsilon_{1}} \frac{(\rho f)(Q)r^{n-1} }{r^{n-1}(r-t)^{\alpha +1}} dt- \int\limits_{0}^{r- \varepsilon_{2}}\frac{(\rho f)(Q)r^{n-1} }{r^{n-1}(r-t)^{\alpha +1}} dt\right|^{2}r^{n-1} dr \right)^{\frac{1}{2}}$$ $$=I_{1}+I_{2}+I_{3}.$$ Since $f\in C_{0}^{\infty}(\Omega),$ for sufficiently small $\varepsilon_{1}>0$ we have $f(Q)=0,\,r<\varepsilon_{1}.$ This implies that $I_{2}=I_{3}=0$ and that the second summand of the right-hand side of inequality equals zero.
Making the change of variable in $I_{1}$ and then using the generalized Minkowski inequality, we get $$I_{1} =\left(\int\limits_{\omega_{\varepsilon_{1}}}d\chi\int\limits_{\varepsilon_{1}}^{d}\left|\int\limits_{ \varepsilon_{1}}^{ \varepsilon_{2}} \frac{(\rho f)(Q)r^{n-1}- (\rho f )(Q-\mathbf{e} t )(r-t)^{n-1}}{ r^{n-1}t ^{\alpha +1}} dt\right|^{2}r^{n-1} dr \right)^{\frac{1}{2}}$$ $$\leq\int\limits_{ \varepsilon_{2}}^{ \varepsilon_{1}}t^{-\alpha -1}\!\!\!\left(\int\limits_{\omega_{\varepsilon_{1}}}d \chi\int\limits_{\varepsilon_{1}}^{d}\left| (\rho f )(Q)-(1-t/r)^{n-1}(\rho f )(Q-\mathbf{e}t) \right|^{2} r^{n-1} dr \right)^{\frac{1}{2}}\!\! dt$$ $$\leq\int\limits_{ \varepsilon_{2}}^{ \varepsilon_{1}}t^{-\alpha -1}\!\!\!\left(\int\limits_{\omega_{\varepsilon_{1}}}d \chi\int\limits_{\varepsilon_{1}}^{d}\left| (\rho f )(Q)- (\rho f )(Q-\mathbf{e}t) \right|^{2} r^{n-1} dr \right)^{\frac{1}{2}} \!\!dt$$ $$+\int\limits_{ \varepsilon_{2}}^{ \varepsilon_{1}}t^{-\alpha -1}\left(\int\limits_{\omega_{\varepsilon_{1}}}d \chi\int\limits_{\varepsilon_{1}}^{d}\left[ 1-( 1-t/r )^{n-1}\right]\left| (\rho f )(Q-\mathbf{e}t) \right|^{2} r^{n-1} dr \right)^{\frac{1}{2}}\!\!dt$$ $$\leq C_{1}\!\!\int\limits_{ \varepsilon_{2}}^{ \varepsilon_{1}}t^{\lambda-\alpha-1 }d t+\int\limits_{ \varepsilon_{2}}^{ \varepsilon_{1}}t^{-\alpha }\left(\int\limits_{\omega_{\varepsilon_{1}}}d \chi\int\limits_{\varepsilon_{1}}^{d} \left|\frac{1}{r} \sum\limits_{i=0}^{n-2}\left( \frac{t}{r}\right)^{i}(\rho f )(Q-\mathbf{e}t) \right|^{2} r^{n-1} dr \right)^{\!\!\frac{1}{2}}\!\!dt.$$ Since $f$ has compact support in $\Omega,$ there exists a constant $\delta>0$ such that $ f (Q-\mathbf{e}t )=0,\;r<\delta.$ In accordance with the above reasoning, we have $$I_{1} \leq C_{1}\frac{ \varepsilon^{ \lambda-\alpha}_{1}-\varepsilon^{\lambda-\alpha}_{2} }{\!\!\!\!\lambda-\alpha }+ \|f\|_{L_{2}(\Omega)}\frac{ \varepsilon^{ 1- \alpha}_{1}-\varepsilon^{1- \alpha}_{2} }{\!\!\!\delta(1-\alpha) } (n-1) .$$ Applying Theorem
\[T1\], we complete the proof for the case $ ( f\in C_{0}^{\infty}(\Omega)).$ Now assume that $f\in H^1_{0}(\Omega);$ then there exists a sequence $\{f_{n}\} \subset C_{0}^{\infty}(\Omega),\; f_{n}\stackrel{ H^1_{0}}{\longrightarrow} f.$ It is easy to prove that $ \rho f_{n}\stackrel{ L_{2}}{\longrightarrow} \rho f.$ In accordance with the fact proved above, we have $\rho f_{n}= \mathfrak{I} ^{\alpha}_{0+}\varphi_{n},\;\{\varphi_{n}\}\subset L_{2}(\Omega),$ therefore $$\label{22} \mathfrak{I} ^{\alpha}_{0+}\varphi_{n}\stackrel{L_{2} }{\longrightarrow} \rho f.$$ To conclude the proof, it is sufficient to show that $\varphi_{n}\stackrel{L_{2} }{\longrightarrow}\varphi\in L_{2}(\Omega).$ Note that by virtue of Theorem \[T3\] we have $\mathfrak{ D} ^{\alpha}_{0+}\rho f_{n}=\varphi_{n}.$ Let $ c_{n,m}:=f_{n+m}-f_{n};$ we have $$\|\varphi_{n+m}-\varphi_{n}\|_{L_{2}(\Omega)}\leq\frac{\alpha}{\Gamma(1-\alpha)}\left(\int\limits_{\Omega} \left|\int\limits_{0 }^{r}\frac{(\rho c_{n,m})(Q)r^{n-1}- (\rho c_{n,m})(T)t^{n-1}}{r^{n-1}( r-t)^{\alpha +1}} dt\right|^{2} dQ \right)^{\frac{1}{2}}$$ $$+\frac{1}{\Gamma(1-\alpha)}\left(\int\limits_{\Omega}\left| \frac{(\rho c_{n,m})(Q) }{ r ^{\alpha }} \right|^{2}dQ \right)^{\frac{1}{2}}=I_3 +I_4.$$ Consider $I_{3}.$ It can be shown in the usual way that $$\frac{\Gamma(1-\alpha)}{\alpha} I_3 \leq \left\{\int\limits_{\Omega}\left|\int\limits_{0 }^{ r}\frac{ (\rho c_{n,m}) (Q)- (\rho c_{n,m}) (Q-\mathbf{e}t) }{ t^{\alpha +1}} dt\right|^{2} dQ \right\}^{\frac{1}{2}}$$ $$+ \left\{\int\limits_{\Omega} \left|\int\limits_{0 }^{ r}\frac{(\rho c_{n,m})(Q-\mathbf{e}t)[1-(1- t/r)^{n-1} ] }{ t^{1+\alpha }} dt\right|^{2} dQ \right\}^{\frac{1}{2}}= I_{01}+I_{02};$$ $$I_{01}\leq \sup\limits_{Q\in \Omega}|\rho(Q)|\left\{\int\limits_{\Omega}\left(\int\limits_{0 }^{ r}\frac{ | c_{n,m} (Q)- c_{n,m} (Q-\mathbf{e}t) |}{ t^{\alpha +1}} dt\right)^{2} dQ \right\}^{\frac{1}{2}}$$ $$+\left\{\int\limits_{\Omega}\left|\int\limits_{0 }^{ r}\frac{ c_{n,m} (Q-\mathbf{e}t)
[ \rho (Q)- \rho (Q-\mathbf{e}t)] }{ t^{\alpha +1}} dt\right|^{2} dQ \right\}^{\frac{1}{2}}= I_{11}+I_{21} .$$ Applying the generalized Minkowski inequality, then representing the function under the inner integral by the directional derivative, we get $$I_{11} \leq C_{1}\int\limits_{ 0}^{ \mathfrak{d}}t^{-\alpha -1}\left(\int\limits_{\Omega} \left| c_{n,m}(Q)-c_{n,m}(Q-\mathbf{e}t) \right|^{2} dQ \right)^{\frac{1}{2}} dt$$ $$=C_{1}\int\limits_{ 0}^{ \mathfrak{d}}t^{-\alpha -1}\left(\int\limits_{\Omega} \left| \int\limits_{0}^{t} c'_{n,m} (Q-\mathbf{e}\tau) d\tau\right|^{2} dQ\right)^{\frac{1}{2}} dt.$$ Using the Cauchy-Schwarz inequality and the Fubini theorem, we have $$I_{11} \leq C_{1}\int\limits_{ 0}^{ \mathfrak{d}}t^{-\alpha -1}\left(\int\limits_{\Omega}dQ \int\limits_{0}^{t} \left|c'_{n,m} (Q-\mathbf{e}\tau)\right|^{2}d\tau \int\limits_{0}^{t}d\tau \right)^{\frac{1}{2}} dt$$ $$=C_{1}\int\limits_{ 0}^{ \mathfrak{d}}t^{-\alpha -1/2 }\left( \int\limits_{0}^{t}d\tau \int\limits_{\Omega}\left|c'_{n,m} (Q-\mathbf{e}\tau)\right|^{2} dQ \right)^{\frac{1}{2}} dt\leq C_{1} \frac{ \mathfrak{d}^{1-\alpha} }{1-\alpha } \, \|c'_{n,m} \|_{L_{2}(\Omega)}.$$ Arguing as above and using the Hölder property of the function $\rho,$ we see that $$I_{21}\leq M \int\limits_{ 0}^{ \mathfrak{d}}t^{\lambda-\alpha -1}\left(\int\limits_{\Omega} \left| c_{n,m}(Q-\mathbf{e}t) \right|^{2} dQ \right)^{\frac{1}{2}} dt\leq M\frac{ \mathfrak{d}^{\lambda-\alpha} }{\lambda-\alpha } \, \|c _{n,m} \|_{L_{2}(\Omega)}.$$ It can be shown in the usual way that $$I_{02}\leq C_{1}\left\{\int\limits_{\Omega} \left|\int\limits_{0 }^{ r} | c_{n,m}(Q-\mathbf{e}t)| \sum\limits_{i=0}^{n-2}\left(\frac{t}{r}\right)^{i} r^{-1} t^{ -\alpha } dt\right|^{2} dQ \right\}^{\frac{1}{2}}$$ $$\leq C_{2}\left\{\int\limits_{\Omega} \left(\int\limits_{0 }^{ r} t^{ -\alpha }dt \int\limits_{t}^{r}\left| c'_{n,m}(Q-\mathbf{e}\tau)\right| d\tau \right)^{2}r^{-2} dQ \right\}^{\frac{1}{2}}$$ $$= C_{2}\left\{\int\limits_{\Omega}
\left(\int\limits_{0 }^{ r}\left| c'_{n,m}(Q-\mathbf{e}\tau)\right| d\tau \int\limits_{0}^{\tau}t^{ -\alpha }dt \right)^{2}r^{-2} dQ \right\}^{\frac{1}{2}}$$ $$\leq\frac{C_{2}}{1-\alpha}\left\{\int\limits_{\Omega} \left(\int\limits_{0 }^{ r} \left| c'_{n,m}(Q-\mathbf{e}\tau)\right| \tau^{ -\alpha} d\tau \right)^{2} dQ \right\}^{\frac{1}{2}}.$$ Applying the generalized Minkowski inequality, we have $$I_{02}\leq C_{3}\int\limits_{0 }^{ \mathfrak{d}} \tau^{ -\alpha} d\tau \left(\int\limits_{\Omega} \left| c'_{n,m}(Q-\mathbf{e}\tau)\right|^{2} dQ \right)^{\frac{1}{2}} \leq C_{3}\frac{\mathfrak{d}^{1-\alpha}}{1-\alpha} \|c'_{n,m}\|_{L_{2}(\Omega)}.$$ Consider $I_{4};$ we have $$I_4 \leq\frac{C_{ 1}}{\Gamma(1-\alpha)}\left(\int\limits_{\Omega} \left| c_{n,m}(Q)\right|^{2} r ^{-2\alpha } dQ\right)^{ \frac{1}{2}}$$ $$=\frac{C_{ 1}}{\Gamma(1-\alpha)} \left(\int\limits_{\Omega} r ^{-2\alpha } \left|\int\limits_{0}^{r} c'_{n,m}(Q-\mathbf{e}t)dt\right|^{2}dQ \right)^{\frac{1}{2}}$$ $$\leq\frac{C_{ 1}}{\Gamma(1-\alpha)} \left(\int\limits_{\Omega} \left|\int\limits_{0}^{r} c'_{n,m}(Q-\mathbf{e}t)t^{-\alpha}dt\right|^{2}dQ \right)^{\frac{1}{2}}.$$ Using the generalized Minkowski inequality, then applying the trivial estimates, we get $$I_4\leq C_{ 4} \left\{\int\limits_{\omega}\left[\int\limits_{0}^{d} t^{-\alpha} dt \left(\int\limits_{t}^{d}|c'_{n,m}(Q-\mathbf{e}t)|^{2} r^{ n-1 }dr\right)^{\frac{1}{2}} \right]^{2}d\chi \right\}^{\frac{1}{2}}$$ $$\leq C_{ 4} \left\{\int\limits_{\omega}\left[\int\limits_{0}^{\mathfrak{d}} t^{-\alpha} dt \left(\int\limits_{0}^{d}|c'_{n,m}(Q-\mathbf{e}t)|^{2} r^{ n-1 }dr\right)^{\frac{1}{2}} \right]^{2}d\chi \right\}^{\frac{1}{2}}$$ $$= C_{ 4}\int\limits_{0}^{\mathfrak{d}} t^{-\alpha} dt \left(\int\limits_{\omega}d\chi\int\limits_{0}^{d}|c'_{n,m}(Q-\mathbf{e}t)|^{2} r^{ n-1 }dr\right)^{\frac{1}{2}} \leq C_{ 4}\frac{\mathfrak{d}^{1-\alpha}}{1-\alpha}\|c'_{n,m}\|_{L_{2}(\Omega)}.$$ Taking into account that the sequences $\{f_{n}\},\{f'_{n}\}$ are
fundamental, we obtain $I_{3},I_{4}\rightarrow 0.$ Hence the sequence $\{\varphi_{n}\} $ is fundamental and $\varphi_{n}\stackrel{L_{2} }{\longrightarrow}\varphi\in L_{2}(\Omega).$ Note that by virtue of Theorem \[T1\] the directional fractional integral operator is bounded on the space $L_{2}(\Omega).$ Hence $$\mathfrak{I }^{\alpha}_{0 +}\varphi_{n}\stackrel{L_{2} }{\longrightarrow} \mathfrak{I} ^{\alpha}_{0+}\varphi.$$ Combining this fact with , we have $\rho f= \mathfrak{I} ^{\alpha}_{0+}\varphi.$ \[L1\] The operator $\mathfrak{D}^{ \alpha }$ is a restriction of the operator $\mathfrak{D}^{ \alpha }_{0+}.$ We need to show that the following equality holds $$\label{23} (\mathfrak{D}^{ \alpha }f)(Q)= \left(\mathfrak{D}^{ \alpha }_{0+} f\right)(Q), \,f\in \stackrel{\circ}{W_p ^l} (\Omega).$$ It can be shown in the usual way that $$\mathfrak{D}^{ \alpha }v=\frac{\alpha}{\Gamma(1-\alpha)}\int\limits_{0}^{r} \frac{v(Q)-v(T)}{(r- t)^{\alpha+1}} \left(\frac{t}{r}\right) ^{n-1} dt+ C_{n}^{(\alpha)} v(Q) r ^{ -\alpha}$$ $$= \frac{\alpha}{\Gamma(1-\alpha)}\int\limits_{0}^{r} \frac{r^{n-1}v(Q)-t^{n-1}v(T)}{r^{n-1}(r- t)^{\alpha+1}}dt-\frac{\alpha\,v(Q)}{\Gamma(1-\alpha)}\int\limits_{0}^{r} \frac{ r^{n-1} -t^{n-1} }{r^{n-1}(r- t)^{\alpha+1}} dt$$ $$+C_{n}^{(\alpha)} v(Q) r ^{ -\alpha}= ( \mathfrak{D} ^{\alpha}_{0+}v)(Q)- \frac{\alpha \, v(Q) }{\Gamma(1-\alpha)} \sum\limits_{i=0}^{n-2} r ^{ -1-i}\int\limits_{0}^{r} \frac{t^{i}}{(r- t)^{\alpha }} dt$$ $$\label{24} +C_{n}^{(\alpha)} v(Q) r ^{ -\alpha}-\frac{v(Q) r ^{ -\alpha}}{\Gamma(1-\alpha)} = ( \mathfrak{D} ^{\alpha}_{0+}v)(Q)- I_1 +I_2 -I_3.$$ Using the formula of the fractional integral of a power function (2.44) [@firstab_lit:samko1987 p.47], we have $$I_1 = \frac{\alpha\, v(Q)\,r ^{-1 } }{\Gamma(1-\alpha)}\int\limits_{0}^{r} \frac{dt}{(r- t)^{\alpha }} +\frac{\alpha\, v(Q) }{\Gamma(1-\alpha)} \sum\limits_{i=1}^{n-2} r ^{-1-i}\int\limits_{0}^{r} \frac{t^{i}}{(r- t)^{\alpha }} dt$$ $$= v(Q)\frac{\alpha }{ \Gamma(2-\alpha)}r ^{
-\alpha } + v(Q) \alpha \sum\limits_{i=1}^{n-2} r ^{-1-i} (I^{1-\alpha}_{0+}t^{i})(r)$$ $$=v(Q)\frac{\alpha }{ \Gamma(2-\alpha)}r ^{ -\alpha } + v(Q) \alpha \sum\limits_{i=1}^{n-2} r ^{ -\alpha}\frac{i!}{\Gamma(2-\alpha+i)}.$$ Hence $$I_1 +I_3 =\frac{v(Q)r ^{ -\alpha }}{ \Gamma(2-\alpha)} + v(Q)r ^{ -\alpha } \alpha \sum\limits_{i=1}^{n-2} \frac{i!}{\Gamma(2-\alpha+i)}= \frac{2 v(Q)r ^{ -\alpha }}{ \Gamma(3-\alpha)}$$ $$+ v(Q)r ^{ -\alpha } \alpha \sum\limits_{i=2}^{n-2} \frac{i!}{\Gamma(2-\alpha+i)}= \frac{3!v(Q)r ^{ -\alpha }}{ \Gamma(4-\alpha)} + v(Q)r ^{ -\alpha } \alpha \sum\limits_{i=3}^{n-2} \frac{i!}{\Gamma(2-\alpha+i)}$$ $$\label{25} =\frac{(n-2)!v(Q)r ^{ -\alpha }}{ \Gamma(n-1-\alpha)} + v(Q)r ^{ -\alpha } \alpha \frac{(n-2)!}{\Gamma(n-\alpha )}=C_{n}^{(\alpha)}v(Q)r ^{ -\alpha }.$$ Therefore $I_{2}-I_{1}-I_{3}=0,$ and equality follows. Let us prove that the considered operators do not coincide with each other. For this purpose consider the function $f= \mathfrak{I}^{\alpha}_{0+} \varphi,\;\varphi \in L_{p}(\Omega) ,$ then in accordance with Theorem \[T3\], we have $ \mathfrak{D} _{0+} ^{ \alpha } \mathfrak{I}^{\alpha}_{0+} \varphi =\varphi.$ Hence $\mathfrak{I}^{\alpha}_{0+} \left( L_{p}\right)\subset\mathrm{D}\left(\mathfrak{D}^{ \alpha }_{0+}\right).$ Now it is sufficient to notice that $$\exists f\in \mathfrak{I}^{\alpha}_{0+} \left( L_{p}\right) ,\;f(\Lambda)\neq 0,$$ where $\Lambda \subset\partial\Omega,\;{\rm mes}\, \Lambda\neq 0.$ On the other hand, we know that $$f(\partial \Omega)= 0\;a.e.,\;\forall f\in \mathrm{D}\left(\mathfrak{D}^{ \alpha } \right) .$$ \[L2\] The following identity holds $$(\mathfrak{D} _{0+} ^{ \alpha })^{*} = \mathfrak{D }^{\alpha}_{d-},$$ where the limits are understood with respect to the $L_{2}(\Omega)$ norm.
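Simplifying the last line of the telescoping chain above yields the closed form $C_{n}^{(\alpha)}=\Gamma(n)/\Gamma(n-\alpha).$ As an informal check (outside the proof; helper names are ours), the sketch below evaluates both sides of the chain numerically.

```python
from math import gamma, factorial

def telescoped_sum(alpha, n):
    # Left-hand side of the telescoping chain in the proof of Lemma L1:
    #   1/Gamma(2-alpha) + alpha * sum_{i=1}^{n-2} i!/Gamma(2-alpha+i).
    s = 1.0 / gamma(2.0 - alpha)
    for i in range(1, n - 1):
        s += alpha * factorial(i) / gamma(2.0 - alpha + i)
    return s

def C_n_alpha(n, alpha):
    # Constant read off from the last line of the chain:
    #   (n-2)!/Gamma(n-1-alpha) + alpha*(n-2)!/Gamma(n-alpha),
    # which simplifies to Gamma(n)/Gamma(n-alpha).
    return factorial(n - 2) / gamma(n - 1.0 - alpha) + alpha * factorial(n - 2) / gamma(n - alpha)
```

The agreement for all $n\geq 2$ and $\alpha\in(0,1)$ reflects the one-step identity $\frac{(k-1)!}{\Gamma(k-\alpha)}+\frac{\alpha\,k!}{\Gamma(k+1-\alpha)}=\frac{k!}{\Gamma(k+1-\alpha)}\cdot\frac{(k-\alpha)+\alpha}{k}\cdot k=\frac{k!}{\Gamma(k+1-\alpha)}\,k$ used repeatedly in the chain.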
Let us show that the following relation is true $$\label{26} (\mathfrak{D}^{ \alpha }_{0+} f , g )_{L_{2}(\Omega)} =(f , \mathfrak{D }^{\alpha}_{d-} g )_{L_{2}(\Omega)},$$ $$f\in \mathfrak{I}^{\alpha}_{0+} \left( L_{2}\right),\,g\in \mathfrak{I } ^{\alpha}_{d-}\left( L_{2}\right) .$$ Note that by virtue of Theorem \[T3\], we have $\,\mathfrak{D}^{ \alpha }_{0+} \mathfrak{I}^{\alpha}_{0+} \varphi =\varphi ,\, \mathfrak{D }^{\alpha}_{d-} \mathfrak{I } ^{\alpha}_{d-} \psi =\psi ,$ where $\varphi,\psi\in L_{2}(\Omega).$ Hence, using Theorem \[T1\], we see that the left-hand and right-hand sides of are finite. Therefore, using the Fubini theorem, we have $$(\mathfrak{D}^{ \alpha }_{0+} f , g )_{L_{2}(\Omega)}=\int\limits_{\omega}d\chi \int\limits_{0}^{d}\varphi(Q) \overline{\left(\mathfrak{I } ^{\alpha}_{d-}\psi\right)(Q)}r^{n-1}dr$$ $$=\frac{1}{\Gamma(\alpha)}\int\limits_{\omega}d\chi\int\limits_{0}^{d}\varphi(Q)r^{n-1}dr\int\limits_{r}^{d}\frac{\overline{\psi(T)}}{(t-r)^{1-\alpha}}\,dt$$ $$\label{27} =\frac{1}{\Gamma(\alpha)}\int\limits_{\omega}d\chi\int\limits_{0}^{d}\overline{\psi(T)}t^{n-1}dt\int\limits_{0}^{t}\frac{\varphi(Q)}{(t-r)^{1-\alpha}}\left( \frac{r}{t}\right)^{n-1}dr $$ $$ =\int\limits_{\Omega} \left(\mathfrak{I} ^{\alpha}_{0+} \varphi\right)(Q)\,\overline{\psi(Q)} \, dQ=(f , \mathfrak{D }^{\alpha}_{d-} g )_{L_{2}(\Omega)}.$$ Thus relation is proved.
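The Fubini computation above can be illustrated in the one-dimensional case $n=1,$ $\Omega=[0,1],$ with $\varphi(r)=r$ and $\psi\equiv 1$: both sides of the pairing $(\mathfrak{I}^{\alpha}_{0+}\varphi,\psi)=(\varphi,\mathfrak{I}^{\alpha}_{d-}\psi)$ reduce to $1/\Gamma(3+\alpha).$ A minimal sketch (illustrative only; the function names are ours, and the closed forms used are the classical Riemann--Liouville ones for power functions):

```python
import math

def lhs(alpha, m=100000):
    # (I^alpha_{0+} phi, psi) with phi(r) = r, psi = 1 on [0,1], n = 1, using
    # the closed form (I^alpha_{0+} t)(r) = r^(1+alpha)/Gamma(2+alpha).
    h = 1.0 / m
    return sum(((k + 0.5) * h) ** (1.0 + alpha) for k in range(m)) * h / math.gamma(2.0 + alpha)

def rhs(alpha, m=100000):
    # (phi, I^alpha_{1-} psi) with the closed form
    # (I^alpha_{1-} 1)(r) = (1-r)^alpha/Gamma(1+alpha).
    h = 1.0 / m
    return sum((k + 0.5) * h * (1.0 - (k + 0.5) * h) ** alpha for k in range(m)) * h / math.gamma(1.0 + alpha)
```

Both quadratures agree with each other and with $1/\Gamma(3+\alpha),$ matching the adjointness relation .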
It follows that $\mathrm{D}(\mathfrak{D }^{\alpha}_{d-})\subset\mathrm{D}\left( [\mathfrak{D}_{0+}^{ \alpha}]^{*}\right).$ Let us prove that $\mathrm{D}\left( [\mathfrak{D}_{0+}^{ \alpha}]^{*}\right)\subset \mathrm{D}(\mathfrak{D }^{\alpha}_{d-}).$ In accordance with the definition of the adjoint operator, we have $$\left(\mathfrak{D}^{ \alpha }_{0+} f,g \right)_{L_{2}(\Omega)}=\left( f, [\mathfrak{D}_{0+}^{ \alpha}]^{*} g \right)_{L_{2}(\Omega)},\,f\in \mathrm{D}(\mathfrak{D}^{ \alpha }_{0+}),\,g\in \mathrm{D}\left( [\mathfrak{D}_{0+}^{ \alpha}]^{*}\right).$$ Note that since $\mathrm{R}(\mathfrak{D }^{\alpha}_{d-})=L_{2}(\Omega),$ we have $\mathrm{R} \left([\mathfrak{D}_{0+}^{ \alpha}]^{*}\right)=L_{2}(\Omega).$ Using the Fubini theorem, it can be easily shown that $$\left( \mathfrak{D}^{ \alpha }_{0+} f,g- \mathfrak{I } ^{\alpha}_{d-}[\mathfrak{D}_{0+}^{ \alpha}]^{*}g \right)_{L_{2}(\Omega)}=0.$$ By virtue of Theorem \[T3\], we have $\mathrm{R}(\mathfrak{D}^{ \alpha }_{0+})=L_{2}(\Omega).$ Hence $ g= \mathfrak{I } ^{\alpha}_{d-}[\mathfrak{D}_{0+}^{ \alpha}]^{*}g $ a.e. This implies that $\mathrm{D}\left([\mathfrak{D}_{0+}^{ \alpha}]^{*}\right)\subset\mathrm{D}\left(\mathfrak{D }^{\alpha}_{d-}\right).$ Strictly accretive property ============================ The following theorem establishes the strictly accretive property (see [@firstab_lit:kato1966 p.
352]) of the Kipriyanov fractional differential operator.\ \ \[T5\] Suppose $\rho(Q)$ is a real non-negative function, $\rho\in{\rm Lip}\, \lambda,\; \lambda>\alpha;$ then the following inequality holds $$\label{28} {\rm Re} ( f,\mathfrak{D}^{\alpha}f)_{L_2(\Omega,\rho)}\geq\mu\|f\|^{2}_{L_2(\Omega,\rho)},\;f\in H^{1}_{0} (\Omega),$$ where $$\mu=\frac{1}{2}\mathfrak{d}^{-\alpha} \left( \Gamma^{-1}(1-\alpha) +C_{n}^{(\alpha)}\right)- \frac{\alpha M \mathfrak{d}^{\lambda-\alpha} }{2\Gamma(1-\alpha)(\lambda-\alpha)\inf \rho}\,.$$ Moreover, if in addition for any fixed direction $\mathbf{e}$ the function $\rho$ is monotonically non-increasing, then $$\mu=\frac{1}{2}\mathfrak{d}^{-\alpha} \left( \Gamma^{-1}(1-\alpha) +C_{n}^{(\alpha)}\right).$$ Consider first the real case and let $f\in C_{0}^{\infty}(\Omega);$ we have $$\label{28.0} \rho(Q)f(Q)( \mathfrak{D}^{\alpha}f )(Q)=\frac{1}{2}( \mathfrak{D}^{\alpha}\rho f^{2} )(Q) $$ $$ +\frac{\alpha}{2\Gamma(1-\alpha)} \int\limits_{0}^{r} \frac{\rho(Q)[f (Q)- f(T)]^{2}}{(r - t)^{\alpha+1}} \left(\frac{t}{r}\right)^{n-1}\!\!\!dt $$ $$ +\frac{\alpha}{2\Gamma(1-\alpha)} \int\limits_{0}^{r} \frac{f^{2}(T)[\rho (T)- \rho(Q)] }{(r - t)^{\alpha+1}} \left(\frac{t}{r}\right)^{n-1}\!\!\!dt+ \frac{C_{n}^{(\alpha)}}{2}(\rho f^{2})(Q)r^{-\alpha} $$ $$ =I_{0}(Q)+I_{1}(Q)+I_{2}(Q)+I_{3}(Q).$$ Applying Theorem \[T4\], we have $$\label{28.1} \int\limits_{\Omega} I_{0}(Q)dQ=\frac{1}{2}\int\limits_{\Omega}(\mathfrak{D}^{\alpha}_{d-}1)(Q) (\rho f^{2} )(Q)dQ $$ $$ =\frac{1}{2\Gamma(1-\alpha)}\int\limits_{\Omega}(d(\mathbf{e})-r)^{-\alpha} (\rho f^{2} )(Q)dQ\geq \frac{\mathfrak{d}^{-\alpha}}{2\Gamma(1-\alpha)}\|f\|^{2}_{L_{2}(\Omega,\rho)}.$$ Using the Fubini theorem, it can be shown in the usual way that $$\label{28.2} \left|\int\limits_{\Omega} I_{2}(Q)dQ\right|\leq\frac{\alpha}{2\Gamma(1-\alpha)}\int\limits_{\omega}d \chi \int\limits_{0}^{d(\mathbf{e})}r^{n-1}dr \int\limits_{0}^{r} \frac{f^{2}(T)|\rho (T)- \rho(Q)| }{(r - t)^{\alpha+1}} 
\left(\frac{t}{r}\right)^{n-1}\!\!\!dt $$ $$ =\frac{\alpha}{2\Gamma(1-\alpha)}\int\limits_{\omega}d \chi \int\limits_{0}^{d(\mathbf{e})}f^{2}(T) t^{n-1}dt \int\limits_{t}^{d(\mathbf{e})} \frac{|\rho (T)- \rho(Q)| }{(r - t)^{\alpha+1}} dr $$ $$ =\frac{\alpha}{2\Gamma(1-\alpha)}\int\limits_{\omega}d \chi \int\limits_{0}^{d(\mathbf{e})}f^{2}(T) t^{n-1}dt \int\limits_{0}^{d(\mathbf{e})-t} \frac{|\rho (Q-\tau \mathbf{e})- \rho(Q)| }{\tau^{\alpha+1}} d\tau $$ $$ \leq\frac{\alpha M}{2\Gamma(1-\alpha)}\int\limits_{\omega}d \chi \int\limits_{0}^{d(\mathbf{e})}f^{2}(T) t^{n-1}dt \int\limits_{0}^{d(\mathbf{e})-t} \tau^{\lambda-\alpha-1} d\tau $$ $$ \leq\frac{\alpha M \mathfrak{d}^{\lambda-\alpha}}{2\Gamma(1-\alpha)(\lambda-\alpha)}\|f\|^{2}_{L_{2}(\Omega)} .$$ Consider $$\label{28.3} \int\limits_{\Omega} I_{3}(Q)dQ=\frac{C^{(\alpha)}_{n}}{2}\int\limits_{\Omega}(\rho f^{2})(Q)r^{-\alpha}dQ\geq\frac{ C^{(\alpha)}_{n} \mathfrak{d}^{-\alpha}}{2}\|f\|^{2}_{L_{2}(\Omega,\rho)}.$$ Combining ,,,, and the fact that $I_{1} $ is non-negative, we obtain $$\label{28.4} ( f,\mathfrak{D}^{\alpha}f)_{L_2(\Omega,\rho)}\geq\mu\|f\|^{2}_{L_2(\Omega,\rho)},\;f\in C^{\infty}_{0} (\Omega).$$ In the case when for any fixed direction $\mathbf{e}$ the function $\rho$ is monotonically non-increasing, we have $I_{2}\geq 0.$ Hence is fulfilled. Now assume that $f\in H_{0}^{1} (\Omega);$ then there exists a sequence $\{f_k\}\subset C^{\infty}_{0}(\Omega),\, f_k\stackrel {H_{0}^1 }{\longrightarrow}f. $ Using this fact, it is not hard to prove that $ f_k\stackrel {L_{2}(\Omega,\rho) }{\longrightarrow}f. $ Using inequality , we prove that $ \|\mathfrak{D}^{\alpha} f \|_{L_{2}(\Omega,\rho)} \leq C \| f \|_{H_0 ^1(\Omega)}. $ Therefore $ \mathfrak{D}^{\alpha}f_k\stackrel {L_{2}(\Omega,\rho) }{\longrightarrow} \mathfrak{D}^{\alpha}f. 
$ Hence, using the continuity property of the inner product, we get $$( f_k,\mathfrak{D}^{\alpha}f_k)_{L_{2}(\Omega,\rho)}\rightarrow ( f ,\mathfrak{D}^{\alpha}f )_{L_{2}(\Omega,\rho)} .$$ Passing to the limit on the left and right side of inequality , we obtain $$\label{28.5} ( f,\mathfrak{D}^{\alpha}f)_{L_2(\Omega,\rho)}\geq\mu\|f\|^{2}_{L_2(\Omega,\rho)},\;f\in H^{1}_{0} (\Omega).$$ Now let us consider the complex case. Note that the following equality holds $$\label{31} {\rm Re} ( f,\mathfrak{D}^{\alpha}f)_{L_2(\Omega,\rho)}= ( u,\mathfrak{D}^{\alpha}u)_{L_2(\Omega,\rho)}+ ( v,\mathfrak{D}^{\alpha}v)_{L_2(\Omega,\rho)},\;u={\rm Re}f,\,v={\rm Im}f.$$ Combining , , we obtain . Sectorial property ==================== Consider a uniformly elliptic operator with real coefficients and the Kipriyanov fractional derivative in the final term $$Lu:=- D_{j} ( a^{ij} D_{i}u) +\rho\, \mathfrak{D}^{ \alpha }u,\;\; (i,j=\overline{1,n}) ,$$ $$\; \mathrm{D}(L)=H^{2}(\Omega)\cap H^{1}_{0}(\Omega),$$ $$\label{33} a^{ij}(Q)\in C^{1}(\bar{\Omega}) ,\,a^{ij}\xi _{i} \xi _{j} \geq a_{0} |\xi|^{2} ,\,a_{0}>0,$$ $$\label{34} \rho(Q)>0,\;\rho(Q)\in {\rm Lip\,\lambda},\,\alpha<\lambda\leq1.$$ We assume in addition that $\mu>0;$ here we use the notation of Theorem . We also consider the formal adjoint operator $$L^{+}u:=- D_{i} ( a^{ij} D_{j}u) + \mathfrak{D} ^{\alpha}_{ d -}\rho u ,$$ $$\;\mathrm{D}(L^{+})=\mathrm{D}(L),$$ and the operator $$H=\frac{1}{2}(L+L^{+}).$$ We use a special case of the Green formula $$\label{35} -\int\limits_{\Omega}D_{j}(a^{ij}D_{i}u)\,\bar{v}\, dQ=\int\limits_{\Omega}a^{ij}D_{i}u\, \overline{D_{j}v}\, dQ\,,\;u\in H^{2}(\Omega),v\in H_{0}^{1}(\Omega) .$$ \[L3\] The operators $L,L^{\!+}\!,H$ are closable. This fact can easily be checked by applying Theorem 3.4 [@firstab_lit:kato1966 p.337]. We have the following lemma. 
\[T6\] The operators $\tilde{L},\,\tilde{L}^{+}$ are strictly accretive and their numerical ranges belong to the sector $$\mathfrak{S}:= \{\zeta\in\mathbb{C}: \,|{\rm arg }\,(\zeta-\gamma)|\leq\theta\},$$ where $\theta$ and $\gamma$ are defined by the coefficients of the operator $L.$ Consider the operator $L.$ It is not hard to prove that $$\label{37} -{\rm Re} \left( D_{j} [a^{ij} D_{i}f] ,f\right)_{L_{2}(\Omega)}\geq a_{0}\| f \|^{2}_{L^{1}_{2}(\Omega)},\;f\in \mathrm{D}(L).$$ Hence $$\label{40} {\rm Re} (f_{n}, L f_{n} )_{L_{2}(\Omega)}\geq a_{0}\| f_{n} \|^{2}_{L^{1}_{2}(\Omega)}+ {\rm Re}(f_{n},\mathfrak{D}^{\alpha}f_{n})_{L_2(\Omega,\rho)} ,\;\{f_{n}\}\subset\mathrm{D}(L).$$ Assume that $ f\in \mathrm{D}(\tilde{L}).$ In accordance with the definition, there exists a sequence $\{f_{n}\}\subset\mathrm{D}(L),\,f_{n}\xrightarrow[ L ]{}f.$ By virtue of , we easily prove that $f\in H_{0}^{1}(\Omega).$ Using the continuity property of the inner product, we pass to the limit on the left and right side of inequality . 
Thus, we have $$\label{41} {\rm Re}( f , \tilde{L} f )_{L_{2}(\Omega)}\geq a_{0}\| f \|^{2}_{L^{1}_{2}(\Omega)}+ {\rm Re}( f ,\mathfrak{D}^{\alpha}f )_{L_2(\Omega,\rho)} ,\; f \in\mathrm{D}(\tilde{L}).$$ By virtue of Theorem \[T5\], we can rewrite the previous inequality as follows $$\label{42} {\rm Re}( f,\tilde{L}f )_{L_{2}(\Omega)}\geq a_{0}\|f\|^{2}_{L^{1}_{2}(\Omega)}+\mu\|f\|^{2}_{L_{2}(\Omega,\rho)} ,\;f\in \mathrm{D}(\tilde{L}).$$ Applying the Friedrichs inequality to the first summand of the right side, we get $$\label{43} {\rm Re} ( f,\tilde{L}f )_{L_{2}(\Omega)}\geq \mu_{1}\|f\|^{2}_{L_{2}(\Omega)},\; f \in\mathrm{D}(\tilde{L}),\; \mu_{1}=a_{0}+\mu\inf\rho(Q) .$$ Consider the imaginary component of the form generated by the operator $L$ $$\label{44} \left|{\rm Im} ( f,Lf )_{L_{2}(\Omega)}\right|\leq \left|\int\limits_{\Omega} \left(a^{ij} D_{i}u D_{j}v-a^{ij} D_{i}v D_{j}u\right)dQ\right| $$ $$ +\left| ( u,\mathfrak{D}^{\alpha}v )_{L_{2}(\Omega,\rho)} -( v,\mathfrak{D}^{\alpha}u )_{L_{2}(\Omega,\rho)}\right|= I_{1}+I_{2}.$$ Using the Cauchy–Schwarz inequality for sums and the Young inequality, we have $$\label{45} a^{ij} D_{i}u D_{j}v\leq a |Du| |Dv| \leq \frac{a}{2} \left( |Du|^{2} +|Dv|^{2} \right),a(Q)\!=\!\left(\sum\limits_{i,j=1}^{n} |a^{ij}(Q)|^{2} \right)^{\!\!1/2}\!\!.$$ Hence $$\label{45.1} I_{1}\leq a_{1}\|f\|^{2}_{L^{1}_{2}(\Omega)},\;a_{1}=\sup a(Q) .$$ Applying inequality and the Young inequality, we get $$\label{46} \left|( u,\mathfrak{D}^{\alpha}v )_{L_{2}(\Omega,\rho)}\right|\leq C_{1}\|u\|_{L_{2}(\Omega)}\|\mathfrak{D}^{\alpha}v\|_{L_{q}(\Omega)} $$ $$ \leq C_{1} \|u\|_{L_{2}(\Omega)}\left\{\frac{K}{\delta^{\nu}}\|v\|_{L_{2}(\Omega)}+\delta^{1-\nu}\|v\|_{L^{1}_{2}(\Omega)} \right\} $$ $$ \leq\frac{1}{\varepsilon} \|u\|^{2}_{L_{2}(\Omega)} + \varepsilon\left(\frac{ KC_{1}}{\sqrt{2}\delta^{\nu}}\right)^{2} \|v\|^{2}_{L_{2}(\Omega)} + \frac{\varepsilon}{2}\left( C_{1}\delta^{1-\nu}\right)^{2} \|v\|^{2}_{L^{1}_{2}(\Omega)},$$ where $ 2<q< 
2n/(2\alpha-2+n).$ Hence $$I_2 \leq\left|( u,\mathfrak{D}^{\alpha}v )_{L_{2}(\Omega,\rho)}\right|+\left|( v,\mathfrak{D}^{\alpha}u )_{L_{2}(\Omega,\rho)}\right|\leq \frac{1}{\varepsilon}\left(\|u\|^{2}_{L_{2}(\Omega)}+\|v\|^{2}_{L_{2}(\Omega)}\right)$$ $$+\varepsilon\left(\frac{ KC_{1}}{\sqrt{2}\delta^{\nu}}\right)^{2}\left(\|u\|^{2}_{L_{2}(\Omega)}+\|v\|^{2}_{L_{2}(\Omega)} \right)+\frac{\varepsilon}{2}\left( C_{1} \delta^{1-\nu}\right)^{2}\left(\|u\|^{2}_{L^{1}_{2}(\Omega)}+\|v\|^{2}_{L^{1}_{2}(\Omega)} \right)$$ $$\label{48} = \left(\varepsilon \delta^{-2\nu}C_{2} +\frac{1}{\varepsilon}\right) \|f\|^{2}_{L_{2}(\Omega)} + \varepsilon \delta^{2-2\nu} C_{3} \|f\|^{2}_{L^{1}_{2}(\Omega)} .$$ Taking into account and combining , , we easily prove that $$\left|{\rm Im} ( f,\tilde{L}f )_{L_{2}(\Omega)}\right|\leq \left(\varepsilon \delta^{-2\nu}C_{2} +\frac{1}{\varepsilon}\right) \|f\|^{2}_{L_{2}(\Omega)} + \left(\varepsilon \delta^{2-2\nu} C_{3}+a_{1}\right) \| f\|^{2}_{L_{2}^{1}(\Omega)}, f\in \mathrm{D}(\tilde{L}).$$ Thus, by virtue of , for an arbitrary number $k>0$ the following inequality holds $${\rm Re}( f,\tilde{L}f )_{L_{2}(\Omega)}-k \left|{\rm Im} ( f,\tilde{L}f )_{L_{2}(\Omega)}\right|\geq \left(a_{0}-k [\varepsilon\delta^{2-2\nu} C_{3}+a_{1}] \right)\| f\|^{2}_{L_{2}^{1}(\Omega)}$$ $$+\left( \mu \inf \rho(Q)- k\left[\varepsilon \delta^{-2\nu}C_{2} +\frac{1}{\varepsilon}\right] \right)\|f\|^{2}_{L_{2}(\Omega)}.$$ Choosing $k= a_{0}\left(\varepsilon \delta^{2-2\nu} C_{3}+ a_{1} \right)^{-1}\!\!,$ we get $$\label{49} \left|{\rm Im} ( f,(\tilde{L}-\gamma) f )_{L_{2}(\Omega)}\right| \leq \frac{1}{k}{\rm Re}( f,(\tilde{L}-\gamma)f )_{L_{2}(\Omega)}, $$ $$ \;\gamma= \mu \inf \rho(Q)- k\left[\varepsilon \delta^{-2\nu}C_{2} +\frac{1}{\varepsilon}\right].$$ This inequality shows that the numerical range $\Theta(\tilde{L})$ belongs to the sector with vertex $\gamma$ and semi-angle $\theta=\arctan(1/k).$ The proof corresponding to the operator $\tilde{L}^{+}$ is 
analogous. We do not study in detail the conditions under which $\gamma>0,$ but we just note that relation gives us an opportunity to formulate them in an easy way. Further, we assume that the coefficients of the operator $L$ are such that $\gamma>0.$ \[T7\] The operators $\tilde{L},\tilde{L}^{+},\tilde{H}$ are m-sectorial, and the operator $\tilde{H}$ is selfadjoint. By virtue of Theorem \[T6\] we have that the operator $\tilde{L}$ is sectorial, i.e., $\Theta(\tilde{L})\subset \mathfrak{S}.$ Applying Theorem 3.2 [@firstab_lit:kato1966 p. 336] we conclude that $\mathrm{R}(\tilde{L}-\zeta)$ is a closed space for any $\zeta\in \mathbb{C}\setminus\mathfrak{S}$ and that the following relation holds $$\label{50} {\rm def}(\tilde{L}-\zeta)=\eta,\; \eta={\rm const} .$$ Using , it is not hard to prove that $ \|\tilde{L}f\|_{L_{2}(\Omega)}\geq \sqrt{\mu_{1}}\|f\|_{L_{2}(\Omega)},\, f \in\mathrm{D}(\tilde{L}). $ Hence the inverse operator $(\tilde{L}+\zeta)^{-1}$ is defined on the subspace $\mathrm{R}(\tilde{L}+\zeta),\,{\rm Re}\zeta>0.$ In accordance with condition (3.38) [@firstab_lit:kato1966 p.350], we need to show that $$\label{51} {\rm def}(\tilde{L}+\zeta)=0,\;\|(\tilde{L}+\zeta)^{-1}\|\leq ({\rm Re}\zeta)^{-1},\;{\rm Re}\zeta>0 .$$ Since $\gamma>0,$ the left half-plane is included in the set $\mathbb{C}\setminus \mathfrak{S}.$ Note that by virtue of inequality , we have $$\label{52} {\rm Re} ( f,(\tilde{L}-\zeta )f )_{L_{2}(\Omega)}\geq (\mu- {\rm Re} \zeta ) \|f\|^{2}_{L_{2}(\Omega)} .$$ Let $\zeta_{0}\in\mathbb{C}\setminus \mathfrak{S},\;{\rm Re}\zeta_{0} <0.$ Since the operator $\tilde{L}-\zeta_{0}$ has a closed range $\mathrm{R} (\tilde{L}-\zeta_{0}),$ we have $$L_{2}(\Omega)=\mathrm{R} (\tilde{L}-\zeta_{0})\oplus \mathrm{R} (\tilde{L}-\zeta_{0})^{\perp} .$$ Note that $ C^{\infty}_{0}(\Omega)\cap \mathrm{R} (\tilde{L}-\zeta_{0})^{\perp}=0,$ because if we assume the contrary, then applying inequality for any element $u\in C^{\infty}_{0}(\Omega)\cap \mathrm{R} 
(\tilde{L}-\zeta_{0})^{\perp},$ we get $$(\mu-{\rm Re}\zeta_{0}) \|u\|^{2}_{L_{2}(\Omega)} \leq {\rm Re} ( u,(\tilde{L}-\zeta_{0})u )_{L_{2}(\Omega)}=0,$$ hence $u=0.$ Thus this fact implies that $$\left(g,v\right)_{L_{2}(\Omega)}=0,\,g\in \mathrm{R} (\tilde{L}-\zeta_{0})^{\perp},\, v\in C^{\infty}_{0}(\Omega).$$ Since $ C^{\infty}_{0}(\Omega)$ is a dense set in $L_{2}(\Omega),$ we have $\mathrm{R} (\tilde{L}-\zeta_{0})^{\perp}=0.$ It follows that ${\rm def} (\tilde{L}-\zeta_{0}) =0.$ Now, if we note , we come to the conclusion that ${\rm def} (\tilde{L}-\zeta )=0,\;\zeta\in \mathbb{C}\setminus\mathfrak{S}.$ Hence ${\rm def} (\tilde{L}+\zeta )=0,\;{\rm Re}\zeta>0.$ Thus the proof of the first relation of is complete. To prove the second relation we should note that $$(\mu +{\rm Re}\zeta ) \|f\|^{2}_{L_{2}(\Omega)} \leq {\rm Re} ( f,(\tilde{L}+\zeta )f )_{L_{2}(\Omega)}\leq \|f\|_{L_{2}(\Omega)}\|(\tilde{L}+\zeta )f\|_{L_{2}(\Omega)},$$ $$\;f\in \mathrm{D}(\tilde{L}),\; {\rm Re}\zeta>0 .$$ Using the first relation of , we have $$\|(\tilde{L}+\zeta )^{-1}g\|_{L_{2}(\Omega)} \leq(\mu +{\rm Re}\,\zeta ) ^{-1} \|g\|_{L_{2}(\Omega)}\leq ( {\rm Re}\,\zeta ) ^{-1} \|g\|_{L_{2}(\Omega)},\,g\in L_{2}(\Omega).$$ This implies that $$\|(\tilde{L}+\zeta )^{-1} \| \leq( {\rm Re}\,\zeta ) ^{-1},\,{\rm Re}\zeta>0.$$ This concludes the proof corresponding to the operator $\tilde{L}.$ The proof corresponding to the operator $\tilde{L}^{+}$ is analogous. Consider the operator $\tilde{H}.$ It is obvious that $ \tilde{H} $ is a symmetric operator. Hence $ \Theta(\tilde{H})\subset \mathbb{R}. 
$ Using and arguing as above, we see that $$\label{52.0} ( f,\tilde{H}f )_{L_{2}(\Omega)}\geq \mu_{1} \|f\|^{2}_{L_{2}(\Omega)} .$$ Continuing the line of reasoning used above and applying Theorem 3.2 [@firstab_lit:kato1966 p.336], we see that $$\label{52.1} {\rm def}(\tilde{H}-\zeta )=0,\,{\rm Im }\zeta\neq 0;$$ $$\label{52.2} {\rm def}(\tilde{H}+\zeta)=0,\;\|(\tilde{H}+\zeta)^{-1}\|\leq ({\rm Re}\zeta)^{-1},\;{\rm Re}\zeta>0 .$$ Combining with Theorem 3.16 [@firstab_lit:kato1966 p.340], we conclude that the operator $\tilde{H}$ is selfadjoint. Finally, note that in accordance with the definition, relation implies that the operator $\tilde{H}$ is m-accretive. Since we already know that the operators $\tilde{L},\tilde{L}^{+},\tilde{H}$ are sectorial and m-accretive, in accordance with the definition they are m-sectorial. Main theorems ============= In this section we make use of the theory of sesquilinear forms. Unless stated otherwise, we use the definitions and the notation of the monograph [@firstab_lit:kato1966]. 
Consider the forms $$\label{53} t[u,v] = \int\limits_{\Omega} a^{ij}D_{i}u\, \overline{D_{j}v}dQ+ \int\limits_{\Omega} \rho\,\mathfrak{D}^{\alpha }u \, \bar{v } dQ, \; u,v\in H^{1}_{0}(\Omega) ,$$ $$t^{*}[u,v]: =\overline{t [v,u]} =\int\limits_{\Omega} a^{ij}D_{j}u\, \overline{D_{i}v}dQ+ \int\limits_{\Omega} u \rho\, \overline{\mathfrak{D}^{\alpha }v }dQ,$$ $$\mathfrak{Re} t :=\frac{1}{2}(t+t^{*}).$$ For convenience, we use the shorthand notation $h:=\mathfrak{Re} t.$ \[L4\] The form $t$ is a closed sectorial form, moreover $t=\mathfrak{\tilde{f}}, $ where $$\mathfrak{ f }[u,v]=(\tilde{L}u,v)_{L_{2}(\Omega)},\;u,v\in \mathrm{D}(\tilde{L}).$$ First, we shall show that the following inequality holds $$\label{54} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq \left|t[f ]\right|\leq C_{1}\|f\|^{2} _{H^{1}_{0}(\Omega)},\,f\in H^{1}_{0}(\Omega).$$ Using , Theorem \[T5\], we obtain $$\label{55} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq {\rm Re} t[f ] \leq\left|t[f ]\right|,\;f\in H^{1}_{0} (\Omega).$$ Applying ,, we get $$\label{56} |t[f]|\leq\left|\left(a^{ij}D_if,D_jf\right)_{\!L_{2}(\Omega)}\!\right|+\left|\left(\rho \,\mathfrak{D}^{\alpha} f, f\right)_{\!L_{2}(\Omega)}\!\right| \leq C_{1}\|f\|^{2}_{H^{1}_{0}(\Omega)},\,f\in H^{1}_{0} (\Omega).$$ Note that $ H^{1}_{0}(\Omega) \subset \mathrm{D}( \tilde{t}) .$ If $f\in \mathrm{D}( \tilde{t} ),$ then in accordance with the definition, there exists a sequence $ \{f_{n}\}\subset \mathrm{D}( t),\, f_{n}\xrightarrow[t ]{ }f. $ Applying , we get $ f_{n}\xrightarrow[ ]{ H^{1}_{0}}f .$ Since the space $H^{1}_{0}(\Omega)$ is complete, $\mathrm{D}( \tilde{t}) \subset H^{1}_{0}(\Omega).$ This implies that $\mathrm{D}( \tilde{t})=\mathrm{D}( t).$ Hence $t$ is a closed form. The proof of the sectorial property is contained in the proof of Theorem \[T6\]. 
Let us prove that $t=\mathfrak{\tilde{f}}.$ First, we shall show that $$\label{57} \mathfrak{ \mathfrak{f} }[u,v]=t[u,v],\;u,v\in \mathrm{D}(\mathfrak{f}).$$ Using formula , we have $$\label{58} ( L u ,v )_{L_{2}(\Omega)}=t[u ,v ],\;u,v\in \mathrm{D}(L).$$ Hence we can rewrite relation in the following form $$\label{59} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq \left|( L f,f)_{L_{2}(\Omega)}\right|\leq C_{1}\|f\|^{2} _{H^{1}_{0}(\Omega)},\;f\in \mathrm{D}(L).$$ Assume that $f \in \mathrm{D}(\tilde{L});$ then there exists a sequence $\{f_{n}\}\subset \mathrm{D}( L ),\,f_{n}\xrightarrow[L]{}f.$ Combining ,, we obtain $f_{n}\xrightarrow[t]{}f.$ These facts give us an opportunity to pass to the limit on the left and right side of . Thus, we obtain . Combining ,, we get $$\label{60} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq \left|\mathfrak{ f }[f ]\right|\leq C_{1}\|f\|^{2} _{H^{1}_{0}(\Omega)},\;f\in \mathrm{D}(\mathfrak{f}).$$ Note that by virtue of Theorem \[T6\] the operator $\tilde{L}$ is sectorial, hence due to Theorem 1.27 the form $\mathfrak{f}$ is closable. Using the facts established above, Theorem 1.17 , passing to the limit on the left and right side of inequality , we get $$\mathfrak{ \mathfrak{\tilde{f}} }[u,v]=t[u,v],\;u,v\in H_{0}^{1}(\Omega).$$ This concludes the proof. \[L5\] The form $h$ is a closed symmetric sectorial form, moreover $h=\mathfrak{\tilde{k}}, $ where $$\mathfrak{ k }[u,v]=(\tilde{H}u,v)_{L_{2}(\Omega)},\;u,v\in \mathrm{D}(\tilde{H}).$$ To prove the symmetric property (see (1.5) ) of the form $h,$ it is sufficient to note that $$h[u,v]=\frac{1}{2}\left(t[u,v]+ \overline{t[v,u]} \right)=\frac{1}{2}\overline{\left( t[v,u] +\overline{t[u,v]}\right)}=\overline{h[v,u]},\;u,v\in \mathrm{D}(h).$$ Obviously, we have $ h[f]= {\rm Re}\,t[f]. 
$ Hence applying , , we have $$\label{61} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq h[f ] \leq C_{1}\|f\|^{2} _{H^{1}_{0}(\Omega)},\;f\in H^{1}_{0}(\Omega).$$ Arguing as above, using , it is easy to prove that $\mathrm{D}(\tilde{h})=H_{0}^{1}(\Omega). $ Hence the form $h$ is a closed form. The proof of the sectorial property of the form $h$ is contained in the proof of Theorem \[T6\]. Let us prove that $h=\mathfrak{\tilde{k}}.$ We shall show that $$\label{62} \mathfrak{ \mathfrak{k} }[u,v]=h[u,v],\,u,v\in \mathrm{D}(\mathfrak{k}).$$ Applying Lemma \[L1\] and Lemma \[L2\], we have $$(\rho\,\mathfrak{D}^{\alpha}f,g)_{L_{2}(\Omega)}=(f,\mathfrak{D}_{d-}^{\alpha}\rho g)_{L_{2}(\Omega)},\,f,g\in H_{0}^{1}(\Omega).$$ Combining this fact with formula , it is not hard to prove that $$\label{63} ( H u,v)_{L_{2}(\Omega)}=h[u,v],\;u,v\in \mathrm{D}(H).$$ Using , we can rewrite estimate as follows $$\label{64} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq ( H f,f)_{L_{2}(\Omega)} \leq C_{1}\|f\|^{2} _{H^{1}_{0}(\Omega)},\;f\in \mathrm{D}(H).$$ Note that in consequence of Remark \[L3\] the operator $H$ is closable. Assume that $f \in \mathrm{D}(\tilde{H});$ then there exists a sequence $\{f_{n}\}\subset \mathrm{D}(H),\,f_{n}\xrightarrow[H]{}f .$ Combining ,, we obtain $f_{n}\xrightarrow[h]{}f.$ Passing to the limit on the left and right side of , we get . Combining ,, we obtain $$\label{65} C_{0}\|f\|^{2} _{H^{1}_{0}(\Omega)}\leq \mathfrak{ k }[f ] \leq C_{1}\|f\|^{2} _{H^{1}_{0}(\Omega)},\;f\in \mathrm{D}(\mathfrak{k}).$$ Note that in consequence of Theorem \[T6\] the operator $\tilde{H}$ is sectorial. Hence by virtue of Theorem 1.27 the form $\mathfrak{k}$ is closable. Using the facts proven above, Theorem 1.17 , passing to the limit on the left and right side of inequality , we get $$\mathfrak{ \mathfrak{\tilde{k}} }[u,v] = h[u,v],\,u,v\in H_{0}^{1}(\Omega).$$ This completes the proof. 
\[T8\] The operator $\tilde{H}$ has a compact resolvent and the following estimate holds $$\label{66} \lambda_{n}(L_{0})\leq\lambda_{n}(\tilde{H})\leq\lambda_{n}(L_{1}),\,n\in\mathbb{N},$$ where $\lambda_{n}(L_{k}),\,k=0,1$ are respectively the eigenvalues of the following operators with real constant coefficients $$\label{67} L_{k}f=- a_{ k }^{ij} D _{j}D_{i}f +\rho_{k}f,\,\mathrm{D}(L_{k})=\mathrm{D}(L), $$ $$ a_{ k }^{ij}\xi_{i}\xi_{j}>0,\,\rho_{k}>0.$$ First, we shall prove the following propositions\ i) [*The operators $\tilde{H},L_{k}$ are positive-definite.*]{} Using the fact that the operator $\tilde{H}$ is selfadjoint, relation , we conclude that the operator $\tilde{H}$ is positive-definite. Using the definition, we can easily prove that the operators $L_{k}$ are positive-definite.\ ii) [*The space $H^{1}_{0}(\Omega)$ coincides with the energetic spaces $\mathfrak{H}_{\tilde{H}},\mathfrak{H}_{L_{k}}$ as a set of elements.*]{} Using Lemma \[L5\], we have $$\label{68} \|f\|^{2} _{\mathfrak{H}_{\tilde{H}}}=\tilde{\mathfrak{k}}[f]=h[f],\;f\in H_{0}^{1}(\Omega).$$ Hence the space $\mathfrak{H}_{\tilde{H}}$ coincides with $H^{1}_{0}(\Omega)$ as a set of elements. Using this fact, we obtain the coincidence of the spaces $H^{1}_{0}(\Omega)$ and $\mathfrak{H}_{L_{k}}$ as a particular case.\ iii)[*We have the following estimates $$\label{69} \|f\| _{\mathfrak{H} _{L_{0}}}\leq \|f\| _{\mathfrak{H}_{\tilde{H}}}\leq \|f\| _{\mathfrak{H} _{L_{1}}},\,f\in H^{1}_{0}(\Omega).$$* ]{} We obtain the equivalence of the norms $\|\cdot\|_{H_{0}^{1}}$ and $\|\cdot\|_{\mathfrak{H} _{L_{k}}}$ as a particular case of relation . It is obvious that there exist such operators $L_{k}$ that the following inequalities hold $$\label{70} \|f\| _{\mathfrak{H} _{L_{0}}}\leq C_{0}\|f\| _{H^{1}_{0}(\Omega)},\;C_{1}\|f\| _{H^{1}_{0}(\Omega)}\leq\|f\| _{\mathfrak{H} _{L_{1}}},\,f\in H^{1}_{0}(\Omega).$$ Combining ,,, we get . Now we can prove the assertion of the theorem. 
Note that the operators $\tilde{H},$ $L_{k}$ are positive-definite, the norms $\|\cdot\|_{H_{0}^{1}}, \|\cdot\|_{\mathfrak{H} _{L_{k}}}, \|\cdot\|_{\mathfrak{H}_{\tilde{H}}}$ are equivalent. Applying the Rellich-Kondrashov theorem, we see that the energetic spaces $\mathfrak{H}_{\tilde{H}},\;\mathfrak{H} _{L_{k}}$ are compactly embedded into $L_{2}(\Omega).$ Using Theorem 3 [@firstab_lit:mihlin1970 p.216], we conclude that the operators $ L_{0} ,L_{1},\tilde{H}$ have a discrete spectrum. Taking into account (i),(ii),(iii), in accordance with the definition [@firstab_lit:mihlin1970 p.225], we have $$L_{0}\leq \tilde{H} \leq L_{1}.$$ Applying Theorem 1 [@firstab_lit:mihlin1970 p.225], we obtain . Note that by virtue of Theorem \[T7\] the operator $\tilde{H}$ is m-accretive. Hence $0\in P(\tilde{H}).$ Due to Theorem 5 [@firstab_lit:mihlin1970 p.222] the operator $\tilde{H}$ has a compact resolvent at the point zero. Applying Theorem 6.29 [@firstab_lit:kato1966 p.237], we conclude that the operator $\tilde{H}$ has a compact resolvent. The operator $\tilde{L}$ has a compact resolvent and a discrete spectrum. Note that in accordance with Theorem \[T7\] the operators $\tilde{L},\tilde{H}$ are m-sectorial, the operator $\tilde{H}$ is selfadjoint. Applying Lemma \[L4\], Lemma \[L5\], Theorem 2.9 [@firstab_lit:kato1966 p.409], we get $T_{t}=\tilde{L},\;T_{h}=\tilde{H},$ where $T_{t},T_{h}$ are the Friedrichs extensions of the operators $\tilde{L}, \tilde{H}$ (see [@firstab_lit:kato1966 p.409]) respectively. Since in accordance with the definition [@firstab_lit:kato1966 p.424] the operator $\tilde{H}$ is the real part of the operator $\tilde{L},$ due to Theorem \[T8\], Theorem 3.3 [@firstab_lit:kato1966 p.424] the operator $\tilde{L}$ has a compact resolvent. Applying Theorem 6.29 [@firstab_lit:kato1966 p.237], we conclude that the operator $\tilde{L}$ has a discrete spectrum. 
It can easily be checked that the Kipriyanov operator reduces to the Marchaud operator in the one-dimensional case. At the same time, the results of this work hold only for dimensions $ n\geq2.$ However, using Corollary 1 [@firstab_lit:1kukushkin2017], which establishes the strictly accretive property of the Marchaud operator, we can apply the obtained technique to the one-dimensional case. Conclusions =========== The paper presents results obtained in the spectral theory of fractional differential operators. A number of propositions of independent interest in the fractional calculus theory are proved, and the new concept of a multidimensional directional fractional integral is introduced. Sufficient conditions for representability by the directional fractional integral are formulated. In particular, the inclusion of the Sobolev space in the class of functions that are representable by the directional fractional integral is established. Note that the technique of the proofs, which is analogous to the one-dimensional case, is of particular interest. It should be noted that the extension of the Kipriyanov fractional differential operator is obtained, the adjoint operator is found, and the strictly accretive property is proved. Together these give a complete description of the qualitative properties of fractional differential operators. As the main results, the following theorems establishing the properties of a uniformly elliptic operator with the Kipriyanov fractional derivative in the final term are proved: the theorem on the strictly accretive property, the theorem on the sectorial property, the theorem on the m-accretive property, and the theorem establishing a two-sided estimate for the eigenvalues and discreteness of the spectrum of the real component. Using the sesquilinear forms theory, we obtained the major theoretical results. 
We consider the proofs corresponding to the multidimensional case; however, the reduction to the one-dimensional case is possible. For instance, the one-dimensional case is described in the paper [@firstab_lit:2kukushkin2017]. We also note that the results in this direction can be obtained for the real axis. It is worth noticing that the application of the sesquilinear forms theory, as a tool to study second order differential operators with a fractional derivative in the final term, gives an opportunity to analyze the major role of the senior term in the functional properties of the operator. This technique is novel and can be used for studying the spectrum of perturbed fractional differential operators. Therefore, the idea of the proof may be of interest regardless of the results. Acknowledgments {#acknowledgments .unnumbered} --------------- The author thanks Professor Alexander L. Skubachevskii for valuable remarks and comments made during the report, which took place on 31.10.2017 at Peoples’ Friendship University of Russia, Moscow. [99]{} T.S. Aleroev; *Spectral analysis of one class of non-selfadjoint operators*, Differential Equations, **20**, No.1 (1984), 171-172. T.S. Aleroev, B.I. Aleroev; *On eigenfunctions and eigenvalues of one non-selfadjoint operator*, Differential Equations, **25**, No.11 (1989), 1996-1997. T.S. Aleroev; *On eigenvalues of one class of non-selfadjoint operators*, Differential Equations, **30**, No.1 (1994), 169-171. T.S. Aleroev; *On the eigenfunctions system completeness of one differential operator of fractional order*, Differential Equations, **36**, No.6 (2000), 829-830. K. Friedrichs; *Symmetric positive linear differential equations*, Comm. Pure Appl. Math., **11** (1958), 238-241. M.M. Jrbashyan; *Boundary value problem for the fractional differential operator of the Sturm-Liouville type*, Proceedings of the Academy of Sciences of the Armenian SSR, **5**, No.2 (1970), 37-47. T. Kato; *Fractional powers of dissipative operators*, J. 
Math. Soc. Japan, **13**, No.3 (1961), 246-274. T. Kato; *Perturbation theory for linear operators*, Springer-Verlag Berlin, Heidelberg, New York, 1966. I.A. Kipriyanov; *On spaces of fractionally differentiable functions*, Proceedings of the Academy of Sciences of USSR, **24** (1960), 665-882. I.A. Kipriyanov; *The operator of fractional differentiation and powers of elliptic operators*, Proceedings of the Academy of Sciences of USSR, **131** (1960), 238-241. I.A. Kipriyanov; *On some properties of the fractional derivative in the direction*, Proceedings of the universities. Math., USSR, No.2 (1960), 32-40. I.A. Kipriyanov; *On the complete continuity of embedding operators in the spaces of fractionally differentiable functions*, Russian Mathematical Surveys, **17** (1962), 183-189. M.V. Kukushkin; *Evaluation of the eigenvalues of the Sturm-Liouville problem for a differential operator with the fractional derivative in the final term*, Belgorod State University Scientific Bulletin, Math. Physics., **46**, No.6 (2017), 29-35. M.V. Kukushkin; *Theorem on bounded embedding of the energetic space generated by the Marchaud fractional differential operator on the axis*, Belgorod State University Scientific Bulletin, Math. Physics., **48**, No.20 (2017), 24-30. M.V. Kukushkin; *On some qualitative properties of the Kipriyanov fractional differential operator*, Vestnik of Samara University, Natural Science Series, Math., **23**, No.2 (2017), 32-43. L.N. Lyakhov, I.P. Polovinkin, E.L. Shishkina; *On one problem of I. A. Kipriyanov for a singular ultrahyperbolic equation*, Differential Equations, **50**, No.4 (2014) 513-525. S.G. Mihlin; *Variational methods in mathematical physics*, Moscow Science, 1970. A.M. Nakhushev; *The Sturm-Liouville problem for ordinary differential equation of second order with fractional derivatives in the final terms*, Proceedings of the Academy of Sciences of the USSR, **234**, No.2 (1977), 308-311. S.Yu. 
Reutskiy; *A novel method for solving second order fractional eigenvalue problems*, Journal of Computational and Applied Mathematics, **306**, (2016), 133-153. S.G. Samko, A.A. Kilbas, O.I. Marichev; *Integrals and derivatives of fractional order and some of their applications*, Nauka i Tekhnika, Minsk, 1987. [^1]: Submitted October 10, 2017. Published January 29, 2018.
--- abstract: 'Let $G_1,...,G_n$ be graphs on the same vertex set of size $n$, each graph having minimum degree $\delta(G_i)\geq \frac{n}{2}$. A recent conjecture of Aharoni asserts that there exists a rainbow Hamiltonian cycle, i.e., a cycle with edge set $\{e_1,...,e_n\}$ such that $e_i\in E(G_i)$ for $1\leq i \leq n$. This can be seen as a rainbow variant of the well-known Dirac theorem. In this paper, we prove this conjecture asymptotically; namely, we show that for every $\varepsilon>0$ there exists an integer $N>0$ such that whenever $n>N$, for any graphs $G_1,...,G_n$ on the same vertex set of size $n$, each graph having $\delta(G_i)\geq (\frac{1}{2}+\varepsilon)n$, there exists a rainbow Hamiltonian cycle. Our main tool is the absorption technique. We also prove that with $\delta(G_i)\geq \frac{n+1}{2}$, one can find rainbow cycles of length $i$ where $3\leq i \leq n-1$.' author: - | Yangyang Cheng$^{a,}$[^1], Guanghui Wang$^{a,}$[^2], Yi Zhao$^{b,}$[^3]\ \ [$^a$School of Mathematics,]{}\ [Shandong University, 250100, Jinan, Shandong, P. R. China]{}\ [$^b$Department of Mathematics and Statistics,]{}\ [Georgia State University, Atlanta, GA 30303, USA]{}\ title: Rainbow Pancyclicity in Graph Systems --- 0.65cm [**Keywords:**]{} Dirac’s theorem; rainbow Hamiltonian cycle; absorption technique; pancyclicity Introduction ============ Let $t$ be a positive integer and $G_i$ $(i=1,...,t)$ be $t$ graphs on the same vertex set $V$ of size $n$. We denote by $E(G_i)$ the edge set of $G_i$ where $1\leq i \leq t$. Let $S$ be an edge set which is a subset of $\cup_{i=1}^t E(G_i)$. We say $S$ is *rainbow* if its edges are chosen from pairwise different $E(G_i)$, i.e., we can label the edges in $S$ as $\{e_1,...,e_k\}$ for some $k$ such that there exist distinct $i_1,...,i_k$ with $e_j\in E(G_{i_j})$ for $1\leq j \leq k$. 
One of the motivations to study such rainbow structures in a graph system is the following famous conjecture due to Aharoni and Berger [@Aharoni2]: \[A-B\] Let $M_i$ $(i=1,...,n)$ be $n$ matchings of size at least $n+1$ on the same vertex set $V=X\cup Y$, where $X$ and $Y$ are disjoint and all edges of each $M_i$ are between $X$ and $Y$. Then there exists a rainbow matching of size $n$. The Aharoni-Berger conjecture generalizes the Brualdi-Stein conjecture, which asserts that every $n\times n$ Latin square has a partial transversal of size $n-1$. Recall that a *Latin square* of order $n$ is an $n\times n$ array filled with $n$ different symbols, where no symbol appears in the same row or column more than once. A *partial transversal* of size $m$ in a Latin square is a set of $m$ entries such that no two entries are in the same row, in the same column, or share the same symbol. Now consider a Latin square $S$ whose set of symbols is $\{1,...,n\}$, with $(i,j)$ entry $S_{i,j}$. We construct an edge-colouring of $K_{n,n}$ with colours $\{1,...,n\}$: set $V(K_{n,n})=\{x_1,...,x_n ; y_1,...,y_n\}$ and colour the edge $x_iy_j$ with colour $S_{i,j}$. Notice that this colouring is proper, i.e., adjacent edges are coloured with different colours. We call a matching $M$ rainbow if any two different edges of it are coloured with distinct colours. It is now immediate that $S$ contains a partial transversal of size $m$ if and only if $K_{n,n}$ contains a rainbow matching of size $m$. Thus the Brualdi-Stein conjecture can be restated as: every proper edge-colouring of $K_{n,n}$ using $n$ colours contains a rainbow matching of size $n-1$. This is a special case of Conjecture \[A-B\]: the colour classes are $n$ edge-disjoint matchings of size $n=(n-1)+1$ in $K_{n,n}$, so applying the conjecture to any $n-1$ of them yields a rainbow matching of size $n-1$. The Aharoni-Berger conjecture has been confirmed asymptotically by Pokrovskiy [@Pok]. For more details about this topic, see [@Wanless].
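The correspondence just described is easy to verify computationally. A hedged Python sketch (ours): colour edge $x_iy_j$ of $K_{n,n}$ with the entry $S_{i,j}$; full transversals of the Latin square are then exactly the rainbow perfect matchings, illustrated here with a $3\times 3$ cyclic Latin square.

```python
from itertools import permutations

# A 3x3 Latin square on symbols {1,2,3}; entry S[i][j] is the colour
# of edge x_i y_j in the proper edge-colouring of K_{3,3}.
S = [[1, 2, 3],
     [2, 3, 1],
     [3, 1, 2]]
n = len(S)

def transversals(square):
    """Full transversals: one cell per row, every column used, all
    symbols distinct -- i.e. rainbow perfect matchings of K_{n,n}."""
    found = []
    for cols in permutations(range(n)):           # column for each row
        symbols = [square[i][cols[i]] for i in range(n)]
        if len(set(symbols)) == n:                # pairwise distinct colours
            found.append([(i, cols[i]) for i in range(n)])
    return found

# Each transversal {(i, j)} is the matching {x_i y_j} with pairwise
# distinct colours S[i][j]; this cyclic square has 3 of them.
print(len(transversals(S)))   # 3
```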
If we insist that the size of each matching is exactly $n$, then how many such matchings guarantee a rainbow matching of size $n$? Drisko [@drisko] proved that $2n-1$ such matchings suffice and that this bound is sharp. Barát, Gyárfás and Sárközy [@Bar] conjectured that, for $n$ even, any $2n$ matchings of size $n$ on the same vertex set have a rainbow matching of size $n$, and, for $n$ odd, any $2n-1$ matchings of size $n$ on the same vertex set have a rainbow matching of size $n$; this can be seen as the corresponding version of the problem for general graphs. Recently, Aharoni et al. [@Aharoni3] showed that $3n-2$ matchings of size $n$ on the same vertex set have a rainbow matching of size $n$, which is the best bound known so far. In [@Aharoni], Aharoni et al. initiated the study of “rainbow extremal graph theory”. They proved that if $G_1,G_2,G_3$ are graphs on a common vertex set of size $n$ and $|E(G_i)|>\frac{1+\tau^2}{4}n^2$ for $1\leq i \leq 3$, where $\tau=\frac{4-\sqrt{7}}{9}$, then there exists a rainbow triangle. They also showed that $\tau^2$ cannot be replaced by any $\varepsilon>0$ with $\varepsilon<\tau^2$, which asymptotically answered a question of D. Mubayi [@mubayi] about a three-coloured version of Mantel’s theorem. They further asked, for every positive integer $r$, for the smallest $\delta_r$ such that whenever $G_1,...,G_{\binom{r}{2}}$ are graphs on a common set of $n$ vertices with $|E(G_i)|\geq \delta_rn^2$ for every $1\leq i \leq \binom{r}{2}$, there exists a rainbow $K_r$. In the same paper, Aharoni gave the following elegant conjecture, which is a natural generalization of Dirac’s theorem to the rainbow setting: \[rhc\] Given graphs $G_1,...,G_n$ on the same vertex set of size $n$, each graph having minimum degree at least $\frac{n}{2}$, there exists a rainbow Hamiltonian cycle. There have been some other generalizations of Dirac’s theorem to the rainbow setting.
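The sharpness side of Drisko's bound can be checked by brute force on what is, to our knowledge, the standard extremal configuration: $n-1$ copies of each of the two alternating perfect matchings of the cycle $C_{2n}$. A Python sketch of ours for $n=3$ (not from the paper):

```python
from itertools import combinations

def has_rainbow_matching(matchings, size):
    """Brute force: do `size` pairwise disjoint edges exist, taken from
    `size` distinct matchings? Only suitable for tiny instances."""
    for chosen in combinations(range(len(matchings)), size):
        def extend(pos, used):
            if pos == size:
                return True
            return any(extend(pos + 1, used | set(e))
                       for e in matchings[chosen[pos]]
                       if not set(e) & used)
        if extend(0, set()):
            return True
    return False

# C_6 splits into the two alternating perfect matchings A and B.
A = [(0, 1), (2, 3), (4, 5)]
B = [(1, 2), (3, 4), (5, 0)]

# 2n-2 = 4 matchings of size n = 3: no rainbow matching of size 3,
# since a perfect matching of C_6 must be all of A or all of B.
print(has_rainbow_matching([A, A, B, B], 3))      # False
# With 2n-1 = 5 matchings, Drisko's theorem applies.
print(has_rainbow_matching([A, A, B, B, A], 3))   # True
```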
Let $G$ be an edge-coloured graph; $G$ is *$k$-bounded* if no colour appears more than $k$ times on its edges. A recent result of Coulson and Perarnau [@Coulson] asserts that there exist $\mu>0$ and a positive integer $n_0$ such that if $n\geq n_0$ and $G$ is a $\mu n$-bounded edge-coloured graph on $n$ vertices with minimum degree at least $\frac{n}{2}$, then $G$ contains a rainbow Hamiltonian cycle. This theorem generalizes a previous result in [@Albert], which proves the same conclusion when $G$ is a complete graph. Our first result shows that given $n$ graphs $G_i$ $(i=1,...,n)$, each with minimum degree at least $\frac{n+1}{2}$, we can find rainbow cycles of every length except length $n$, i.e., except the rainbow Hamiltonian cycle: \[thm1\] Given graphs $G_1,...,G_n$ on the same vertex set of size $n$, each graph having minimum degree at least $\frac{n+1}{2}$, there exist rainbow cycles of length $i$ for $3\leq i \leq n-1$. The degree bound of Theorem \[thm1\] is tight: one can take $n$ copies of $K_{\frac{n}{2}, \frac{n}{2}}$ when $n$ is even, and such a system contains no odd rainbow cycle. The main result of this paper is an asymptotic version of Conjecture \[rhc\], showing that minimum degree $(\frac{1}{2}+o(1))n$ is sufficient. \[main\] For every $\varepsilon>0$, there exists an integer $N>0$ such that when $n>N$, for any graphs $G_1,...,G_n$ on the same vertex set of size $n$, each graph having minimum degree at least $(\frac{1}{2}+\varepsilon)n$, there exists a rainbow Hamiltonian cycle. To prove Theorem \[thm1\], we first find a rainbow Hamiltonian path and a rainbow cycle of length $n-1$, following the classical proof of Dirac’s theorem. Then we obtain a rainbow cycle of length $n-2$ or $n-3$ and use it to build cycles of other lengths. In order to prove Theorem \[main\], we use the absorbing method introduced by Rödl, Ruciński and Szemerédi [@rodl].
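Both the pancyclicity statement and the tightness example can be sanity-checked by exhaustive search on toy instances. The following Python sketch (ours; trivially small $n$ only) enumerates the rainbow cycle lengths of a graph system:

```python
from itertools import permutations

def rainbow_cycle_lengths(graphs):
    """Lengths k for which the system admits a rainbow cycle of length
    k, by exhaustive search over vertex sequences and colour choices."""
    n = len(graphs)
    lengths = set()
    for k in range(3, n + 1):
        for verts in permutations(range(n), k):
            if verts[0] != min(verts):           # fix the rotation start
                continue
            edges = [frozenset((verts[i], verts[(i + 1) % k]))
                     for i in range(k)]

            def assign(j, used):                 # distinct graph per edge
                if j == k:
                    return True
                return any(assign(j + 1, used | {g})
                           for g in range(n)
                           if g not in used and edges[j] in graphs[g])

            if assign(0, frozenset()):
                lengths.add(k)
                break
    return lengths

# n = 5 with every G_i = K_5 (minimum degree 4 >= (n+1)/2).
K5 = {frozenset(e) for e in permutations(range(5), 2)}
print(sorted(rainbow_cycle_lengths([K5] * 5)))    # [3, 4, 5]

# n = 6 copies of K_{3,3} (minimum degree n/2): bipartite, so no
# odd rainbow cycle exists, matching the tightness remark above.
K33 = {frozenset((a, b)) for a in (0, 1, 2) for b in (3, 4, 5)}
print(sorted(rainbow_cycle_lengths([K33] * 6)))   # [4, 6]
```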
We first construct a short rainbow cycle $C$ such that for any rainbow path $P=v_1...v_p$ disjoint from $C$ and any colour $s$, we can absorb $P$ into $C$, that is, replace some edge $u_iu_{i+1}$ of $C$ by the path $u_iPu_{i+1}$, where $u_iv_1$ is coloured with $s$ and $v_pu_{i+1}$ is coloured with the colour of $u_iu_{i+1}$ in $C$. Next, we find a rainbow Hamiltonian path $P$ on $V(G)\backslash V(C)$ and absorb $P$ into $C$ by the property of $C$ (such an approach was used by Lo [@lo]). Preliminaries and Notation ========================== Let $G_1,...,G_n$ be $n$ graphs on the same vertex set $V$, where $|V|=n$, and let $\delta(G_i)$ be the minimum degree of $G_i$ for $1\leq i \leq n$. Without loss of generality, we identify this graph system with an edge-coloured multigraph $G$, where $E(G)$ is the disjoint union of $E(G_i)$ for $i=1,...,n$ and each edge in $E(G_i)$ is coloured by $i$. For any subgraph $H$ of $G$, let $Col(H)$ be the set of colours used by the edges of $H$. A subgraph $H$ of $G$ is *rainbow* if any two different edges of $E(H)$ are coloured with distinct colours. It is easy to see that finding a rainbow subgraph of the graph system is equivalent to finding a corresponding rainbow subgraph of $G$. For every vertex $v\in V(G)$ and any colour $1\leq c \leq n$, let $N_c(v)$ be the set of neighbours of $v$ joined to $v$ by an edge coloured $c$. For any subset $S$ of $V$, we write $N_c(v,S)$ for $N_c(v)\cap S$ and $d_c(v,S)$ for $|N_c(v)\cap S|$. For every two vertices $v_1,v_2 \in V(G)$, let $C(v_1,v_2)$ be the set of colours used on the edges between $v_1$ and $v_2$. For three real numbers $x_1,x_2$ and $x_3$, if $x_2-x_3 \leq x_1\leq x_2+x_3$, then we write $x_1=x_2\pm x_3$. We first prove the following useful lemma: \[lemma1\] Let $P=v_1...v_p$ be a rainbow path and let $c, c^{\prime}$ be two colours not used on $P$.
If $d_c(v_1,V(P)) + d_{c^{\prime}}(v_p,V(P))\geq p$, then there is a rainbow cycle of length $p$. Suppose not. Then $\{c,c^{\prime}\}\cap C(v_1,v_p)= \emptyset$, since otherwise $C=v_1...v_pv_1$ would be a rainbow cycle upon choosing the colour of $v_1v_p$ to be an element of $\{c,c^{\prime}\}\cap C(v_1,v_p)$. Now for each vertex $v_i$ with $v_{i+1}\in N_c(v_1,V(P))$, where $2\leq i \leq p-2$, we get that $v_i\notin N_{c^{\prime}}(v_p,V(P))$, since otherwise the cycle $v_1v_2...v_iv_pv_{p-1}...v_{i+1}v_1$ would be a rainbow cycle, with the colours of $v_1v_{i+1}$ and $v_pv_i$ chosen to be $c$ and $c^{\prime}$. Thus we get $d_c(v_1,V(P))-1+d_{c^{\prime}}(v_p,V(P))\leq p-2,$ which implies $p \leq p-1$, a contradiction. Our first result is a weaker version of Conjecture \[rhc\]: we show that a rainbow Hamiltonian path exists under a slightly weaker degree condition: \[pro1\] Given graphs $G_1,...,G_n$ on the same vertex set of size $n$, where $\delta(G_i)\geq \frac{n-1}{2}$ for $1\leq i \leq n$, there exists a rainbow Hamiltonian path. Suppose not, and let $P=v_1...v_k$, where $k\leq n-1$, be a rainbow path in $G$ of maximum length. Since $P$ has at most $n-2$ edges, there exist at least two colours $c, c^{\prime}$ which are not used by the edges of $P$. Now consider the neighbourhoods $N_{c}(v_1)$ and $N_{c^{\prime}}(v_k)$; we have $$d_{c}(v_1)+d_{c^{\prime}}(v_k)\geq \frac{n-1}{2}+\frac{n-1}{2}=n-1.$$ For each vertex $u\in V(G)-V(P)$, we have $u\notin N_{c}(v_1)\cup N_{c^{\prime}}(v_k)$, since otherwise we could extend $P$ into a longer rainbow path, a contradiction. Thus $N_{c}(v_1), N_{c^{\prime}}(v_k)\subseteq V(P)$. However, since $|V(P)|\leq n-1$ and $d_c(v_1,V(P))+d_{c^{\prime}}(v_k,V(P))\geq n-1 \geq |V(P)|$, by Lemma \[lemma1\] we get a rainbow cycle $C$ of length $k$. Suppose that the colour $c^{\prime\prime}$ is not used by this cycle.
Since the monochromatic graph coloured by $c^{\prime\prime}$ has minimum degree at least $\frac{n-1}{2}$, it is connected, so at least one edge $e_0$ coloured by $c^{\prime\prime}$ lies between $V(C)$ and $V(G)-V(C)$. Therefore, $C\cup \{e_0\}$ contains a rainbow path on $k+1$ vertices, a contradiction. The lower bound here is still best possible: one can take $n$ copies of $K_{\frac{n}{2}-1, \frac{n}{2}+1}$ when $n$ is even, and there does not exist a rainbow Hamiltonian path since $K_{\frac{n}{2}-1, \frac{n}{2}+1}$ does not contain a Hamiltonian path. Proof of Theorem \[thm1\] ========================= \[cla1\] $G$ contains a rainbow cycle of length $n-1$. By Proposition \[pro1\], we first find a rainbow Hamiltonian path $P=v_1v_2...v_{n}$. Without loss of generality, suppose the colour of the edge $v_iv_{i+1}$ is $i$ for $1\leq i \leq n-1$ and the only colour that does not appear on $P$ is $n$. Now consider the subpath $P^{\prime}=v_1v_2...v_{n-1}$. Since $|N_{n-1}(v_1)\backslash \{v_n\}| \geq \frac{n-1}{2}$ and $|N_{n}(v_{n-1})\backslash \{v_n\}| \geq \frac{n-1}{2}$, we get $d_{n-1}(v_1,V(P^{\prime}))+d_{n}(v_{n-1},V(P^{\prime}))\geq n-1=|V(P^{\prime})|$, so by Lemma \[lemma1\] we can find a rainbow cycle of length $n-1$. $G$ contains either a rainbow cycle of length $n-2$ or a rainbow cycle of length $n-3$. Suppose that $G$ contains neither a rainbow cycle of length $n-3$ nor one of length $n-2$. By Proposition \[pro1\] (truncating a rainbow Hamiltonian path), we can find a rainbow path $P_1=v_1v_2...v_{n-3}$ of order $n-3$; without loss of generality, suppose the colour of the edge $v_iv_{i+1}$ is $i$ for $1\leq i \leq n-4$ and the set of colours not used on $P_1$ is $S=\{n-3,n-2,n-1,n\}$. One easily deduces that $N_{n-1}(v_1)\cap N_{n}(v_{n-3})\cap (V(G)-V(P_1))=\emptyset$, since otherwise we would already have a rainbow cycle of length $n-2$, a contradiction. Now we get $d_{n-1}(v_1,V(P_1))+d_{n}(v_{n-3},V(P_1))\geq n-2 \geq |V(P_1)|$, so by Lemma \[lemma1\] we can find a rainbow cycle of length $n-3$, a contradiction. Let $C$ be a rainbow cycle of length $n-2$ or $n-3$.
We give it an orientation, making it a directed cycle $\overrightarrow{C}=v_1...v_{p}$, where we set $v_i=v_{i-p}$ for $i>p$. \[lemma2\] Let $C=v_1...v_pv_1$ be a rainbow cycle of length $p$, and let $c,c^{\prime}$ be two colours not used on $C$. If $d_c(x,V(C))+d_{c^{\prime}}(x,V(C))\geq p$ for some $x\notin V(C)$ and there is no rainbow cycle of length $i$ for some $3\leq i \leq p+1$, then we can partition $\{v_1,...,v_p\}$ into $S_1$ and $S_2$, where $S_1=\{v_{i+j-2}: v_j\in N_c(x,V(C))\}$ and $S_2=N_{c^{\prime}}(x,V(C))$. In particular, if $d_c(x,V(C))+d_{c^{\prime}}(x,V(C))>p$, then there are rainbow cycles of length $3,...,p+1$. Suppose that $d_c(x,V(C))+d_{c^{\prime}}(x,V(C))\geq p$ and there is no rainbow cycle of length $i$ for some $3\leq i \leq p+1$. Then for each vertex $v_j\in N_c(x,V(C))$, we have $v_{i+j-2}\notin N_{c^{\prime}}(x,V(C))$, since otherwise the cycle $xv_jv_{j+1}...v_{j+i-2}x$ would be a rainbow cycle of length $i$ upon choosing the colours of $xv_j$ and $xv_{j+i-2}$ to be $c$ and $c^{\prime}$. Therefore, $S_1\cap S_2=\emptyset$ by definition. Since $|S_1|+|S_2|\geq p$ and $S_1\cup S_2 \subseteq V(C)$, it follows that $V(C)$ is partitioned into $S_1$ and $S_2$, which completes the proof. \[case1\] $p=n-2$. Suppose $V(G)-V(C)=\{v_{n-1}, v_n\}$ and the colours not used on $C$ are $n-1$ and $n$. Suppose that for some $3\leq j \leq n-1$ there does not exist a rainbow cycle of length $j$ in $G$. By Lemma \[lemma2\], we conclude that $d_{n-1}(v_{n-1}, V(C))+d_{n}(v_{n-1},V(C))\leq n-2$, which implies that $d_{n-1}(v_{n-1},v_n)+d_n(v_{n-1},v_n)\geq \frac{n+1}{2}\times 2-(n-2)=3$. This is a contradiction, since each of these two terms is at most $1$. Therefore, $G$ contains rainbow cycles of every length $3\leq j \leq n-1$. $p=n-3$. Suppose $V(G)-V(C)=\{v_{n-2},v_{n-1},v_n\}$ and the colours not used on $C$ are $n-2, n-1$ and $n$. Suppose that for some $3\leq i \leq n-2$ there does not exist a rainbow cycle of length $i$ in $G$.
We know that $d_{n-1}(v_n, V(C))+d_{n}(v_n,V(C))\geq 2(\frac{n+1}{2}-2)= n-3$, so by Lemma \[lemma2\] we get $d_{n-1}(v_n, V(C))+d_{n}(v_n,V(C))=n-3$, which implies that $d_{n-1}(v_n,\{v_{n-1},v_{n-2}\})=d_{n}(v_n,\{v_{n-1},v_{n-2}\})=2$ and hence $\{n-1,n\}\subseteq C(v_{n},v_{n-1})\cap C(v_{n},v_{n-2})$. By symmetry we may now suppose that $C(v_{a}, v_{b})=\{n-2,n-1,n\}$ for every $n-2\leq a<b \leq n$. Let $T_1=\{v_{j+i-3} \mid v_j\in N_{n-2}(v_{n-2}, V(C))\}$ and $T_2=N_{n-1}(v_{n-1},V(C))$. Arguing as in the proof of Lemma \[lemma2\], we get that $T_1\cap T_2=\emptyset$: otherwise, suppose $v_{j+i-3}\in T_2$ for some $j$; then $v_{n-1}v_{n-2}v_jv_{j+1}...v_{j+i-3}v_{n-1}$ is a rainbow cycle of length $i$ upon choosing the colours of $v_{n-1}v_{n-2}$, $v_{n-2}v_j$ and $v_{j+i-3}v_{n-1}$ to be $n$, $n-2$ and $n-1$, a contradiction. We actually get $$n-3\leq \frac{n+1}{2}-2+\frac{n+1}{2}-2 \leq d_{n-2}(v_{n-2},V(C))+d_{n-1}(v_{n-1},V(C))\leq |V(C)|=n-3,$$ so all the inequalities above must be equalities, and we get $$|T_1|+d_{n-1}(v_{n-1},V(C))=n-3,$$ which implies that $V(C)$ is partitioned into $T_1$ and $T_2$. Since all colours and vertices here are symmetric, the analogous conclusion follows by considering $T_1$ and $N_{a}(v_{b},V(C))$ for any $n-1\leq a,b \leq n$. Thus, we get $T_2=N_{a}(v_{b},V(C))$ for any $n-1\leq a,b \leq n$. Now let $T_1^{\prime}=\{v_{j+i-3} \mid v_j\in N_{n}(v_{n-2}, V(C))\}$. Considering $T_1^{\prime}$ and $T_2$, we obtain in the same way that $T_2=N_{a}(v_{b},V(C))$ for any $n-2\leq a \leq n-1$ and $n-1 \leq b \leq n$. This implies $T_2=N_{a}(v_{b},V(C))$ for any $n-2\leq a \leq n$ and $n-1 \leq b \leq n$. Since all the vertices are symmetric, we in fact get $T_2=N_{a}(v_{b},V(C))$ for any $n-2\leq a,b \leq n$. Now we claim that there exists some $v_{j_0}\in T_2$ such that $v_{j_0-1}\notin T_2$. Suppose not; then for each $v_j\in T_2$ we have $v_{j-1}\in T_2$.
This implies $T_2=V(C)$, which is impossible since $T_1\neq \emptyset$. Now, since $v_{j_0-1}\notin T_2$, we get $v_{j_0-1}\in T_1$, so there exists some $v_{j_1}\in N_{n-2}(v_{n-2},V(C))$ such that $j_0-1=j_1+i-3$, i.e., $j_0=j_1+i-2$. But then the cycle $v_{n-2}v_{j_1}v_{j_1+1}...v_{j_0}v_{n-2}$ is a rainbow cycle of length $i$ upon choosing the colours of $v_{n-2}v_{j_1}$ and $v_{n-2}v_{j_0}$ to be $n-2$ and $n-1$, a contradiction. That finishes the proof. Proof of Theorem \[main\] ========================= For any pair of (not necessarily distinct) vertices $x_1,x_2\in V(G)$ and four distinct colours $1\leq s,i,j,k \leq n$, we define the set of absorbing paths $A_{s,i,j,k}(x_1,x_2)$ to be the family of edge-coloured $3$-paths $P$ which satisfy the following conditions: \(i) $P=v_1v_2v_3v_4$ where $\{v_1,v_2,v_3,v_4\}\cap \{x_1,x_2\}=\emptyset$; \(ii) the edges $v_1v_2,v_2v_3,v_3v_4$ are coloured by $i,j,k$, respectively; \(iii) $s\in C(x_1,v_2)$ and $j\in C(x_2,v_3)$. For every path $P$ in $A_{s,i,j,k}(x_1,x_2)$, we say that $P$ is an *absorbing path* for $(x_1,x_2)$ with colour pattern $(s,i,j,k)$. In practice, we always choose $x_1$ and $x_2$ to be the two endpoints of some path $Q$, and thus $P$ indeed absorbs $Q$. \[cl1\] For each pair $(x_1,x_2)$ and four distinct colours $s,i,j,k$, we have $|A_{s,i,j,k}(x_1,x_2)|\geq \frac{\varepsilon n^4}{8}$ when $n$ is sufficiently large. First, choose a vertex $v_2\in N_s(x_1)\backslash\{x_2\}$, and pick another vertex $v_3\in (N_j(v_2)\cap N_j(x_2))\backslash \{x_1\}$. The total number of such pairs $v_2,v_3$ is at least $(\frac{n}{2}+\varepsilon n-1)(2\varepsilon n-1)$. Now fix $v_2$ and $v_3$. Choose $v_1\in N_i(v_2)\backslash \{x_1,x_2,v_3\}$ and another vertex $v_4\in N_k(v_3) \backslash \{x_1,x_2,v_1,v_2\}$.
Note that the total number of such pairs $v_1,v_4$ is at least $(\frac{n}{2}+\varepsilon n-3)(\frac{n}{2}+\varepsilon n-4)$, and hence we derive that there exist at least $$\left(\frac{n}{2}+\varepsilon n-1\right) \left(2\varepsilon n-1 \right) \left(\frac{n}{2}+\varepsilon n-3\right)\left(\frac{n}{2}+\varepsilon n-4\right)\geq \frac{\varepsilon n^4}{8}$$ absorbing paths for $(x_1,x_2)$ when $n$ is sufficiently large. We will use the following version of Chernoff’s bound; see [@Tao]. Let $X=t_1+...+t_n$, where the $t_i$ are independent boolean random variables for $1\leq i \leq n$. Then for any $\varepsilon >0$, $$P(|X-E(X)|\geq \varepsilon E(X))\leq 2e^{-\min(\frac{\varepsilon^2}{4}, \frac{\varepsilon}{2})E(X)}.$$ In particular, when $\varepsilon=\frac{1}{2}$, we conclude that $$P\left(\frac{1}{2}E(X)<X<\frac{3}{2}E(X)\right)\geq 1-2e^{-\frac{E(X)}{16}}.$$ \[lm1\] There exists a rainbow cycle $C$ of size at most $\frac{\varepsilon n}{10^5}$ such that for every rainbow path $P$ with $V(P)\cap V(C)=\emptyset$ and $Col(P)\cap Col(C)=\emptyset$, and every colour $s$ not used by $C$ and $P$, there exists a rainbow cycle $C^{\prime}$ such that (i) $V(C^{\prime})=V(C)\cup V(P)$; (ii) $Col(C^{\prime})=Col(P \cup C)\cup \{s\}$. Let $\ell=\lceil\frac{\varepsilon n}{10^6}\rceil\pm 1$ be chosen so that $\ell$ is divisible by $3$. For simplicity, we assume that $\ell=\frac{\varepsilon n}{10^6}$. We fix $\ell/3$ groups of colours $C_i=\{3i-2,3i-1,3i\}$, where $i=1,..., \ell/3$. Let $\mathcal{P}_{C_i}$ be the set of all paths $P=v_0v_1v_2v_3$ in $G$ in which the colours of $v_0v_1,v_1v_2, v_2v_3$ are $3i-2, 3i-1, 3i$, for $1\leq i \leq \ell/3$. Now we randomly and uniformly select an element from each $\mathcal{P}_{C_i}$, each element of $\mathcal{P}_{C_i}$ being chosen with probability $\frac{1}{|\mathcal{P}_{C_i}|}$ for every $1\leq i \leq \ell/3$. Thus we get a random set of size $\ell/3$, which we denote by $W$.
For any colour $s$ and any pair $(x_1,x_2)$, set $A_s(x_1,x_2)=\bigcup_{i=1}^{\ell/3}(A_{s,3i-2,3i-1,3i}(x_1,x_2)\cap W)$. For each $1\leq i \leq \ell/3$, let the random variable $X_i$ be the indicator of the event that $W \cap A_{s,3i-2,3i-1,3i}(x_1,x_2)\neq \emptyset$. Hence $|A_s(x_1,x_2)|=\sum_{i=1}^{\ell/3}X_i$, and the $X_i$'s are independent. Using Claim \[cl1\], we get $$P(X_i=1)=|A_{s,3i-2,3i-1,3i}(x_1,x_2)|/ |\mathcal{P}_{C_i}| \geq \frac{\varepsilon n^4}{8}/ n^4\geq \frac{\varepsilon}{8}$$ for $1\leq i \leq \ell/3$, and hence $$E(|A_s(x_1,x_2)|)=\sum_{i=1}^{\ell/3}E(X_i)\geq \frac{\varepsilon \ell}{24}.$$ By Chernoff’s bound, we see that $$P(|A_s(x_1,x_2)|<\frac{\varepsilon \ell}{48})\leq 2e^{-\frac{\varepsilon \ell}{384}}\leq 2e^{-\frac{\varepsilon^2 n}{10^9}}.$$ Now let $Y$ be the number of pairs of 3-paths in $W$ which intersect each other. For distinct $1\leq i,j \leq \ell/3$, let $Y_{i,j}$ be the indicator of the event that the path we choose from $\mathcal{P}_{C_i}$ intersects the path we choose from $\mathcal{P}_{C_j}$. Thus $Y=\sum_{i,j}Y_{i,j}$. We claim that the size of the set $\{\{P_1,P_2\}\mid P_1\in \mathcal{P}_{C_i}, P_2\in \mathcal{P}_{C_j}, P_1, P_2$ intersect each other$\}$ is at most $16n^7$ for fixed $i,j$: the number of choices for $P_1$ is at most $n^4$, and once $P_1$ is fixed, the number of paths $P_2$ intersecting it is at most $16n^3$. Moreover, when $P_1,P_2$ are fixed, the probability that we have chosen $P_1,P_2$ together is $\frac{1}{|\mathcal{P}_{C_i}||\mathcal{P}_{C_j}|} \leq \frac{1}{(n^4/8)^2}$, since $|\mathcal{P}_{C_i}|,|\mathcal{P}_{C_j}| \geq \frac{n^4}{8}$ when $n$ is sufficiently large.
Therefore, we get $$E(Y) \leq \binom{\ell/3}{2}\cdot16\cdot n^7 \cdot \frac{1}{(\frac{n^4}{8})^2}\leq \frac{\varepsilon^2 n }{10^{10}}.$$ Using Markov’s inequality, we get that $$P(Y\geq \frac{\varepsilon^2 n}{10^9})\leq \frac{E(Y)}{\frac{\varepsilon^2 n}{10^9}} \leq \frac{\frac{\varepsilon^2 n }{10^{10}}}{\frac{\varepsilon^2 n}{10^9}}=\frac{1}{10}.$$ Now choose $n$ large enough that the following holds: $$2n^3e^{-\frac{\varepsilon^2 n}{10^9}}+\frac{1}{10}<1.$$ Thus with positive probability, for each colour $s$ and any pair $(x_1,x_2)$ the following hold: (i) $|A_s(x_1,x_2)|\geq \frac{\varepsilon \ell}{48} \geq \frac{\varepsilon^2 n}{48\times 10^6}$; (ii) $Y< \frac{\varepsilon^2 n}{10^9}$. Fix such a $W$; for each intersecting pair in $W$, delete one of its two 3-paths, and let the remaining path family be $W^{\prime}$. Thus $W^{\prime}$ is a family of mutually disjoint 3-paths, and for every colour $s$ and any pair $(x_1,x_2)$ we get that $$|\bigcup_{i=1}^{\frac{\ell}{3}}(A_{s,3i-2,3i-1,3i}(x_1,x_2)\cap W^{\prime})|\geq \frac{\varepsilon^2 n}{48\times 10^6}-\frac{\varepsilon^2 n}{10^9}\geq \frac{9\varepsilon^2 n}{10^9}.$$ Let $W^{\prime}=\{P_1,...,P_t\}$ and let $S$ be the set of colours that do not appear on any path of $W^{\prime}$. Let $V^{\prime}=V(G)-\bigcup_{i=1}^tV(P_i)$. Without loss of generality, suppose that $P_i=v_1^{(i)}v_2^{(i)}v_3^{(i)}v_4^{(i)}$ for $1\leq i \leq t$. Now, for $P_1,P_2$, we can find a vertex $u_1\in V^{\prime}$ such that $u_1v_4^{(1)}$ and $u_1v_1^{(2)}$ are two edges coloured with distinct colours from $S$. Delete these two colours from $S$ and the vertex $u_1$ from $V^{\prime}$. Repeating this process for the path pairs $\{P_2,P_3\},...,\{P_t,P_1\}$, we finally obtain $u_1,...,u_t$ and a rainbow cycle $C$ of size at most $\frac{\varepsilon n}{10^5}$ which contains all the vertices in $\bigcup_{i=1}^t V(P_i)$ together with the $u_i$, $1\leq i \leq t$.
For every rainbow path $P$ with $V(P)\subseteq V(G)-V(C)$ whose colour set is disjoint from the colour set of $C$, if $x_1,x_2$ are the two endpoints of $P$ and $s$ is a colour that appears in neither $C$ nor $P$, then the pair $(x_1,x_2)$ has at least one absorbing path in $C$ with colour pattern $(s,3i-2,3i-1,3i)$ for some $1\leq i \leq \ell/3$, since $\frac{9\varepsilon^2 n}{10^9}\geq 1$ when $n$ is sufficiently large. Therefore, we can insert the path $P$ into the cycle $C$, which completes the proof. Let $C$ be the absorbing cycle obtained in Lemma \[lm1\] and let $G^{\prime}=G-V(C)$; then for each maximal monochromatic graph $H$ in $G^{\prime}$ we find that $\delta(H) \geq \frac{1+\varepsilon}{2}|V(G^{\prime})|$. Let $S_1$ be the set of colours that do not appear on any edge of $C$; by Proposition \[pro1\], one can construct a rainbow Hamiltonian path $Q=w_0w_1...w_r$ in $G^{\prime}$ using exactly $|S_1|-1$ colours of $S_1$. Suppose $s_1$ is the unique colour in $S_1$ that is not used by $Q$. Now consider an absorbing path in $\bigcup_{i=1}^{\ell/3}A_{s_1,3i-2,3i-1,3i}(w_0,w_r)$ that is also contained in $C$: we can insert $Q$ into $C$ to obtain a rainbow Hamiltonian cycle, which completes our proof. Remark ====== Shortly after we finished this preprint, Joos and Kim [@Kim] informed us that they have settled Conjecture \[rhc\]. Acknowledgements ================ The first two authors are supported by the National Natural Science Foundation of China (11631014, 11871311) and the Qilu Scholar award of Shandong University. The third author is partially supported by NSF grant DMS 1700622. [99]{} R. Aharoni, E. Berger, Rainbow matchings in $r$-partite $r$-graphs, Electron. J. Combin. 16, 2009. R. Aharoni, E. Berger, M. Chudnovsky, D. Howard, P. Seymour, Large rainbow matchings in general graphs, Euro. J. Combin., 79: 222-227, 2019. R. Aharoni, S. G. H. de la Maza, A. Montejano, R. Šámal, A rainbow version of Mantel’s Theorem, arXiv:1812.11872. M. Albert, A. Frieze, B.
Reed, Multicoloured Hamilton cycles, Electron. J. Combin. 2, 1995. J. Barát, A. Gyárfás, G. N. Sárközy, Rainbow matchings in bipartite multigraphs, Period. Math. Hungar. 74 (1): 108-111, 2017. M. Coulson, G. Perarnau, A Rainbow Dirac’s theorem, arXiv:1809.06392. A. A. Drisko, Transversals in row-Latin rectangles, J. Combin. Theory Ser. A, 84: 181-195, 1998. A. Diwan, D. Mubayi, Turán’s theorem with colors, http://www.math.cmu.edu/~mubayi/papers/webturan.pdf, 2006. F. Joos, J. Kim, Spanning transversals in graphs, in preparation. A. Lo, An edge-colored version of Dirac’s theorem, SIAM J. Discrete Math., 28(1): 18-36, 2014. A. Pokrovskiy, An approximate version of a conjecture of Aharoni and Berger, Advances in Mathematics, 333: 1197-1241, 2018. V. Rödl, A. Ruciński, E. Szemerédi, An approximate Dirac-type theorem for $k$-uniform hypergraphs, Combinatorica, 28: 229-260, 2008. T. Tao, V. Vu, Additive Combinatorics, Cambridge University Press, 2006. I. M. Wanless, Transversals in Latin squares: a survey, in: R. Chapman (Ed.), Surveys in Combinatorics 2011, Cambridge University Press, 2011. [^1]: *E-mail address:* mathsoul@mail.sdu.edu.cn [^2]: Corresponding author. *E-mail address:* ghwang@sdu.edu.cn [^3]: *E-mail address:* yzhao6@gsu.edu
--- abstract: 'In this paper, we introduce new concepts of weak rigidity matrix and infinitesimal weak rigidity for planar frameworks. The weak rigidity matrix is used to directly check whether a framework is infinitesimally weakly rigid, while previous work could check weak rigidity of a framework only indirectly. An infinitesimally weakly rigid framework can be uniquely determined, up to a translation and a rotation (and also a scaling when the framework does not include any edge), by its inter-neighbor distances and angles. We apply the new concepts to a three-agent formation control problem with a gradient control law, and prove instability of the control system at any incorrect equilibrium point as well as convergence to a desired target formation. Also, we propose a modified Henneberg construction, which is a technique to generate minimally rigid (or weakly rigid) graphs. Finally, we extend the concept of weak rigidity in $\mathbb{R}^{2}$ to $\mathbb{R}^{3}$.' author: - 'Seong-Ho Kwon${}^{\dagger}$, Minh Hoang Trinh${}^{\dagger}$, Koog-Hwan Oh${}^{\dagger}$, Shiyu Zhao${}^{\ddagger}$, and Hyo-Sung Ahn${}^{\dagger}$[^1] [^2]' bibliography: - 'Weak\_rigidity\_2018.bib' title: '**Infinitesimal Weak Rigidity, Formation Control of Three Agents, and Extension to $3$-dimensional Space** ' --- INTRODUCTION {#Sec:intro} ============ Rigid formation shape is an important requirement in many formation control and network localization problems. A specific or fixed formation shape may be useful for sensing agents, localizing agents, moving agents from one location to another, and moving objects. Many control methods to achieve a target formation shape have been reported in the literature [@oh2015survey; @krick2009stabilisation; @helmke2014geometrical; @sun2015rigid; @zhao2016bearing].
One of the formation control methods is [distance-constrained (distance-based)]{} formation control [@krick2009stabilisation; @helmke2014geometrical], where the target formation is achieved by controlling the inter-agent distances. Another one is [bearing-constrained (bearing-based) formation control]{} [@zhao2016bearing; @bishop2011stabilization], where the target formation is achieved by controlling the inter-agent bearings. There is also a mixed method with both distance and bearing constraints [@bishop2015distributed]. Yet another method makes use of only relative angles [@buckley2017infinitesimally], maintaining the target formation by sensing relative angle measurements. In the distance-constrained formation control problem, one approach to characterize a unique formation shape (at least locally) is the (distance) [rigidity]{} of a framework [@asimow1979rigidity]. In bearing-constrained formation control, the theory used to characterize a unique formation shape is the [bearing rigidity]{} of a framework [@zhao2016bearing; @zelazo2014rigidity]. For the mixed method with distance and bearing constraints, there is no specific rigidity theory; in [@bishop2015distributed], the authors developed a control law using inter-agent bearing and distance constraints. Recently, angle-only constrained formation control [@buckley2017infinitesimally] and a new rigidity theory with distance and subtended-angle constraints, named [weak rigidity]{} [@park2017rigidity], were introduced. In [@buckley2017infinitesimally], the authors make use of a shape-similarity matrix to preserve a formation shape using only relative angle measurements: if the null space of the shape-similarity matrix includes only trivial motions, i.e., translations, rotations and scalings, then the formation shape is preserved. This concept is similar to (distance) rigidity and bearing rigidity.
In [@park2017rigidity], a formation whose shape can be (locally) uniquely determined by inter-agent distance and subtended-angle constraints is considered to be weakly rigid even though it may be non-rigid from the viewpoint of (distance) rigidity. However, whether the formation is weakly rigid cannot be determined directly from the original framework. The method proposed in [@park2017rigidity] requires transforming the original framework into another framework with distance-only constraints; if this transformed framework is rigid, we can conclude that the original framework is weakly rigid. Thus, it is inconvenient to check weak rigidity based on the method proposed in [@park2017rigidity]. In this paper, our main contributions are summarized as follows. First, we provide the new concepts of [weak rigidity matrix]{} and [infinitesimal weak rigidity]{} in the two-dimensional space. For a given framework in $\mathbb{R}^{2}$, we propose a method to construct a corresponding weak rigidity matrix from the set of mixed distance and angle constraints. The rank of the weak rigidity matrix can be used to check infinitesimal weak rigidity of the framework: a framework defined by a set of mixed distance and angle constraints is infinitesimally weakly rigid if the null space of its weak rigidity matrix is spanned by only rigid-body translations and rotations. Moreover, if an infinitesimally weakly rigid framework is specified by only angle constraints, the null space of the weak rigidity matrix also contains scalings. As a result, the existing distance rigidity and bearing rigidity theories in the literature can be unified into the weak rigidity theory. Second, we apply the concept of infinitesimal weak rigidity to a formation control problem with three agents in the two-dimensional space. We prove that the three-agent formation at any incorrect equilibrium is unstable by investigating the eigenvalues of the Jacobian of the formation system.
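The weak rigidity matrix itself is constructed later in the paper; as a rough NumPy illustration (ours, with assumed function names) of the kind of rank/null-space test described above, here is the analogous check for the classical distance rigidity matrix, which infinitesimal weak rigidity generalizes: a framework in $\mathbb{R}^2$ is infinitesimally (distance) rigid iff its rigidity matrix has rank $2n-3$, i.e., the null space holds only the three trivial motions.

```python
import numpy as np

def distance_rigidity_matrix(p, edges):
    """Classical distance rigidity matrix in R^2: one row per edge
    (i, j), with (p_i - p_j)^T in block i and (p_j - p_i)^T in block j."""
    R = np.zeros((len(edges), 2 * p.shape[0]))
    for r, (i, j) in enumerate(edges):
        d = p[i] - p[j]
        R[r, 2 * i:2 * i + 2] = d
        R[r, 2 * j:2 * j + 2] = -d
    return R

# Triangle framework: rank 2n - 3 = 3, so the null space (dimension 3)
# consists only of rigid-body translations and rotations.
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
edges = [(0, 1), (1, 2), (0, 2)]
print(np.linalg.matrix_rank(distance_rigidity_matrix(p, edges)))       # 3

# Removing an edge leaves a nontrivial flex: rank drops below 2n - 3.
print(np.linalg.matrix_rank(distance_rigidity_matrix(p, edges[:2])))   # 2
```

The weak rigidity matrix of the paper plays the same role, with additional rows encoding the subtended-angle constraints.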
We prove that the system converges to a desired target formation from almost global initial positions. Also, we introduce a modified Henneberg construction using an angle extension. The construction is used to grow minimally rigid formations, which are useful in designing a formation control strategy [@mou2015target; @trinh2016bearing]. Finally, we extend the concept of weak rigidity [@park2017rigidity] from the two-dimensional space to the three-dimensional space. The rest of this paper is organized as follows. Section \[Sec:weakRigidity\] briefly reviews the background of weak rigidity in $\mathbb{R}^{2}$. Section \[Sec:Infinitesimally\_weakRigidity\] provides the new concepts of the weak rigidity matrix and infinitesimal weak rigidity; the relation between infinitesimal weak rigidity and the rank of the weak rigidity matrix is also established. In Section \[Sec:Formation Control Problem\], we provide the analysis of the instability of incorrect equilibria and the convergence result for a three-agent formation system. In Section \[Sec:MHenneberg\], we discuss and define the modified Henneberg construction. In Section \[Sec:weakRigidity\_3dim\], weak rigidity is extended from the two-dimensional space to the three-dimensional space. Lastly, a conclusion and summary are provided in Section \[Sec:conclusion\]. [Preliminaries and Notations]{}: The notation $\norm{\cdot}$ denotes the Euclidean norm of a vector and $\card{\mathcal{S}}$ denotes the cardinality of a set $\mathcal{S}$. Let $K_n$ denote a complete graph with $n$ vertices s.t.
$K_n = (\mathcal{V}_K,\mathcal{E}_K)$, then an undirected graph $\mathcal{G}$ is defined as $\mathcal{G} = (\mathcal{V},\mathcal{E},\mathcal{A})$, where a vertex set $\mathcal{V}=\set{1,2,...,n}$, an edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ with $m=\card{\mathcal{E}}$ and an angle set $\mathcal{A} = \set{(k,i,j) {}\theta_{ij}^{k} \text{ is assigned to } (i,k), (j,k) \in \mathcal{E}_K, \theta_{ij}^{k} \in [0,\pi]}$ with $q=\card{\mathcal{A}}$. We assume that duplicated edges between any two vertices do not exist, i.e., $(i,j) = (j,i)$ for all $i,j \in \mathcal{V}$. The angle $\theta_{ij}^{k}$ is the angle subtended by the adjacent edges $(i,k)$ and $(j,k)$. The set of neighbors of vertex $i$ is denoted as $\mathcal{N}_i = \set{j \in \mathcal{V} {}(i,j) \in \mathcal{E}}$. For a position vector ${\mathbf}{p}_i \in \mathbb{R}^{2}$, a configuration of $\mathcal{G}$ in $\mathbb{R}^{2}$ is defined as ${\mathbf}{p} = [{\mathbf}{p}_{1}^\top,...,{\mathbf}{p}_{n}^\top]^\top \in \mathbb{R}^{2n}$, and a framework is defined as $(\mathcal{G},{\mathbf}{p})$. Two frameworks $(\mathcal{G},{\mathbf}{p})$ and $(\mathcal{G},{\mathbf}{q})$ are said to be [congruent]{} if $\norm{{\mathbf}{p}_{i}-{\mathbf}{p}_{j}}=\norm{{\mathbf}{q}_{i}-{\mathbf}{q}_{j}}$ for all $i,j \in \mathcal{V}$. Also, two frameworks $(\mathcal{G},{\mathbf}{p})$ and $(\mathcal{G},{\mathbf}{q})$ are said to be [equivalent]{} if $\norm{{\mathbf}{p}_{i}-{\mathbf}{p}_{j}}=\norm{{\mathbf}{q}_{i}-{\mathbf}{q}_{j}}$ for all $(i,j) \in \mathcal{E}$. For a framework $(\mathcal{G},{\mathbf}{p})$, the relative position vector and the relative distance are defined as ${\mathbf}{z}_{ij} = {\mathbf}{p}_{i} - {\mathbf}{p}_{j}$ and $d_{ij} = \norm{{\mathbf}{z}_{ij}}$, respectively, for all $(i,j)\in \mathcal{E}$. Let Null$(\cdot)$ and rank$(\cdot)$ denote the null space and the rank of a matrix, respectively. Denote $I_N \in \mathbb{R}^{N \times N}$ as an identity matrix, and $\mathds{1} = [1, ..., 1]^\top$. 
The perpendicular operator $J \in \mathbb{R}^{2 \times 2}$ is denoted as $J \triangleq \begin{bmatrix} 0 &-1\\ 1 & 0 \end{bmatrix}.$ We assume that i) there is no ${\emph}{self-loop}$, i.e., $(i,i) \notin \mathcal{E}$ for any vertex $i \in \mathcal{V}$, ii) formations are undirected, and iii) there are no position vectors collocated at one point. Background of the Weak Rigidity Theory {#Sec:weakRigidity} ====================================== In this section, we briefly review the concepts of weak rigidity in [@park2017rigidity]. The weak rigidity theory is concerned with frameworks in $\mathbb{R}^2$ defined by distance constraints together with additional subtended-angle constraints, which are required to achieve a unique formation shape. \[Def:strongEquiv\] With $n \ge 3$, two frameworks $(\mathcal{G},{\mathbf}{p})$ and $(\mathcal{G},{\mathbf}{q})$ are said to be [strongly equivalent]{} if the following two conditions hold: - $\norm{{\mathbf}{p}_{v}-{\mathbf}{p}_{w}} = \norm{{\mathbf}{q}_{v}-{\mathbf}{q}_{w}}, \forall (v,w) \in \mathcal{E}$, - ${\theta_{ij}^{k}}_{\in(\mathcal{G},{\mathbf}{p})} = {\theta_{ij}^{k}}_{\in(\mathcal{G},{\mathbf}{q})}, \forall (k,i,j) \in \mathcal{A}$, where ${\theta_{ij}^{k}}_{\in(\mathcal{G},{\mathbf}{p})}$ and ${\theta_{ij}^{k}}_{\in(\mathcal{G},{\mathbf}{q})}$ denote the subtended angles in $(\mathcal{G},{\mathbf}{p})$ and $(\mathcal{G},{\mathbf}{q})$, respectively. \[Def:weakRigidity\] A framework $(\mathcal{G},{\mathbf}{p})$ is [weakly rigid]{} in $\mathbb{R}^{2}$ if there exists a neighborhood $\mathcal{B}_{{\mathbf}{p}} \subseteq \mathbb{R}^{2n}$ of ${\mathbf}{p}$ such that each framework $(\mathcal{G},{\mathbf}{q})$, ${\mathbf}{q} \in \mathcal{B}_{{\mathbf}{p}}$, strongly equivalent to $(\mathcal{G},{\mathbf}{p})$ is congruent to $(\mathcal{G},{\mathbf}{p})$. Two congruent frameworks are illustrated in Fig. \[Triangular\_formations\]. Fig. 
\[Formation\_3e\] is defined by three edge lengths while the one in Fig. \[Formation\_3e1a\] is defined by two edge lengths and a subtended angle with the condition $({d}_{23})^{2} = ({d}_{12})^{2} + ({d}_{13})^{2} - 2{d}_{12}{d}_{13}\cos\theta_{23}^{1}$ induced from the law of cosines. The two formations can be converted into each other through this law-of-cosines condition; that is, either three distance constraints or two distance constraints together with a subtended angle define the same triangular formation. Infinitesimal Weak Rigidity {#Sec:Infinitesimally\_weakRigidity} =========================== In this section, we introduce the weak rigidity matrix and infinitesimal weak rigidity, and provide a rank condition of the weak rigidity matrix to determine if a framework is infinitesimally weakly rigid in $\mathbb{R}^{2}$ in a straightforward way. In [@park2017rigidity], an angle $\theta_{ij}^{k}$ must be defined with two adjacent edges, i.e., $(i,k)$, $(j,k)\in \mathcal{E}$. However, with the weak rigidity matrix, the adjacent edges do not need to belong to $\mathcal{E}$. For example, we can check whether a framework with only angle constraints is infinitesimally weakly rigid by a rank condition of the weak rigidity matrix. Weak Rigidity Matrix -------------------- For any edge $(i,j) \in \mathcal{E}$ and any angle $(k,i,j) \in \mathcal{A}$, consider the associated relative position vector (edge vector) and cosine defined as ${\mathbf}{z}_{g} \triangleq {\mathbf}{z}_{ij}, \forall g\in \{ 1,..., m\}$ and $A_{h} \triangleq \cos{\theta_{h}}, \forall h\in \{ 1,..., q\}$, respectively, where $\theta_h = \theta_{ij}^{k}$ and $\cos{\theta_{ij}^{k}} = \left[\frac{\norm{{\mathbf}{z}_{ik}}^{2} + \norm{{\mathbf}{z}_{jk}}^{2} - \norm{{\mathbf}{z}_{ij}}^{2}}{2\norm{{\mathbf}{z}_{ik}}\norm{{\mathbf}{z}_{jk}}}\right]$ induced by the law of cosines. 
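As a quick numerical sanity check of the law-of-cosines relation above, the snippet below (our illustration; `third_side` is not a function from the paper) recovers the third edge length from two edge lengths and the subtended angle:

```python
import math

def third_side(d12, d13, theta1_23):
    """Law of cosines: the distance d_23 implied by d_12, d_13 and the
    angle theta_23^1 subtended at vertex 1."""
    return math.sqrt(d12**2 + d13**2 - 2.0 * d12 * d13 * math.cos(theta1_23))

# Triangle with d_12 = 3, d_13 = 4 and a right angle at vertex 1:
d23 = third_side(3.0, 4.0, math.pi / 2)
print(d23)  # ≈ 5.0 (the 3-4-5 triangle)
```

This is exactly why two distance constraints plus one subtended-angle constraint pin down the same triangle as three distance constraints.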
The [weak rigidity function]{} $F_W: \mathbb{R}^{2n} \rightarrow \mathbb{R}^{(m+q)}$ is defined as follows: $$F_{W}({\mathbf}{p}) \triangleq [ \norm{{\mathbf}{z}_{1}}^2, ... ,\norm{{\mathbf}{z}_{m}}^2, A_{1}, ... ,A_{q}]^\top \in \mathbb{R}^{(m+q)}.$$ The weak rigidity function describes the length of edges and subtended angles in the framework. The [weak rigidity matrix]{} is defined as the Jacobian of the weak rigidity function: $$\label{Weak_rigidity_matrix} R_{W}({\mathbf}{p}) \triangleq \frac{\partial F_{W}({\mathbf}{p})}{\partial {\mathbf}{p}} = \begin{bmatrix} \frac{\partial \mathcal{D}}{\partial {\mathbf}{p}}\\ \\ \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{p}} \end{bmatrix} \in \mathbb{R}^{(m+q) \times 2n},$$ where $\mathcal{D} = [\norm{{\mathbf}{z}_{1}}^2,\norm{{\mathbf}{z}_{2}}^2, ... ,\norm{{\mathbf}{z}_{m}}^2]^\top \in \mathbb{R}^{m}$ and ${\mathbf}{A} = [A_1,A_2,...,A_q]^\top \in \mathbb{R}^{q}$. Denote $\delta{\mathbf}{p}$ as a variation of the configuration ${\mathbf}{p}$. If $R_W({\mathbf}{p})\delta{\mathbf}{p}=0$, then $\delta{\mathbf}{p}$ is called an infinitesimal weak motion of $(\mathcal{G},{\mathbf}{p})$. This concept is similar to infinitesimal motions in distance-based rigidity and bearing-based rigidity. Distance preserving motions based on distance rigidity include rigid-body translations and rotations, and bearing preserving motions based on bearing rigidity include rigid-body translations and scalings. On the other hand, the infinitesimal weak motions include not only translations and rotations but also scalings. Figures \[inf\_ex01\] – \[inf\_ex05\] show that the infinitesimal weak motions include translations and rotations, and Fig. \[inf\_ex06\] shows that the motions include a scaling as well as translations and rotations. \[trivial\] An infinitesimal weak motion is called [trivial]{} if it corresponds to a translation or a rotation (or a scaling in case of $\mathcal{E} = \emptyset$, for example, see Fig. 
\[inf\_ex06\]) of the entire framework. Infinitesimal Weak Rigidity {#infinitesimal-weak-rigidity} --------------------------- \[weak\_rigidity\_trivial\] A given framework $(\mathcal{G},{\mathbf}{p})$ is [infinitesimally weakly rigid]{} in $\mathbb{R}^{2}$ if all the infinitesimal weak motions are trivial. Consider a graph $\mathcal{G'} = (\mathcal{V'},\mathcal{E'},\mathcal{A'})$ induced from $\mathcal{G}$ in such a way that: - $\mathcal{V'} = \mathcal{V}$, - $\mathcal{E'} = \\ \set{(i,j) {}(i,j) \in \mathcal{E} \lor \exists k \in \mathcal{V}\text{ s.t. } (k,i,j) \in \mathcal{A}} \\ \cup \set{(i,k) {}\exists k \in \mathcal{V}\text{ s.t. } (k,i,j) \in \mathcal{A}} \\ \cup \set{(j,k) {}\exists k \in \mathcal{V}\text{ s.t. } (k,i,j) \in \mathcal{A}}$\ (if $\mathcal{E} = \emptyset$, then $\mathcal{E'} = \emptyset$), - $\mathcal{A'} = \mathcal{A}$. For any edge $(i,j) \in \mathcal{E'}$, we consider a new associated relative position vector defined as $${\mathbf}{z'}_{s} \triangleq {\mathbf}{z'}_{ij}, \forall s\in \{ 1,..., l\}, l \geq m,$$ where ${\mathbf}{z'}_{ij} = {\mathbf}{p}_{i} - {\mathbf}{p}_{j}$ for all $(i,j)\in \mathcal{E'}$ and $l=\card{\mathcal{E'}}$. The new associated relative position vector satisfies the following condition: $${\mathbf}{z'}_{u} = {\mathbf}{z}_{u}, \forall u\in \{ 1,..., m\}.$$ Let ${\mathbf}{z}' = \big[{\mathbf}{z'}_{1}^\top, {\mathbf}{z'}_{2}^\top,...,{\mathbf}{z'}_{l}^\top \big]^\top \in \mathbb{R}^{2l}$ denote a new associated column vector composed of relative position vectors. The oriented incidence matrix $H' \in \mathbb{R}^{l \times n}$ of the new graph $\mathcal{G'}$ is the $\{0, \pm1\}$-matrix with rows indexed by edges and columns indexed by vertices as follows: $$[H']_{ui}=\begin{cases} 1 & \text{if the $u$-th edge sinks at vertex $i$}\,, \\ -1 & \text{if the $u$-th edge leaves vertex $i$}\,, \\ 0 & \text{otherwise}\,, \end{cases}$$ where $[H']_{ui}$ is an element at row $u$ and column $i$ of the matrix $H'$. 
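The oriented incidence matrix and the relation ${\mathbf}{z'}=\bar{H'}{\mathbf}{p}$ used below can be sketched numerically as follows (a minimal illustration with our own naming; `incidence` and the example graph are not from the paper):

```python
import numpy as np

def incidence(n, edges):
    """Oriented incidence matrix H': rows indexed by edges, columns by
    vertices; +1 where the u-th edge sinks, -1 where it leaves
    (edge (i, j) sinks at i, so each row of H' p gives p_i - p_j)."""
    H = np.zeros((len(edges), n))
    for u, (i, j) in enumerate(edges):
        H[u, i], H[u, j] = 1.0, -1.0
    return H

edges = [(0, 1), (0, 2)]  # edges (1,2) and (1,3) in the paper, 0-indexed here
H = incidence(3, edges)
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]).reshape(-1)  # p_1, p_2, p_3 stacked
z = np.kron(H, np.eye(2)) @ p  # z' = (H' ⊗ I_2) p
print(z)  # rows of z.reshape(-1, 2): z_12 = p_1 - p_2 and z_13 = p_1 - p_3
```

The Kronecker product with $I_2$ simply applies the same incidence pattern to both planar coordinates at once.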
Note that ${\mathbf}{z'}$ satisfies ${\mathbf}{z'}=\bar{H'}{\mathbf}{p}$ where $\bar{H'}\triangleq H'\otimes I_2$. We first prove a useful expression which will be used later in Lemma \[lem\_null\_of\_rigid matrix\]. \[partial\_deriv\] Let ${\mathbf}{z'}_a$, ${\mathbf}{z'}_b$ and ${\mathbf}{z'}_c$ denote the relative position vectors defining a cosine $A_{h}$ s.t. $A_{h} = \frac{\norm{{\mathbf}{z'}_{a}}^{2} + \norm{{\mathbf}{z'}_{b}}^{2} - \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}}$. The following equations hold: $$\begin{aligned} \frac{\partial A_h}{\partial {\mathbf}{z'}_a}{\mathbf}{z'}_{a} &=\frac{\norm{{\mathbf}{z'}_{a}}^{2} - \norm{{\mathbf}{z'}_{b}}^{2} + \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}}, \\ \frac{\partial A_h}{\partial {\mathbf}{z'}_b}{\mathbf}{z'}_{b} &=\frac{-\norm{{\mathbf}{z'}_{a}}^{2} + \norm{{\mathbf}{z'}_{b}}^{2} + \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}}, \\ \frac{\partial A_h}{\partial {\mathbf}{z'}_c}{\mathbf}{z'}_{c} &= -\frac{\norm{{\mathbf}{z'}_{c}}^{2}}{\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}},\end{aligned}$$ where $a\neq b\neq c$ and $a,b,c \in \{ 1,..., l\}$. \ Since $\cos{\theta_{ij}^{k}} = \left[\frac{\norm{{\mathbf}{z'}_{ik}}^{2} + \norm{{\mathbf}{z'}_{jk}}^{2} - \norm{{\mathbf}{z'}_{ij}}^{2}}{2\norm{{\mathbf}{z'}_{ik}}\norm{{\mathbf}{z'}_{jk}}}\right]$ and $(k,i,j) \in \mathcal{A'}$, with reference to Fig. \[Fig:triangle\_ijk\], $A_h$ can be expressed as $$\begin{aligned} A_{h} = \cos{\theta_{ij}^{k}} = \frac{\norm{{\mathbf}{z'}_{a}}^{2} + \norm{{\mathbf}{z'}_{b}}^{2} - \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}}, \nonumber \\ \forall a,b,c \in \{ 1,..., l\}, a\neq b\neq c. 
\nonumber\end{aligned}$$ As a result, the following equations are calculated as $$\begin{aligned} \frac{\partial A_h}{\partial {\mathbf}{z'}_a}{\mathbf}{z'}_{a} &= \frac{1}{4\norm{{\mathbf}{z'}_a}^2\norm{{\mathbf}{z'}_b}^2} \bigg[2{{\mathbf}{z'}_a^\top}(2\norm{{\mathbf}{z'}_a}\norm{{\mathbf}{z'}_b})- \nonumber \\ &(\norm{{\mathbf}{z'}_a}^2+\norm{{\mathbf}{z'}_b}^2-\norm{{\mathbf}{z'}_c}^2)(2\frac{{{\mathbf}{z'}_a^\top}}{\norm{{\mathbf}{z'}_a}}\norm{{\mathbf}{z'}_b})\bigg]{\mathbf}{z'}_a \nonumber \\ &=\frac{\norm{{\mathbf}{z'}_{a}}^{2} - \norm{{\mathbf}{z'}_{b}}^{2} + \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}},\nonumber \\ \frac{\partial A_h}{\partial {\mathbf}{z'}_c}{\mathbf}{z'}_{c} &= \frac{1}{4\norm{{\mathbf}{z'}_a}^2\norm{{\mathbf}{z'}_b}^2} \big[ -2{{\mathbf}{z'}_c^\top}(2\norm{{\mathbf}{z'}_a}\norm{{\mathbf}{z'}_b})\big]{\mathbf}{z'}_c \nonumber \\ &= -\frac{\norm{{\mathbf}{z'}_{c}}^{2}}{\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}}, \nonumber\end{aligned}$$ where ${{\mathbf}{z'}_a^\top}{\mathbf}{z'}_a = \norm{{\mathbf}{z'}_a}^2$ and ${{\mathbf}{z'}_c^\top}{\mathbf}{z'}_c = \norm{{\mathbf}{z'}_c}^2$. $\frac{\partial A_h}{\partial {\mathbf}{z'}_b}{\mathbf}{z'}_{b}$ can also be calculated similarly. \[Lem:linearly\_independence\] If ${\mathbf}{p} \neq 0$, the vectors in the set $L_i=\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p}, {\mathbf}{p} \}$ are linearly independent. Let $a_1, a_2 \in \mathbb{R}^{2n}$ denote the two columns of $\mathds{1}\otimes I_2$, and let $a_3 = (I_n\otimes J){\mathbf}{p} \in \mathbb{R}^{2n}$ and $a_4 = {\mathbf}{p} \in \mathbb{R}^{2n}$. Then, we consider the following equation to determine linear independence: $$k_1a_1+k_2a_2+k_3a_3+k_4a_4 = 0, \label{eq:linearly_independence}$$ where $k_1, k_2, k_3$ and $k_4$ are scalars. 
By row-reducing the coefficient matrix of equation (\[eq:linearly\_independence\]) under the assumptions that ${\mathbf}{p} \neq 0$ and that there are no position vectors collocated at one point, the matrix can be transformed to the reduced row echelon form as follows $$\begin{bmatrix} 1 &0 &0 &0 &0 &\cdots &0\\ 0 &1 &0 &0 &0 &\cdots &0\\ 0 &0 &1 &0 &0 &\cdots &0\\ 0 &0 &0 &1 &0 &\cdots &0 \end{bmatrix}^\top.$$ From the above result, we know that the solution $k_1= k_2= k_3= k_4= 0$ of equation (\[eq:linearly\_independence\]) is unique. Thus, by the definition of linear independence, we can see that the vectors in the set $\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p}, {\mathbf}{p} \}$ are linearly independent. \[lem\_null\_of\_rigid matrix\] A framework $(\mathcal{G},{\mathbf}{p})$ in $\mathbb{R}^{2}$ satisfies span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p} \} \subseteq$ Null$(R_{W}({\mathbf}{p}))$ and $\operatorname{{rank}}(R_{W}({\mathbf}{p}))\leq 2n-3$ if $\mathcal{E} \neq \emptyset$. If $\mathcal{E} = \emptyset$, then the framework $(\mathcal{G},{\mathbf}{p})$ in $\mathbb{R}^{2}$ satisfies span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p}, {\mathbf}{p} \}\subseteq$ Null$(R_{W}({\mathbf}{p}))$ and $\operatorname{{rank}}(R_{W}({\mathbf}{p}))\leq 2n-4$. If $\mathcal{E} \neq \emptyset$ (for example, from Fig. \[inf\_ex01\] to Fig. 
\[inf\_ex05\]), then the equation (\[Weak\_rigidity\_matrix\]) can be expressed as follows $$R_{W}({\mathbf}{p}) = \frac{\partial F_{W}({\mathbf}{p})}{\partial {\mathbf}{p}}= \begin{bmatrix} \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}} \frac{\partial {\mathbf}{z'}}{\partial {\mathbf}{p}}\\ \\ \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \frac{\partial {\mathbf}{z'}}{\partial {\mathbf}{p}} \end{bmatrix} = \begin{bmatrix} \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}} \bar{H'} \\ \\ \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \bar{H'} \end{bmatrix} = \begin{bmatrix} \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}}\\ \\ \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \end{bmatrix} \bar{H'},$$ where ${\mathbf}{A} = [A_1,A_2,...,A_q]^\top \in \mathbb{R}^{q}$. First, it is clear that span$\{\mathds{1}\otimes I_2\}$ $\subseteq$ Null$(\bar{H'})$ $\subseteq$ Null$(R_{W}({\mathbf}{p}))$. Second, $\bar{H'}(I_n\otimes J){\mathbf}{p}$ can be expressed as $$\begin{aligned} \bar{H'}(I_n\otimes J){\mathbf}{p} &= (H'\otimes I_2)(I_n\otimes J){\mathbf}{p} =(H'\otimes J){\mathbf}{p} \nonumber \\ &= (I_l H')\otimes(J I_2){\mathbf}{p} = (I_l \otimes J)(H'\otimes I_2){\mathbf}{p} \nonumber \\ &=(I_l \otimes J){\mathbf}{z'}= \begin{bmatrix} J{\mathbf}{z'}_1 \\ \vdots\\ J{\mathbf}{z'}_l \end{bmatrix}. \nonumber\end{aligned}$$ Let $A_h$ be an element of vector ${\mathbf}{A}$ for $h\in \{ 1,..., q\}$ as mentioned in Lemma \[partial\_deriv\]. Then, the elements of $\frac{\partial A_h}{\partial {\mathbf}{z'}}$ are zero except for $\frac{\partial A_h}{\partial {\mathbf}{z'}_a}$, $\frac{\partial A_h}{\partial {\mathbf}{z'}_b}$ and $\frac{\partial A_h}{\partial {\mathbf}{z'}_c}$, where $a,b,c \in \{ 1,..., l\}$. 
With reference to the calculation of $\frac{\partial A_h}{\partial {\mathbf}{z'}_a}$ in Lemma \[partial\_deriv\], $\frac{\partial A_h}{\partial {\mathbf}{z'}}\bar{H'}(I_n\otimes J){\mathbf}{p}$ is calculated as $$\begin{aligned} \frac{\partial A_h}{\partial {\mathbf}{z'}}\bar{H'}(I_n\otimes J){\mathbf}{p}=\frac{\partial A_h}{\partial {\mathbf}{z'}}(I_l \otimes J){\mathbf}{z'} = \frac{\partial A_h}{\partial {\mathbf}{z'}}\begin{bmatrix} J{\mathbf}{z'}_1 \\ \vdots\\ J{\mathbf}{z'}_l \end{bmatrix} \nonumber \\ =\frac{\partial A_h}{\partial {\mathbf}{z'}_a}J{\mathbf}{z'}_{a}+\frac{\partial A_h}{\partial {\mathbf}{z'}_b}J{\mathbf}{z'}_{b}+\frac{\partial A_h}{\partial {\mathbf}{z'}_c}J{\mathbf}{z'}_{c} = 0, \nonumber\end{aligned}$$ where ${{\mathbf}{z'}_a}^\top J {\mathbf}{z'}_a = 0$, ${{\mathbf}{z'}_b}^\top J {\mathbf}{z'}_b = 0$ and ${{\mathbf}{z'}_c}^\top J {\mathbf}{z'}_c = 0$. Thus, $\frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}}\bar{H'}(I_n\otimes J){\mathbf}{p} = 0$. Also, the following equation is calculated as $$\begin{aligned} \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}}\bar{H'}(I_n\otimes J){\mathbf}{p} &= \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}}(I_l \otimes J){\mathbf}{z'} = \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}}\begin{bmatrix} J{\mathbf}{z'}_1 \\ \vdots\\ J{\mathbf}{z'}_l \end{bmatrix} \nonumber \\ &= \begin{bmatrix} 2D^\top & 0_{m,(2l-2m)} \end{bmatrix} \begin{bmatrix} J{\mathbf}{z'}_1 \\ \vdots\\ J{\mathbf}{z'}_l \end{bmatrix} =0, \nonumber\end{aligned}$$ where $D=$diag$({\mathbf}{z'}_1,...,{\mathbf}{z'}_m) \in \mathbb{R}^{2m \times m}$ and $0_{m,(2l-2m)}$ is a $m \times (2l-2m)$ zero matrix. 
Using the above results, the following equation can be calculated as $$\begin{aligned} R_{W}({\mathbf}{p})(I_n\otimes J){\mathbf}{p} &= \begin{bmatrix} \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}}\\ \\ \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \end{bmatrix} \bar{H'} (I_n\otimes J){\mathbf}{p} \nonumber \\ &=\begin{bmatrix} \frac{\partial \mathcal{D}}{\partial {\mathbf}{z'}} \\ \\ \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \end{bmatrix} \begin{bmatrix} J{\mathbf}{z'}_1 \\ \vdots\\ J{\mathbf}{z'}_l \end{bmatrix} = 0. \nonumber \end{aligned}$$ Therefore, we have span$\{(I_n\otimes J){\mathbf}{p} \} \subseteq$ Null$(R_{W}({\mathbf}{p}))$. Also, with span$\{\mathds{1}\otimes I_2\}\subseteq$ Null$(R_{W}({\mathbf}{p}))$ and Lemma \[Lem:linearly\_independence\], span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p} \} \subseteq$ Null$(R_{W}({\mathbf}{p}))$ holds, and the inequality $\operatorname{{rank}}(R_{W}({\mathbf}{p}))\leq 2n-3$ follows directly. However, if $\mathcal{E} = \emptyset$ (for example, Fig. \[inf\_ex06\]), then the equation (\[Weak\_rigidity\_matrix\]) can be expressed as $$R_{W}({\mathbf}{p}) = \frac{\partial F_{W}({\mathbf}{p})}{\partial {\mathbf}{p}}= \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \bar{H'}.$$ Then, $R_{W}({\mathbf}{p}){\mathbf}{p} = \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}} \bar{H'}{\mathbf}{p} = \frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}}{\mathbf}{z'}$. The elements of $\frac{\partial A_h}{\partial {\mathbf}{z'}}$ are zero except for $\frac{\partial A_h}{\partial {\mathbf}{z'}_a}$, $\frac{\partial A_h}{\partial {\mathbf}{z'}_b}$ and $\frac{\partial A_h}{\partial {\mathbf}{z'}_c}$. 
With Lemma \[partial\_deriv\], $\frac{\partial A_h}{\partial {\mathbf}{z'}}{\mathbf}{z'}$ is calculated as follows $$\begin{aligned} \frac{\partial A_h}{\partial {\mathbf}{z'}}{\mathbf}{z'} &= \frac{\partial A_h}{\partial {\mathbf}{z'}}\begin{bmatrix} {\mathbf}{z'}_1 \\ \vdots\\ {\mathbf}{z'}_l \end{bmatrix} = \frac{\partial A_h}{\partial {\mathbf}{z'}_a}{\mathbf}{z'}_{a}+\frac{\partial A_h}{\partial {\mathbf}{z'}_b}{\mathbf}{z'}_{b}+\frac{\partial A_h}{\partial {\mathbf}{z'}_c}{\mathbf}{z'}_{c} \nonumber \\ &= \frac{\norm{{\mathbf}{z'}_{a}}^{2} - \norm{{\mathbf}{z'}_{b}}^{2} + \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}} + \nonumber \\ &\frac{-\norm{{\mathbf}{z'}_{a}}^{2} + \norm{{\mathbf}{z'}_{b}}^{2} + \norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}} + \frac{-2\norm{{\mathbf}{z'}_{c}}^{2}}{2\norm{{\mathbf}{z'}_{a}}\norm{{\mathbf}{z'}_{b}}} = 0. \nonumber\end{aligned}$$ Therefore, $R_{W}({\mathbf}{p}){\mathbf}{p}=\frac{\partial {\mathbf}{A}}{\partial {\mathbf}{z'}}{\mathbf}{z'}=0$ and span$\{{\mathbf}{p}\} \subseteq$ Null$(R_{W}({\mathbf}{p}))$. Also, we can prove that span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p} \} \subseteq$ Null$(R_{W}({\mathbf}{p}))$ in the same way as in the case of $\mathcal{E} \neq \emptyset$. With Lemma \[Lem:linearly\_independence\], the inequality $\operatorname{{rank}}(R_{W}({\mathbf}{p}))\leq 2n-4$ follows directly from span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p}, {\mathbf}{p} \} \subseteq$ Null$(R_{W}({\mathbf}{p}))$. The next result gives us the necessary and sufficient condition for infinitesimal weak rigidity of a framework. \[Thm:Inf\_Rank1\] A framework $(\mathcal{G},{\mathbf}{p})$ with $n \ge 3$ and $\mathcal{E} \neq \emptyset$ is infinitesimally weakly rigid in $\mathbb{R}^{2}$ if and only if the weak rigidity matrix $R_{W}({\mathbf}{p})$ has rank $2n - 3$. 
Lemma \[lem\_null\_of\_rigid matrix\] shows span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p} \} \subseteq$ Null$(R_{W}({\mathbf}{p}))$. Observe that $\mathds{1}\otimes I_2$ and $(I_n\otimes J){\mathbf}{p}$ correspond to a rigid-body translation and a rotation of the framework, respectively, with reference to [@zhao2016bearing; @sun2017distributed]. Therefore, the theorem directly follows from Definition \[weak\_rigidity\_trivial\]. However, in case of $\mathcal{E} = \emptyset$, the condition span$\{\mathds{1}\otimes I_2, (I_n\otimes J){\mathbf}{p}, {\mathbf}{p} \}\subseteq$ Null$(R_{W}({\mathbf}{p}))$ is satisfied as proved in Lemma \[lem\_null\_of\_rigid matrix\]. Observe that $\mathds{1}\otimes I_2$, $(I_n\otimes J){\mathbf}{p}$ and ${\mathbf}{p}$ correspond to a rigid-body translation, a rotation and a scaling of the framework, respectively, with reference to [@zhao2016bearing; @sun2017distributed]. Therefore, when $\mathcal{E} = \emptyset$, the following theorem follows from Definition \[weak\_rigidity\_trivial\] directly. \[Thm:Inf\_Rank2\] A framework $(\mathcal{G},{\mathbf}{p})$ with $n \ge 3$ and $\mathcal{E} = \emptyset$ is infinitesimally weakly rigid in $\mathbb{R}^{2}$ if and only if the weak rigidity matrix $R_{W}({\mathbf}{p})$ has rank $2n - 4$. The Formation Control Problem on Three-Agent Formations {#Sec:Formation Control Problem} ======================================================= Let $t \in [0,\infty)$ be time. We assume that the motion of an agent $i$ is governed by a single integrator, i.e., $$\frac{d}{dt}{\mathbf}{p}_i=\dot{{\mathbf}{p}}_i=u_i,$$ where $u_i$ is a control input. 
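The rank condition of Theorem \[Thm:Inf\_Rank1\] can be verified numerically for a three-agent framework with edge set $\{(1,2),(1,3)\}$ and angle set $\{(1,2,3)\}$, which is the setting studied below. The sketch builds the weak rigidity matrix as a finite-difference Jacobian of the weak rigidity function; the helper names and the test configuration are our own assumptions:

```python
import numpy as np

def F_W(p):
    """Weak rigidity function for n = 3 with E = {(1,2), (1,3)} and
    A = {(1,2,3)}: two squared edge lengths and one cosine."""
    q = p.reshape(3, 2)
    z12, z13 = q[0] - q[1], q[0] - q[2]
    cos = z12 @ z13 / (np.linalg.norm(z12) * np.linalg.norm(z13))
    return np.array([z12 @ z12, z13 @ z13, cos])

def jacobian(f, p, h=1e-6):
    """Central finite-difference Jacobian of f at p."""
    return np.column_stack([(f(p + h * d) - f(p - h * d)) / (2 * h)
                            for d in np.eye(p.size)])

p = np.array([0.0, 0.0, 2.0, 0.0, 1.0, 1.5])  # a generic (non-collinear) triangle
R_W = jacobian(F_W, p)                         # 3 x 6 weak rigidity matrix
J = np.array([[0.0, -1.0], [1.0, 0.0]])
rot = np.kron(np.eye(3), J) @ p                # rotation direction (I_n ⊗ J) p
print(np.linalg.matrix_rank(R_W, tol=1e-6))    # 3 = 2n - 3: infinitesimally weakly rigid
print(np.abs(R_W @ rot).max())                 # ~0: the rotation lies in the null space
```

The rotation check mirrors Lemma \[lem\_null\_of\_rigid matrix\]: translations and rotations span the whole null space exactly when the rank is $2n-3$.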
Define the following two column vectors of squared distances and cosines $$\begin{aligned} d_c({\mathbf}{p}) &= [\ldots, d_{ij}^2, \ldots]^\top_{(i,j) \in \mathcal{E}}, \\ c_c({\mathbf}{p}) &= [\ldots, \cos\theta_{ij}^k, \ldots]^\top_{(k,i,j) \in \mathcal{A}}.\end{aligned}$$ Similarly, $d_c^*$ and $c_c^*$ are defined as vectors of desired squared distance constraints and cosine constraints, respectively, and both of them are constant. Then, an error vector can be defined as follows $${\mathbf}{e}({\mathbf}{p})=[d_c({\mathbf}{p})^\top c_c({\mathbf}{p})^\top]^\top - [d_c^{*\top} c_c^{*\top}]^\top.$$ If either $\mathcal{E}=\emptyset$ or $\mathcal{A}=\emptyset$, then the error vector is ${\mathbf}{e}({\mathbf}{p})=d_c({\mathbf}{p})-d_c^*$ or ${\mathbf}{e}({\mathbf}{p})=c_c({\mathbf}{p})-c_c^*$, respectively. We consider the following formation control problem. \[problem1\] The weakly rigid formation control problem is to design a control input $u_i$, $\forall i \in \mathcal{V}$, such that $\mathbf{e} \to 0$ as $t \to \infty$. Since we only consider a three-agent formation problem with two distance constraints and one angle constraint, the error vector is written as ${\mathbf}{e}({\mathbf}{p})=[e_{12} \quad e_{13} \quad e^1_{23}]^\top$, where $e_{ij}=d_{ij}^2-d_{ij}^{*2}$ and $e^k_{ij}=\cos\theta^k_{ij}-\cos(\theta^k_{ij})^*$. Equations of motion ------------------- The gradient-descent law [@krick2009stabilisation; @bishop2015distributed; @park2014stability] is employed to make a formation control system stable. 
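A gradient-descent law of the form $\dot{{\mathbf}{p}} = -(\nabla{\mathbf}{e})^\top{\mathbf}{e}$ can be sketched in simulation for the three-agent case (a minimal forward-Euler illustration; the target values, step size, and helper names are our own assumptions, not from the paper):

```python
import numpy as np

# Targets d_12*, d_13* and cos(theta_23^1)* (illustrative values only).
d12s, d13s, coss = 2.0, 2.0, np.cos(np.pi / 3)  # an equilateral target triangle

def err(p):
    """Error vector e(p) = [e_12, e_13, e_23^1]^T."""
    q = p.reshape(3, 2)
    z12, z13 = q[0] - q[1], q[0] - q[2]
    n12, n13 = np.linalg.norm(z12), np.linalg.norm(z13)
    return np.array([n12**2 - d12s**2, n13**2 - d13s**2,
                     z12 @ z13 / (n12 * n13) - coss])

def jac(p, h=1e-6):
    """Finite-difference Jacobian of err (the weak rigidity matrix up to FD error)."""
    return np.column_stack([(err(p + h * d) - err(p - h * d)) / (2 * h)
                            for d in np.eye(p.size)])

p = np.array([0.0, 0.0, 1.5, 0.4, 0.3, 1.2])  # generic non-collinear start
for _ in range(20000):
    p = p - 0.01 * jac(p).T @ err(p)          # Euler step of p_dot = -R_W^T e
print(np.linalg.norm(err(p)))                  # near zero: target shape reached
```

Starting from a non-collinear configuration avoids the incorrect (collinear) equilibria analyzed below, so the error norm decays toward zero.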
First, we consider the control law defined as $$\dot{{\mathbf}{p}}=u \triangleq -(\nabla{\mathbf}{e}({\mathbf}{p}))^\top{\mathbf}{e}({\mathbf}{p}).$$ The control law can be written as $$\begin{aligned} \dot{{\mathbf}{p}}=u = -(\nabla{\mathbf}{e})^\top{\mathbf}{e} &= -\frac{\partial}{\partial {\mathbf}{p}}\left(\begin{bmatrix} d_c({\mathbf}{p})\\ c_c({\mathbf}{p}) \end{bmatrix} - \begin{bmatrix} d_c^*\\ c_c^* \end{bmatrix}\right)^\top {\mathbf}{e}({\mathbf}{p}), \nonumber \\ &= -\frac{\partial}{\partial {\mathbf}{p}}\left(\begin{bmatrix} d_c({\mathbf}{p})\\ c_c({\mathbf}{p}) \end{bmatrix}\right)^\top {\mathbf}{e}({\mathbf}{p}), \\ &= -R_{W}({\mathbf}{p})^\top {\mathbf}{e}({\mathbf}{p}) \label{control_law01}.\end{aligned}$$ \[controller\] In the case of the three-agent formation, the equation (\[control\_law01\]) can be again written as $$\begin{aligned} \dot{{\mathbf}{p}}=u &= -R_{W}({\mathbf}{p})^\top {\mathbf}{e}({\mathbf}{p}) \nonumber \\ &= -(E({\mathbf}{p})\otimes I_2){\mathbf}{p}\end{aligned}$$ where $R_{W}({\mathbf}{p})$ is defined by $$R_{W}({\mathbf}{p})=\begin{bmatrix} 2{\mathbf}{z}_{12}^\top & -2{\mathbf}{z}_{12}^\top & 0 \\ 2{\mathbf}{z}_{13}^\top & 0 &-2{\mathbf}{z}_{13}^\top \\ \alpha & \beta & \gamma \end{bmatrix},$$ $$\alpha=\frac{\partial}{\partial {\mathbf}{p}_1}\cos\theta^1_{23}, \; \beta=\frac{\partial}{\partial {\mathbf}{p}_2}\cos\theta^1_{23}, \; \gamma=\frac{\partial}{\partial {\mathbf}{p}_3}\cos\theta^1_{23},$$ $\alpha,\beta,\gamma \in \mathbb{R}^{1 \times 2}$, and $E({\mathbf}{p})=$ $$\small \begin{bmatrix} 2e_{12}+2e_{13}+\alpha_{{\mathbf}{p}_1}e_{23}^1 & -2e_{12}+\alpha_{{\mathbf}{p}_2}e_{23}^1 & -2e_{13}+\alpha_{{\mathbf}{p}_3}e_{23}^1\\ -2e_{12}+\beta_{{\mathbf}{p}_1}e_{23}^1 & 2e_{12}+\beta_{{\mathbf}{p}_2}e_{23}^1 & \beta_{{\mathbf}{p}_3}e_{23}^1 \\ -2e_{13}+\gamma_{{\mathbf}{p}_1}e_{23}^1 & \gamma_{{\mathbf}{p}_2}e_{23}^1 & 2e_{13}+\gamma_{{\mathbf}{p}_3}e_{23}^1 \end{bmatrix}.$$ In the matrix $E({\mathbf}{p})$, 
$\alpha_{{\mathbf}{p}_1}$, $\alpha_{{\mathbf}{p}_2}$ and $\alpha_{{\mathbf}{p}_3}$ are the coefficients of ${\mathbf}{p}_1$, ${\mathbf}{p}_2$ and ${\mathbf}{p}_3$ in $\alpha$, respectively. Similarly, $\beta_{{\mathbf}{p}_1}$, $\beta_{{\mathbf}{p}_2}$, $\beta_{{\mathbf}{p}_3}$, $\gamma_{{\mathbf}{p}_1}$, $\gamma_{{\mathbf}{p}_2}$ and $\gamma_{{\mathbf}{p}_3}$ are defined in the same manner. Also, the equalities $\alpha_{{\mathbf}{p}_2}=\beta_{{\mathbf}{p}_1}$, $\alpha_{{\mathbf}{p}_3}=\gamma_{{\mathbf}{p}_1}$ and $\beta_{{\mathbf}{p}_3}=\gamma_{{\mathbf}{p}_2}$ hold, i.e., the matrix $E({\mathbf}{p})$ is symmetric. We define a desired equilibrium set and an incorrect equilibrium set as $$\begin{aligned} \mathcal{P}^* &= \set{{\mathbf}{p} \in \mathbb{R}^{2n} {}{\mathbf}{e}=0}, \\ \mathcal{P}_i &= \set{{\mathbf}{p} \in \mathbb{R}^{2n} {}R_{W}^\top{\mathbf}{e}=0, {\mathbf}{e}\neq0},\end{aligned}$$ respectively. The first set $\mathcal{P}^*$ corresponds to a desired target formation, while the second set $\mathcal{P}_i$ does not correspond to a desired target formation but makes the right-hand side of equation (\[control\_law01\]) zero. Together, the two sets constitute the set of all equilibria. Analysis of the incorrect equilibrium points -------------------------------------------- \[incorrect\_collinear\] In the case of the three-agent formation, incorrect equilibria take place only when the three agents are collinear. 
The equation (\[control\_law01\]) can be written as $$\begin{aligned} \dot{{\mathbf}{p}}_1 &= -2{\mathbf}{z}_{12}e_{12}-2{\mathbf}{z}_{13}e_{13}-\alpha^\top e^1_{23} \label{collinear_01} \\ \dot{{\mathbf}{p}}_2 &= 2{\mathbf}{z}_{12}e_{12}-\beta^\top e^1_{23} \label{collinear_02} \\ \dot{{\mathbf}{p}}_3 &= 2{\mathbf}{z}_{13}e_{13}-\gamma^\top e^1_{23} \label{collinear_03}\end{aligned}$$ In the incorrect equilibrium set $\mathcal{P}_i$, the equation (\[collinear\_03\]) is calculated as $${\mathbf}{z}_{12}=\left(\frac{\norm{{\mathbf}{z}_{12}}}{\norm{{\mathbf}{z}_{13}}}\cos\theta^1_{23}-2\norm{{\mathbf}{z}_{12}}\norm{{\mathbf}{z}_{13}}\frac{e_{13}}{e^1_{23}} \right) {\mathbf}{z}_{13} \mid_{{\mathbf}{p} \in \mathcal{P}_i} \label{Eq:collinear_04}$$ It follows from the equation (\[Eq:collinear\_04\]) that ${\mathbf}{p}_1$, ${\mathbf}{p}_2$ and ${\mathbf}{p}_3$ must be collinear. The equations (\[collinear\_01\]) and (\[collinear\_02\]) give similar results. Recall that we assumed there are no position vectors overlapping each other, because the cosine cannot be defined if two agents occupy the same point. Thus, $e^1_{23}$ cannot be equal to zero in the incorrect equilibrium set $\mathcal{P}_i$, and, regardless of the values of $e_{12}$ and $e_{13}$, the three agents must be collinear. The formation shape of the three agents falls into one of three cases as depicted in Fig. \[Fig:formation\_forms\]. Note that the stability of an equilibrium point is independent of a rigid-body translation, a rotation and a scaling of a framework, because only relative distances and subtended angles matter. Therefore, without loss of generality, we suppose that the three agents are on the x-axis to analyze the stability at the incorrect equilibria. Also, it is observed that $\frac{\partial}{\partial p_i} \cos\theta_{23}^1 = -\sin\theta_{23}^1 \frac{\partial \theta_{23}^1}{\partial p_i}$. 
Thus, if the three agents are in a collinear configuration, there holds $\theta_{23}^1 = 0$ or $\pi$, which implies that the values of $\alpha$, $\beta$, and $\gamma$ calculated at an incorrect equilibrium are zero. To analyze the stability at the incorrect equilibria, we linearize the system (\[control\_law01\]). The negative Jacobian $J({\mathbf}{p})$ of the system (\[control\_law01\]) with respect to ${\mathbf}{p}$ is given by $$\begin{aligned} J({\mathbf}{p})=-\frac{\partial}{\partial {\mathbf}{p}}\dot{{\mathbf}{p}}=R_{W}({\mathbf}{p})^\top R_{W}({\mathbf}{p})+E({\mathbf}{p})\otimes I_2 \nonumber \\ + \sum_{i=1}^3(I_3\otimes{\mathbf}{p}_i) \frac{\partial}{\partial {\mathbf}{p}} \begin{bmatrix} \alpha_{{\mathbf}{p}_i}\\ \beta_{{\mathbf}{p}_i}\\ \gamma_{{\mathbf}{p}_i} \end{bmatrix} e^1_{23}. \label{negative_jaco}\end{aligned}$$ If $J({\mathbf}{p})$ has a negative eigenvalue at the incorrect equilibrium point, then the system at the incorrect equilibrium is unstable. We also use a permutation matrix $T$ which reorders the columns of a matrix such that $$\begin{aligned} R_WT&=[R_x \,\,\, R_y]=\bar{R}_W, \nonumber \\ P_1T&=[P_{1x} \,\,\, P_{1y}]=\bar{P}_1, \nonumber \\ C_1T&=[C_{1x} \,\,\, C_{1y}]=\bar{C}_1, \nonumber \end{aligned}$$ where $R_i \in \mathbb{R}^{3\times3}$, $P_{1i} \in \mathbb{R}^{3\times3}$ and $C_{1i} \in \mathbb{R}^{3\times3}$ are matrices whose columns are the columns of $R_W$, $P_1$ and $C_1$ associated with coordinate $i$, respectively, and $P_1\triangleq(I_3\otimes {\mathbf}{p}_1^\top)$, $C_1 \triangleq \frac{\partial}{\partial {\mathbf}{p}}[\alpha_{{\mathbf}{p}_1} \,\, \beta_{{\mathbf}{p}_1} \,\, \gamma_{{\mathbf}{p}_1}]^\top$. Similarly, $\bar{P}_2$, $\bar{P}_3$, $\bar{C}_2$ and $\bar{C}_3$ are defined in the same way. \[incorrect\_negative\_eigen\] Let ${\mathbf}{p}_*$ be in the incorrect equilibrium set $\mathcal{P}_i$. Then, $E({\mathbf}{p}_*)$ has at least one negative eigenvalue. 
Consider a configuration $\bar{{\mathbf}{p}}$. Then the following equation holds: $$\bar{{\mathbf}{p}}^\top[E({\mathbf}{p}_*)\otimes I_2]\bar{{\mathbf}{p}} = \sum_{(i,j)\in\mathcal{E}}{\mathbf}{e}_{ij}({\mathbf}{p}_*)\norm{\bar{{\mathbf}{p}}_i-\bar{{\mathbf}{p}}_j}^2,$$ where the parts involving ${\mathbf}{e}^1_{23}$ vanish since $\alpha|_{p=p_*}=0$, $\beta|_{p=p_*}=0$, and $\gamma|_{p=p_*}=0$. The remainder of this proof is similar to the proof of Lemma 1 in [@park2014stability]. The permuted matrix $\bar{J}({\mathbf}{p})$ is given by $$\begin{aligned} \bar{J}({\mathbf}{p}) &= T^\top J({\mathbf}{p}) T \nonumber \\ &= \bar{R}_W^\top \bar{R}_W + I_2\otimes E({\mathbf}{p})+ \sum_{i=1}^3(\bar{P}_i^\top \bar{C}_i)e^1_{23} \nonumber \\ &= \begin{bmatrix} \bar{J}_{11} & \bar{J}_{12} \\ \bar{J}_{21} & \bar{J}_{22} \end{bmatrix}, \nonumber\end{aligned}$$ where\ $\bar{J}_{11} = R_x^\top R_x+E({\mathbf}{p})+\sum_{i=1}^3P_{ix}C_{ix}e^1_{23}$,\ $\bar{J}_{12} = R_x^\top R_y+\sum_{i=1}^3P_{ix}C_{iy}e^1_{23}$,\ $\bar{J}_{21} = R_y^\top R_x+\sum_{i=1}^3P_{iy}C_{ix}e^1_{23}$,\ $\bar{J}_{22} = R_y^\top R_y+E({\mathbf}{p})+\sum_{i=1}^3P_{iy}C_{iy}e^1_{23}$.\ \[Thm:incorrect\_unstable\] The system (\[controller\]) at any incorrect equilibrium point ${\mathbf}{p}_*$ is unstable. From Lemma \[incorrect\_collinear\], the three agents in the incorrect equilibrium set $\mathcal{P}_i$ are collinear. The stability is also independent of a rigid-body translation and rotation of the formation. Therefore, assuming that the formation lies on the x-axis, the permuted matrix $\bar{J}({\mathbf}{p}_*)$ is given by $$\bar{J}({\mathbf}{p}_*) = \begin{bmatrix} R_x^\top R_x+E({\mathbf}{p_*})+\sum_{i=1}^3P_{ix}C_{ix}e^1_{23} & 0 \\ 0 & E({\mathbf}{p_*}) \end{bmatrix}.$$ From Lemma \[incorrect\_negative\_eigen\], we know that $E({\mathbf}{p}_*)$ has at least one negative eigenvalue, and hence so does the matrix $\bar{J}({\mathbf}{p}_*)$. 
Since the eigenvalues of $\bar{J}({\mathbf}{p_*})$ and $J({\mathbf}{p_*})$ are the same, $J({\mathbf}{p_*})$ also has at least one negative eigenvalue. Thus, the system (\[controller\]) at any incorrect equilibrium point ${\mathbf}{p}_*$ is unstable. \[Lem:no\_apporach\] Let ${\mathbf}{p}(0)$ denote an initial position, and let $Z$ and $\mathcal{C}$ be defined as $Z = [{\mathbf}{z}_{12} \quad {\mathbf}{z}_{13}] \in \mathbb{R}^{2 \times 2}$ and $\mathcal{C} = \set{{\mathbf}{p} \in \mathbb{R}^{2n} {}\det Z = 0}$, respectively. If ${\mathbf}{p}(0)$ is not in $\mathcal{C}$, then ${\mathbf}{p}(t)$ does not approach $\mathcal{P}_i$ for any time $t \geq t_0$. First, ${\mathbf}{z}_{12} - {\mathbf}{z}_{13} + {\mathbf}{z}_{23} = 0$, and it follows that $\det[{\mathbf}{z}_{12} \quad {\mathbf}{z}_{13}]=\det[{\mathbf}{z}_{12} \quad {\mathbf}{z}_{23}]$, which implies that $\mathcal{P}_i \subset \mathcal{C}$ from [@cao2007controlling]. We have the following derivative: $\frac{d}{dt} \det Z =\frac{d}{dt}\det[{\mathbf}{z}_{12} \quad {\mathbf}{z}_{13}]=- \sigma \det Z$, where $\sigma(t)=4{\mathbf}{e}_{12}+4{\mathbf}{e}_{13}-\cos\theta_{23}^1(\frac{1}{\norm{z_{12}}^2}+\frac{1}{\norm{z_{13}}^2}){\mathbf}{e}_{23}^1$. Thus, $\det Z(t) = \det Z(t_0)\exp(-\int_{t_0}^{t}\sigma(\tau)d\tau)$, so $\det Z(t_0) = 0$ implies $\det Z(t) = 0$ for all $t \geq t_0$, and $\det Z(t_0) \neq 0$ implies $\det Z(t) \neq 0$ for all $t \geq t_0$. The remainder of this proof is similar to the proof of Lemma 5 in [@park2013control]. \[Thm:desired\_eq\_set\_stable\] If ${\mathbf}{p}(0)$ is in neither $\mathcal{C}$ nor $\mathcal{P}_i$, then ${\mathbf}{p}$ exponentially converges to a point in the desired equilibrium set $\mathcal{P}^*$. We define a Lyapunov candidate function as $ V({\mathbf}{e})=\frac{1}{2}{\mathbf}{e}^\top{\mathbf}{e}$. Notice that $V({\mathbf}{e}) \geq 0 \,\, \text{with} \,\, V({\mathbf}{e})=0 \,\, \text{iff} \,\, {\mathbf}{e} = 0, $ and $V$ is radially unbounded. The error dynamics can be written as $$\begin{aligned} \dot{{\mathbf}{e}} &= R_W({\mathbf}{p})\dot{{\mathbf}{p}} = -R_W({\mathbf}{p}) R_W({\mathbf}{p})^\top {\mathbf}{e}. 
\nonumber\end{aligned}$$ Then, the derivative of $V({\mathbf}{e})$ along a trajectory of $\mathbf{e}$ is calculated as $$\begin{aligned} \dot{V} = {\mathbf}{e}^\top \dot{{\mathbf}{e}} =-{\mathbf}{e}^\top R_W R_W^\top {\mathbf}{e} = - \norm{R_W^\top {\mathbf}{e}}^2. \label{eq:dot_potential_fn}\end{aligned}$$ We have $\dot{V} \leq 0$, and $\dot{V}$ is equal to zero iff $R_{W}^\top{\mathbf}{e}=0$. From Theorem \[Thm:incorrect\_unstable\] and Lemma \[Lem:no\_apporach\], and the assumption that ${\mathbf}{p}(0) \notin \mathcal{P}_i$, it follows that ${\mathbf}{e} \to 0$ asymptotically. Moreover, it follows from ${\mathbf}{p}(0) \notin \mathcal{P}_i$ that the initial positions are not collinear. Thus, the formation is weakly rigid, and the rigidity matrix associated with the graph ${K}_3$ has full row rank from Corollary 1 of [@park2017rigidity]. It follows that the formation has only two distance preserving motions, i.e., a translation and a rotation. Also, the infinitesimal weak motions in the case of $\mathcal{E}\neq 0$ correspond to the distance preserving motions with respect to the same formation. In this regard, the two matrices have the same null space, and thus $R_{W}({\mathbf}{p})$ also has full row rank for all ${\mathbf}{p} \notin \mathcal{P}_i$. It follows from ${\mathbf}{p}(0) \notin \mathcal{P}_i$ and Lemma \[Lem:no\_apporach\] that $R_W R_W^\top$ is positive definite, $\forall t \geq 0$. Hence, along a trajectory of ${\mathbf}{e}$, the equation (\[eq:dot\_potential\_fn\]) satisfies $$\begin{aligned} \dot{V} \leq -\lambda_{\text{min}}(R_W R_W^\top) \norm{{\mathbf}{e}}^2, \nonumber\end{aligned}$$ where $\lambda_{\text{min}}$ denotes the minimum eigenvalue of $R_W R_W^\top$ along this trajectory. 
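This differential inequality gives $\dot{V} \leq -2\lambda_{\text{min}} V$, hence $V(t) \leq V(0)e^{-2\lambda_{\text{min}} t}$ and $\norm{{\mathbf}{e}(t)} \leq \norm{{\mathbf}{e}(0)}e^{-\lambda_{\text{min}} t}$. A minimal numerical sketch of this bound, with a fixed full-row-rank stand-in for $R_W$ (in the actual system $R_W$ varies along the trajectory):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal((3, 6))    # stand-in weak rigidity matrix, full row rank
A = R @ R.T                        # error dynamics e_dot = -A e, A positive definite

lam, U = np.linalg.eigh(A)         # eigendecomposition: A = U diag(lam) U^T
lam_min = lam[0]

def e_of_t(e0, t):
    # Exact solution e(t) = exp(-A t) e0 computed via the eigendecomposition.
    return U @ (np.exp(-lam * t) * (U.T @ e0))

e0 = rng.standard_normal(3)
bound_ok = all(
    np.linalg.norm(e_of_t(e0, t)) <= np.linalg.norm(e0) * np.exp(-lam_min * t) + 1e-12
    for t in np.linspace(0.0, 5.0, 21)
)
```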
Thus, ${\mathbf}{e} \to 0$ exponentially fast, which in turn implies that ${\mathbf}{p} \to {\mathbf}{p}^*$ for all initial positions outside the set $\mathcal{C}$, where ${\mathbf}{p}^*$ is a point in the desired equilibrium set $\mathcal{P}^*$. Since this result holds for every ${\mathbf}{p}(0) \notin \mathcal{C}$, we conclude that the formation system almost globally asymptotically converges to a desired configuration in $\mathcal{P}^*$. Simulation ---------- Consider a three-agent system with two distance constraints and one angle constraint as depicted in Fig. \[Formation\_3e1a\]. For the simulation, we set the desired squared relative distances and subtended angle as $d_{12}^{*2}=8$, $d_{13}^{*2}=9$ and $(\theta^{1}_{23})^* = 40^\circ$, and set the initial conditions as ${\mathbf}{p}_1(0)=[-3 \; 0]^\top$, ${\mathbf}{p}_2(0)=[1 \; 1]^\top$ and ${\mathbf}{p}_3(0)=[-1 \; -3]^\top$. As the results presented in Fig. \[Fig:simul\_result\] show, the squared distance errors and the cosine error converge to 0 over time. Modified Henneberg Construction {#Sec:MHenneberg} =============================== The Henneberg construction [@tay1985generating; @eren2004merging] is a technique to grow minimally rigid graphs through the iterative construction of rigid formations. Based on the vertex addition and edge splitting operations of the Henneberg construction, we define a new technique termed the modified Henneberg construction. First, we give a definition of minimal weak rigidity. \[Def:minimally\_weak\_rigidity\] If a framework $(\mathcal{G},{\mathbf}{p})$ is weakly rigid and no single distance or angle constraint can be removed without losing the weak rigidity, then the framework is minimally weakly rigid. The two operations of the modified Henneberg construction are termed [weakly rigid 0-extension]{} and [weakly rigid 1-extension]{}, respectively. In the weakly rigid 0-extension, a vertex and two angles are added to the formation illustrated in Fig. \[tri\_rigid\]. 
Let $\tilde{\mathcal{G}} = (\tilde{\mathcal{V}},\tilde{\mathcal{E}},\tilde{\mathcal{A}})$ be a graph, where a vertex $\nu$ is adjoined so that $\tilde{\mathcal{V}}=\mathcal{V}\cup\{\nu \}$ and $\tilde{\mathcal{A}}=\mathcal{A}\cup\{\theta_{j\nu}^{i}, \theta_{i\nu}^{j} \}$ for some $i,j\in \mathcal{V}$ as illustrated in Fig. \[0-extension\]. In the weakly rigid 1-extension, a vertex and three angles are added while one existing edge is removed from the formation illustrated in Fig. \[tri\_rigid\]. Let $\tilde{\mathcal{G}} = (\tilde{\mathcal{V}},\tilde{\mathcal{E}},\tilde{\mathcal{A}})$ be a graph, where a vertex $\nu$ is adjoined, while an edge of $\mathcal{G}$ is removed, so that $\tilde{\mathcal{V}}=\mathcal{V}\cup\{\nu \}$, $\tilde{\mathcal{E}}=\mathcal{E} \setminus \{(i,j) \}$ and $\tilde{\mathcal{A}}=\mathcal{A}\cup\{\theta_{j\nu}^{i}, \theta_{i\nu}^{j}, \theta_{ij}^{k} \}$ for some $i,j,k\in \mathcal{V}$ as illustrated in Fig. \[1-extension\]. From the properties of the constructions, the two operations can also be termed 0-angle splitting and 1-angle splitting, respectively. The modified Henneberg construction can be used to grow minimally rigid (or minimally weakly rigid) formations with additional angles, as the following result shows. \[Thm:Mod\_Hen\_Const\] Frameworks constructed by the weakly rigid 0-extension and 1-extension from a framework $(\mathcal{G},{\mathbf}{p})$ are minimally weakly rigid if the framework $(\mathcal{G},{\mathbf}{p})$ is minimally rigid or minimally weakly rigid. i$)$ In the case of the weakly rigid 0-extension as illustrated in Fig. \[0-extension\], the operation extends the triangular formation of Fig. \[tri\_rigid\]. 
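The two extension operations can be sketched as plain set manipulations on the triple $(\mathcal{V},\mathcal{E},\mathcal{A})$; the encoding of an angle $\theta^{i}_{j\nu}$ as a pair (apex, unordered vertex pair) is our own convention for illustration:

```python
def weakly_rigid_0_extension(V, E, A, i, j, v):
    # Add a new vertex v and the two angle constraints theta^i_{jv}, theta^j_{iv}.
    # Angles are encoded as (apex, frozenset of end vertices) -- an assumption.
    assert v not in V and i in V and j in V
    return (V | {v}, set(E), A | {(i, frozenset({j, v})), (j, frozenset({i, v}))})

def weakly_rigid_1_extension(V, E, A, i, j, k, v):
    # Add vertex v and three angles while removing the existing edge (i, j).
    assert v not in V and frozenset({i, j}) in E
    E_new = {e for e in E if e != frozenset({i, j})}
    A_new = A | {(i, frozenset({j, v})), (j, frozenset({i, v})), (k, frozenset({i, j}))}
    return (V | {v}, E_new, A_new)

# Start from a triangle with one distance and two angle constraints.
V0 = {1, 2, 3}
E0 = {frozenset({2, 3})}
A0 = {(1, frozenset({2, 3}))}
V1, E1, A1 = weakly_rigid_0_extension(V0, E0, A0, 2, 3, 4)
```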
The three constraints $d_{23}$, $\theta_{34}^{2}$ and $\theta_{24}^{3}$ can be changed to three distance constraints $d_{23}$, $d_{24}$ and $d_{34}$ by the law of sines such that $\frac{d_{23}}{\sin{\theta_{23}^{4}}} = \frac{d_{24}}{\sin{\theta_{24}^{3}}} = \frac{d_{34}}{\sin{\theta_{34}^{2}}}.$ Thus, a formation with the three constraints $d_{23}$, $\theta_{34}^{2}$ and $\theta_{24}^{3}$ can be transformed to a formation with the three distance constraints $d_{23}$, $d_{24}$ and $d_{34}$, i.e., the formation extended by the weakly rigid 0-extension as in Fig. \[0-extension\] can be transformed to the minimally rigid formation of Fig. \[rigid\_diamond\]. ii$)$ In the case of the weakly rigid 1-extension as illustrated in Fig. \[1-extension\], the operation extends a formation with two edges and a subtended angle as in Fig. \[Formation\_3e1a\]. The distance $d_{23}$ can be calculated by the law of cosines as mentioned in Section \[Sec:weakRigidity\]. Thus, by the proof of case i), the formation extended by the weakly rigid 1-extension can also be transformed to the rigid formation of Fig. \[rigid\_diamond\]. Therefore, if a framework $(\mathcal{G},{\mathbf}{p})$ is minimally rigid or minimally weakly rigid, then frameworks extended by the weakly rigid 0-extension or weakly rigid 1-extension are minimally weakly rigid. Weak Rigidity in the Three-Dimensional Space {#Sec:weakRigidity_3dim} ============================================ In this section, we extend the weak rigidity in the two-dimensional space to the three-dimensional space. We consider only weak rigidity, not infinitesimal weak rigidity. Weak Rigidity from Rigidity Matrix in $\mathbb{R}^{3}$ ------------------------------------------------------ The weak rigidity in $\mathbb{R}^{3}$ can be defined similarly to the weak rigidity in [@park2017rigidity]. Consider the formations in [Fig. ]{}\[Fig:ExTetrahedron\]. 
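The law-of-sines conversion used in case i) above can be checked against a concrete triangle: given $d_{23}$ and the two angles $\theta^{2}_{34}$ and $\theta^{3}_{24}$, the third angle is $\theta^{4}_{23} = \pi - \theta^{2}_{34} - \theta^{3}_{24}$, and the remaining distances follow (a sketch; vertex coordinates below are arbitrary):

```python
import math

def distances_from_angles(d23, theta_2, theta_3):
    # theta_2 = angle at vertex 2 (theta^2_{34}), theta_3 = angle at vertex 3.
    # Law of sines: d23/sin(theta_4) = d24/sin(theta_3) = d34/sin(theta_2).
    theta_4 = math.pi - theta_2 - theta_3
    scale = d23 / math.sin(theta_4)
    return scale * math.sin(theta_3), scale * math.sin(theta_2)   # d24, d34

# A concrete triangle with vertices 2, 3, 4 to validate the conversion.
p2, p3, p4 = (0.0, 0.0), (2.0, 0.0), (0.5, 1.5)
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
ang = lambda a, b, c: math.acos(
    ((b[0]-a[0])*(c[0]-a[0]) + (b[1]-a[1])*(c[1]-a[1])) / (dist(a, b) * dist(a, c))
)
d24, d34 = distances_from_angles(dist(p2, p3), ang(p2, p3, p4), ang(p3, p2, p4))
```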
The first formation is defined by 3 edge lengths and 3 subtended angles, while the second formation is defined by 6 edge lengths. The first formation can be transformed to the second formation with the law of cosines as stated in Section \[Sec:weakRigidity\]. \[Def:weakRigidity\] A framework $(\mathcal{G},{\mathbf}{p})$ is [weakly rigid]{} in $\mathbb{R}^{3}$ if there exists a neighborhood $\mathcal{B}_{{\mathbf}{p}} \subseteq \mathbb{R}^{3n}$ of ${\mathbf}{p}$ such that each framework $(\mathcal{G},{\mathbf}{q})$, ${\mathbf}{q} \in \mathcal{B}_{{\mathbf}{p}}$, strongly equivalent to $(\mathcal{G},{\mathbf}{p})$ is congruent to $(\mathcal{G},{\mathbf}{p})$. We examine weak rigidity via the rigidity matrix. First, the [rigidity function]{} $F_D: \mathbb{R}^{dn} \rightarrow \mathbb{R}^{m}$ of $(\mathcal{G},{\mathbf}{p})$ is defined as $$F_D({\mathbf}{p})\equiv[...,\norm{{\mathbf}{p}_{ij}}^2,...]^\top_{(i,j) \in \mathcal{E}} \in \mathbb{R}^{m}.$$ The [rigidity matrix]{} is then defined as the Jacobian of the rigidity function: $$\label{rigidity_D_matrix} R_D({\mathbf}{p})=\frac{1}{2} \frac{\partial F_D({\mathbf}{p})}{\partial {\mathbf}{p}} \in \mathbb{R}^{m \times dn}.$$ \[Lemma:rankConditionForIR\] A framework $(\mathcal{G},{\mathbf}{p})$ in $\mathbb{R}^{3}$ with $n \ge 3$ is infinitesimally rigid in $\mathbb{R}^{3}$ if and only if the rank of the rigidity matrix of $(\mathcal{G},{\mathbf}{p})$ is $3n - 6$. Consider a graph $\bar{\mathcal{G}}$, $\bar{\mathcal{G}} = (\bar{\mathcal{V}},\bar{\mathcal{E}},\bar{\mathcal{A}})$, induced from $\mathcal{G}$ in such a way that[@park2017rigidity]: - $\bar{\mathcal{V}} = \mathcal{V}$, - $\bar{\mathcal{E}} = \set*{(i,j) {}(i,j) \in \mathcal{E} \lor \exists k \in \mathcal{V}\text{ s.t. } (k,(i,j)) \in \mathcal{A}}$, - $\bar{\mathcal{A}} = \emptyset$. 
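This induced-graph construction can be sketched directly from the three rules; encoding an angle constraint as a tuple `(k, (i, j))` is our own convention for illustration:

```python
def induced_graph(V, E, A):
    # Rules from the text: keep all vertices, keep all original edges, and add
    # an edge (i, j) whenever some angle constraint (k, (i, j)) exists; the
    # induced graph carries no angle constraints.
    E_bar = set(E)
    for _k, (i, j) in A:
        E_bar.add(frozenset({i, j}))
    return V, E_bar, set()

# Two edges plus the subtended angle theta^1_{23} induce the triangle K_3.
V = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({1, 3})}
A = {(1, (2, 3))}
V_bar, E_bar, A_bar = induced_graph(V, E, A)
```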
Then, we can obtain the following result: \[COR:WRbyRigidity\^3\] A framework $(\mathcal{G},{\mathbf}{p})$ is weakly rigid in $\mathbb{R}^{3}$ if and only if $(\bar{\mathcal{G}},{\mathbf}{p})$ is rigid in $\mathbb{R}^{3}$. The proof is similar to the proof of Theorem 1 in [@park2017rigidity]. With the above result, we know that the weak rigidity of $(\mathcal{G},{\mathbf}{p})$ can be determined indirectly by the rigidity of $(\bar{\mathcal{G}},{\mathbf}{p})$. The infinitesimal rigidity of a framework is a sufficient condition for the framework to be rigid. The infinitesimal rigidity can be examined by the rank of the rigidity matrix as mentioned in Lemma \[Lemma:rankConditionForIR\]. Therefore, we can check the weak rigidity of $(\mathcal{G},{\mathbf}{p})$ by investigating the rank of the rigidity matrix of $(\bar{\mathcal{G}},{\mathbf}{p})$ with Corollary \[COR:WRbyRigidity\^3\] and Lemma \[Lemma:rankConditionForIR\]. \[Thm:RImpliesWR\] A framework $(\mathcal{G},{\mathbf}{p})$ with $n \ge 3$ is weakly rigid in $\mathbb{R}^{3}$ if the rigidity matrix $R_D({\mathbf}{p})$ associated with $(\bar{\mathcal{G}},{\mathbf}{p})$ has rank $3n - 6$. If $\operatorname{{rank}}(R_D({\mathbf}{p})) = 3n - 6$ for $(\bar{\mathcal{G}},{\mathbf}{p})$, then the framework $(\bar{\mathcal{G}},{\mathbf}{p})$ is infinitesimally rigid in $\mathbb{R}^{3}$ by Lemma \[Lemma:rankConditionForIR\]. Also, the framework $(\bar{\mathcal{G}},{\mathbf}{p})$ is rigid since it is infinitesimally rigid. Therefore, $(\mathcal{G},{\mathbf}{p})$ is weakly rigid in $\mathbb{R}^{3}$ by Corollary \[COR:WRbyRigidity\^3\]. A configuration ${\mathbf}{p}$ of a graph is said to be generic if the vertex coordinates are algebraically independent over the rationals [@C:Hendrickson:SIAM1992]. \[Thm:Generic\_R\_IR\] A framework $(\mathcal{G},{\mathbf}{p})$ with generic configuration ${\mathbf}{p}$ is rigid if and only if the framework is infinitesimally rigid. 
Therefore, if a configuration ${\mathbf}{p}$ of a graph is generic, then we can state the following result. \[Corol:IRiffWRGenericProp\] If a configuration ${\mathbf}{p}$ is generic, then $(\mathcal{G},{\mathbf}{p})$ with $n \ge 3$ is weakly rigid in $\mathbb{R}^{3}$ if and only if the rigidity matrix $R_D({\mathbf}{p})$ associated with $(\bar{\mathcal{G}},{\mathbf}{p})$ has rank $3n - 6$. Suppose that a given framework $(\mathcal{G},{\mathbf}{p})$ with generic configuration ${\mathbf}{p}$ is rigid. Then, the framework is infinitesimally rigid, and vice versa [@C:Hendrickson:SIAM1992; @C:Asimow:JMAA1979]. Therefore, with Theorems \[Thm:RImpliesWR\] and \[Thm:Generic\_R\_IR\], if a configuration ${\mathbf}{p}$ is generic, then the rank condition of the rigidity matrix $R_D({\mathbf}{p})$ becomes a necessary and sufficient condition for weak rigidity of $(\mathcal{G},{\mathbf}{p})$. Global Weak Rigidity in $\mathbb{R}^{3}$ {#Thm:GWRbyRigidity^3} ------------------------------------------- We can also extend the local concept of weak rigidity to a global concept. With reference to [@park2017rigidity], global weak rigidity can be defined and characterized as follows. \[Def:globalWeakRigidity\] A framework $(\mathcal{G},{\mathbf}{p})$ is [globally weakly rigid]{} in $\mathbb{R}^{3}$ if any framework $(\mathcal{G},{\mathbf}{q})$, ${\mathbf}{q} \in \mathbb{R}^{3n}$, strongly equivalent to $(\mathcal{G},{\mathbf}{p})$ is congruent to $(\mathcal{G},{\mathbf}{p})$. The following theorem can be proved in the same way as Corollary \[COR:WRbyRigidity\^3\]. \[Thm:GWRbyGRigidity\] A framework $(\mathcal{G},{\mathbf}{p})$ is globally weakly rigid in $\mathbb{R}^{3}$ if and only if $(\bar{\mathcal{G}},{\mathbf}{p})$ is globally rigid in $\mathbb{R}^{3}$. The proof is similar to the proof of Corollary \[COR:WRbyRigidity\^3\] except that $\mathcal{B}_{{\mathbf}{p}}$ is replaced by $\mathbb{R}^{3\card{\mathcal{V}}}$. 
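As a concrete check of the rank condition in Theorem \[Thm:RImpliesWR\], consider the complete graph $K_4$ in three dimensions ($n = 4$, so infinitesimal rigidity requires rank $3n - 6 = 6$); the rigidity matrix $R_D$ has the row blocks $({\mathbf}{p}_i - {\mathbf}{p}_j)^\top$ and $({\mathbf}{p}_j - {\mathbf}{p}_i)^\top$ for each edge:

```python
import numpy as np

def rigidity_matrix(p, edges):
    # R_D(p) = (1/2) dF_D/dp: the row for edge (i, j) carries (p_i - p_j)^T in
    # block i and (p_j - p_i)^T in block j.
    n, d = p.shape
    R = np.zeros((len(edges), n * d))
    for r, (i, j) in enumerate(edges):
        diff = p[i] - p[j]
        R[r, d*i:d*(i+1)] = diff
        R[r, d*j:d*(j+1)] = -diff
    return R

# Tetrahedral configuration of K4 (all six edges) in R^3.
p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
rank = np.linalg.matrix_rank(rigidity_matrix(p, edges))
```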
CONCLUSIONS {#Sec:conclusion} =========== We have presented four main results in this paper. First, we introduced the infinitesimal weak rigidity in the two-dimensional space. In the original weak rigidity theory [@park2017rigidity], a framework with constraints of two adjacent edges and a subtended angle must be defined and transformed into three distance constraints in order to check whether the framework is weakly rigid. In contrast, the infinitesimal weak rigidity of a framework can be directly checked by a rank condition of the weak rigidity matrix associated with the framework. For the infinitesimal weak rigidity, adjacent edges do not need to be defined; that is, a framework with only angle constraints can also be infinitesimally weakly rigid. As the second result, we explored three-agent formation control using the gradient control law in the two-dimensional space and showed that the formation system exponentially converges to the desired target formation from almost all initial positions. As the third result, we proposed the modified Henneberg construction to build up minimally (weakly) rigid frameworks. Finally, we extended the weak rigidity in $\mathbb{R}^{2}$ to the concept in $\mathbb{R}^{3}$. The final result shows that a (locally) unique formation shape of a framework in $\mathbb{R}^{3}$ can be obtained by the weak rigidity theory even if the framework is not rigid from the viewpoint of (distance) rigidity. ACKNOWLEDGMENT {#acknowledgment .unnumbered} ============== The authors would like to thank Zhiyong Sun and Myoung-Chul Park for helpful discussions. [^1]: $^{\dagger}$School of Mechanical Engineering, Gwangju Institute of Science and Technology (GIST), Gwangju, Korea. [{seongho, trinhhoangminh, ohkhwan, hyosung}@gist.ac.kr]{} [^2]: $^{\ddagger}$Department of Automatic Control and Systems Engineering, University of Sheffield, UK. [szhao@sheffield.ac.uk]{}
[**<span style="font-variant:small-caps;">Molassembler</span>: Molecular graph construction, modification and conformer generation for inorganic and organic molecules** ]{} [Jan-Grimo Sobez[^1] and Markus Reiher[^2] ]{}\ Laboratory of Physical Chemistry, ETH Zurich,\ Vladimir-Prelog-Weg 2, 8093 Zurich, Switzerland May 8, 2020 **Abstract** Introduction {#sec:introduction} ============ Software encoding of molecules has become indispensable to many research fields. Significant effort has been invested into the formal description of organic molecules, owing to their importance as small-molecule drugs amenable to a characterization by chemical concepts. A trove of open-source cheminformatics programs serving various needs exists [@Pirhadi2016; @Ambure2016]. Many of these computer programs have some support for inorganic molecules, for which reliable and simple chemical principles are more difficult to define owing to their more complex electronic structures. For example, take fragment-based [@Foscato2014] and evolutionary [@Chu2012] organometallic compound design applications, whose primary conformer generation is facilitated by Marvin [@Chemaxon534] and the Chemistry Development Kit [@Steinbeck2003], respectively. The set of software whose primary focus is an encoding of organometallic and inorganic molecules is small, but steadily expanding. The high-level complex generator molSimplify [@ioannidis2016] is based on three-dimensional manipulation of preoptimized molecular fragments. AARON [@Guan2018] automates the generation of configurations and conformations of organometallic molecules in transition-metal catalytic explorations. HostDesigner [@Hay2002] selects ligands to complement metal ion guests. DENOPTIM [@Foscato2019] is a fragment-based transition metal compound design tool. 
For our efforts toward the automated exploration of chemical reaction networks based on first-principles heuristics [@Bergeler2015; @Simm2017; @Unsleber2019], we require structure-identification and -construction algorithms that are applicable to molecules composed of any element from the periodic table (in particular, to inorganic compounds) in order not to restrict explorations to irrelevant parts of chemical reaction space if certain reactants cannot be represented. No existing cheminformatics implementation provides the capabilities required for truly general mechanism exploration algorithms that identify nodes in a reaction network as chemical species described by a graph. To be more specific, the requirements for this purpose are: (i) The algorithm must be able to interpret a local minimum on the Born-Oppenheimer potential energy hypersurface as a chemical graph and capture the stereochemical configuration. (ii) The molecular model must encompass multidentate and haptic ligands. (iii) It must be possible to change the connectivity of atoms and enumerate all stereoisomers without restriction. (iv) Molecules must be comparable. (v) It must be possible to generate new conformers of molecules. This set of features does not exist in any software supporting inorganic molecules, because bonding patterns and stereochemistry in inorganic molecules present significant challenges for cheminformatics tools due to their immense variability. This variability affects how atom-centered chirality manifests. Compared to organic chemistry, the most complicated chiral unit will no longer be the tetrahedron with at most two distinct spatial arrangements (i.e., R or S) if its substituents are all deemed different by a ranking algorithm. Instead, the local shape can have as many as twelve vertices and most vertex counts accommodate multiple shapes. For instance, five sites can arrange into idealized shapes of a trigonal bipyramid, a square pyramid or a pentagon. 
The number of distinct spatial arrangements for the maximally asymmetric substitution situation scales approximately as the factorial of the number of sites. Note also that individual shape vertices are not necessarily occupied by single atoms, but possibly by multiple atoms in a haptic configuration. The degree to which these ostensibly slight structural changes complicate the design of algorithms necessarily depends on the specific aims of the molecule construction software. Let us consider some of the consequences of the proposed aims individually: In order to interpret a local minimum of the potential energy hypersurface as a chemical graph and capture the stereochemical configuration, the software must base its atom-connectivity representation on a graph and find a data representation of stereocenters that is general, and in particular, suitable for inorganic molecules. Returning to atom-centered chirality, this implies that for an arbitrary local shape, the software must enumerate rotationally non-superimposable arrangements of variable numbers of different substituents. This enumeration must consider the effects of site interconnectivity for multidentate ligands and abstract away the additional complication of haptic binding. The algorithm must decide which substituents are chemically different with a ranking algorithm that encompasses stereodescriptors of inorganic stereocenters. Local shapes and especially non-superimposable arrangements within those shapes must be reliably identifiable from Cartesian coordinates. An imposed rigid model of chiral arrangements might also consider interconverting molecular dynamics such as nitrogen inversion to avoid overstating chiral character. Given a molecule’s connectivity, deciding on the local shape at each non-terminal atom involves dealing with more complex electronic structures. 
For a reasonable degree of accuracy, it will in general be insufficient to base this decision on simple concepts such as Gillespie’s valence shell electron pair repulsion. If the molecular graph and the stereocenter representations are to be freely modifiable and if they are identified through relative substituent ranking, stereodescriptors must be propagated through the ranking changes that arbitrary edits can incur. For editing continuity, stereocenters should not generally lose their state upon addition or removal of a site while transitioning between different molecular structures (shapes). Moreover, the comparison of molecules will require the implementation of a graph isomorphism and optionally a graph canonicalization algorithm to accelerate repeated comparisons. Generation of conformations of arbitrary graphs requires a spatial model that can encompass multidentate and haptic ligands and a methodology that reliably generates specific stereocenter configurations in Cartesian coordinates. Note that model-free molecule comparison algorithms such as G-RMSD [@Tomonori2020], and continuous representations of molecules [@Gomez2018] are certainly also viable approaches to some of the challenges we seek to solve. However, we do not consider direct comparisons of Cartesian coordinates through measures such as root mean square deviations an option here, because deviations may be expected to grow with molecule size even for structures that actually represent the same compound. In this work, we present a new molecular construction software, <span style="font-variant:small-caps;">Molassembler</span>, that seeks to encompass organometallic and inorganic bonding patterns in a more complex graph-based model. Comparatively few of these sub-problems of our non-exhaustive list require entirely new solutions. Many can be solved by implementing existing algorithms or through careful extension of existing partial solutions, as we will discuss in the following. 
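Before turning to the model, the factorial scaling of arrangements mentioned in the introduction can be illustrated with a rough count: for a shape with $v$ vertices whose substituents are all different, the number of rotationally non-superimposable arrangements is $v!$ divided by the order of the shape's proper rotation group. The rotation-group orders below are standard group-theoretical values, not taken from the text:

```python
from math import factorial

# Proper rotation group orders of a few idealized shapes (standard values).
rotation_order = {
    "tetrahedron": 12,          # T
    "trigonal bipyramid": 6,    # D3
    "square pyramid": 4,        # C4
    "octahedron": 24,           # O
}

def max_asymmetric_arrangements(vertices, shape):
    # Distinct arrangements of all-different substituents, up to rotation.
    return factorial(vertices) // rotation_order[shape]

counts = {shape: max_asymmetric_arrangements(v, shape)
          for shape, v in [("tetrahedron", 4), ("trigonal bipyramid", 5),
                           ("square pyramid", 5), ("octahedron", 6)]}
```

This recovers the familiar two arrangements (R/S) for the tetrahedron and shows how quickly the count grows with the number of sites.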
For instance, we will show that through partial abstraction of the International Union of Pure and Applied Chemistry (IUPAC) organic ranking sequence rules [@favre2013nomenclature], a configurational index of permutation can serve as a comparative stereodescriptor in a ranking algorithm. Moreover, through extension, the original algorithm can serve to differentiate not merely direct graph substituents, but binding sites. By contrast, the canonicalization of stereodescriptors and their propagation into new structures (shapes) or through ranking changes will require new solutions. Molecule model {#sec:molecule_model} ============== <span style="font-variant:small-caps;">Molassembler</span> applies a molecule model split into an undirected graph and a list of data structures named stereopermutators. Vertices of the graph represent atoms, each storing an element type. Edges of the graph represent bonds, storing an idealized bond order (i.e., an integer number that may be extracted from a real-valued bond order obtained from a quantum mechanical analysis of the electronic wave function). Idealized bond types comprise bond orders from single to sextuple and an algorithm-internal special bond called ’eta’ indicating connectivity to an atom forming part of a haptic binding site following the $\eta$-notation [@mcnaught1997compendium] for such bonding situations. The molecular graph must form a single connected component. Accidental graph disconnects upon atom or bond removal are prevented by permitting only the removal of non-bridge bonds or non-articulation vertices (removal of articulation vertices would increase the number of connected components). Intentional molecule splitting along bridge bonds is possible through specialized functions. Stereopermutators manage relative spatial orientations of groups of bonded atoms. They reduce the set of relative spatial orientations to an abstract permutational space. 
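The connectivity invariant just described can be sketched with a plain adjacency-set graph: atom removal is permitted only if the remaining graph stays a single connected component. Function names here are our own, not <span style="font-variant:small-caps;">Molassembler</span>'s API:

```python
def is_connected(adj, removed=frozenset()):
    # Depth-first search over the graph with some vertices removed.
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = set(), [nodes[0]]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w not in removed and w not in seen)
    return len(seen) == len(nodes)

def remove_atom(adj, v):
    # Refuse to remove articulation vertices: removal must not disconnect.
    if not is_connected(adj, removed=frozenset({v})):
        raise ValueError(f"vertex {v} is an articulation vertex")
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

# Ethane-like toy graph: C(0)-C(1), three hydrogens on each carbon.
adj = {0: {1, 2, 3, 4}, 1: {0, 5, 6, 7},
       2: {0}, 3: {0}, 4: {0}, 5: {1}, 6: {1}, 7: {1}}
```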
Special relative spatial orientations are reduced to an index of permutation. Collectively, stereopermutators capture a molecule’s configurational space. Stereopermutators are not named stereocenters because they also manage non-stereogenic cases. <span style="font-variant:small-caps;">Molassembler</span> comprises two types of stereopermutators: Atom-centered stereopermutators capture configurational differences between different orientations of the direct graph adjacents of a central atom. Bond-centered stereopermutators capture configurational differences arising due to rotational barriers along bonds. ![Illustration of spatial structure and molecule graph. *Left:* A typical Lewis structure encoding also stereoinformation. *Right:* The connectivity of this molecule represented as a graph. []{data-label="fig:model_graph"}](figure1.pdf){width="0.5\linewidth"} Consider the example given in Figure \[fig:model\_graph\]. The graph of the Lewis structure captures its connectivity, but not its spatial shape or chiral character. In addition to the graph, this requires two atom-centered stereopermutators. One is placed at the methyl group’s carbon atom, indicating a tetrahedral local shape and an abstract binding situation `AAAB` (three ranking-identical hydrogen atoms and one different group), for which case there are no multiple non-superimposable arrangements. That stereopermutator is therefore not stereogenic. The other stereopermutator is placed at the carbon atom central in the Lewis structure. It also has a tetrahedral local shape, but ranking-wise all its substituents are different in an `ABCD` abstract binding situation. In that shape, there are two non-superimposable spatial permutations of its substituents. The Lewis structure represents the S variation of this stereocenter, and the stereopermutator represents it as an index of permutation that parallels the IUPAC stereodescriptor. 
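The two abstract binding cases of this example can be verified by brute force: enumerate the proper rotations of the tetrahedron (the 12 even permutations of its four vertices) and count orbits of substituent assignments under that action (a sketch; the vertex numbering is arbitrary):

```python
from itertools import permutations

def parity(perm):
    # Sign of a permutation via inversion counting: +1 for even, -1 for odd.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

# Proper rotations of a tetrahedron act as the even permutations (group A4).
rotations = [p for p in permutations(range(4)) if parity(p) == 1]

def count_orbits(assignments):
    # Orbits of substituent-label tuples under the rotation action; each orbit
    # is one rotationally non-superimposable arrangement.
    orbits = set()
    for a in assignments:
        orbits.add(min(tuple(a[g[i]] for i in range(4)) for g in rotations))
    return len(orbits)

abcd = set(permutations("ABCD"))                         # all substituents distinct
aaab = {tuple("A" if i != k else "B" for i in range(4))  # three identical + one
        for k in range(4)}
```

The `ABCD` case yields the two enantiomers, while `AAAB` yields a single arrangement, matching the discussion above.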
Our molecular representation model is capable of transferring molecular structures at local minima on the Born-Oppenheimer potential energy surface given in Cartesian coordinates to an abstract permutational space in which configurational variations can be enumerated easily and transformed back into Cartesian coordinates via conformer generation close to local minima on the potential energy surface. However, the model is not well suited for capturing structural elements of other regions of the potential energy surface. <span style="font-variant:small-caps;">Molassembler</span> places no structural limitations on the graph aside from constraining it to remain a single connected component. As a consequence, it is possible to generate graphs for which <span style="font-variant:small-caps;">Molassembler</span> cannot generate a spatial model allowing the graph to be embedded into three dimensions, such as the set of complete graphs (i.e., those in which all vertices are connected with all other vertices) with more than four vertices. Deciding graph embeddability in three dimensions is NP-hard [@Thomassen1989], and no attempt is made to guarantee that a supplied graph is embeddable during its construction. In the following sections, we discuss the individual domains of responsibility for atom- and bond-centered stereopermutators, the implemented ranking algorithm, manipulation of molecules and conformer generation. Local shapes and stereopermutations {#sec:stereopermutations} =================================== Atom-centered stereopermutators manage the spatial orientation of the graph substituents of a non-terminal atom by abstracting into a permutational space in which each permutation constitutes a mutually non-superimposable spatial orientation. If multiple permutations are geometrically feasible, the respective atom is a stereocenter. The local spatial arrangement of an atom’s substituents is classified into a set of idealized polyhedral shapes. 
<span style="font-variant:small-caps;">Molassembler</span> currently comprises thirty different shapes ranging from two to twelve vertices. The choice of shapes aims to reflect common geometric patterns while avoiding too fine distinctions. All chosen shapes are spherical, i.e. within their coordinate frames, all vertices of the encoded shapes have the same distance from the origin. The first class of chosen shapes is of reduced dimensionality and contains the following shapes: Line, bent line, equilateral triangle, T-shape, regular square, pentagon, and hexagon [@Garcon2019]. Next, four of the five Platonic solids are included: tetrahedron, octahedron, cube, and icosahedron. Then, some other polyhedra composed of regular faces: Uniform triangular prism, square antiprism, and cuboctahedron. Several Johnson solids are included without modification as they are by construction spherical: Square pyramid, pentagonal pyramid, trigonal bipyramid, and pentagonal bipyramid. Furthermore, the hexagonal and heptagonal bipyramids are also included. Several non-spherical Johnson solid shapes have been spherized by projection onto a sphere and subsequent minimization on the Thomson potential [@Thomson1904] to a local minimum that retains the original point group symmetry: Capped square antiprism, trigonal dodecahedron, and capped trigonal prism. The shapes of the global minima for nine, ten and eleven particles on the Thomson potential are also included: Tricapped trigonal prism, bicapped square antiprism and edge-contracted icosahedron. Lastly, some derived shapes are also present: Capped octahedron, a tetrahedron with one vertex vacant and both apical- and equatorial-monovacant variations of the trigonal bipyramid. The spatial arrangement of a non-terminal atom’s immediate substituents can be classified as one of these shapes, as demonstrated in Figure \[fig:shape\_classification\]. 
![*Left:* An example of a molecule with classified shapes shown for three of its non-terminal atoms: An equilateral triangle at the bottom, an octahedron in the center, and a tetrahedron at the right edge (all highlighted by red circles). *Right:* Simplified Lewis structure of the left compound. Images of molecules in Cartesian coordinates were generated with PyMOL [@PyMOL2015], simplified Lewis structures with MarvinSketch [@MarvinSketch1926]. Atom coloring: Hydrogen in white, carbon in gray, oxygen in red, iron in orange, sulfur in yellow, and nitrogen in blue.[]{data-label="fig:shape_classification"}](figure2.png){width="\linewidth"} A binding site is a contiguous set of vertices adjacent to a central atom. If a site comprises more than a single atom, the binding site is haptic and the bonds of its edges (collected and idealized, for instance, from bond orders evaluated from an electronic wave function) to the central atom vertex are relabeled as eta bonds. Spatially, a binding site’s vertex-set centroid is modeled to co-locate with a vertex of the local shape as shown in Figure \[fig:haptic\_shape\_classification\]. Haptically binding atoms are free to rotate around the axis defined by the local shape’s central atom and the binding site’s vertex-set centroid. ![*Left:* An example of a molecule with a haptic binding site. The central iron atom has two binding sites consisting of single nitrogen atoms. The carbon atoms of the benzene molecule above the iron atom form a third binding site, as they are a contiguous group of atoms bonded to the central iron atom. The centroid of the haptic binding site is shown as a semi-opaque pseudoatom in the center of the carbocycle. The shape of the substituents of the central iron atom has three vertices, one for the centroid of each binding site, and is classified as an equilateral triangle. *Right:* Simplified Lewis structure of the left compound. 
Atom coloring: Hydrogen in white, carbon in gray, nitrogen in blue, and iron in orange.[]{data-label="fig:haptic_shape_classification"}](figure3.png){width="\linewidth"} Binding sites are ranked by an algorithm described in Section \[sec:ranking\_algorithm\]. Ranking is necessary to decide which binding sites are chemically different. Permuting chemically identical groups does not create a new non-superimposable configuration, which must be considered in the enumeration of stereopermutations. Links between binding sites, and by proxy multidentate ligands, are identified with cycle perception from the external software library RingDecomposerLib [@Flachsenberg2017]. From an idealized shape, a set of binding sites, their ranking, and their links, <span style="font-variant:small-caps;">Molassembler</span> enumerates a set of abstract non-superimposable permutations following a method reported in the literature [@Bennett1969]. In preparation, the ranked sites and their links are transformed into an abstract case such as `AA(B-C)` that merely reflects whether sites are different from one another and their connectivity. This is done by transforming the nested list of ranked sites into a flat list where each site is represented by an incrementing character. Within the considered shape, all permutations of vertex occupations are generated and the rotationally non-superimposable permutations are collected. These are the stereopermutations of the abstract case within the considered shape. Generating a mapping from sites to shape vertices for a particular stereopermutation will be nontrivial if links (paths between site constituting atoms not including the central atom) exist and there are multiple sites with equal ranking. In any situation, it is possible to represent the ranked sites and their links as a colored graph: Vertices are sites and links are edges between sites. 
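Both the ranked sites with their links and a candidate stereopermutation can be encoded this way, and a site-to-vertex mapping is then any color- and edge-preserving bijection between the two graphs. For the small graphs that arise here, such bijections can be found by brute force; a hypothetical sketch (not <span style="font-variant:small-caps;">Molassembler</span>’s implementation):

```python
from itertools import permutations

def colored_isomorphisms(colors_a, edges_a, colors_b, edges_b):
    """Yield all bijections m from graph A's vertices to graph B's vertices
    that preserve vertex colors and map A's edge set exactly onto B's."""
    n = len(colors_a)
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    for m in permutations(range(n)):
        if all(colors_a[i] == colors_b[m[i]] for i in range(n)) and \
                {frozenset((m[i], m[j])) for i, j in ea} == eb:
            yield m

# Two identically ranked, linked sites "A" and one lone site "B",
# matched against shape vertices carrying the same colors:
sites = (["A", "A", "B"], [(0, 1)])
vertices = (["A", "B", "A"], [(0, 2)])
print(list(colored_isomorphisms(*sites, *vertices)))  # [(0, 2, 1), (2, 0, 1)]
```

Any one of the yielded bijections is a valid assignment of sites to shape vertices.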
Similarly, a stereopermutation can be represented as a graph: Vertices are shape vertices and edges are their links. Any isomorphism mapping between the two graphs is a valid mapping from sites to shape vertices. Within stereopermutation enumeration, three-dimensional feasibility of links between binding sites in each permutation is not considered. For instance, a bidentate oxalate ligand cannot coordinate in a trans-arrangement within an octahedron shape, but must be cis-coordinated: stereopermutations in which its sites are trans-arranged are not viable in three dimensions. Such stereopermutations arise in the abstract permutational scheme, and must be removed to avoid false classification of an atom as a stereocenter and unexpected failures in conformer generation. We consider it undesirable to exclude trans-ligating arrangements on principle for specific shapes and prefer to make general geometric arguments on a case-by-case basis. Unfeasible stereopermutations are avoided by probing whether a cyclic polygon can be modeled for each link’s cycle without non-binding atoms of the bridge entering a bonding distance to the central atom. This geometric algorithm is depicted in Figure \[fig:infeasibility\_denticity\]. The cyclic polygon was chosen because it maximizes the area of the polygon and by proxy the distance of its vertices from the original center. This scheme will identify individual cycles that, when modeled in a particular stereopermutation, would contradict the graph, but will miss instances where multiple individually feasible cycles are impossible to realize jointly. For instance, in an octahedral shape with three bidentate ligands with bridges of sufficient length to achieve trans-ligation individually, this scheme will not remove the stereopermutation in which all are ligated in a trans-arrangement. If the bridges are too short for each to distort sufficiently to accommodate the others, conformer generation will fail.
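The closing-edge part of this construction can be reduced to a minimal sketch (hypothetical bond lengths; it checks only the necessary polygon-closure condition, not the upper bounds on bridge-atom distances to the central atom that the full algorithm derives from the cyclic polygon):

```python
import math

def closing_edge(d_ma, d_mb, angle_deg):
    """Law of cosines: the A-B distance closing the cycle, given the two
    central-atom bond distances and the idealized A-M-B shape angle."""
    theta = math.radians(angle_deg)
    return math.sqrt(d_ma**2 + d_mb**2 - 2.0 * d_ma * d_mb * math.cos(theta))

def polygon_closes(bridge_edges, closing):
    """Necessary condition for a cyclic polygon with these edge lengths to
    exist: every edge must be shorter than the sum of all the others."""
    edges = list(bridge_edges) + [closing]
    total = sum(edges)
    return all(e < total - e for e in edges)

bridge = [1.2, 1.2, 1.2]               # hypothetical three-bond bridge lengths
cis = closing_edge(2.0, 2.0, 90.0)     # cis angle in an octahedron, ~2.83
trans = closing_edge(2.0, 2.0, 180.0)  # trans angle, 4.00
print(polygon_closes(bridge, cis), polygon_closes(bridge, trans))  # True False
```

For this short bridge, the cycle can close across the cis angle but not across the trans angle, so the trans-arranged stereopermutation would be discarded.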
It may be possible to discover a lack of viability by modeling all bridges in a joint distance bounds matrix and applying triangle and tetrangle inequality smoothing, but this has not yet turned out to be fruitful. For haptic ligands, an additional feasibility criterion is applied: <span style="font-variant:small-caps;">Molassembler</span> estimates the conical space the site occupies and removes stereopermutations in which the conical spaces of multiple haptic binding sites overlap. ![Geometric construction for determining the viability of an arbitrary bidentate cycle. The spatial distances between atoms are modeled on the basis of their element type and mutual bond orders. The idealized angle between immediate adjacents of the central atom **M** is known in the given stereopermutation. The resulting distance between atoms **A** and **B** is the closing edge to the polygon defined by the cycle path between **A** and **B** excluding the central atom (labeled $i$ through $m$). This uniquely determines the cyclic polygon. From these quantities, the upper bound on the distances $d_i$ between members of the cycle and the central atom can be calculated. If, for any of the cycle members, the upper bound on the distance to the central atom is lower than a modeled distance in case they were bonded, then the stereopermutation will be considered unfeasible since any spatial realization will contradict the graph. []{data-label="fig:infeasibility_denticity"}](figure4.pdf){width="0.5\linewidth"} A stereopermutator with multiple feasible stereopermutations may be left unspecified: Although a set of feasible stereopermutations might exist, stereopermutators can be set as none of them. In other terms, their assigned stereopermutation is nullable. When unspecified, stereopermutators represent all of their possible stereopermutations. 
For example, a conformational ensemble generated from a compound with a single asymmetric, tetrahedral, and unspecified stereopermutator represents a racemic mixture of both configurations. In summary, <span style="font-variant:small-caps;">Molassembler</span> executes the following steps to identify stereopermutations from coordinates: First, it ranks the central atom’s graph adjacents and groups the substituents into sites. Second, it combines substituent ranking and binding site groups into a site ranking. Third, it classifies the local shape (see next section). Fourth, <span style="font-variant:small-caps;">Molassembler</span> enumerates abstract permutations and removes permutations deemed unfeasible. Finally, it identifies the stereopermutation present by finding the realized stereopermutation within the set of permutations deemed feasible. Shape information from Cartesian coordinates {#sec:local_shapes} ============================================ To extract which particular stereopermutation is present at a non-terminal atom in a molecular structure, local shapes must first be classified starting from Cartesian coordinates. This is achieved by calculating the continuous shape measures [@Pinsky1998] of all shapes with a number of vertices matching the number of binding sites and choosing the shape for which the continuous shape measure is minimal. The calculation of continuous shape measures is principally of factorial complexity since the point-pairwise mapping minimizing the measure is unknown. A faithful implementation of the suggested algorithm is prohibitively expensive for shapes with many vertices, particularly because the shape centroid must also be considered. 
The problem is not without exploitable structure, however: Although a faithful implementation prescribes minimizing both relative orientation and scaling over all point pairings, minimization over scaling can be relegated to after the minimal pairing regarding relative orientation is found, hence reducing the complexity of evaluating a single pairing. If the shape features rotational symmetries, these can be exploited to avoid redundant pairings. This divides the theoretical complexity by the shape’s number of superimposable rotations. A larger reduction in complexity is reached by the application of the following heuristic: The spatial rotation is considered converged with only a reduced, fixed number of point pairings. After performing a rotational minimization of square distances by a quaternion fit with limited pairings, the cost of adding individual pairings to the current set is calculated for all remaining pairs and the minimal variation is chosen without recalculating the spatial rotation with intermediate choices. This heuristic can fail if multiple points lie within a small spherical area, implying the rotation matrix is not converged by few point pairs. This failure mode is both detectable and unlikely to occur in molecular structures. The resulting complexity in terms of quaternion fits scales as $\mathcal{O}(N! / (N-D)!)$ where $N$ is the number of vertices of the shape including the centroid and $D$ is the number of point pairings for which the rotation is considered converged. <span style="font-variant:small-caps;">Molassembler</span> applies $D = 5$ and pre-sets the centroid point pair known from stereopermutator shape fitting. 
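For reference, the faithful factorial-complexity procedure (before applying the pairing heuristic) can be sketched with NumPy: minimize over all vertex pairings, with the optimal proper rotation obtained from an SVD fit and the optimal isotropic scale folded into the trace term. Coordinates are illustrative, and the centroid point is omitted for brevity:

```python
import numpy as np
from itertools import permutations

def shape_measure(points, ideal):
    """Brute-force continuous shape measure: minimize the normalized sum of
    squared deviations over all vertex pairings, proper rotations, and
    isotropic scaling. Factorial in the vertex count."""
    P = points - points.mean(axis=0)
    Q0 = ideal - ideal.mean(axis=0)
    pnorm, qnorm = (P ** 2).sum(), (Q0 ** 2).sum()
    best = float("inf")
    for pairing in permutations(range(len(P))):
        Q = Q0[list(pairing)]
        # Kabsch rotation via SVD; with the optimal scale, the measure
        # reduces to 100 * (1 - trace^2 / (|P|^2 |Q|^2)).
        U, s, Vt = np.linalg.svd(P.T @ Q)
        trace = s[0] + s[1] + np.sign(np.linalg.det(U @ Vt)) * s[2]
        best = min(best, 100.0 * max(0.0, 1.0 - trace**2 / (pnorm * qnorm)))
    return best

tetrahedron = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
square = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
print(shape_measure(2.0 * tetrahedron[[2, 0, 3, 1]], tetrahedron))  # ~0.0
print(shape_measure(square, tetrahedron))
```

A scaled, relabeled copy of the ideal shape yields a measure of zero, while the square scores far from the tetrahedron.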
We made further algorithmic attempts at finding the minimal pairing permutation at reduced cost with simulated annealing, stochastic tunneling [@Wenzel1999Stochastic], thermodynamic simulated annealing [@andresen1994constant], fixed-cost greedy, and locally optimal minimizations with reshuffling; none of these matched the heuristic described above in correctness and speed. Initially, shapes were recognized by minimizing the sum of absolute angular deviations of the binding site centroids in the coordinates from the idealized geometry. Although it is difficult to objectively evaluate the agreement between shape classification algorithms and human perception, it can be plainly stated that shape classification through angular deviations alone quickly fails subjective tests. Geometry indices as defined for limited sets of shapes with four [@yang2007structural; @Okuniewski2015] and five [@addison1984synthesis] vertices can improve angular deviation classification, but are limited in applicability: <span style="font-variant:small-caps;">Molassembler</span> contains more shapes with four vertices than the geometry index can classify. For example, the trigonal pyramid, here denoting an axially monovacant trigonal bipyramid, not a monovacant tetrahedron, is not considered in the value range definition of the geometry index $\tau_4$. Hence, the geometry index is only applied to exclude that shape for which its value indicates the largest distance. Overall, both the pure angular deviation and its hybrid with geometry indices are fast and effective, but do not match visual intuition for strongly distorted structures. Another shape classification algorithm that we considered was the continuous symmetry measure [@Zabdrodsky1993], which measures the degree to which a set of points has a particular point group symmetry.
Unfortunately, this is untenable as a matching point group symmetry does not demonstrate similarity to a particular polyhedral shape. If, for instance, a viable shape in classification were the equatorially monovacant trigonal bipyramid (also known as disphenoid or seesaw) of C~2v~ point group symmetry, a collinear arrangement of all points would minimize the continuous symmetry measure. Another thought experiment demonstrates a further weakness: Imagine one wants to classify a shape of four vertices, and the set of possible shapes includes the square and the equatorially monovacant trigonal bipyramid, which have D~4h~ and C~2v~ point group symmetries, respectively. The C~2v~ point group is a subgroup of the D~4h~ point group. A regular square has continuous symmetry measures $S(\textrm{D\textsubscript{4h}}) = 0$ and $S(\textrm{C\textsubscript{2v}}) = 0$, highlighting the ambiguity of polyhedral shape matching to point group symmetry. In the context of continuous shape measures, the concept of minimum distortion paths between distinct shapes was introduced [@Alvarez2005]. Another, in hindsight fruitless, shape classification algorithm studied was based on calculating the distance of the shape to be classified from the paths separating all pairs of viable shapes. The pair for which the distance from its path was minimal was chosen, and the shape of that pair with the smaller shape measure was taken as the classification. However, this is a poor shape classification algorithm, indicated by a strong disconnect between its classification choices and visual intuition beginning at slight distortions. In an attempt to quantify and visualize the behavior of experimental shape classification algorithms, we compared their results on a uniform random distortion scale.
In this attempt, the base shape to be re-identified was distorted by adding fixed-length distortions of uniform random direction to each vertex (including the centroid), and the resulting point cloud was reclassified as a shape. For a sense of scale of distortion vector norms relative to the shape size, recall that <span style="font-variant:small-caps;">Molassembler</span> encompasses only spherical shapes whose vertices are at unit distance from the coordinate origin, where the centroid is placed. From a reductionistic perspective disregarding the distinct relationships between particular shapes, an algorithm that reclassifies shapes correctly for larger distortion vector norms could be considered better. We shall argue using this metric by distinguishing algorithms with restricted or extensive *distortion tolerance*. A comparison of shape classification algorithms, using the tetrahedron as an example, is visualized in Figure \[fig:shape\_classification\_tetrahedron\] (cf. also the subsequent two figures discussed below). For each algorithm, the idealized shape in a unit sphere is uniformly distorted one-hundred times by applying vectors of uniform random direction at each vertex point. Then, the one-hundred distorted point clouds are reclassified as shapes. This is repeated for increasing distortion vector norms. In Figure \[fig:shape\_classification\_tetrahedron\], the pure angular deviation algorithm shows extensive distortion tolerance, but the continuous symmetry measure based algorithm misclassifies an undistorted tetrahedron into the C~3v~ symmetry of the trigonal pyramid and classifies all distorted tetrahedra as closer to C~2v~ or C~3v~ point groups than the T~d~ point group. The continuous shape measure has a much reduced distortion tolerance, but classifies structures into more diverse shapes at large distortions than the angular deviation algorithm.
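The distortion protocol above can be re-implemented in a short sketch (hypothetical code; uniformly random directions are drawn by normalizing Gaussian samples):

```python
import numpy as np

def distorted_ensemble(shape, norm, n=100, seed=0):
    """Generate n copies of a shape (rows = vertices, incl. the centroid),
    each vertex displaced by a vector of fixed length `norm` in a
    uniformly random direction."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n):
        d = rng.normal(size=shape.shape)
        d *= norm / np.linalg.norm(d, axis=1, keepdims=True)
        ensemble.append(shape + d)
    return ensemble

# Unit-sphere tetrahedron plus its centroid at the origin:
tetrahedron = np.array(
    [[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1], [0, 0, 0]], float
)
tetrahedron[:4] /= np.sqrt(3.0)
clouds = distorted_ensemble(tetrahedron, norm=0.2)
```

Each of the one-hundred point clouds is then reclassified by the algorithm under test, and the frequencies of the classified shapes are compared.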
The minimum distortion path deviation continuous shape measure algorithm has even further reduced distortion tolerance than the pure continuous shape measure algorithm and over-represents the trigonal pyramid at the expense of the seesaw, which is classified only once. In this example, the discussed shortcomings of algorithms based on the continuous symmetry measure or minimum distortion path deviation continuous shape measure become clear. ![Shape classifications of four algorithms with varying distortion vector norms applied to all vertices of a regular tetrahedron. From left to right, top to bottom: Minimal sum of angular deviation squares, minimal continuous symmetry measure, minimal continuous shape measure, and continuous shape measure (CShM) minimum distortion path deviation algorithms. ’Frequencies’ denotes the occurrence of a shape in the set of one-hundred distorted structures generated.[]{data-label="fig:shape_classification_tetrahedron"}](figure5.pdf){width="\linewidth"} We now compare some shape classification algorithms using multiple shapes as examples. For all shapes with four vertices encompassed in <span style="font-variant:small-caps;">Molassembler</span>, shape classification with angular deviation and its hybrid with geometry indices are compared by the aforementioned metric in Figure \[fig:shape\_classification\_size\_four\_first\]. It is noticeable that the trigonal pyramid (more specifically an axially monovacant trigonal bipyramid), despite being a rare occurrence when compared to the tetrahedron or the square, is overrepresented compared to a random shape distribution. The addition of geometry indices as an exclusion criterion exacerbates this, presumably because the trigonal pyramid is never excluded by geometry index, whereas all other shapes are. Generally, both variations of the angular deviation algorithm have good distortion tolerance. Only the square shape dissipates quickly in the hybrid algorithm.
Note that this is not necessarily bad as we have introduced distortion tolerance as a measure that does not consider the relationships between shapes, but solely as a means to compare algorithms. ![Shapes identified by sum of angular deviation squares algorithm (left column) and its hybrid algorithm with geometry indices (right column) for four-vertex shapes with varying distortion vector norms applied to all vertices. Classification of the base undistorted shape is not shown as both algorithms correctly identify it. ’Frequencies’ denotes the occurrence of a shape in the set of one-hundred distorted structures generated.[]{data-label="fig:shape_classification_size_four_first"}](figure6.pdf){width="\linewidth"} Lastly, consider the continuous shape measure based algorithm and a biased variation in Figure \[fig:shape\_classification\_size\_four\_second\]. For all shapes except the tetrahedron, the unbiased algorithm exhibits good distortion tolerance characteristics, and the set of classified shapes at large distortions is consistently diverse when compared to the angular deviation algorithm. The seesaw and trigonal pyramid shapes, which are equatorially or axially monovacant trigonal bipyramids, are comparatively rare structures relative to the tetrahedron or square. The biased variation reweights the continuous shape measures of the seesaw and trigonal pyramid shapes to reduce their classification probability. As a result, the algorithm exhibits extensive distortion tolerance for the tetrahedron and square shapes and reduced distortion tolerance for the seesaw and trigonal pyramid shapes. This biased algorithm is the shape classification algorithm chosen for <span style="font-variant:small-caps;">Molassembler</span>. ![Shapes identified by the continuous shape measure algorithm (left column) and a biased variant algorithm (right column) for four-vertex shapes with varying distortion vector norms applied to all vertices.
Classification of the base undistorted shape is not shown as both algorithms correctly identify it. ’Frequencies’ denotes the occurrence of a shape in the set of one-hundred distorted structures generated.[]{data-label="fig:shape_classification_size_four_second"}](figure7.pdf){width="\linewidth"} Returning to the identification of a stereopermutation from Cartesian coordinates: By calculating the continuous shape measure in order to classify the shape, one also obtains a minimal square distance pairwise mapping between shape vertices and binding site centroids. This allows for direct mapping into the realized stereopermutation, which can then be sought in a generated set of feasible stereopermutations. Atom-centered stereopermutators {#sec:atom_stereocenters} =============================== Instead of IUPAC stereodescriptors such as *R* and *S* for the two stereopermutations of an asymmetric tetrahedron, <span style="font-variant:small-caps;">Molassembler</span> exposes two indices of permutation that serve as transferable and comparable stereodescriptors under different conditions. The first is the index of permutation within the abstract permutational space disregarding feasibility. This integer stereodescriptor is transferable among equivalent abstract inputs: If the number of different binding sites (as determined by ranking), their linking, and the local shape match between cases, the index of permutation will be comparable. The second is the index of an abstract stereopermutation within an ordered list of stereopermutations deemed feasible, which will be comparable only if all previous conditions hold and all linking cycles are composed of matching element types and bond orders. Note that the value range of the second stereodescriptor can change upon changes to the algorithm determining feasibility. The idealization of orientations into local shapes enables the enumeration of stereopermutations in an abstract space. 
However, two issues arise: Most importantly, distorted geometries present challenges for shape classification and conformer generation. It can be difficult to recognize the correct shape in strongly distorted structures. Special care must be taken not to misinterpret distortions owing to, e.g., small cycles in terms of a false shape, and tolerances on internal coordinates must be expanded in spatial modeling if distortions are necessary to form a conformation. This problem is discussed further in Section \[sec:conformer\_generation\]. Second, as the list of local shapes is bounded, the particular local shape one might require to adequately describe a molecular configuration may not be in the available set. However, the list of available shapes is designed to be extensible with little effort to alleviate such problems. The applied model of spatial orientations around a central atom cannot intrinsically distinguish in which combinations of atoms and shapes stereopermutations can interconvert with low energy barriers (consider, for example, inversion at a nitrogen atom). In essence, the current model treats molecular configuration as rigid, a situation considered to be realized classically at zero Kelvin. Though optional, a thawing protocol is invoked by default in which a stereopermutator managing a nitrogen atom in a monovacant tetrahedral local shape with three different binding sites will be a stereocenter only if it is a member of a cycle of size three or four [@Rauk1970]. Without this protocol, any monovacant tetrahedral nitrogen atom with three different binding sites is a stereocenter. If none of the ligands are linked in a trigonal or pentagonal bipyramid, axial and equatorial ligands interchange at room temperature through Berry pseudorotation [@Berry1960] and the Bartell mechanism [@adams1970structure], respectively. 
<span style="font-variant:small-caps;">Molassembler</span> will therefore ’thermalize’ stereopermutations in such shapes in the thawing approximation if none of the binding sites are linked, reducing such shapes’ chiral character. In spite of an abstract model of orientation, molecules are modifiable in close analogy to three-dimensional editing. If an atom and its immediate surroundings constitute a stereocenter and its stereopermutation is known, then under specific circumstances, adding or removing a binding site will not cause loss of information. In two situations, chiral state loss is the correct result. First, if the resulting set of binding sites post-edit cannot constitute a stereocenter in the target shape. Second, binding site addition can be ambiguous; from a square shape to a square pyramid shape, the new apical position can be either above or below the plane of existing shape vertices. By default, the algorithm opts to discard chiral state on ambiguous transitions or on significant total angular or chiral deviations of shape vertices. Alternatively, one can choose to retain chiral state if the transition is unique, or to let <span style="font-variant:small-caps;">Molassembler</span> choose at random from the best transitions. The set of best transitions between shapes of adjacent sizes is partially computed at compile-time with certain compilers in order to avoid runtime costs. Transitions are calculated instead of hard-coded to keep the list of local shapes easily extendable. Atom-centered stereopermutator propagation encompasses substituent count changes, site count changes, ranking order changes, and shape changes, any combination of which may occur simultaneously. To limit implementation complexity, propagation is limited to shape size, site, and substituent count changes of at most one. 
Under these circumstances, a mapping between sites can be generated on the basis of constituting substituent set similarity quantified by an edit distance metric in analogy to string edit distance metrics like Levenshtein distance [@Levenshtein1966]. If the shape is changed, a mapping of shape vertices between shapes is chosen in accordance with propagation settings. Propagation then proceeds in four steps: First, site indices are placed at the old shape vertices with the stereopermutator’s set stereopermutation. Second, if a shape vertex mapping exists, the old site indices will be transferred into the new shape. Third, old site indices are mapped to new site indices with the site index mapping. Fourth, from the new site indices placed at the shape vertices of the new shape, a stereopermutation is generated that can then be sought in the set of new stereopermutations. If the stereopermutation cannot be propagated and the stereopermutator in its new shape and ranking is a stereocenter, it will be left undetermined. The propagation practiced here has distinct parallels to the transformations in reaction rules of Andersen et al. [@Andersen2017], particularly with respect to shape vertex propagation. Bond-centered stereopermutators {#sec:bond_stereocenters} =============================== In contrast to atom-centered stereopermutators that capture non-superimposable orientations of the immediate graph adjacents of a single atom, bond-centered stereopermutators capture non-superimposable relative orientations of a bond’s substituents that arise due to hindered rotation around the bond axis; consider, e.g., substituents bonded to the atoms of a non-rotatable double bond (see Figure \[fig:double\_bond\_perspective\]), where for each atom there exists also an atom-centered stereopermutator characterizing the local shape.
Bond-centered stereopermutators are generalized to arbitrary combinations of shapes instead of being specialized toward the common combinations among and between equilateral triangles and bent shapes that constitute most rotational barriers along double bonds. ![*Left:* Lewis structure of (E)-but-2-ene. *Right:* Abstract classification of the double-bond motif of (E)-but-2-ene into two triangle shapes and the substituents’ ranking cases. Light gray circles denote shape vertex loci. []{data-label="fig:double_bond_perspective"}](figure8.pdf){width="0.6\linewidth"} Stereodescriptors derived from a procedure enumerating all possible rotational orientations must be comparable and transferable among equivalent situations. Consequently, inputs to the stereopermutation enumeration algorithm are standardized and abstracted. Each side of the bond is reduced to its local shape, the shape vertex fused into the bond (i.e., the vertex that is part of the bond), and the ranking characters of all non-fused vertices, as shown in Fig. \[fig:double\_bond\_perspective\]. The stereopermutators at each end of the bond carry the information required for the abstraction: The ranking of their binding sites permits the abstraction to ranking characters, and the specified stereopermutation places ranking characters at shape vertices. There exist degrees of freedom in the data representation of the input to the stereopermutation enumeration algorithm. Any shape vertex may be fused into a bond. Additionally, shape vertex enumeration is arbitrary. These degrees of freedom can be removed by rotating the fused vertex to the algebraically smallest shape vertex of its set of rotationally interconvertible vertices. For example, the octahedron has a single set of interconvertible vertices, but the square pyramid has two: The equatorial set of four vertices, and the singleton set containing the apical vertex.
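For the common combination of two equilateral triangle shapes shown in Figure \[fig:double\_bond\_perspective\], the distinct planar arrangements can be enumerated in a toy sketch (hypothetical code; the actual enumeration handles arbitrary shape combinations and also staggered alignments):

```python
def double_bond_arrangements(left, right):
    """Enumerate distinct planar alignments of the off-axis ranking
    characters on each side of a double bond between two triangle shapes.
    A half-turn about the bond axis swaps both sides' characters
    simultaneously and yields a superimposable arrangement, so
    arrangements are canonicalized over it."""
    distinct = set()
    for flipped in (False, True):
        r = tuple(reversed(right)) if flipped else tuple(right)
        arrangement = (tuple(left), r)
        half_turn = (tuple(reversed(left)), tuple(reversed(r)))
        distinct.add(min(arrangement, half_turn))
    return sorted(distinct)

print(len(double_bond_arrangements(("A", "B"), ("A", "B"))))  # 2: the E/Z pair
print(len(double_bond_arrangements(("A", "A"), ("A", "B"))))  # 1: one side's
# characters are identical, so the bond is not stereogenic
```

With distinct characters on both sides, the two arrangements correspond to the *E*/*Z* pair; once either side carries only identical characters, a single arrangement remains.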
Removing representational degrees of freedom ensures the stereopermutation enumeration algorithm generates transferable stereodescriptors. The enumeration scheme proceeds to identify the set of closest off-axis shape vertices that will be most relevant to rotational energy barriers. If, at either side of the bond, this shape vertex set’s ranking characters are all equivalent, then rotation around the bond is isotropic. Next, dihedral orientations are generated by sequentially aligning off-axis shape vertices across both sets. Alignment of off-axis shape vertices need not be eclipsed, but can also be staggered, enabling bond-centered stereopermutators to also enumerate hypothesized rotational minimum-energy structures in, e.g., alkanes. This scheme will omit the trans-alignment in a combination of two bent shapes, however, so it is explicitly added. Furthermore, the emergent ordering of stereopermutations does not yield stereodescriptors paralleling the relative order of the IUPAC stereodescriptors *E*/*Z* as defined in the sequence rules. To ease the implementation of a ranking algorithm, the stereopermutation sequence is modified to match the relative order of IUPAC stereodescriptors. Bond-centered stereopermutators can be placed at any bond where both constituting atoms are managed by a stereopermutator whose stereopermutation is specified. Unspecified atom-centered stereopermutators cannot form the basis of a bond-centered stereopermutator because their mapping from binding sites to shape vertices is unknown. The constituting shapes are not limited to the common cases found at multiple-order bonds in organic chemistry. The rotational energy structure of exotic combinations of shapes with many vertices is likely to be more complex than <span style="font-variant:small-caps;">Molassembler</span> would suggest, however. Analogously to atom-centered stereopermutators, bond-centered stereopermutations may not be viable in small rings.
For instance, setting the double bond in cyclopentene to an $E$ configuration yields an unembeddable graph, and this bond should therefore not be considered for bond-centered stereopermutation. Unfeasible bond-centered stereopermutations can be identified in analogy to unfeasible links in atom-centered stereopermutators. In atom-centered stereopermutator feasibility, one segment of the cyclic polygon is the segment $c$ that replaces the 1–3 distance modeled explicitly by two atomic distances and one angle in Figure \[fig:infeasibility\_denticity\]. In bond-centered stereopermutator feasibility, this segment is the replacement of the 1–4 distance, which is explicitly modeled via three atomic distances, two angles, and one dihedral angle. Another difference is that the cyclic polygon expansion plane must be modeled in three dimensions (see Supporting Information). In a sequence of four atoms at a dihedral angle of zero, the cyclic polygon expands within the same plane as the dihedral-constituting atoms to maximize atomic distances from cycle atoms to dihedral-angle-constituting atoms. At dihedral angle $\pi$, the expansion plane maximizing cycle-atom distances to the dihedral-angle base atoms lies perpendicular to that of the dihedral-angle-constituting atoms, as shown in Figure \[fig:bond\_stereopermutator\_infeasibility\]. ![*Left:* A dihedral sequence $ABCD$ composed of segments of unit length and joined at $\frac{2\pi}{3}$ angles with dihedral angle zero. The plane in which the cyclic polygon expands coincides with the plane composed of the points $BCD$ as this offers points on the cycle the maximal distance to the base vertices $BC$. *Right:* The same dihedral sequence at dihedral angle $\pi$. The cyclic polygon now expands in a plane perpendicular to that of $BCD$.
[]{data-label="fig:bond_stereopermutator_infeasibility"}](figure9.pdf){width="\linewidth"} Local modeling of bond-centered stereopermutator feasibility by cyclic polygons has a necessarily limited purview. Consider a graph cycle of arbitrary length containing two bond-centered stereopermutators, each composed of two atom-centered stereopermutators of equilateral triangle shape. Each bond-centered stereopermutator permits a cis and a trans arrangement of the cycle continuation. At small cycle sizes, individual feasibility modeling detects that only a cis arrangement is possible and concludes that neither bond-centered stereopermutator is stereogenic. At large cycle sizes, both arrangements are deemed feasible. It is conceivable that an intermediate cycle size exists at which only one bond-centered stereopermutator can be trans at a time for the graph to be embeddable. Individual bond-centered stereopermutator feasibility modeling will not capture this. In carbocycles of size seven and below, bond-centered stereopermutators are not stereogenic, yet they serve to enforce planarity of their substituents in conformers. During molecule construction, if a cycle is detected as approximately flat, bond-centered stereopermutators are placed at all edges of the cycle, enforcing coplanarity of all cycle edges. Chiral state propagation of bond-centered stereopermutators upon alterations of their underlying atom-centered stereopermutators (apart from ranking changes) has not been implemented; such alterations therefore cause the bond-centered stereopermutator to be dropped. Ranking algorithm {#sec:ranking_algorithm} ================= Ranking is based on the IUPAC sequence rules for organic molecules [@favre2013nomenclature]. These rules form a starting point for the differentiation of ligands that is generalizable to <span style="font-variant:small-caps;">Molassembler</span>’s model of stereocenters.
Without the ambition to generate canonical IUPAC names for molecules of either organic or inorganic character, there is no need to implement the further rules and relative preferences laid out for inorganic molecules in Ref. [@IUPACInorg]. Differences to organic-molecule ranking arise mainly from the generalization of the sequence rules from organic stereodescriptors to the library model and from partial omissions of sequence rules due to missing implementations. The stereodescriptors *R* and *S* arising from maximally asymmetric tetrahedral centers directly correspond to indices of permutation arising from the atom-centered permutational scheme. The stereodescriptors *Z/seqCis* and *E/seqTrans* similarly correspond to indices of permutation arising from the bond-centered stereopermutation scheme. Note that the helical chirality stereodescriptors *M* and *P* are currently not detected by <span style="font-variant:small-caps;">Molassembler</span>. The following changes and omissions were made with respect to the IUPAC sequence rules: First, no algorithm to enumerate mesomeric Kekulé structures is currently implemented, so the relevant part of sequence rule 1a in Ref. [@favre2013nomenclature] is discarded. We plan to fix this omission in a future version of our software. Second, no distinction is made between asymmetric and pseudoasymmetric stereogenic units, affecting sequence rules 4a and 4c. Third, <span style="font-variant:small-caps;">Molassembler</span> considers stereopermutators alike for sequence rule 4b if the number of stereopermutations and the index of permutation match. This is merely an abstraction of the sequence rule that covers all explicit cases laid out in the sequence rule definition. Lastly, the relative priority of stereodescriptors *R* before *S* and *Z* before *E* defined in sequence rule 5 is abstracted to algebraic ordering of the index of permutation.
Inherently, since the vertex enumeration of shapes is arbitrary, the relative order of indices of permutation is also arbitrary. We can therefore choose the vertex enumeration of the tetrahedron and the monovacant tetrahedron so that the integer ordering parallels the order of the corresponding *R* and *S* stereodescriptors. The generation of indices of permutation at bond-centered stereopermutators is similarly altered to parallel the IUPAC stereodescriptors *E/seqCis* and *Z/seqTrans*. Indices of permutation for shapes with more vertices than the tetrahedron can depend on the abstract binding case. For instance, indices of permutation for a trigonal bipyramid of abstract binding case `ABCDE` are not comparable to indices of permutation for the abstract binding case `AABCD`. Furthermore, they are not comparable across different shapes. Consequently, the relative ordering of stereodescriptors in sequence rule 5 is established by sequential comparison of the shape (algebraic ordering of its index in the list of all shapes), lexicographic comparison of the abstract binding case, and finally comparison of the index of permutation. The relative ordering thus established remains arbitrary, but the sequence rule now distinguishes the additional cases that may occur in inorganic compounds without affecting the relative priority of the organic stereodescriptors. A ranking algorithm based upon these modified IUPAC sequence rules can establish chemical differences between individual atom substituents, but not between binding sites. For a ranking at the level of binding sites, <span style="font-variant:small-caps;">Molassembler</span> applies two further rules. First, sites consisting of more atoms precede sites consisting of fewer atoms. Second, sites are ordered lexicographically by the set of their constituting atoms’ ranking positions, which is sorted descending by value.
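The sequence-rule-5 ordering just described amounts to a lexicographic comparison of three keys; a sketch (the type and field names are illustrative, not <span style="font-variant:small-caps;">Molassembler</span>’s data model):

```python
from typing import NamedTuple

class Stereodescriptor(NamedTuple):
    """Illustrative stand-in for a stereopermutator's descriptor data."""
    shape_index: int        # index of the shape in the list of all shapes
    binding_case: str       # abstract binding case, e.g. "AABCD"
    permutation_index: int  # index of permutation within that case

def precedes(a: Stereodescriptor, b: Stereodescriptor) -> bool:
    """Relative stereodescriptor priority: compare the shape first, then
    the binding case lexicographically, then the index of permutation."""
    return ((a.shape_index, a.binding_case, a.permutation_index)
            < (b.shape_index, b.binding_case, b.permutation_index))

# Descriptors from different abstract binding cases are ordered by the
# case string before the permutation index is ever consulted:
x = Stereodescriptor(5, "AABCD", 3)
y = Stereodescriptor(5, "ABCDE", 0)
print(precedes(x, y))  # True
```

Because tuples compare lexicographically in Python, the three-key comparison needs no explicit branching.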
Ranking implementation correctness was tested against relevant IUPAC examples [@favre2013nomenclature] and a validation suite from a proposed revision of the sequence rules [@Hanson2018]. Not surprisingly, owing to the implementation of only a limited set of sequence rules and some omissions from their definitions (as discussed above), <span style="font-variant:small-caps;">Molassembler</span> did not pass all tests from this validation suite. However, additional differences arose due to different interpretations of the sequence rules. The consequences of an incorrect ranking are twofold: By falsely differentiating substituents, duplicate stereopermutations can arise that are superimposable. By falsely equating substituents, stereopermutations can be missed. These issues point to future work on the implementation, which will lift limitations imposed upon sequence rules so that sequence rule interpretation can be brought into full agreement with that of the validation set. The generalization of the sequence rules to <span style="font-variant:small-caps;">Molassembler</span>’s molecule model has resulted in a ranking algorithm that can be applied to myriad combinations: Stereodescriptors of multidentate and haptic ligands in arbitrary polyhedral shapes, and stereodescriptors of any multiple bond-order combination of polyhedral shapes at any fused vertices, are accounted for. Despite its incompleteness, the ranking algorithm as implemented is correct for the vast majority of typical cases. Molecule representation, equivalence, and manipulation {#sec:molecule_manipulation} ====================================================== A set of atom (nuclear) Cartesian coordinates is in principle free of any information about the presence of bonds between atoms. <span style="font-variant:small-caps;">Molassembler</span> allows the translation of molecules given as a set of nuclear coordinates into graphs.
This proceeds by identification of bonds by atom-pair spatial distance or discretization of a bond order matrix, subsequent connected component identification, and finally classification of local shapes and identification of stereopermutations of a specific molecule. Crucial for correct molecule interpretation is that bonds are identified correctly. Discretizing bond orders can yield different results depending on the method applied to calculate them. Naturally, the whole procedure depends on the specific approaches chosen, while no single one can be considered clearly superior to the others. Small manipulations of molecules, as trivial as they may seem, entail a significant amount of nontrivial work. Stereodescriptors are propagated at binding site additions and removals to the target shape as discussed above. Since stereodescriptors are chosen based on substituent ranking, which may have been altered by the edit, it is necessary to re-rank all binding sites of non-terminal atoms. For an example see Figure \[fig:why\_rerank\]. Should a binding site ranking be changed by the edit, assigned stereopermutations are propagated to the new ranking. Consequently, to ensure continuity, molecule edits scale at least linearly in the number of atoms. ![*Left:* This example shows a molecule with two substituents at the central carbon atom that are constitutionally isomeric and carry the same stereodescriptor, yielding an identical ranking. The central carbon atom is not stereogenic. If a molecule edit changes the stereopermutation of one of these substituents, akin to swapping the hydrogen and fluorine atom as indicated, the stereodescriptor flips. *Right:* The desired result of the molecular manipulation.
Although the change is local to the right arm of the molecule, it incurs a differentiation between the two arms at the central carbon atom, which is stereogenic and unspecified after re-ranking.[]{data-label="fig:why_rerank"}](figure10.pdf){width="0.7\linewidth"} In large-scale manipulations of molecules, such as connecting multiple molecules, we may avoid increasing complexity by delaying chiral state propagations due to ranking changes until all graph modifications are finished. Hence, specialized functions for large-scale manipulations such as connects, disconnects, substitutions, and chelating ligand additions are made available in our software. Equivalence between two molecules is based on a colored graph isomorphism algorithm with modular vertex coloring so that molecules can be compared for varying definitions of molecular equality. The information that can be exploited to color vertices comprises element types, bond orders, local shapes, and stereopermutations. To expedite molecular-graph comparisons in large sets, it is helpful to canonicalize all involved graphs. If both graphs in a comparison are in canonical form, it is no longer necessary to search for an index permutation matching vertices across the graphs; instead, it is possible to base comparisons on an identity vertex mapping. Such canonicalization is modular, as is the colored graph isomorphism. If two molecular graphs have been brought into canonical form with the same level of information as the comparison is tasked with, equivalence comparison of molecule instances is a direct identity mapping comparison instead of a colored graph isomorphism. ![Simplified data flow in the construction of a molecule representation from various primitive data types. Solid paths represent the default flow of data, dashed paths represent optional data flow. For instance, by default, the local shape of non-terminal atoms is inferred from the graph in the construction of atom-centered stereopermutators.
Optionally and preferably, if positions are available, local shapes can be classified from atom (nuclear) positions. []{data-label="fig:data_flow"}](figure11.pdf){width="\linewidth"} The components of the molecule model and some of their dependencies are illustrated in Figure \[fig:data\_flow\]. Cartesian coordinates (of atomic, and hence nuclear, positions) and bond orders are optional primitive data types in the construction of a molecule. Minimally, a molecule representation consists of a graph storing element types and bonds. Atom-centered stereopermutators are constructed at non-terminal atoms of the graph from a ranking of their substituents and a local shape either inferred from the graph or classified from coordinates. These parts are combined into the set of stereopermutations. Two adjacent atom-centered stereopermutators can form the foundation of a bond-centered stereopermutator, which determines its stereopermutations from the information embedded in both atom-centered stereopermutators. The indices of permutation of both atom-centered and bond-centered stereopermutators can be inferred from positions. A number of further features are worth mentioning: (i) Molecule instances are serializable into plain-text JavaScript Object Notation and its binary encodings for database or file storage. (ii) It is possible to determine whether molecules are enantiomers of one another. (iii) McGregor’s maximum common subgraph algorithm [@McGregor1982] with custom vertex comparators enables subgraph matching. (iv) <span style="font-variant:small-caps;">Molassembler</span> also contains an openSMILES-compliant [@james2015opensmiles] SMILES parser, notably implementing stereoconfiguration for square, trigonal bipyramid, and octahedron shapes in addition to the tetrahedron.
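The interpretation of coordinates into graphs described at the beginning of this section relies on correct bond identification. A deliberately crude distance-based sketch (the covalent radii and tolerance are illustrative values, not <span style="font-variant:small-caps;">Molassembler</span>’s parameterization, which can also discretize bond order matrices):

```python
import math

# Illustrative single-bond covalent radii in angstrom (rounded values).
COVALENT_RADIUS = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66}

def detect_bonds(elements, positions, tolerance=0.4):
    """Return atom index pairs whose distance lies below the sum of
    covalent radii plus a tolerance, as a crude bond guess."""
    bonds = []
    for i in range(len(elements)):
        for j in range(i + 1, len(elements)):
            cutoff = (COVALENT_RADIUS[elements[i]]
                      + COVALENT_RADIUS[elements[j]] + tolerance)
            if math.dist(positions[i], positions[j]) <= cutoff:
                bonds.append((i, j))
    return bonds

# Water: O-H distances ~0.96 A; the H-H distance of ~1.5 A is not a bond.
elements = ["O", "H", "H"]
positions = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
print(detect_bonds(elements, positions))  # [(0, 1), (0, 2)]
```

As the surrounding text notes, no single criterion is clearly superior; bond order discretization on the same coordinates may yield a different graph.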
Conformer generation {#sec:conformer_generation} ==================== Conformer generation in <span style="font-variant:small-caps;">Molassembler</span> is achieved through Distance Geometry [@Blaney1994]. In the context of molecular conformer generation, Distance Geometry enables the transformation from an atom-pairwise distance bounds matrix to atom coordinates. The distance bounds matrix is filled with lower and upper bounds on atom distances by spatial modeling. Before spatial modeling can occur, all stereopermutators of a molecule must have at least one feasible stereopermutation, and none may remain unspecified. For each generated conformer, unspecified stereopermutators are therefore iteratively set by choosing a random unspecified stereopermutator and setting its stereopermutation at random until no unspecified stereopermutators remain. For atom-centered stereopermutators, each stereopermutation is weighted by its relative statistical occurrence. In spatial modeling of molecules, distance bounds, chiral bounds, and dihedral bounds are collected. Distance bounds of immediate graph adjacents are populated from estimates of interatomic distances for the given bond order. The distance bounds of 1–3 bonded atoms are populated from the immediate bond bounds and bounds on their joint angle as modeled by the intermediate atom-centered stereopermutator. The distance bounds of 1–4 bonded atoms are populated from their immediate bond bounds, the intermediate angle bounds as modeled by atom-centered stereopermutators, and dihedral bounds from the path median bond-centered stereopermutator, if present. If no path median bond-centered stereopermutator is present, default dihedral bounds spanning the full definition range are assumed instead. Chiral bounds are collected from atom-centered stereopermutators, each emitting a varying number of chiral bounds depending on the complexity of the modeled local shape. Dihedral bounds are populated from bond-centered stereopermutators.
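The 1–3 bounds above follow directly from the law of cosines applied to the bond lengths and the angle interval of the intermediate stereopermutator; a sketch (function name and angle tolerance are illustrative):

```python
import math

def one_three_bounds(r12, r23, angle_low, angle_high):
    """Lower and upper 1-3 distance bounds from two fixed bond lengths
    and an interval of permissible angles (radians) at the middle atom."""
    def law_of_cosines(theta):
        return math.sqrt(r12**2 + r23**2 - 2.0 * r12 * r23 * math.cos(theta))
    # The 1-3 distance grows monotonically with the angle on (0, pi),
    # so the angle interval maps directly onto a distance interval.
    return law_of_cosines(angle_low), law_of_cosines(angle_high)

# Two 1.54-angstrom C-C bonds at the tetrahedral angle (~109.47 degrees),
# allowing five degrees of variation in either direction:
ideal = math.acos(-1.0 / 3.0)
low, high = one_three_bounds(1.54, 1.54,
                             ideal - math.radians(5), ideal + math.radians(5))
print(round(low, 3), round(high, 3))
```

In the same spirit, the 1–4 bounds combine bond, angle, and dihedral intervals, with the extremal dihedral angles supplying the extremal distances.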
For example, a butane molecule will have atom-centered stereopermutators placed at each carbon atom indicating the local shape of a tetrahedron. No chiral or dihedral bounds are generated for refinement, as there is no chirality to enforce and all dihedral-angle rotations are free. In but-2-ene, by contrast, the atom-centered stereopermutators at the double bond indicate the local shape of an equilateral triangle. The single bond-centered stereopermutator placed at the double bond indicates whether a $Z$- or $E$-ethene substructure is modeled. The distance bounds among the atoms adjacent to the double bond are affected by the selected bond stereopermutation. Dihedral bounds are collected for each pair across the sides of the bond, yielding nine dihedral bounds. Spatial modeling modifies the collected bounds to encompass distorted structures. Angle bounds of local shapes are expanded if they are part of a cycle with fewer than six members. Angles between substituents of spirocenters with small cycles are explicitly modeled. It is possible to specify fixed positions for atoms, albeit with the condition that, for each such atom, either none, one, or all of its binding sites’ atom positions are fixed as well. The distance matrix is generated by triangle inequality smoothing of the distance bounds matrix and iterative choice of pairwise distances between the resulting bounds. To avoid $O(N^5)$ complexity for full metrization, the triangle inequality smoothing problem is transferred to a shortest-path graph problem [@Dress1988]. The simplified GOR1 algorithm [@Cherkassky1996] solves the shortest-path graph problem with negative edge weights. <span style="font-variant:small-caps;">Molassembler</span> implements three metrization [@havel1984distance] variants: Full metrization, 10% metrization, and four-atom metrization.
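Triangle inequality bounds smoothing can be sketched with the classic Floyd-Warshall-style iteration. <span style="font-variant:small-caps;">Molassembler</span> uses the shortest-path reformulation with GOR1 instead; this $O(N^3)$ version is only illustrative:

```python
def smooth_bounds(lower, upper):
    """Tighten symmetric lower/upper distance bound matrices until all
    triangle inequalities hold; raises on inconsistent input bounds."""
    n = len(upper)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Upper bounds: a path over k caps the direct distance.
                if upper[i][j] > upper[i][k] + upper[k][j]:
                    upper[i][j] = upper[j][i] = upper[i][k] + upper[k][j]
                # Lower bounds: being far from k forces i and j apart.
                bound = max(lower[i][j],
                            lower[i][k] - upper[k][j],
                            lower[j][k] - upper[k][i])
                lower[i][j] = lower[j][i] = bound
                if lower[i][j] > upper[i][j]:
                    raise ValueError("inconsistent distance bounds")
    return lower, upper

# Three atoms: 0-1 and 1-2 are bonds of 1.5 A; 0-2 starts unconstrained.
INF = 100.0
lower = [[0.0, 1.5, 0.0], [1.5, 0.0, 1.5], [0.0, 1.5, 0.0]]
upper = [[0.0, 1.5, INF], [1.5, 0.0, 1.5], [INF, 1.5, 0.0]]
lower, upper = smooth_bounds(lower, upper)
print(upper[0][2])  # capped at 3.0 by the path over atom 1
```

After smoothing, each pairwise distance can be drawn from its tightened interval to assemble the distance matrix.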
Error functions for coordinate refinement post-embedding typically contain two types of terms: Distance errors mediating particle pair distances, and chiral errors mediating four-particle relative orientation by a signed tetrahedron volume, similar to an improper dihedral. In refinement, distance bound violation error terms can prevent the tetrahedron volume spanned by a chiral constraint from inverting. The more chiral constraints occur, the more problematic this becomes, in particular if chiral constraints have overlapping particle domains. The larger a local shape, the more chiral constraints are emitted to ensure capture of its chirality, exacerbating the problem. To ensure proper inversion of incorrectly embedded chiral constraints, we adopted a multiple-stage refinement in four spatial dimensions: Distance errors are applied on four spatial coordinates, while chiral errors are applied on three. In the first stage of refinement, the structure is free to expand into the fourth spatial dimension to minimize chiral errors. When all chiral constraints have the correct sign, an additional potential is applied on the fourth spatial dimension to compress structures back into three-dimensional space. The refinement error function in our implementation features two further changes: Bond-centered stereopermutators emit dihedral bounds, and hence the refinement error function contains dihedral error terms. In contrast to enforcing zero-dihedral-angle orientations of four particles with chiral constraints of zero volume, dihedral error terms offer more flexibility in the specificity of refined structures. Additionally, for haptic ligands, in which the centroid of the set of atoms forming the binding site occupies a shape vertex, chiral and dihedral errors are calculated with centroids of particle sets instead of with positions of individual particles. The refinement error function depends on a set of $N$ particles with positions $\vec{r}_i$.
The symmetric matrices $\mathbf{L}$ and $\mathbf{U}$ contain the lower and upper distance bounds. Each chiral constraint of the set $C$ consists of four particle sets $S_\alpha, S_\beta, S_\gamma, S_\delta$, and a lower and an upper signed volume bound $L_V$ and $U_V$, respectively. Each dihedral constraint of the set $D$ consists of four particle sets, and a lower and an upper dihedral angle bound $L_\phi$ and $U_\phi$, respectively. The signed tetrahedron volume of a chiral constraint is calculated from the centroids $\vec{s}_j$ of its constituting particle sets as: $$\vec{s}_j = \frac{1}{|S_j|}\sum_{i=1}^{|S_j|} \vec{r}_{S_{j, i}}$$ $$V_{\alpha\beta\gamma\delta} = \frac{1}{6}\left(\vec{s}_{\alpha}-\vec{s}_{\delta}\right)^{T} \cdot \left[ \left(\vec{s}_{\beta}-\vec{s}_{\delta}\right) \times \left(\vec{s}_{\gamma}-\vec{s}_{\delta}\right) \right].$$ The distance error $d_{ij}$, the chiral error $C_{\alpha\beta\gamma\delta}$, and the dihedral error $D_{\alpha\beta\gamma\delta}$ are defined as: $$d_{ij} = \mathrm{max}^2\left(0, \frac{\left(\vec{r}_{j}-\vec{r}_{i}\right)^{2}}{U_{ij}^{2}}-1\right) + \mathrm{max}^{2} \left(0, \frac{2 L_{ij}^{2}}{L_{ij}^{2} + \left(\vec{r}_{j}-\vec{r}_{i}\right)^{2}}-1\right)$$ $$C_{\alpha\beta\gamma\delta} = \mathrm{max}^{2} \left(0, V_{\alpha\beta\gamma\delta}-U_{V} \right) + \mathrm{max}^{2} \left(0, L_{V}-V_{\alpha\beta\gamma\delta} \right)$$ $$D_{\alpha\beta\gamma\delta} = \mathrm{max}^2 \left(0, \left\lvert\phi_{\alpha\beta\gamma\delta}+\begin{cases} 2\pi & \phi_{\alpha\beta\gamma\delta} < \frac{U_\phi + L_\phi - 2\pi}{2}\\ -2\pi & \phi_{\alpha\beta\gamma\delta} > \frac{U_\phi + L_\phi + 2\pi}{2}\\ 0 & \text{otherwise} \end{cases}-\frac{U_\phi{}+L_\phi}{2} \right\rvert-\frac{U_\phi{}-L_\phi}{2} \right)$$ where $\mathrm{max}^{2}$ denotes squaring the larger of the two elements given in parentheses.
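The distance and chiral terms translate directly into code; a sketch with illustrative function names (the dihedral term is omitted for brevity):

```python
import math

def max_squared(a, b):
    """Square of the larger of the two arguments."""
    return max(a, b) ** 2

def distance_error(r_i, r_j, lower, upper):
    """Distance error d_ij: penalizes pair distances outside [lower, upper]."""
    sq = sum((a - b) ** 2 for a, b in zip(r_i, r_j))
    return (max_squared(0.0, sq / upper**2 - 1.0)
            + max_squared(0.0, 2.0 * lower**2 / (lower**2 + sq) - 1.0))

def tetrahedron_volume(s_a, s_b, s_g, s_d):
    """Signed volume of the tetrahedron spanned by four (centroid) points."""
    u = [a - d for a, d in zip(s_a, s_d)]
    v = [b - d for b, d in zip(s_b, s_d)]
    w = [g - d for g, d in zip(s_g, s_d)]
    cross = (v[1] * w[2] - v[2] * w[1],
             v[2] * w[0] - v[0] * w[2],
             v[0] * w[1] - v[1] * w[0])
    return sum(a * b for a, b in zip(u, cross)) / 6.0

def chiral_error(volume, lower_v, upper_v):
    """Chiral error C: penalizes signed volumes outside [lower_v, upper_v]."""
    return max_squared(0.0, volume - upper_v) + max_squared(0.0, lower_v - volume)

# A distance inside its bounds contributes nothing:
print(distance_error((0, 0, 0), (1.5, 0, 0), 1.0, 2.0))  # 0.0
# The unit right-handed tetrahedron has signed volume 1/6:
v = tetrahedron_volume((1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0))
print(round(v, 6))
```

Note that the lower-bound part of the distance error stays finite as the pair distance approaches zero, which keeps gradients well behaved for strongly compressed embeddings.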
The total error function then reads $$\mathrm{errf}\left(\left\{\vec{r}_i\right\}\right) = \sum_{i < j}^{N} d_{ij} + \sum_{\left(S_\alpha, S_\beta, S_\gamma, S_\delta, U_{V}, L_{V}\right) \in{} C} C_{\alpha\beta\gamma\delta} + \sum_{\left(S_\alpha, S_\beta, S_\gamma, S_\delta, U_\phi, L_\phi\right) \in{} D} D_{\alpha\beta\gamma\delta}$$ (note the doubled letter ’r’ introduced in the abbreviation of the function to distinguish it from the standard error function). The error function $\mathrm{errf}$ is minimized with analytical gradients by L-BFGS [@Liu1989] in three stages: First, without a potential on the fourth spatial dimension and without dihedral errors, the tetrahedra modeled by chiral errors are allowed to invert. Second, a potential is applied to the fourth dimension and components along the fourth spatial dimension are eliminated. Lastly, dihedral errors are enabled. Conformer generation can then trivially be parallelized. Directed conformer generation has been implemented by placing bond-centered stereopermutators enumerating staggered arrangements at bonds whose rotations are non-isotropic. <span style="font-variant:small-caps;">Molassembler</span> implements a trie data structure to track and ease the generation of rotational combinations among the set of staggered bond-centered stereopermutators. These combinations can then be passed to a special conformer generation function. Until feasibility criteria for combinations of dihedral arrangements in cycles are devised and implemented, however, bonds in cycles of any size are excluded from consideration in directed conformer generation. Shortcomings {#sec:shortcomings} ============ As mentioned before, the general model of stereopermutators treats molecular configuration as frozen, with the exceptions of (i) nitrogen atom inversion, (ii) the Berry pseudorotation, and (iii) the Bartell mechanism, for all of which the thawing of stereopermutations has been implemented as an option.
Most likely, these are not the only cases in which stereopermutations can easily interconvert. In all unhandled cases, <span style="font-variant:small-caps;">Molassembler</span> may overstate chiral character. Feasibility determination of stereopermutations involving haptic binding sites based on conical-space overlap avoidance is limited by the algorithm estimating the required conical space. This algorithm currently estimates the required conical space only for biatomic or cyclic topologies; apart from this, no feasibility checks are made. The potential consequences are overstatements of chiral character and failures to generate conformations. Our effort to ensure that modeled conformers are close to local minima of the potential energy hypersurface is still rather limited; improved structure prediction can be envisioned for future versions of our software. In the present version, atom-pair distances for discrete bond orders are simply estimated by Universal Force Field [@rappe1992uff] parameters. Systematic distortions of common local shapes, such as Jahn-Teller distortions [@jahn1937stability] of octahedra, have not been addressed. It is therefore inadvisable to directly analyze generated conformational ensembles without subjecting the generated structures to suitable structure optimization or molecular mechanics methods. <span style="font-variant:small-caps;">Molassembler</span> does not pass the validation test set proposed by Hanson et al. [@Hanson2018]. The library ranking algorithm predates their work, which laid bare the possibility of alternate interpretations of the existing sequence rules. The validation set also includes test cases for a proposed additional sequence rule which is not implemented in <span style="font-variant:small-caps;">Molassembler</span>. In future work, the library ranking algorithm will be brought into closer alignment with their proposed changes.
A ranking algorithm capable of exact differentiation is of paramount importance, as otherwise the stereogenicity of atom centers may be misrepresented. Furthermore, no axial or helical chirality is yet identified by our implementation, and certain molecular features may therefore be missed. Demonstration example {#sec:demonstration} ===================== <span style="font-variant:small-caps;">Molassembler</span> has already been employed in the context of automated exploration of chemical reaction networks [@grimmel2019], in which it served as molecular graph interpreter, equivalence oracle, and conformer generator. Therefore, we here focus on a demonstration of two of its core features. We first consider an example where feasibility checks are important: An octahedral center with three short-bridge bidentate ligands. For the abstract binding case `(A-A)3`, there are four stereopermutations, as shown in Figure \[fig:demonstration\_feasibility\]. In the particular case of `[Fe(\mu2-oxalate)3]3-`, the feasibility algorithm rules out all stereopermutations in which an oxalate is trans-arranged due to its short bridges. <span style="font-variant:small-caps;">Molassembler</span> reports that the stereopermutator has four stereopermutations, but only two of these are feasible. ![The four abstract stereopermutations of the binding case `(A-A)3` in an octahedron, from left to right: The $\Lambda$ and $\Delta$ isomers, in which all bidentate ligands are cis-arranged, a stereopermutation in which a single ligand is trans-arranged, and lastly a stereopermutation in which all ligands are trans-arranged.[]{data-label="fig:demonstration_feasibility"}](figure12.pdf){width="\linewidth"} Next, we demonstrate that the abstraction <span style="font-variant:small-caps;">Molassembler</span> introduces regarding binding sites is effective for haptic ligands.
In Figure \[fig:demonstration\_haptic\], an example molecule is shown with a relatively complex case for shape classification, ranking, and stereopermutation, all at once. The assumption that the centroid of a set of contiguous binding atoms is placed at a vertex of an underlying shape applies well here, reducing the five-coordinate copper center to a three-site equilateral triangle. Ranking correctly deduces that its haptic ligands are identical in an `(A-A)B` abstract binding case. Similarly, the twelve-coordinate titanium center is reduced to a four-site tetrahedron with an `(A-A)(B-B)` binding case. Stereopermutation enumeration concludes that neither center is stereogenic. ![*Left:* The heavy-atom skeleton of a molecule with two types of haptic ligands: The bridged cyclopentadienyl groups bonded to a titanium atom in the center, and the triple bonds bonded to the copper atom towards the right. *Right:* Simplified Lewis structure of the left compound. Atom coloring: Carbon and titanium in gray, phosphorus in orange, copper in brown, and chlorine in green.[]{data-label="fig:demonstration_haptic"}](figure13.pdf){width="\linewidth"} Conclusions {#sec:conclusion} =========== The molecular model applied in <span style="font-variant:small-caps;">Molassembler</span> is capable of accurately representing many inorganic molecular complexes. The data representation of stereoisomerism generalizes to complex shapes and ligands, permitting an abstract high-level approach to molecular structure. Transferable stereodescriptors are devised for any combination of shape and abstract binding case, including haptic and multidentate ligands. Molecule instances are interpretable from Cartesian coordinates, constructible from various cheminformatics formats, editable, canonicalizable, serializable, and comparable. Both stochastic and directed conformer generation are implemented.
Despite the few shortcomings in its first version, <span style="font-variant:small-caps;">Molassembler</span> is already a feature-rich, general-purpose program that enables other programs to expand into parts of organometallic and inorganic chemical space. In future releases, we will address the existing shortcomings, particularly with regard to ranking and the handling of aromaticity. <span style="font-variant:small-caps;">Molassembler</span> is open-source under the BSD 3-clause license and available in the SCINE project [@Brunken2019]. Acknowledgments {#sec:acknowledgments .unnumbered} =============== This work has been financially supported by ETH Zürich (ETH-38 17-1). Program architecture and dependencies {#sec:code_architecture} ===================================== <span style="font-variant:small-caps;">Molassembler</span> is a C++ program written in the C++14 language standard. It has multiple dependencies: The SCINE Utilities [@Brunken2019] provide the data exchange formats prevalent throughout our software. Linear algebra computations are mediated by the Eigen [@Guennebaud2010] library. RingDecomposerLib [@Flachsenberg2017; @Kolodzik2012] provides graph cycle perception with a chemical perspective, nauty [@McKay2014] serves graph canonicalization purposes, JSON for Modern C++ [@Lohmann2013] offers serialization and deserialization to JavaScript Object Notation (JSON), and the Boost libraries [@Boost2017] form the basis of the molecular graph representation and offer relevant algorithms and data structures. Optional dependencies offer functionality if available: If one wishes to compile the available Python bindings, the C++ library pybind11 [@Wenzel2017] is required. Parallelization of core library algorithms is available if OpenMP [@Dagum1998] is supported by the available compiler and linker.
<span style="font-variant:small-caps;">Molassembler</span> will accept SMILES [@Weininger1988] and InChI [@Stein2003] input if OpenBabel’s [@openbabel2011] chemical file format conversion utility is found at runtime. Configuration of optional components, compiler-agnostic building and installation are supported cross-platform by modern CMake scripts. <span style="font-variant:small-caps;">Molassembler</span> draws from two sub-libraries, each serving a single purpose. One is responsible for defining the local shapes and calculating properties and transition mappings between them. The other is responsible for the symbolic permutation required in stereopermutation. Further sub-libraries model cyclic polygons, implement the simplified GOR1 shortest path algorithm for Boost Graph Library data structures, and provide data structures and algorithms for compile-time computation. The main body of <span style="font-variant:small-caps;">Molassembler</span> itself is split into the application programming interface and the private implementations thereof. Overall, <span style="font-variant:small-caps;">Molassembler</span> consists of roughly sixty thousand lines of code and twenty thousand lines of comments and documentation. With the open-source release we provide an installation and user’s manual as well as separate technical documentation for the C++ program and the Python module. Pirhadi, S.; Sunseri, J.; Koes, D. R. [Open source molecular modeling]{}. *J. Mol. Graph. Model.* **2016**, *69*, 127–143 Ambure, P.; Aher, R. B.; Roy, K. In *Computer-Aided Drug Discovery*; Zhang, W., Ed.; Springer New York: New York, NY, 2014; pp 257–296 Foscato, M.; Occhipinti, G.; Venkatraman, V.; Alsberg, B. K.; Jensen, V. R. [Automated Design of Realistic Organometallic Molecules from Fragments]{}. *J. Chem. Inf. Model.* **2014**, *54*, 767–780 Chu, Y.; Heyndrickx, W.; Occhipinti, G.; Jensen, V. R.; Alsberg, B. K.
[An Evolutionary Algorithm for de Novo Optimization of Functional Transition Metal Compounds]{}. *J. Am. Chem. Soc.* **2012**, *134*, 8885–8895 *[Marvin 5.3.4]{}*; ChemAxon: Budapest, Hungary, 2010 Steinbeck, C.; Han, Y.; Kuhn, S.; Horlacher, O.; Luttmann, E.; Willighagen, E. [The Chemistry Development Kit (CDK): An open-source Java library for chemo- and bioinformatics]{}. *J. Chem. Inf. Comp. Sci.* **2003**, *43*, 493–500 Ioannidis, E. I.; Gani, T. Z.; Kulik, H. J. [molSimplify: A toolkit for automating discovery in inorganic chemistry]{}. *J. Comput. Chem.* **2016**, *37*, 2106–2117 Guan, Y.; Ingman, V. M.; Rooks, B. J.; Wheeler, S. E. [AARON: An Automated Reaction Optimizer for New Catalysts]{}. *J. Chem. Theory Comput.* **2018**, *14*, 5249–5261 Hay, B. P.; Firman, T. K. [HostDesigner: a program for the de novo structure-based design of molecular receptors with binding sites that complement metal ion guests]{}. *Inorg. Chem.* **2002**, *41*, 5502–5512 Foscato, M.; Venkatraman, V.; Jensen, V. R. [DENOPTIM: Software for Computational de Novo Design of Organic and Inorganic Molecules]{}. *J. Chem. Inf. Model.* **2019**, *59*, 4077–4082 Bergeler, M.; Simm, G. N.; Proppe, J.; Reiher, M. [Heuristics-Guided Exploration of Reaction Mechanisms]{}. *J. Chem. Theory Comput.* **2015**, *11*, 5712–5722 Simm, G. N.; Reiher, M. [Context-Driven Exploration of Complex Chemical Reaction Networks]{}. *J. Chem. Theory Comput.* **2017**, *13*, 6108–6119 Unsleber, J. P.; Reiher, M. [The Exploration of Chemical Reaction Networks]{}. *Ann. Rev. Phys. Chem.* **2020**, *71*, DOI: 10.1146/annurev-physchem-071119-040123 Miyazawa, K.; Iwata, S.; Satoh, H. [G-RMSD: Root Mean Square Deviation Based Method for Three-dimensional Molecular Similarity Determination]{}. *chemRxiv* **2020**, DOI: 10.26434/chemrxiv.11933055.v1 G[ó]{}mez-Bombarelli, R.; Wei, J. N.; Duvenaud, D.; Hern[á]{}ndez-Lobato, J. M.; S[á]{}nchez-Lengeling, B.; Sheberla, D.; Aguilera-Iparraguirre, J.; Hirzel, T.
D.; Adams, R. P.; Aspuru-Guzik, A. [Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules]{}. *ACS Cent. Sci.* **2018**, *4*, 268–276 Favre, H. A.; Powell, W. H. *[Nomenclature of organic chemistry: IUPAC recommendations and preferred names 2013]{}*; Royal Society of Chemistry, 2013 McNaught, A. D. *[Compendium of chemical terminology]{}*; Blackwell Science Oxford, 1997 Thomassen, C. [The graph genus problem is NP-complete]{}. *J. Algorithms* **1989**, *10*, 568–576 Gar[ç]{}on, M.; Bakewell, C.; Sackman, G. A.; White, A. J. P.; Cooper, R. I.; Edwards, A. J.; Crimmin, M. R. [A hexagonal planar transition-metal complex]{}. *Nature* **2019**, *574*, 390–393 Thomson, J. J. [XXIV. On the structure of the atom: an investigation of the stability and periods of oscillation of a number of corpuscles arranged at equal intervals around the circumference of a circle; with application of the results to the theory of atomic structure]{}. *Philos. Mag.* **1904**, *7*, 237–265 *[The PyMOL Molecular Graphics System 2.20]{}*; Schr[ö]{}dinger LLC: New York, NY, 2015 *[MarvinSketch 19.26]{}*; ChemAxon: Budapest, Hungary, 2019 Flachsenberg, F.; Andresen, N.; Rarey, M. [RingDecomposerLib: An Open-Source Implementation of Unique Ring Families and Other Cycle Bases]{}. *J. Chem. Inf. Model.* **2017**, *57*, 122–126 Bennett, W. E. [Computation of the number of isomers and their structures in coordination compounds]{}. *Inorg. Chem.* **1969**, *8*, 1325–1328 Pinsky, M.; Avnir, D. [Continuous Symmetry Measures. 5. The Classical Polyhedra]{}. *Inorg. Chem.* **1998**, *37*, 5575–5582 Wenzel, W.; Hamacher, K. [Stochastic tunneling approach for global minimization of complex potential energy landscapes]{}. *Phys. Rev. Lett.* **1999**, *82*, 3003 Andresen, B.; Gordon, J. M. [Constant thermodynamic speed for minimizing entropy production in thermodynamic processes and simulated annealing]{}. *Phys. Rev. E* **1994**, *50*, 4346 Yang, L.; Powell, D. R.; Houser, R. P. 
[Structural variation in copper (I) complexes with pyridylmethylamide ligands: structural analysis with a new four-coordinate geometry index, $\tau$ 4]{}. *Dalton Trans.* **2007**, 955–964 Okuniewski, A.; Rosiak, D.; Chojnacki, J.; Becker, B. [Coordination polymers and molecular structures among complexes of mercury(II) halides with selected 1-benzoylthioureas]{}. *Polyhedron* **2015**, *90*, 47–57 Addison, A. W.; Rao, T. N.; Reedijk, J.; van Rijn, J.; Verschoor, G. C. [Synthesis, structure, and spectroscopic properties of copper (II) compounds containing nitrogen–sulphur donor ligands; the crystal and molecular structure of aqua \[1, 7-bis (N-methylbenzimidazol-2-yl)-2, 6-dithiaheptane\] copper (II) perchlorate]{}. *J. Chem. Soc. Dalton* **1984**, 1349–1356 Zabrodsky, H.; Peleg, S.; Avnir, D. [Continuous symmetry measures. 2. Symmetry groups and the tetrahedron]{}. *J. Am. Chem. Soc.* **1993**, *115*, 8278–8289 Alvarez, S.; Alemany, P.; Casanova, D.; Cirera, J.; Llunell, M.; Avnir, D. [Shape maps and polyhedral interconversion paths in transition metal chemistry]{}. *Coordin. Chem. Rev.* **2005**, *249*, 1693–1708 Rauk, A.; Allen, L. C.; Mislow, K. [Pyramidal Inversion]{}. *Angew. Chem. Int. Ed.* **1970**, *9*, 400–414 Berry, R. S. [Correlation of Rates of Intramolecular Tunneling Processes, with Application to Some Group V Compounds]{}. *J. Chem. Phys.* **1960**, *32*, 933–938 Adams, W. J.; Thompson, H. B.; Bartell, L. S. [Structure, Pseudorotation, and Vibrational Mode Coupling in IF 7 : An Electron Diffraction Study]{}. *J. Chem. Phys.* **1970**, *53*, 4040–4046 Levenshtein, V. I. [Binary codes capable of correcting deletions, insertions, and reversals]{}. Phys.-Dokl. 1966; pp 707–710 Andersen, J. L.; Flamm, C.; Merkle, D.; Stadler, P. F. [Chemical Graph Transformation with Stereo-Information BT - Graph Transformation]{}. Cham, 2017; pp 54–69 Connelly, N. G., Damhus, T., Hartshorn, R. M., Hutton, A. T., Eds. 
*[Nomenclature of Inorganic Chemistry]{}*; The Royal Society of Chemistry, 2005 Hanson, R. M.; Musacchio, S.; Mayfield, J. W.; Vainio, M. J.; Yerin, A.; Redkin, D. [Algorithmic Analysis of Cahn–Ingold–Prelog Rules of Stereochemistry: Proposals for Revised Rules and a Guide for Machine Implementation]{}. *J. Chem. Inf. Model.* **2018**, *58*, 1755–1765 McGregor, J. J. [Backtrack search algorithms and the maximal common subgraph problem]{}. *Software Pract. Exper.* **1982**, *12*, 23–34 James, C. A.; Vandermeersch, T.; Dalke, A. [OpenSMILES specification]{}. 2015; <http://opensmiles.org/opensmiles.pdf> Blaney, J. M.; Dixon, J. S. [Distance Geometry in Molecular Modeling]{}. *Rev. Comput. Chem.* **1994**, *5*, 299–335 Dress, A. W. M.; Havel, T. F. [Shortest-path problems and molecular conformation]{}. *Discrete Appl. Math.* **1988**, *19*, 129–144 Cherkassky, B. V.; Goldberg, A. V.; Radzik, T. [Shortest paths algorithms: Theory and experimental evaluation]{}. *Math. Program.* **1996**, *73*, 129–174 Havel, T.; W[ü]{}thrich, K. [A distance geometry program for determining the structures of small proteins and other macromolecules from nuclear magnetic resonance measurements of intramolecular 1H-1H proximities in solution]{}. *B. Math. Biol.* **1984**, *46*, 673–698 Liu, D. C.; Nocedal, J. [On the limited memory BFGS method for large scale optimization]{}. *Math. Program.* **1989**, *45*, 503–528 Rapp[é]{}, A. K.; Casewit, C. J.; Colwell, K. S.; Goddard, W. A.; Skiff, W. M. [UFF, a Full Periodic Table Force Field for Molecular Mechanics and Molecular Dynamics Simulations]{}. *J. Am. Chem. Soc.* **1992**, *114*, 10024–10035 Jahn, H. A.; Teller, E. [Stability of polyatomic molecules in degenerate electronic states-I—Orbital degeneracy]{}. *Proc. R. Soc. London, Ser. A* **1937**, *161*, 220–235 Grimmel, S.; Reiher, M. [The Electrostatic Potential as a Descriptor for the Protonation Propensity in Automated Exploration of Reaction Mechanisms]{}. 
*Faraday Discuss.* **2019**, *220*, 443–463 Brunken, C.; Bosia, F.; Grimmel, S.; Simm, G.; Sobez, J.-G.; Unsleber, J.; Vaucher, A. C.; Weymuth, T.; Reiher, M. [SCINE Utilities]{}. 2019; <https://github.com/qcscine/utilities> Guennebaud, G.; Jacob, B.; Others, [Eigen]{}. 2010; <http://eigen.tuxfamily.org> Kolodzik, A.; Urbaczek, S.; Rarey, M. [Unique ring families: A chemically meaningful description of molecular ring topologies]{}. *J. Chem. Inf. Model.* **2012**, *52*, 2013–2021 McKay, B. D.; Piperno, A. [Practical graph isomorphism, II]{}. *J. Symb. Comput.* **2014**, *60*, 94–112 Lohmann, N. [JSON for Modern C++]{}. 2013; <https://github.com/nlohmann/json> Abrahams, D. [et al.]{} [Boost C++ Libraries]{}. 2017; <https://www.boost.org> Wenzel, J. [Pybind11 2.4.2]{}. 2017; <https://github.com/pybind/pybind11> Dagum, L.; Menon, R. [OpenMP: an industry standard API for shared-memory programming]{}. *IEEE Comput. Sci. Eng.* **1998**, *5*, 46–55 Weininger, D. [SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules]{}. *J. Chem. Inf. Comp. Sci.* **1988**, *28*, 31–36 Stein, S. E.; Heller, S. R.; Tchekhovski, D. [An Open Standard for Chemical Structure Representation - The IUPAC Chemical Identifier]{}. Nimes International Chemical Information Conference. 2003; pp 131–143 Boyle, N. M. O.; Banck, M.; James, C. A.; Morley, C.; Vandermeersch, T.; Hutchison, G. R. [Open Babel: An open chemical toolbox]{}. *J. Cheminform.* **2011**, *3*, 1–14 [^1]: ORCID:0000–0002–3264–0622 [^2]: Corresponding author; e-mail: markus.reiher@phys.chem.ethz.ch; ORCID:0000–0002–9508–1565
--- abstract: 'Let $A$ be any plane set. It is known that a holomorphic motion $h: A \times {\mathbb{D}}\to {\mathbb{C}}$ always extends to a holomorphic motion of the entire plane. It was recently shown that any isotopy $h: X \times [0,1] \to {\mathbb{C}}$, starting at the identity, of a plane continuum $X$ also extends to an isotopy of the entire plane. Easy examples show that this result does not generalize to all plane compacta. In this paper we will provide a characterization of isotopies of uniformly perfect plane compacta $X$ which extend to an isotopy of the entire plane. Using this characterization, we prove that such an extension is always possible provided the diameters of all components of $X$ are uniformly bounded away from zero.' address: - 'Nipissing University, Department of Computer Science & Mathematics, 100 College Drive, Box 5002, North Bay, Ontario, Canada, P1B 8L7' - 'University of Alabama at Birmingham, Department of Mathematics, Birmingham, AL 35294, USA' - 'University of Saskatchewan, Department of Mathematics and Statistics, 106 Wiggins road, Saskatoon, Canada, S7N 5E6' author: - 'L. C. Hoehn' - 'L. G. Oversteegen' - 'E. D. Tymchatyn' bibliography: - 'Isotopies.bib' title: Extension of isotopies in the plane --- [^1] [^2] Introduction ============ Denote the complex plane by ${\mathbb{C}}$ and the open unit disk by ${\mathbb{D}}$. An *isotopy* of a set $X \subset {\mathbb{C}}$ is a homotopy $h: X \times [0,1] \to {\mathbb{C}}$ such that for each $t \in [0,1]$, the function $h^t: X \to {\mathbb{C}}$ defined by $h^t(x) = h(x,t)$ is an embedding (i.e. a homeomorphism of $X$ onto the range of $h^t$). Suppose that $h: X \times [0,1] \to {\mathbb{C}}$ is an isotopy of a compactum $X \subset {\mathbb{C}}$ such that $h^0 = {\mathrm{id}}_X$. We consider the old problem of when the isotopy $h$ can be extended to an isotopy of the entire plane [^3]. A more restrictive variant of the notion of an isotopy is a holomorphic motion (see e.g. 
[@astamart01]). The remarkable $\lambda$-Lemma [@slod91] states that any holomorphic motion of any plane set can be extended to a holomorphic motion of the entire plane. See [@astamart01] for a different proof and some history of that problem. Although the $\lambda$-Lemma holds for arbitrary plane sets, some additional restrictions are needed for the existence of an extension of an isotopy to the entire plane ${\mathbb{C}}$. First, it is reasonable to restrict to isotopies of plane compacta. This by itself is not enough since Fabel has shown that there exists an isotopy of a convergent sequence which cannot be extended over the plane (see [@fabe05 p. 991]). On the other hand, it was shown recently in [@ot10] that any isotopy of an arbitrary plane continuum $X$ can be extended over the plane. In this case each complementary domain $U$ of $X$ is simply connected and, hence, there exists a conformal isomorphism $\varphi_U: {\mathbb{D}}\to U$. The proof made use of two key analytic results for these conformal isomorphisms: the Carathéodory kernel convergence theorem, and the Gehring-Hayman inequality for the diameters of hyperbolic geodesics in $U$. Let us now consider the case when $X$ is a plane compactum. Since we may assume that $X$ contains at least three points, the boundary of every complementary component $U$ of $X$ contains at least three points, so $U$ is hyperbolic, i.e. there exists an analytic covering map $\varphi_U: {\mathbb{D}}\to U$ (see [@Ahlfors1973]). There is an analogue of the Carathéodory kernel convergence theorem which holds for families of analytic covering maps (see \[sec:analytic covering maps\]). 
For an analogue of the Gehring-Hayman inequality, an additional geometric condition will be required: A compact subset $X \subset {\mathbb{C}}$ is *uniformly perfect with constant $k$*, where $0 < k < 1$, provided that for all $r < {\mathrm{diam}}(X)$ and all $x \in X$, $$\{z \in {\mathbb{C}}: kr \leq |z - x| \leq r\} \cap X \neq {\emptyset}.$$ Clearly every uniformly perfect set is perfect, and the standard “middle-third” Cantor set is uniformly perfect. It is known that the Gehring-Hayman estimate on the diameter of hyperbolic geodesics still holds for every analytic covering map $\varphi_U: {\mathbb{D}}\to U$ to a domain $U$ whose boundary is uniformly perfect (see \[sec:analytic covering maps\] for details). The main result in this paper is a characterization of isotopies $h: X \times [0,1] \to {\mathbb{C}}$ of uniformly perfect plane compacta $X$ which can be extended over the entire plane (see \[main1\]). We use our characterization to prove that any isotopy of a plane compactum such that the diameter of every component is uniformly bounded away from zero can be extended over the plane (see \[main2\]). Along the way, we will provide simpler proofs of some of the technical results in [@ot10]. Notation {#sec:notation} -------- By a *map* we mean a continuous function. For $z \in {\mathbb{C}}$, the magnitude of $z$ is denoted $|z|$, so that the Euclidean distance between two points $z,w \in {\mathbb{C}}$ is $|z - w|$. Given $z_0 \in {\mathbb{C}}$ and $r > 0$, denote $$B(z_0,r) = \{z \in {\mathbb{C}}: |z_0 - z| < r\} .$$ By a *domain* we mean a connected, open, non-empty set $U \subset {\mathbb{C}}$. If $X \subset {\mathbb{C}}$ is closed, then a *complementary domain* of $X$ is a component of ${\mathbb{C}}{\setminus}X$. A *crosscut* of a domain $U$ is an *open arc* $Q$ (i.e. $Q \approx (0,1) \subset {\mathbb{R}}$) contained in $U$ such that $\overline{Q}$ is a *closed arc* (i.e. $\overline{Q} \approx [0,1]$) whose endpoints are in $\partial U$.
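To make the annulus condition concrete, it can be checked numerically on a finite approximation of the middle-third Cantor set. The sketch below is illustrative only; the constant $k = 1/9$ and the truncation depth are ad hoc choices, not values used in this paper.

```python
# Check the uniform-perfectness annulus condition on a depth-limited
# approximation of the middle-third Cantor set (endpoints of the
# surviving intervals).  k = 1/9 and depth = 7 are illustrative choices.

def cantor_points(depth):
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))
            nxt.append((b - third, b))
        intervals = nxt
    return sorted({e for ab in intervals for e in ab})

def annulus_hits(points, x, r, k):
    # Is the annulus {z : k*r <= |z - x| <= r} met by the point set?
    return any(k * r <= abs(z - x) <= r for z in points)

pts = cantor_points(7)                      # spacing down to 3**-7
k = 1.0 / 9.0
radii = [0.5, 0.2, 0.1, 0.05, 0.02, 0.01]   # all well above the resolution
uniformly_perfect = all(annulus_hits(pts, x, r, k) for x in pts for r in radii)
print(uniformly_perfect)
```

The check succeeds because, for any $x$ in the Cantor set and any $r$, the sibling interval at the appropriate construction level supplies a point at distance between $r/9$ and $r$ from $x$.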
Note that the endpoints of $\overline{Q}$ are required to be distinct. In general, if $A$ is an open arc whose closure $\overline{A}$ is a closed arc, we may refer to the endpoints of $\overline{A}$ as the “endpoints of $A$”. A *path* is a map $\gamma: [0,1] \to {\mathbb{C}}$. Given a domain $U$, we say $\gamma$ is a *path in $U$* if $\gamma((0,1)) \subset U$. Note that *we allow the possibility that $\gamma(0) \in \partial U$ and/or $\gamma(1) \in \partial U$* – we still call such a path a path in $U$. We will make frequent use of covering maps in this paper. Given a covering map $\varphi: V \to U$, where $V$ and $U$ are domains, a *lift* of a point $x \in U$ is a point ${\widehat{x}} \in V$ such that $\varphi({\widehat{x}}) = x$. Similarly, if $\gamma$ is a path with $\gamma([0,1]) \subset U$ then a lift of $\gamma$ is a path ${\widehat{\gamma}}$ in $V$ such that $\varphi \circ {\widehat{\gamma}} = \gamma$. The *Hausdorff metric* $d_H$ measures the distance between two compact sets $A_1,A_2 \subset {\mathbb{C}}$ as follows: $$d_H(A_1,A_2) = \max \{ \max_{z_1 \in A_1} \min_{z_2 \in A_2} |z_1 - z_2|, \; \max_{z_2 \in A_2} \min_{z_1 \in A_1} |z_1 - z_2| \} .$$ Equivalently, $d_H(A_1,A_2)$ is the smallest number ${\varepsilon}\geq 0$ such that $A_1$ is contained in the closed ${\varepsilon}$-neighborhood of $A_2$ and $A_2$ is contained in the closed ${\varepsilon}$-neighborhood of $A_1$. Given an isotopy $h: X \times [0,1] \to {\mathbb{C}}$, we denote $h^t = h|_{X \times \{t\}}$ and, for $x \in X$, we denote $x^t = h^t(x)$. Preliminaries {#sec:preliminaries} ============= In this section we collect several tools which we use in this paper. Many of these are standard analytical results; others are less well-known. Bounded analytic covering maps {#sec:analytic covering maps} ------------------------------ It is a standard classical result (see e.g. 
[@Ahlfors1973]) that for any domain $U \subset {\mathbb{C}}$ whose complement contains at least two points, and for any $z_0 \in U$, there is a complex analytic covering map $\varphi: {\mathbb{D}}\to U$ such that $\varphi(0) = z_0$. Moreover, this covering map $\varphi$ is uniquely determined by the argument of $\varphi'(0)$. Many of the results below hold only for analytic covering maps $\varphi: {\mathbb{D}}\to U$ to bounded domains $U$. For the remainder of this subsection, let $U \subset {\mathbb{C}}$ be a bounded domain, and let $\varphi_U=\varphi: {\mathbb{D}}\to U$ be an analytic covering map. \[Fatou\] The radial limits $\lim_{r \to 1^-} \varphi(re^{i\theta})$ exist for all points $e^{i\theta}$ in $\partial {\mathbb{D}}$ except possibly for a set of linear measure zero. From now on, we will always assume that any bounded analytic covering map $\varphi: {\mathbb{D}}\to U$ has been extended to be defined over all points $e^{i\theta} \in \partial {\mathbb{D}}$ where the radial limit exists by $\varphi(e^{i\theta}) = \lim_{r \to 1^-} \varphi(re^{i\theta})$. Note that the function $\varphi$ is not necessarily continuous at these points. For this extended map $\varphi$, we extend the notion of lifts. If $\gamma$ is a path in $U$ (recall this allows for the possibility that $\gamma(0)$ and/or $\gamma(1)$ belongs to $\partial U$), then a *lift* of $\gamma$ is a path ${\widehat{\gamma}}$ in ${\mathbb{D}}$ such that $\varphi \circ {\widehat{\gamma}} = \gamma$. This means that if $\gamma(0) \in \partial U$, then ${\widehat{\gamma}}(0) \in \partial {\mathbb{D}}$ and $\varphi$ is defined at the point ${\widehat{\gamma}}(0)$ (and $\varphi({\widehat{\gamma}}(0)) = \gamma(0)$); and likewise for $\gamma(1)$ and ${\widehat{\gamma}}(1)$. \[Riesz\] For each $x \in \partial U$, the set of points $e^{i\theta}$ for which $\lim_{r\to 1^-} \varphi(re^{i\theta}) = x$ has linear measure zero in $\partial {\mathbb{D}}$. 
The next result about lifts of paths is very similar to classical results for covering maps. Since our extended map $\varphi_U$ is not a covering map at points in $\partial {\mathbb{D}}$, we include a proof for completeness. \[lift\] Suppose $\gamma$ is a path in $U$ such that $\gamma((0,1]) \subset U$. Let ${\widehat{z}} \in {\mathbb{D}}$ be such that $\varphi({\widehat{z}}) = \gamma(1)$. Then there exists a unique lift ${\widehat{\gamma}}$ of $\gamma$ with ${\widehat{\gamma}}(1) = {\widehat{z}}$. In particular, if $\gamma(0) \in \partial U$, then ${\widehat{\gamma}}(0) \in \partial {\mathbb{D}}$, $\varphi$ is defined at ${\widehat{\gamma}}(0)$ (i.e. the radial limit of $\varphi$ exists there), and $\varphi({\widehat{\gamma}}(0)) = \gamma(0)$. We may assume that $\gamma(0) \in \partial U$. Since $\varphi$ is a covering map, $\gamma|_{(0,1]}$ lifts to a path with initial point ${\widehat{z}}$ which compactifies on a continuum $K \subset \partial {\mathbb{D}}$. If $K$ is non-degenerate, then there exists by \[Fatou\] a set $E$ of positive measure in the interior of $K$ so that for each $e^{i\theta} \in E$, the radial limit $\lim_{r \to 1^-} \varphi(re^{i\theta})$ exists. Since the image of ${\widehat{\gamma}}$ compactifies on $K$ we can choose a sequence $s_i \to 1$ so that ${\widehat{\gamma}}(s_i)= r_i e^{i\theta}$ with $r_i \to 1$. It follows that the radial limit $\lim_{r \to 1^-} \varphi(r e^{i\theta}) = \gamma(0)$ for each $e^{i\theta} \in E$, a contradiction with \[Riesz\]. Thus $K$ is a point $e^{i\theta}$. If $\gamma(0)$ is a limit point of $\partial U$, then we can choose arbitrarily small $\rho > 0$ so that the circle $S(\gamma(0),\rho) = \partial B(\gamma(0),\rho)$ intersects $\partial U$, and $\varphi(0), \gamma(1) \notin B(\gamma(0),2\rho)$. Let $C$ be the component of $S(\gamma(0),\rho) {\setminus}\partial U$ so that the closure of the component of $\gamma([0,1]) \cap B(\gamma(0),\rho)$, which contains $\gamma(0)$, intersects $C$.
By the above, $C$ lifts to a crosscut ${\widehat{C}}$ of ${\mathbb{D}}$ such that $e^{i\theta}$ is contained in the component $H$ of ${\mathbb{D}}{\setminus}{\widehat{C}}$ which does not contain $0$. Since a terminal segment of the radial segment $\{r e^{i\theta}: 0 \leq r < 1\}$ is contained in $H$, and $\rho$ is arbitrarily small, it follows that $\varphi({\widehat{\gamma}}(0)) = \lim_{r\to 1^-} \varphi(r e^{i\theta}) = \gamma(0)$ as required. In the case that $\gamma(0)$ is an isolated point of $\partial U$ (this case will not be needed in this paper as all domains we consider will have perfect boundaries), a similar argument can be made by lifting a small circle in $U$ centered at $\gamma(0)$. We leave this case to the reader. The next result is a variant of \[lift\], in which the base point of the path to be lifted is in the boundary of $U$. In the case that the boundary of $U$ is uniformly perfect, we prove below in \[Hlift\] a stronger result about lifting a homotopy under covering maps to a domain whose boundary is changing under an isotopy. The present result can be obtained as a Corollary to \[Hlift\] by using the identity isotopy. We omit a proof for the non-uniformly perfect case, since we won’t need it for this paper. \[lift from bd\] Suppose $\gamma$ is a path in $U$ such that $\gamma((0,1]) \subset U$ and $\gamma(0) \in \partial U$. Let ${\widehat{x}} \in \partial {\mathbb{D}}$ be such that $\varphi_U({\widehat{x}}) = \gamma(0)$ and $\gamma$ is homotopic to the radial path $\varphi_U|_{\{r {\widehat{x}} \;:\; 0 \leq r \leq 1\}}$ under a homotopy in $U$ that fixes the point $\gamma(0)$. Then there exists a lift ${\widehat{\gamma}}$ of $\gamma$ with ${\widehat{\gamma}}(0) = {\widehat{x}}$. Moreover, if $\partial U$ is perfect, this lift ${\widehat{\gamma}}$ is unique. 
The *hyperbolic metric* on the unit disk ${\mathbb{D}}$ is given by the form $\frac{2|dz|}{1 - |z|^2}$, meaning that the length of a smooth path $\gamma: [0,1] \to {\mathbb{D}}$ is $\int_0^1 \frac{2 |\gamma'(t)|}{1 - |\gamma(t)|^2} \,dt$. The important property of the hyperbolic metric for us is that (hyperbolic) geodesics in ${\mathbb{D}}$ are pieces of round circles or straight lines which cross the boundary $\partial {\mathbb{D}}$ orthogonally. Via the covering map $\varphi: {\mathbb{D}}\to U$, we obtain the *hyperbolic metric on $U$*, in which the length of a smooth path in $U$ is equal to the length of any lift of that path under $\varphi$ – this length is independent of the choice of lift. It is a standard result that the hyperbolic metric on $U$ is independent of the choice of covering map $\varphi: {\mathbb{D}}\to U$. \[gehrhay\] Suppose $\partial U$ is uniformly perfect with constant $k$. There exists a constant $K$ such that if ${\widehat{g}}$ is a hyperbolic geodesic in ${\mathbb{D}}$ and ${\widehat{\Gamma}}$ is a curve with the same endpoints as ${\widehat{g}}$, then $${\mathrm{diam}}(\varphi({\widehat{g}})) \leq K \cdot {\mathrm{diam}}(\varphi({\widehat{\Gamma}})) .$$ The constant $K$ depends only on $k$, not on the domain $U$ itself or on the choice of analytic covering map $\varphi$. We end this subsection with a discussion of analytic covering maps of varying domains in the plane. We will make use of the notion of Carathéodory kernel convergence, which was introduced by Carathéodory for univalent analytic maps in [@cara12], then extended by Hejhal to the case of analytic covering maps. Let $U_1,U_2,\ldots$ and $U_\infty$ be domains and let $z_1,z_2,\ldots$ and $z_\infty$ be points with $z_n \in U_n$ for all $n = 1,2,\ldots$ and $z_\infty \in U_\infty$. 
We say that *$\langle U_n, z_n \rangle \to \langle U_\infty, z_\infty \rangle$ in the sense of Carathéodory kernel convergence* provided that (i) $z_n \to z_\infty$; (ii) for any compact set $K \subset U_\infty$, $K \subset U_n$ for all but finitely many $n$; and (iii) for any domain $U$ containing $z_\infty$, if $U \subseteq U_n$ for infinitely many $n$, then $U \subseteq U_\infty$. \[caratheodory\] Let $U_1,U_2,\ldots$ and $U_\infty$ be domains and let $z_1,z_2,\ldots$ and $z_\infty$ be points with $z_n \in U_n$ for all $n = 1,2,\ldots$ and $z_\infty \in U_\infty$. Let $\varphi_\infty: {\mathbb{D}}\to U_\infty$ be the analytic covering map such that $\varphi_\infty(0) = z_\infty$ and $\varphi_\infty'(0) > 0$. Likewise, for each $n = 1,2,\ldots$, let $\varphi_n: {\mathbb{D}}\to U_n$ be the analytic covering map such that $\varphi_n(0) = z_n$ and $\varphi_n'(0) > 0$. Then $\langle U_n, z_n \rangle \to \langle U_\infty, z_\infty \rangle$ in the sense of Carathéodory kernel convergence if and only if $\varphi_n \to \varphi_\infty$ uniformly on compact subsets of ${\mathbb{D}}$. Partitioning plane domains {#sec:partitioning domains} -------------------------- Let $U$ be a bounded domain in ${\mathbb{C}}$. We next describe a way of partitioning $U$ into simple sets which are either circular arcs or regions whose boundaries are unions of circular arcs. Let $\mathcal{B}$ be the collection of all open disks $B(c,r) \subset U$ such that $|\partial B(c,r) \cap \partial U| \geq 2$. Let $\mathcal{C}$ be the collection of all centers of such disks, and for $c \in \mathcal{C}$ let $r(c)$ be the radius of the corresponding disk in $\mathcal B$. The set $\mathcal{C}$, called the *skeleton of $U$*, was studied by several authors (see for example [@fre97]). Note that for each $c \in \mathcal{C}$, $B(c,r(c))$ is conformally equivalent with the unit disk ${\mathbb{D}}$ and, hence, can be endowed with the hyperbolic metric $\rho_c$.
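Since each disk $B(c,r(c))$ carries its own hyperbolic metric $\rho_c$, it may help to see the hyperbolic density $\frac{2|dz|}{1 - |z|^2}$ from the previous subsection in action. The sketch below numerically integrates the length of a radial segment in ${\mathbb{D}}$ and compares it with the closed-form value $\log\frac{1+r}{1-r}$; the step count and tolerance are arbitrary choices.

```python
import math

# Approximate the hyperbolic length of a smooth path gamma: [0,1] -> D
# using the density 2|dz| / (1 - |z|^2), by a midpoint sum over n chords.
def hyperbolic_length(gamma, n=20000):
    total = 0.0
    prev = gamma(0.0)
    for i in range(1, n + 1):
        z = gamma(i / n)
        mid = (prev + z) / 2.0
        total += 2.0 * abs(z - prev) / (1.0 - abs(mid) ** 2)
        prev = z
    return total

# The radial segment from 0 to r is a hyperbolic geodesic; its length is
# the integral of 2/(1 - t^2) from 0 to r, i.e. log((1+r)/(1-r)).
r = 0.9
numeric = hyperbolic_length(lambda t: complex(r * t, 0.0))
exact = math.log((1 + r) / (1 - r))
print(round(numeric, 6), round(exact, 6))
```

Note how the length blows up as $r \to 1^-$, reflecting that $\partial {\mathbb{D}}$ is infinitely far away in the hyperbolic metric.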
Let ${\mathrm{Hull}}(c)$ be the convex hull of the set $\partial B(c,r(c)) \cap \partial U$ in $B(c,r(c))$ *using the hyperbolic metric $\rho_c$ on the disk $B(c,r(c))$*. The following theorem by Kulkarni and Pinkall generalizes an earlier result by Bell [@bell76] (see [@bfmot13] for a more complete description): \[KP\] For each $z \in U$ there exists a unique $c \in \mathcal{C}$ such that $z \in {\mathrm{Hull}}(c)$. ![Depiction of two examples of the sets ${\mathrm{Hull}}(c)$ from the Kulkarni-Pinkall decomposition of a domain $U$ in ${\mathbb{C}}$. In the picture, $U$ is a component of the complement of the wavy lines.[]{data-label="fig:KP"}](KP.pdf) Let $\mathcal{J}$ be the collection of all crosscuts of $U$ which are contained in the boundaries of the sets ${\mathrm{Hull}}(c)$ for $c \in \mathcal{C}$. The elements of $\mathcal{J}$ are circular open arcs (called *chords*) whose endpoints are in $\partial U$. Two such chords do not cross each other inside $U$ (i.e., if $\ell \neq \ell'$ are chords in $\mathcal{J}$, then $\ell \cap \ell' = {\emptyset}$) and the limit of any convergent sequence of chords in $\mathcal{J}$ is either a chord in $\mathcal{J}$ or a point in $\partial U$. In particular, the subcollection of chords of diameter greater or equal to ${\varepsilon}$ is compact for each ${\varepsilon}> 0$. As such, the family $\mathcal{J}$ is close to being a *lamination* of $U$ (see \[d:lamination\] in \[sec:characterization\] below). However, it is possible that uncountably many distinct chords in $\mathcal{J}$ have the same pair of endpoints $x,y \in \partial U$. Equidistant sets {#sec:equi set} ---------------- Let $A_1$ and $A_2$ be disjoint closed sets in ${\mathbb{C}}$. The *equidistant set* between $A_1$ and $A_2$ is the set $${\mathrm{Equi}}(A_1,A_2) = \left\{ z \in {\mathbb{C}}: \min_{w \in A_1} |z-w| = \min_{w \in A_2} |z-w| \right\} .$$ The equidistant set is a convenient way to define a set running “in between” $A_1$ and $A_2$. 
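As a minimal illustration of the definition, with hypothetical finite sets standing in for $A_1$ and $A_2$: for $A_1 = \{-1\}$ and $A_2 = \{1\}$, the equidistant set is exactly the imaginary axis.

```python
# Membership test for Equi(A1, A2) with finite sets A1, A2, up to a
# numerical tolerance.  For A1 = {-1} and A2 = {+1} the equidistant set
# is the imaginary axis.
def dist_to(A, z):
    return min(abs(z - w) for w in A)

def in_equi(A1, A2, z, tol=1e-12):
    return abs(dist_to(A1, z) - dist_to(A2, z)) <= tol

A1 = [complex(-1, 0)]
A2 = [complex(1, 0)]
on_axis = all(in_equi(A1, A2, complex(0, y)) for y in (-2.0, -0.5, 0.0, 1.3))
off_axis = in_equi(A1, A2, complex(0.4, 0.0))
print(on_axis, off_axis)
```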
Moreover, it has a very simple local structure in the case that the sets $A_1$ and $A_2$ are not “entangled” in the sense of the following definition: We say that $A_1$ and $A_2$ are *non-interlaced* if whenever $B(c,r)$ is an open disk contained in the complement of $A_1 \cup A_2$, there are disjoint arcs $C_1,C_2 \subset \partial B(c,r)$ such that $A_1 \cap \partial B(c,r) \subset C_1$ and $A_2 \cap \partial B(c,r) \subset C_2$. We allow for the possibility that $C_1 = {\emptyset}$ in the case that $A_2 \cap \partial B(c,r) = \partial B(c,r)$, and vice versa. By a *$1$-manifold* in the plane, we mean a *closed* set $M \subset {\mathbb{C}}$ such that each component of $M$ is homeomorphic either to ${\mathbb{R}}$ or to $\partial {\mathbb{D}}$, and these components are all open in $M$. \[thm:manifold\] Let $A_1$ and $A_2$ be disjoint closed sets in ${\mathbb{C}}$. If $A_1$ and $A_2$ are non-interlaced, then ${\mathrm{Equi}}(A_1,A_2)$ is a $1$-manifold in the plane. Midpoints of paths {#sec:midpoints} ------------------ We identify the space of all paths in the plane ${\mathbb{C}}$ with the function space $\mathcal{C}([0,1],{\mathbb{C}})$ with the *uniform metric*; that is, the distance between two paths $\gamma_1,\gamma_2 \in \mathcal{C}([0,1],{\mathbb{C}})$ is equal to $\sup \{|\gamma_1(t) - \gamma_2(t)|: t \in [0,1]\}$. The standard Euclidean length of a path is not a well-behaved function from $\mathcal{C}([0,1],{\mathbb{C}})$ to $[0,\infty)$. First, it is not defined (i.e., not finite) for all paths in $\mathcal{C}([0,1],{\mathbb{C}})$, but only for rectifiable paths. Second, paths can be arbitrarily close in the uniform metric and yet have very different Euclidean path lengths. 
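For a rectifiable path given explicitly, say a polyline, the Euclidean length does single out a well-defined half-length point, and that point depends only on the arc, not on the parameterization; the difficulties just described are that this recipe neither extends to all paths nor varies continuously. A small sketch of the rectifiable case (the helper below is illustrative and is not one of the cited length functions):

```python
# Arc-length midpoint of a polyline: the point splitting the Euclidean
# length in half.  It depends only on the traced arc, not on the
# direction or speed of the parameterization.
def polyline_midpoint(vertices):
    segs = [abs(b - a) for a, b in zip(vertices, vertices[1:])]
    half = sum(segs) / 2.0
    acc = 0.0
    for a, b, s in zip(vertices, vertices[1:], segs):
        if s > 0 and acc + s >= half:
            t = (half - acc) / s
            return a + t * (b - a)
        acc += s
    return vertices[-1]

# An L-shaped arc of total length 2: the midpoint is the corner 1+0j.
path = [complex(0, 0), complex(1, 0), complex(1, 1)]
mid = polyline_midpoint(path)
same_mid = polyline_midpoint(path[::-1])  # reversed parameterization
print(mid, mid == same_mid)
```

Reversing the parameterization selects the same point, matching the parameterization-invariance required of the midpoint function ${\mathsf{m}}$ below.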
However, there do exist alternative “path length” functions ${\mathsf{len}}: \mathcal{C}([0,1],{\mathbb{C}}) \to [0,\infty)$ such that ${\mathsf{len}}$ is defined for *all* paths in $\mathcal{C}([0,1],{\mathbb{C}})$, and ${\mathsf{len}}$ is continuous with respect to the uniform metric on $\mathcal{C}([0,1],{\mathbb{C}})$ and the standard topology on $[0,\infty) \subset {\mathbb{R}}$, see [@Morse1936; @Silverman1969; @hot13]. Such an alternative path length function can be used to define a choice of “midpoint” of a path which varies continuously with the path. Specifically, the midpoint of $\gamma$ is defined to be the point ${\mathsf{m}}(\gamma) = \gamma(t_0)$, where $t_0 \in (0,1)$ is chosen such that ${\mathsf{len}}(\gamma|_{[0,t_0]}) = {\mathsf{len}}(\gamma|_{[t_0,1]})$. In this paper, we will not need to know any particulars about the definitions of such path length functions, but only this result about existence of such midpoints, which we state below. \[thm:midpt\] There is a continuous function $${\mathsf{m}}: \mathcal{C}([0,1],{\mathbb{C}}) \to {\mathbb{C}}$$ such that ${\mathsf{m}}(\gamma) \in \gamma((0,1))$ for all $\gamma \in \mathcal{C}([0,1],{\mathbb{C}})$. Moreover, if $\gamma_1$ and $\gamma_2$ are both parameterizations of a closed arc $A$ (i.e. if $\gamma_1([0,1]) = \gamma_2([0,1]) = A$ and $\gamma_1$ and $\gamma_2$ are homeomorphisms between $[0,1]$ and $A$), then ${\mathsf{m}}(\gamma_1) = {\mathsf{m}}(\gamma_2)$. In light of the second part of \[thm:midpt\], given an (open or closed) arc $A$, we define the midpoint of $A$ to be ${\mathsf{m}}(A) = {\mathsf{m}}(\gamma)$ where $\gamma$ is any path which parameterizes $A$ ($\overline{A}$ if $A$ is an open arc). Main Theorem {#sec:characterization} ============ In this section, we state and prove the main theorem of this paper, which is a characterization of isotopies of uniformly perfect plane compacta which can be extended over the entire plane. 
Note that the example of Fabel mentioned in the Introduction can easily be modified to obtain an isotopy $h: X \times [0,1] \to {\mathbb{C}}$ so that for each $t$, $X^t = h^t(X)$ is a uniformly perfect Cantor set with the same constant $k$. Thus, additional assumptions are required to ensure the extension of such an isotopy over the plane. \[main1\] Suppose that $h: X \times [0,1] \to {\mathbb{C}}$ is an isotopy of a compactum $X \subset {\mathbb{C}}$ starting at the identity, such that $X^t$ is uniformly perfect with the same constant $k$ for each $t \in [0,1]$. Then the following are equivalent: 1. $h$ extends to an isotopy of the entire plane ${\mathbb{C}}$; 2. For each ${\varepsilon}> 0$ there exists $\delta > 0$ such that for any crosscut $Q$ of a complementary domain $U$ of $X$ with ${\mathrm{diam}}(Q) < \delta$, there exists a homotopy $h_Q: (X \cup Q) \times [0,1] \to {\mathbb{C}}$ starting at the identity which extends $h$ and is such that $h_Q^t(Q) \cap X^t = {\emptyset}$ and ${\mathrm{diam}}(h_Q^t(Q)) < {\varepsilon}$ for all $t \in [0,1]$. It is trivial to see that condition (i) implies condition (ii) from \[main1\]. To obtain the converse, we will in fact prove a stronger characterization in \[main1technical\] below. To state this Theorem, we introduce the following simple condition: Let $X \subset {\mathbb{C}}$ be a compact set and let $h: X \times [0,1] \to {\mathbb{C}}$ be an isotopy of $X$ starting at the identity. We say that $X$ is *encircled* if $X$ has a component which is a large circle $\Sigma$ such that $h^t |_\Sigma$ is the identity for all $t \in [0,1]$, and $X^t {\setminus}\Sigma$ is contained in a compact subset of the bounded complementary domain of $\Sigma$ for all $t \in [0,1]$. Note that if (ii) from \[main1\] holds, then we may additionally assume without loss of generality (i.e. without falsifying condition (ii) from \[main1\]) that $X$ is encircled. 
Tracking bounded complementary domains {#sec:tracking domains} -------------------------------------- For the remainder of this section, we assume that $h: X \times [0,1] \to {\mathbb{C}}$ is an isotopy of a compact set $X \subset {\mathbb{C}}$ starting at the identity, such that $X^t$ is uniformly perfect for all $t \in [0,1]$ with the same constant $k$, and that $X$ is encircled. Clearly such an isotopy can be extended over the unbounded complementary domain of $X$ as the identity for all $t \in [0,1]$. Hence we only need to consider bounded complementary domains for the remainder of this section. Let $U$ be a bounded complementary domain of $X$. Choose a point $z_U \in U$. Clearly the isotopy $h$ can be extended to an isotopy $h_U: (X \cup \{z_U\}) \times [0,1] \to {\mathbb{C}}$ starting at the identity. Define $U^t$ to be the complementary domain of $X^t$ which contains the point $h_U^t(z_U) = z_U^t$. Let $\varphi_U^t: {\mathbb{D}}\to U^t$ be the analytic covering map such that $\varphi_U^t(0) = z_U^t$ and $(\varphi_U^t)'(0) > 0$. It is straightforward to see that if $t_n \to t_\infty$, then the pointed domains $\langle U^{t_n}, z_U^{t_n} \rangle$ converge to $\langle U^{t_\infty}, z_U^{t_\infty} \rangle$ in the sense of Carathéodory kernel convergence. Hence, by \[caratheodory\], the covering maps $\varphi_U^{t_n}$ converge to $\varphi_U^{t_\infty}$ uniformly on compact subsets of ${\mathbb{D}}$. We will always assume that the complementary domains $U^t$ of $X^t$ and analytic covering maps $\varphi_U^t: {\mathbb{D}}\to U^t$ are defined in this way. It is clear that this definition of $U^t$ does not depend on the choices of $z_U$ and $h_U$. The following Theorem characterizes isotopies of uniformly perfect plane compacta that can be extended over the plane, and this characterization is stronger than the one given in \[main1\] in the sense that condition (ii) of \[main1technical\] is weaker than condition (ii) of \[main1\]. 
We will in fact use this stronger characterization in \[sec:large components\]. \[main1technical\] Suppose that $h: X \times [0,1] \to {\mathbb{C}}$ is an isotopy of a compactum $X \subset {\mathbb{C}}$ starting at the identity, such that $X^t$ is uniformly perfect with the same constant $k$ for each $t \in [0,1]$, and that $X$ is encircled. Then the following are equivalent: 1. $h$ extends to an isotopy of the entire plane ${\mathbb{C}}$; 2. For each bounded complementary domain $U$ of $X$ and each ${\varepsilon}> 0$ there exists $\delta > 0$ with the following property: For any crosscut $Q$ in $U$ with endpoints $a,b \in \partial U$ and with ${\mathrm{diam}}(Q) < \delta$, there exists a family $\{\gamma_t: t \in [0,1]\}$ such that (1) $\gamma_t$ is a path in $U^t$ joining $a^t$ and $b^t$ for each $t \in [0,1]$, (2) $\gamma_0$ is homotopic to $Q$ in $U$ with endpoints fixed, (3) ${\mathrm{diam}}(\gamma_t([0,1])) < {\varepsilon}$ for all $t \in [0,1]$, and (4) there are lifts ${\widehat{\gamma}}_t$ of the paths $\gamma_t$ under $\varphi_U^t$ such that the sets ${\widehat{\gamma}}_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric. We have deliberately chosen to use subscripts in the notation for $\gamma_t$ (instead of superscripts like $\gamma^t$) to emphasize the point that the paths $\gamma_t$ are *not* required to change continuously in the sense of an isotopy or homotopy – we only require the weaker condition that the images of the lifts ${\widehat{\gamma}}_t$ vary continuously with respect to the Hausdorff metric. Even though condition (ii) of \[main1technical\] is more cumbersome to state, we demonstrate in \[sec:large components\] that it is easier to apply. The proofs of \[main1\] and \[main1technical\] will be completed in \[sec:proof main1\] below. 
Lifts in moving domains {#sec:lifts moving} ----------------------- As in \[sec:tracking domains\], we continue to assume that $h: X \times [0,1] \to {\mathbb{C}}$ is an isotopy of a compact set $X \subset {\mathbb{C}}$ starting at the identity, such that $X^t$ is uniformly perfect for all $t \in [0,1]$ with the same constant $k$, and that $X$ is encircled. We begin by proving two statements about lifts under the covering maps $\varphi_U^t$, in the spirit of the results from \[sec:analytic covering maps\] above. \[smallup\] Let $U$ be a bounded complementary domain of $X$. For every ${\varepsilon}> 0$ there exists $\delta > 0$ such that for any $t \in [0,1]$ if $\gamma$ is a path in $U^t$ with ${\mathrm{diam}}(\gamma([0,1])) < \delta$ and ${\widehat{\gamma}}$ is any lift of $\gamma$ under $\varphi_U^t$, then ${\mathrm{diam}}({\widehat{\gamma}}([0,1])) < {\varepsilon}$. Suppose the lemma fails. Then there exists ${\varepsilon}> 0$, a sequence $\gamma_i$ of paths in $U^{t_i}$ and lifts ${\widehat{\gamma}}_i$ such that $\lim {\mathrm{diam}}(\gamma_i([0,1])) = 0$ and ${\mathrm{diam}}({\widehat{\gamma}}_i([0,1])) \geq {\varepsilon}$ for all $i$. Choose two points ${\widehat{a}}_i, {\widehat{b}}_i$ in ${\widehat{\gamma}}_i([0,1])$ such that $|{\widehat{a}}_i - {\widehat{b}}_i| > \frac{{\varepsilon}}{2}$, and let ${\widehat{{\mathfrak{g}}}}_i$ be the hyperbolic geodesic with endpoints ${\widehat{a}}_i$ and ${\widehat{b}}_i$. Put $\varphi_U^{t_i}({\widehat{{\mathfrak{g}}}}_i) = {\mathfrak{g}}_i$. By \[gehrhay\], ${\mathrm{diam}}({\mathfrak{g}}_i) \to 0$. Since the geodesics ${\widehat{{\mathfrak{g}}}}_i$ are pieces of round circles or straight lines which cross $\partial {\mathbb{D}}$ perpendicularly and have diameter bigger than $\frac{{\varepsilon}}{2}$, there exist $\eta > 0$ and points ${\widehat{x}}_i \in {\widehat{{\mathfrak{g}}}}_i$ such that $|{\widehat{x}}_i| \leq 1 - \eta$ for all $i$. 
By choosing a subsequence we may assume that $t_i \to t_\infty$, ${\widehat{x}}_i \to {\widehat{x}}_\infty \in {\mathbb{D}}$, and $\lim {\mathfrak{g}}_i = z_\infty$ is a point in $\overline{U^{t_\infty}}$. Let $K_i$ be the component of ${\widehat{{\mathfrak{g}}}}_i\cap B({\widehat{x}}_\infty,\frac{\eta}{2})$ containing the point ${\widehat{x}}_i$. We may assume $K_i \to K_\infty$, where $K_\infty$ is a non-degenerate continuum in ${\mathbb{D}}$. Since $\varphi_U^{t_i} \to \varphi_U^{t_\infty}$ uniformly on compact sets in ${\mathbb{D}}$, $\varphi_U^{t_\infty}(K_\infty) = z_\infty$, which is a contradiction since $\varphi_U^{t_\infty}$ is a covering map. Given a homotopy $\Gamma: [0,1] \times [0,1] \to {\mathbb{C}}$ we denote for each $t \in [0,1]$ $\Gamma^t = \Gamma|_{[0,1]\times\{t\}}: [0,1] \to {\mathbb{C}}$. \[Hlift\] Let $U$ be a bounded complementary domain of $X$. Suppose that $\Gamma: [0,1] \times [0,1] \to {\mathbb{C}}$ is a homotopy with $\Gamma^t(0) = h^t(\Gamma^0(0)) \in \partial U^t$ and $\Gamma^t(s) \in U^t$ for all $s \in (0,1]$ and all $t \in [0,1]$. Let ${\widehat{z}} \in {\mathbb{D}}$ be such that $\varphi_U^0({\widehat{z}}) = \Gamma^0(1)$. Then there exists a homotopy ${\widehat{\Gamma}}: [0,1] \times [0,1] \to \overline{{\mathbb{D}}}$ lifting $\Gamma$, i.e. $\varphi_U^t \circ {\widehat{\Gamma}}^t = \Gamma^t$ for all $t \in [0,1]$, and such that ${\widehat{\Gamma}}^0(1) = {\widehat{z}}$. Define $\Psi: {\mathbb{D}}\times [0,1] \to \bigcup_{t \in [0,1]} (U^t \times \{t\})$ by $\Psi(z,t) = (\varphi_U^t(z),t)$ for $t \in [0,1]$ and $z \in {\mathbb{D}}$. \[claim:Psi covering\] $\Psi$ is a covering map. Let $(y_0,t_0) \in U^{t_0} \times \{t_0\}$. Choose a small simply connected neighborhood $V$ of $y_0$ and $\delta > 0$ such that $\overline{V} \cap X^t = {\emptyset}$ and $V$ is evenly covered by $\varphi_U^t$ for all $t$ with $|t - t_0| \leq \delta$. 
Hence, $V \times (t_0-\delta, t_0+\delta)$ is a simply connected neighborhood of $(y_0,t_0)$ in $\bigcup_{t \in [0,1]} (U^t \times \{t\})$. Next let $(x_0,t_0) \in \Psi^{-1}((y_0,t_0))$. Since the covering maps $\varphi_U^t$ are uniformly convergent on compact sets, it is not difficult to see that there exists a map $g: (t_0-\delta, t_0+\delta) \to {\mathbb{D}}\times [0,1]$ such that $g(t_0) = (x_0,t_0)$ and $\Psi \circ g(t) = (y_0,t)$ for all $t$ with $|t - t_0| < \delta$. For each $t$ with $|t - t_0| < \delta$, let $x \in U^t$ be such that $g(t) = (x,t)$, and let $W^t$ be the component of $(\varphi_U^t)^{-1}(V)$ which contains the point $x$. Let $W = \bigcup_{t \in (t_0-\delta, t_0+\delta)} (W^t \times \{t\})$. Then it is not difficult to see that $\Psi |_W: W \to V \times (t_0-\delta, t_0+\delta)$ is a homeomorphism. Thus $\Psi$ is a covering map. Define $\alpha: [0,1] \times [0,1] \to \bigcup_{t \in [0,1]} (U^t \times \{t\})$ by $\alpha(s,t) = (\Gamma^t(s),t)$. Define the lift ${\widehat{\alpha}}$ of $\alpha$ under $\Psi$ as follows: first lift $\alpha |_{\{1\} \times [0,1]}$, using the covering map $\Psi$, to define ${\widehat{\alpha}} |_{\{1\} \times [0,1]}$ such that ${\widehat{\alpha}}(1,0) = ({\widehat{z}},0)$. Next, for each $t \in [0,1]$, use \[lift\] to lift $\alpha |_{[0,1] \times \{t\}}$ to define ${\widehat{\alpha}} |_{[0,1] \times \{t\}}$, so that this lift coincides with the first lift of $\alpha |_{\{1\} \times [0,1]}$ at $(1,t)$. Finally, define ${\widehat{\Gamma}} = \pi_1 \circ {\widehat{\alpha}}$, where $\pi_1$ denotes the first coordinate projection. Observe that for all $s \in (0,1]$, the function ${\widehat{\alpha}} |_{[s,1] \times [0,1]}$ is the unique lift of $\alpha |_{[s,1] \times [0,1]}$ under the covering map $\Psi$ with ${\widehat{\alpha}}(1,0) = {\widehat{z}}$, hence is continuous by standard covering map theory. It follows that ${\widehat{\alpha}}$, and hence ${\widehat{\Gamma}}$, is continuous on $(0,1] \times [0,1]$. 
It remains to prove that ${\widehat{\Gamma}}$ is continuous at all points of the form $(0,t_0)$. Fix $t_0 \in [0,1]$ and ${\varepsilon}> 0$. Choose $\delta > 0$ small enough (using \[smallup\]) so that for any $t \in [0,1]$ and any open arc $D$ in $U^t$ of diameter less than $\delta$, each lift ${\widehat{D}}$ of $D$ under $\varphi_U^t$ has diameter less than $\frac{{\varepsilon}}{3}$. Choose $\eta_1,\eta_2 > 0$ small enough so that: 1. $|{\widehat{\Gamma}}^{t_0}(0) - {\widehat{\Gamma}}^{t_0}(\eta_1)| < \frac{{\varepsilon}}{3}$ (this is possible since the lifted path ${\widehat{\Gamma}}^{t_0}$ is continuous); 2. $|{\widehat{\Gamma}}^{t}(\eta_1) - {\widehat{\Gamma}}^{t_0}(\eta_1)| < \frac{{\varepsilon}}{3}$ for each $t \in [t_0-\eta_2, t_0+\eta_2]$ (this is possible since we already know that ${\widehat{\Gamma}}$ is continuous on $(0,1] \times [0,1]$); and 3. $\Gamma([0,\eta_1] \times [t_0-\eta_2, t_0+\eta_2]) \subset B(\Gamma^{t_0}(0),\frac{\delta}{2})$ (this is possible since $\Gamma$ is continuous). Now for any $t \in [t_0-\eta_2, t_0+\eta_2]$, the image $\Gamma^t([0,\eta_1])$ has diameter less than $\delta$, hence ${\widehat{\Gamma}}^t([0,\eta_1])$ has diameter less than $\frac{{\varepsilon}}{3}$. It follows that ${\widehat{\Gamma}}^t([0,\eta_1]) \subset B({\widehat{\Gamma}}^{t_0}(0), {\varepsilon})$. So $[0,\eta_1) \times (t_0-\eta_2, t_0+\eta_2)$ is a neighborhood of $(0,t_0)$ which is mapped by ${\widehat{\Gamma}}$ into $B({\widehat{\Gamma}}^{t_0}(0), {\varepsilon})$. Thus ${\widehat{\Gamma}}$ is continuous at $(0,t_0)$. Observe that in light of \[Hlift\], condition (ii) of \[main1\] is stronger than condition (ii) of \[main1technical\]. Therefore to complete the proofs of both \[main1\] and \[main1technical\], we must prove that if condition (ii) of \[main1technical\] holds then the isotopy $h$ extends to the entire plane ${\mathbb{C}}$. Hence we will assume for the remainder of this section that condition (ii) of \[main1technical\] holds. 
Let ${\widehat{a}} \in \partial {\mathbb{D}}$ be any point at which $\varphi_U$ is defined (i.e. at which the radial limit of $\varphi_U$ exists). Using any sufficiently small crosscut $Q$ in $U$ which has one endpoint equal to $a = \varphi_U({\widehat{a}})$ and which is the image of a crosscut of ${\mathbb{D}}$ having one endpoint equal to ${\widehat{a}}$, we obtain from condition (ii) of \[main1technical\] a family of paths $\{\gamma_t: t \in [0,1]\}$ and lifts ${\widehat{\gamma}}_t$ with the properties listed there, and such that $\gamma_t(0) = a^t$ for each $t \in [0,1]$, and ${\widehat{\gamma}}_0(0) = {\widehat{a}}$. Because the sets ${\widehat{\gamma}}_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric, the endpoint ${\widehat{\gamma}}_t(0)$ moves continuously in $t$. Now we define ${\widehat{a}}^t = {\widehat{\gamma}}_t(0)$ for each $t \in [0,1]$. Then ${\widehat{a}}^0 = {\widehat{a}}$ and $\varphi_U({\widehat{a}}^t) = a^t$ for all $t \in [0,1]$. It is straightforward to see that this definition of ${\widehat{a}}^t$ is independent of the choice of crosscut $Q$ and of the paths $\gamma_t$ and lifts ${\widehat{\gamma}}_t$ afforded by condition (ii) of \[main1technical\]. Thus, in the presence of condition (ii) of \[main1technical\], we can extend the superscript $t$ notation to points in $\partial {\mathbb{D}}$ at which $\varphi_U$ is defined. We will assume this is done for all such points ${\widehat{a}} \in \partial {\mathbb{D}}$ for the remainder of this section. Hyperbolic laminations {#sec:laminations} ---------------------- The following condition on a set of hyperbolic geodesics ${\mathcal{L}}$ is inspired by a similar notion introduced by Thurston (cf. [@thur85]). 
\[d:lamination\] A *hyperbolic lamination* ${\mathcal{L}}$ in a bounded domain $U \subset {\mathbb{C}}$ is a closed set of pairwise disjoint hyperbolic geodesic crosscuts in $U$ such that the closures of two distinct crosscuts in ${\mathcal{L}}$ have *at most one* common endpoint in the boundary of $U$, and such that the family of crosscuts in ${\mathcal{L}}$ of diameter greater than or equal to ${\varepsilon}$ is compact for every ${\varepsilon}> 0$. We denote by $\bigcup {\mathcal{L}}$ the union of all the crosscuts in ${\mathcal{L}}$. A *gap* of ${\mathcal{L}}$ is the closure of a component of $U {\setminus}\bigcup {\mathcal{L}}$. The compactness condition in \[d:lamination\] is equivalent to the following statement: if $\langle {\mathfrak{g}}_n \rangle_{n=1}^\infty$ is a sequence of elements of ${\mathcal{L}}$, then either ${\mathrm{diam}}({\mathfrak{g}}_n) \to 0$, or there is a convergent subsequence whose limit is also an element of ${\mathcal{L}}$. Fix a bounded complementary domain $U$ of $X$. Recall the Kulkarni-Pinkall construction described in \[sec:partitioning domains\]: we consider the collection $\mathcal{B}$ of all open disks $B(c,r) \subset U$ such that $|\partial B(c,r) \cap \partial U| \geq 2$. For each such disk $B(c,r)$, ${\mathrm{Hull}}(c)$ denotes the convex hull of the set $\partial B(c,r(c)) \cap \partial U$ in $B(c,r(c))$ *using the hyperbolic metric $\rho_c$ on the disk $B(c,r(c))$*. Let $\mathcal{J}$ be the collection of all crosscuts of $U$ which are contained in the boundaries of the sets ${\mathrm{Hull}}(c)$ for $B(c,r) \in \mathcal{B}$. Let $${\widehat{\mathcal{J}}} = \{{\widehat{Q}}: {\widehat{Q}} \textrm{ is a component of } \varphi_U^{-1}(Q) \textrm{ for some } Q \in \mathcal{J}\} .$$ For any $Q \in \mathcal{J}$, it is straightforward to see that each component ${\widehat{Q}}$ of $\varphi_U^{-1}(Q)$ is an open arc whose closure is mapped homeomorphically onto $\overline{Q}$ by $\varphi_U$. 
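Before proceeding, a simple illustration of \[d:lamination\] (for orientation only): let $U = {\mathbb{D}}$ and, for each $\theta \in (0,\pi)$, let ${\mathfrak{g}}_\theta$ be the hyperbolic geodesic of ${\mathbb{D}}$ joining $e^{i\theta}$ to $e^{-i\theta}$. These crosscuts are pairwise disjoint, and since ${\mathrm{diam}}({\mathfrak{g}}_\theta) \to 0$ as $\theta \to 0$ and as $\theta \to \pi$, the subfamily of crosscuts of diameter greater than or equal to ${\varepsilon}$ corresponds to a compact subinterval of $(0,\pi)$. Hence $\{{\mathfrak{g}}_\theta : \theta \in (0,\pi)\}$ is a hyperbolic lamination of ${\mathbb{D}}$; it has no gaps, since through every point of ${\mathbb{D}}$ there passes exactly one geodesic orthogonal to the diameter $(-1,1)$. By contrast, the single crosscut $\{{\mathfrak{g}}_{\pi/2}\}$ is a hyperbolic lamination whose two gaps are the closed half-disks on either side of ${\mathfrak{g}}_{\pi/2}$. 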
Given an (open) arc $A$, we denote the set of endpoints of $A$ by ${\mathrm{Ends}}(A)$; that is, ${\mathrm{Ends}}(A) = \{a,b\}$ means that $a$ and $b$ are the endpoints of (the closure of) $A$. Let $\mathcal{J}_{\mathrm{Ends}}= \{{\mathrm{Ends}}(Q): Q \in \mathcal{J}\}$, and let ${\widehat{\mathcal{J}}}_{\mathrm{Ends}}= \{{\mathrm{Ends}}({\widehat{Q}}): {\widehat{Q}} \in {\widehat{\mathcal{J}}}\}$. These are sets of (unordered) pairs. For each $t \in [0,1]$, let $$\begin{aligned} {\widehat{{\mathcal{L}}}}^t = \{{\widehat{{\mathfrak{g}}}}^t: & {\widehat{{\mathfrak{g}}}}^t \textrm{ is the hyperbolic geodesic in } {\mathbb{D}}\\ & \textrm{ joining } {\widehat{a}}^t,{\widehat{b}}^t, \textrm{ where } \{{\widehat{a}},{\widehat{b}}\} \in {\widehat{\mathcal{J}}}_{\mathrm{Ends}}\}\end{aligned}$$ and let $${\mathcal{L}}^t = \{\varphi_{U}^t({\widehat{{\mathfrak{g}}}}^t): {\widehat{{\mathfrak{g}}}}^t \in {\widehat{{\mathcal{L}}}}^t\} .$$ Observe that ${\mathcal{L}}^0$ is the collection of all hyperbolic geodesic crosscuts of $U^0 = U$ which are homotopic (with endpoints fixed) to some crosscut in $\mathcal{J}$. For $t > 0$, the collection ${\mathcal{L}}^t$ is obtained from ${\mathcal{L}}^0$ by following the motion of the endpoints of the arcs in ${\mathcal{L}}^0$ under the isotopy and joining the resulting points in $\partial U^t$ by the hyperbolic geodesic crosscut ${\mathfrak{g}}^t = \varphi_{U}^t({\widehat{{\mathfrak{g}}}}^t)$ in $U^t$ using the hyperbolic metric induced by $\varphi_U^t$. We do *not* consider a Kulkarni-Pinkall style partition of the domain $U^t$ for $t > 0$. We shall prove that ${\mathcal{L}}^t$ is a hyperbolic lamination in $U^t$ for each $t \in [0,1]$. We start with the following lemma. 
\[gtarc\] For any $t \in [0,1]$ and any ${\widehat{{\mathfrak{g}}}}^t \in {\widehat{{\mathcal{L}}}}^t$, the map $\varphi_U^t$ is one-to-one on ${\widehat{{\mathfrak{g}}}}^t$ and, hence, the corresponding element ${\mathfrak{g}}^t = \varphi_U^t({\widehat{{\mathfrak{g}}}}^t) \in {\mathcal{L}}^t$ is a crosscut in $U^t$. Moreover, if ${\mathfrak{g}}_1^t,{\mathfrak{g}}_2^t$ are two distinct elements of ${\mathcal{L}}^t$, then ${\mathfrak{g}}_1^t \cap {\mathfrak{g}}_2^t = {\emptyset}$ (though their closures may have at most one common endpoint in $\partial U^t$). Let ${\mathfrak{g}}^0$ be an arbitrary hyperbolic crosscut of ${\mathcal{L}}^0$ with endpoints $a$ and $b$. By the discussion at the end of \[sec:lifts moving\], we can lift ${\mathfrak{g}}^0$ to geodesics ${\widehat{{\mathfrak{g}}}}^t$ with continuously varying endpoints. Let ${\widehat{a}}^t$ (${\widehat{b}}^t$) be the endpoints of ${\widehat{{\mathfrak{g}}}}^t$ corresponding to $a^t$ ($b^t$, respectively). Since ${\mathfrak{g}}^0$ is an arc, all components ${\widehat{{\mathfrak{g}}}}^0$ of $\varphi_U^{-1}({\mathfrak{g}}^0)$ are pairwise disjoint geodesic crosscuts of ${\mathbb{D}}$. Since the endpoints of all these crosscuts move continuously in $t$ and the points $a^t$ and $b^t$ are distinct, the geodesics ${\widehat{{\mathfrak{g}}}}^t$ are also pairwise disjoint open arcs for all $t$. Hence, $\varphi_U^t$ is one-to-one on each of these crosscuts and their common image is a geodesic arc ${\mathfrak{g}}^t$. By a similar argument, the lifts ${\widehat{{\mathfrak{g}}}}_1^t$ and ${\widehat{{\mathfrak{g}}}}_2^t$ of two distinct geodesics ${\mathfrak{g}}_1^t$ and ${\mathfrak{g}}_2^t$ in ${\mathcal{L}}^t$ are pairwise disjoint in ${\mathbb{D}}$ and, hence, ${\mathfrak{g}}_1^t \cap {\mathfrak{g}}_2^t = {\emptyset}$. It follows easily from the construction that two distinct geodesics in ${\mathcal{L}}^0$ share at most one common endpoint and, hence, the same is true for ${\mathcal{L}}^t$. 
To prove ${\mathcal{L}}^t$ is a hyperbolic lamination in $U^t$ for each $t \in [0,1]$, it remains to show that the collection of arcs in ${\mathcal{L}}^t$ of diameter at least ${\varepsilon}$ is compact for every ${\varepsilon}> 0$. This will follow from the next Lemma, which states that the limit of a convergent sequence of elements ${\mathfrak{g}}_n^{t_n} \in {\mathcal{L}}^{t_n}$, even for varying $t_n$, must belong to the limiting collection ${\mathcal{L}}^{t_\infty}$. \[gtconverge\] Let $\{a_1,b_1\}, \{a_2,b_2\}, \ldots$ be a sequence of pairs in $\mathcal{J}_{\mathrm{Ends}}$ such that $a_n \to a_\infty$ and $b_n \to b_\infty$, where $a_\infty$ and $b_\infty$ are distinct points in $\partial U$. Then $\{a_\infty,b_\infty\} \in \mathcal{J}_{\mathrm{Ends}}$. Furthermore, let $t_1,t_2,\ldots \in [0,1]$ be a sequence such that $t_n \to t_\infty \in [0,1]$. For each $n \in \{1,2,\ldots\} \cup \{\infty\}$ and each $t \in [0,1]$, let ${\mathfrak{g}}_n^t \in {\mathcal{L}}^t$ be the geodesic with endpoints $a_n^t$ and $b_n^t$. Then ${\mathfrak{g}}_n^{t_n} \to {\mathfrak{g}}_\infty^{t_\infty}$ in the sense that there exist homeomorphisms $\theta_n: {\mathfrak{g}}_\infty^{t_\infty} \to {\mathfrak{g}}_n^{t_n}$ such that $\theta_n \to {\mathrm{id}}$. Let ${\widehat{\mathcal{A}}} \subset \partial {\mathbb{D}}$ be the set of all points in $\partial {\mathbb{D}}$ at which $\varphi_U^0$ is defined, and let $\mathcal{A} = \{\varphi_U^0(x): x \in {\widehat{\mathcal{A}}}\}$. By Theorem \[lift\], $\mathcal{A}$ is precisely the set of all accessible points in $\partial U$. The set $\mathcal{A}$ is dense in $\partial U$ and the set ${\widehat{\mathcal{A}}}$ of lifts of points in $\mathcal{A}$ under $\varphi_U^0$ is dense in $\partial {\mathbb{D}}$ by Theorem \[Fatou\]. 
\[claim:contangle\] For each $t \in [0,1]$, the function $\alpha^t: \partial {\mathbb{D}}\to \partial {\mathbb{D}}$ which extends the function that maps each ${\widehat{y}} \in {\widehat{\mathcal{A}}}$ to ${\widehat{y}}^t$, and is defined by $\alpha^t(x) = \lim_{\{{\widehat{y}} \to x \,\mid\, {\widehat{y}} \in {\widehat{\mathcal{A}}} \}} {\widehat{y}}^t$ for each $x \in \partial {\mathbb{D}}$, is a homeomorphism. Moreover $\alpha: \partial {\mathbb{D}}\times [0,1] \to \partial {\mathbb{D}}$, defined by $\alpha(x,t) = \alpha^t(x)$, is an isotopy starting at the identity. Since the restriction $\alpha^t|_{{\widehat{\mathcal{A}}}}$ is one-to-one and preserves circular order, it suffices to show that $\alpha^t({\widehat{\mathcal{A}}})$ is dense for each $t$. The proof will make use of the following notion: Let $\mathbb S$ be the unit circle, $\gamma:\mathbb S\to \mathbb C$ a continuous function and $O$ a point in the unbounded complementary domain of $\gamma(\mathbb S)$. A complementary domain $V$ of $\gamma(\mathbb S)$ is *odd* if every arc $J$ from $O$ to a point in $V$ intersects $\gamma(\mathbb S)$ an odd number of times, counting with multiplicity and assuming that every intersection is transverse and the total number of crossings is finite; cf. [@OT82 Lemma 2.1]. Fix $\varepsilon>0$. By Theorem \[Riesz\], $\alpha^0({\widehat{\mathcal{A}}})$ is dense. By condition (ii) of Theorem \[main1technical\] and Lemma \[smallup\] there exists $\delta>0$ so that for any crosscut $C$ of $X^0$ with ${\mathrm{diam}}(C) < \delta$, all lifts of the paths $\gamma_t^C$ (whose existence follows from condition (ii)) have diameters less than $\varepsilon$. Since $X$ is uniformly perfect one can choose finitely many simple closed curves $S_i$ which bound disjoint closed disks $D_i$ so that $X^0\subset \bigcup_i D_i$, $X^0\cap \bigcup \partial D_i$ is finite, and for all $i$ and every component $C$ of $S_i\setminus X^0$, the diameter of $C$ is less than $\delta$. 
Moreover we can assume that for all $t$, $\varphi^t_U(0)$ is contained in the unbounded complementary component of $\bigcup \gamma^C_t([0,1])$. Then all lifts $\widehat\gamma_t^C$ have diameter less than $\varepsilon$. Let $F^0=\bigcup_i X^0\cap S_i$ and $F^t=h^t(F^0)$. Since for all $C$ and all $t$, $\gamma_t^C((0,1))\cap X^t=\emptyset$ and ${\widehat{\gamma}}_t$ is continuous in the Hausdorff metric, it follows that every point of $X^t\setminus F^t$ is contained in an odd bounded complementary component of $\bigcup \gamma^C_t([0,1])$. Every component $C$ of $S_i\setminus X^0$ is a crosscut which defines a collection of paths $\gamma_t^C$ by condition (ii) of Theorem \[main1technical\]. For all $t$ let ${\widehat{\mathcal C}}_t$ be the collection of all lifts of all the paths $\gamma_t^C$. Fix $t$. Suppose that $r$ is a radius of the unit disk $\mathbb D$ so that $R=\varphi^t_U(r)$ lands on a point in $X^t \setminus F^t$. Then a terminal segment $B$ of $R$ must be in an odd complementary domain of $\bigcup \gamma^C_t([0,1])$. Let $A =R \setminus B$ be the initial segment of $R$. Then the subsegment $b$ of $r$ that corresponds to $B$ is disjoint from all crosscuts in ${\widehat{\mathcal C}}_t$; let $a = r \setminus b$ be the subsegment of $r$ corresponding to $A$. Suppose that $b$ is not contained in the shadow of one of these crosscuts. Then we may assume that the intersection of $a$ and any member of ${\widehat{\mathcal C}}_t$ is finite and even. Since we may also assume that the intersection of $a$ with all lifted crosscuts is finite, the intersection of $a$ with the union of all members of ${\widehat{\mathcal C}}_t$ is a finite even number. Since $\varphi^t_U$ is a local homeomorphism, the number of intersections of $A$ with all crosscuts $\gamma_t^C$ is also even, a contradiction since $A$ terminates in an odd domain. Note that by construction, $\mathcal{J}$ is almost a lamination, except that multiple arcs in $\mathcal{J}$ can share the same two endpoints. 
In particular, if $C(a_n b_n)$ are circular arcs in $\mathcal{J}$ joining the points $a_n$ and $b_n$ then, after taking a subsequence if necessary, $\lim C(a_n b_n)$ is a circular arc in $\mathcal{J}$ joining $a_\infty$ to $b_\infty$. From this it follows easily that ${\mathcal{L}}^0$ is a lamination and, if ${\mathfrak{g}}_n \in {\mathcal{L}}^0$ is the geodesic joining $a_n$ to $b_n$, then $\lim {\mathfrak{g}}_n = {\mathfrak{g}}_\infty$, where ${\mathfrak{g}}_\infty \in {\mathcal{L}}^0$ is the geodesic joining $a_\infty$ to $b_\infty$. Choose lifts ${\widehat{{\mathfrak{g}}}}_n^t$ and ${\widehat{{\mathfrak{g}}}}_\infty^t$ under $\varphi_U^t$ for each $t \in [0,1]$ as in the proof of \[gtarc\], such that $\lim {\widehat{{\mathfrak{g}}}}_n^0 = {\widehat{{\mathfrak{g}}}}^0_\infty$. Fix $k$. By \[claim:contangle\], $\lim {\widehat{{\mathfrak{g}}}}_n^{t_k} = {\widehat{{\mathfrak{g}}}}_\infty^{t_k}$. This implies immediately that $\liminf {\mathfrak{g}}_n^{t_k} \supset {\mathfrak{g}}_\infty^{t_k}$. Since the points $a_n$ and $a_\infty$ can be joined by a small crosscut in $U$, it follows from assumption (ii) of \[main1technical\] that the points $a_n^{t_k}$ and $a_\infty^{t_k}$ can be joined by a small path. Hence, points $x_n^{t_k}$ in ${\mathfrak{g}}_n^{t_k}$ close to an endpoint (say $a_n^{t_k}$) can be joined to the endpoint $a_n^{t_k}$ by a small path (first by a small arc to a point in ${\mathfrak{g}}_\infty^{t_k}$ and then by a small arc in $U^{t_k}$ to the endpoint $a_\infty^{t_k}$, followed by a small path in $U^{t_k}$ to $a_n^{t_k}$). By \[gehrhay\], the sub-geodesic of ${\mathfrak{g}}_n^{t_k}$ from $x_n^{t_k}$ to $a_n^{t_k}$ is small and we can conclude that $\lim {\mathfrak{g}}_n^{t_k} = {\mathfrak{g}}_\infty^{t_k}$ for each $k$. Since the maps $\varphi_U^t$ are uniformly convergent on compact subsets, $\liminf {\mathfrak{g}}_\infty^{t_k} \supset {\mathfrak{g}}_\infty^{t_\infty}$. 
Since by the above argument the sub-geodesic from a point close to the endpoint of ${\mathfrak{g}}_\infty^{t_k}$ to this endpoint is small, $\lim {\mathfrak{g}}_\infty^{t_k} = {\mathfrak{g}}_\infty^{t_\infty}$. It is now easy to see that there exist homeomorphisms $\theta_n: {\mathfrak{g}}_\infty^{t_\infty} \to {\mathfrak{g}}_n^{t_n}$ such that $\theta_n \to {\mathrm{id}}$. For each $t \in [0,1]$, we conclude from \[gtarc\] and \[gtconverge\] (using $t_n = t$ for all $n$) that ${\mathcal{L}}^t$ is a lamination in $U^t$. Proof of \[main1technical\] {#sec:proof main1} --------------------------- In this section we will complete the proof of \[main1technical\] (and hence of \[main1\] as well). We will employ here the path midpoint function ${\mathsf{m}}$ described in \[thm:midpt\] of \[sec:midpoints\]. Let $U$ be any bounded complementary domain of $X$, and consider the hyperbolic laminations ${\mathcal{L}}^t$ in $U^t$ as constructed above in \[sec:laminations\]. Given any element ${\mathfrak{g}}\in {\mathcal{L}}^0$, we extend the isotopy $h$ over ${\mathfrak{g}}$ to $h_{\mathfrak{g}}: (X \cup {\mathfrak{g}}) \times [0,1] \to {\mathbb{C}}$ by defining $h_{\mathfrak{g}}^t({\mathsf{m}}({\mathfrak{g}})) = {\mathsf{m}}({\mathfrak{g}}^t)$ and, if $x \in {\mathfrak{g}}$ is located on the subarc with endpoints ${\mathsf{m}}({\mathfrak{g}})$ and $a$ (respectively, $b$), then $h_{\mathfrak{g}}^t(x)$ is the unique point on the subarc of ${\mathfrak{g}}^t$ with endpoints ${\mathsf{m}}({\mathfrak{g}}^t)$ and $a^t$ (respectively, $b^t$) such that $\rho^0(x, {\mathsf{m}}({\mathfrak{g}})) = \rho^t(h_{\mathfrak{g}}^t(x), {\mathsf{m}}({\mathfrak{g}}^t))$, using the hyperbolic metric $\rho^t$ on $U^t$. 
Now extend $h$ to $h_{\mathcal{L}}: (X \cup \bigcup {\mathcal{L}}^0) \times [0,1] \to {\mathbb{C}}$ by defining $$h_{\mathcal{L}}(x,t) = \begin{cases} h(x,t) & \textrm{if $x \in X$} \\ h_{\mathfrak{g}}(x,t) & \textrm{if $x \in {\mathfrak{g}}\in {\mathcal{L}}^0$.} \end{cases}$$ Then for each $t \in [0,1]$, $h_{\mathcal{L}}^t$ is clearly a bijection from $X \cup \bigcup {\mathcal{L}}^0$ to $X^t \cup \bigcup {\mathcal{L}}^t$. \[claim:cont\] $h_{\mathcal{L}}$ is continuous. Suppose that $(x_i,t_i) \to (x_\infty,t_\infty)$ and $x_i \in {\mathfrak{g}}_i \in {\mathcal{L}}^0$. If there exists ${\varepsilon}> 0$ so that ${\mathrm{diam}}({\mathfrak{g}}_i) > {\varepsilon}$ for all $i$, then we may assume, by taking a subsequence if necessary, that $\lim {\mathfrak{g}}_i = {\mathfrak{g}}_\infty \in {\mathcal{L}}^0$. If $x_\infty$ is not an endpoint of ${\mathfrak{g}}_\infty$ then, by uniform convergence of $\varphi_U^t$ on compact sets, $\lim h_{\mathcal{L}}(x_i,t_i) = h_{\mathcal{L}}(x_\infty,t_\infty)$. If $x_\infty$ is an endpoint of ${\mathfrak{g}}_\infty$ (so $x_\infty \in X$), then $\rho^0(x_i,{\mathsf{m}}({\mathfrak{g}}_i)) \to \infty$ and again $\lim h_{\mathcal{L}}(x_i,t_i) = h_{\mathcal{L}}(x_\infty,t_\infty) = h(x_\infty,t_\infty)$. Hence we may assume that $\lim {\mathrm{diam}}({\mathfrak{g}}_i) = 0$. Then $x_\infty \in X$ and $\lim {\mathrm{diam}}(h_{\mathcal{L}}^{t_i}({\mathfrak{g}}_i)) = 0$. Hence, if $a_i$ is an endpoint of ${\mathfrak{g}}_i$, then $\lim h_{\mathcal{L}}(x_i,t_i) = \lim h(a_i,t_i) = h(x_\infty,t_\infty)$ as desired. Finally, we repeat the above procedure on each bounded complementary domain $U$ of $X$ to extend $h$ over the hyperbolic lamination obtained from the Kulkarni-Pinkall construction as in \[sec:laminations\] on each such $U$. The result is a function $H: Y \times [0,1] \to {\mathbb{C}}$ which is defined on the union $Y$ of $X$ with all the hyperbolic laminations of all bounded complementary domains of $X$.
Note that for any ${\varepsilon}> 0$, there are only finitely many bounded complementary domains of $X$ which contain a disk of diameter at least ${\varepsilon}$, and hence there are only finitely many such domains whose corresponding hyperbolic lamination contains an arc of diameter at least ${\varepsilon}$. This implies, as above, that $H$ is continuous. Note that each bounded complementary domain of $Y$ is a gap of the hyperbolic lamination of one of the bounded complementary domains of $X$. Since all such gaps are simply connected, $Y$ is a continuum. Hence by [@ot10] the isotopy $H$ of $Y$ can be extended over the entire plane. This completes the proof of \[main1technical\]. By the comments at the end of \[sec:lifts moving\], this also completes the proof of \[main1\]. In \[main1\] we assumed that $X^t$ is uniformly perfect for each $t \in [0,1]$. This assumption allows for the use of the powerful analytic results described in \[sec:analytic covering maps\]. It is natural to wonder if this assumption is really needed. We conjecture that this is not the case. Suppose that $X$ is a plane compactum and $h: X \times [0,1] \to {\mathbb{C}}$ is an isotopy starting at the identity. Then the following are equivalent:

1.  $h$ extends to an isotopy of the entire plane,

2.  for each ${\varepsilon}> 0$ there exists $\delta > 0$ such that for every complementary domain $U$ of $X$ and each crosscut $Q$ of $U$ with ${\mathrm{diam}}(Q) < \delta$, $h$ can be extended to an isotopy $h_Q: (X \cup Q) \times [0,1] \to {\mathbb{C}}$ such that for all $t \in [0,1]$, ${\mathrm{diam}}(h^t(Q)) < {\varepsilon}$.

Compact sets with large components {#sec:large components}
==================================

The remaining part of this paper is devoted to a proof of the following theorem. \[main2\] Suppose $X \subset {\mathbb{C}}$ is a compact set for which there exists $\eta > 0$ such that every component of $X$ has diameter bigger than $\eta$.
Let $h: X \times [0,1] \to {\mathbb{C}}$ be an isotopy which starts at the identity. Then $h$ extends to an isotopy of the entire plane which starts at the identity. Suppose $X \subset {\mathbb{C}}$ is a compact set for which there exists $\eta > 0$ such that every component of $X$ has diameter bigger than $\eta$. Let $h: X \times [0,1] \to {\mathbb{C}}$ be an isotopy which starts at the identity. Clearly in this case $X^t$ is uniformly perfect with the same constant $k$ for each $t \in [0,1]$, and we may assume that $X$ is encircled. By scaling, we may also assume that for any $a \in X$ and any component $C$ of $X$, there exists $c \in C$ such that $|a^t - c^t| \geq 1$ for all $t \in [0,1]$. *We will make these assumptions for the remainder of the paper*. We will prove \[main2\] using the characterization from \[main1technical\]. To this end, we fix (again for the remainder of the paper) an arbitrary bounded complementary domain $U$ of $X$. To satisfy condition (ii) of \[main1technical\] we must construct, for a sufficiently small crosscut $Q$ of $U$ with endpoints $a$ and $b$, a family of paths $\gamma_t$ in $U^t$ with endpoints $a^t$ and $b^t$, which remain small during the isotopy, such that $\gamma_0$ is homotopic to $Q$ in $U$ with endpoints fixed, and which can be lifted under $\varphi_U^t$ to paths ${\widehat{\gamma}}_t$ in ${\mathbb{D}}$ which are continuous in the Hausdorff metric. We will show first that, in the case that $X$ has large components, it suffices to construct the family of paths $\gamma_t$ to be continuous in the Hausdorff metric. \[liftexist\] Let $a,b \in \partial U$. Suppose that $\{\gamma_t: t \in [0,1]\}$ is a family such that $\gamma_t$ is a path in $U^t$ joining $a^t$ and $b^t$ with ${\mathrm{diam}}(\gamma_t([0,1])) < \frac{1}{2}$ for each $t \in [0,1]$, and the sets $\gamma_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric. 
Then there are lifts ${\widehat{\gamma}}_t$ of the paths $\gamma_t$ under $\varphi_U^t$ such that the sets ${\widehat{\gamma}}_t([0,1])$ also vary continuously in $t$ with respect to the Hausdorff metric. Suppose that the family $\gamma_t$ is as specified in the statement. Recall that $d_H$ denotes the Hausdorff distance. Fix $t_0 \in [0,1]$. It suffices to show that, given a lift ${\widehat{\gamma}}_{t_0}$ of $\gamma_{t_0}$ and $0 < {\varepsilon}< \frac{1}{2}$, there exists $\delta > 0$ and lifts ${\widehat{\gamma}}_t$ of $\gamma_t$ for $|t - t_0| < \delta$ such that $d_H({\widehat{\gamma}}_t([0,1]), {\widehat{\gamma}}_{t_0}([0,1])) < {\varepsilon}$. By \[smallup\] we can choose small disjoint open balls $B_a$ centered at $a^{t_0}$ and $B_b$ centered at $b^{t_0}$ of diameters less than $\frac{1}{4}$ such that for all $t$ and any component $C$ of $\partial B_a \cap U^t$ or $\partial B_b \cap U^t$, the diameter of each component of $(\varphi_U^t)^{-1}(C)$ is less than $\frac{{\varepsilon}}{4}$. Let $s_a,s_b \in (0,1)$ be the numbers such that $\gamma_{t_0}(s_a) \in \partial B_a$, $\gamma_{t_0}([0,s_a)) \subset B_a$, $\gamma_{t_0}(s_b) \in \partial B_b$, and $\gamma_{t_0}((s_b,1]) \subset B_b$. Denote $z_a = \gamma_{t_0}(s_a)$ and $z_b = \gamma_{t_0}(s_b)$. Choose an open set $O \subset {\mathbb{C}}$ such that $\gamma_{t_0}([s_a,s_b]) \subset O$, $\overline{O} \subset U^{t_0}$, and the diameter of $O \cup B_a \cup B_b$ is less than $1$. For $t$ sufficiently close to $t_0$, we have $\overline{O} \subset U^t$ and $\gamma_t([0,1]) \subset O \cup B_a \cup B_b$. Since each component of $X^t$ has diameter greater than $1$, we have that no bounded complementary component of $O \cup (B_a \cup B_b {\setminus}X^t)$ contains any points of $X^t$. It follows that there exists a simply connected open set $P_t$ in $U^t$ such that $\gamma_t((0,1)) \cup O \subset P_t$. 
This means that the covering map $\varphi_U^t$ maps each component of $(\varphi_U^t)^{-1}(P_t)$ homeomorphically onto $P_t$. Since the maps $\varphi_U^t$ converge uniformly on compact sets as $t \to t_0$, for $t$ sufficiently close to $t_0$ there exists exactly one component ${\widehat{P}}_t$ of $(\varphi_U^t)^{-1}(P_t)$ such that ${\widehat{\gamma}}_{t_0}([s_a,s_b]) \subset {\widehat{P}}_t$. For such $t$, define the lift ${\widehat{\gamma}}_t$ of $\gamma_t$ by ${\widehat{\gamma}}_t = (\varphi_U^t |_{{\widehat{P}}_t})^{-1} \circ \gamma_t$. To see that these lifts are Hausdorff close to ${\widehat{\gamma}}_{t_0}$, let $\delta > 0$ be small enough so that for all $t$ with $|t - t_0| < \delta$ we have:

1.  There exists $\nu > 0$ such that $|(\varphi_U^t|_{{\widehat{P}}_t})^{-1}(x_1) - (\varphi_U^{t_0}|_{{\widehat{P}}_{t_0}})^{-1}(x_2)| < \frac{{\varepsilon}}{2}$ for all $x_1,x_2 \in {\mathbb{C}}$ with $|x_1 - x_2| < \nu$ and either $x_1 \in O$ or $x_2 \in O$;

2.  $d_H(\gamma_t([0,1]), \gamma_{t_0}([0,1])) < \nu$; and

3.  $\gamma_t([0,1]) \cap (\partial B_a {\setminus}O) = {\emptyset}$ and $\gamma_t([0,1]) \cap (\partial B_b {\setminus}O) = {\emptyset}$.

Given $t$ with $|t - t_0| < \delta$, let $C_{a,t}$ be the component of $\partial B_a {\setminus}X^t$ which contains $z_a$, and let $C_{b,t}$ be the component of $\partial B_b {\setminus}X^t$ which contains $z_b$. Let ${\widehat{C}}_{a,t}$ and ${\widehat{C}}_{b,t}$ be lifts of $C_{a,t}$ and $C_{b,t}$ which contain $(\varphi_U^t |_{{\widehat{P}}_t})^{-1}(z_a)$ and $(\varphi_U^t |_{{\widehat{P}}_t})^{-1}(z_b)$, respectively. By the choice of $B_a$ and $B_b$, the diameters of ${\widehat{C}}_{a,t}$ and ${\widehat{C}}_{b,t}$ are less than $\frac{{\varepsilon}}{4}$. It follows from (iii) that ${\widehat{\gamma}}_t([0,1])$ is contained in $(\varphi_U^t|_{{\widehat{P}}_t})^{-1}(O)$ together with the small region under ${\widehat{C}}_{a,t}$ and the small region under ${\widehat{C}}_{b,t}$.
Note that these small regions have diameters less than $\frac{{\varepsilon}}{2}$. This means that for every point ${\widehat{p}}$ in ${\widehat{\gamma}}_t([0,1])$ there is a point ${\widehat{q}} \in {\widehat{\gamma}}_t([0,1]) \cap (\varphi_U^t|_{{\widehat{P}}_t})^{-1}(O)$ such that $|{\widehat{p}} - {\widehat{q}}| < \frac{{\varepsilon}}{2}$. Then, since $q = \varphi_U^t({\widehat{q}}) \in O$, by (ii) there is a point $r \in \gamma_{t_0}([0,1])$ such that $|q - r| < \nu$. If we let ${\widehat{r}}$ be the lift ${\widehat{r}} = (\varphi_U^{t_0}|_{{\widehat{P}}_{t_0}})^{-1}(r) \in {\widehat{\gamma}}_{t_0}([0,1])$, then by (i) we have $|{\widehat{q}} - {\widehat{r}}| < \frac{{\varepsilon}}{2}$. Then by the triangle inequality, $|{\widehat{p}} - {\widehat{r}}| < {\varepsilon}$. Similarly, we can show that for any ${\widehat{r}} \in {\widehat{\gamma}}_{t_0}([0,1])$ there is a point ${\widehat{p}} \in {\widehat{\gamma}}_t([0,1])$ with $|{\widehat{p}} - {\widehat{r}}| < {\varepsilon}$. Thus $d_H({\widehat{\gamma}}_t([0,1]), {\widehat{\gamma}}_{t_0}([0,1])) < {\varepsilon}$. For the remainder of the paper, we fix an arbitrary ${\varepsilon}> 0$. For later use, fix $0 < \nu < \frac{1}{3}$ small enough so that $\frac{8 \nu}{1 - \nu} < \frac{{\varepsilon}}{2}$. To prove \[main2\], it remains to show that there exists $\delta > 0$ such that if $Q$ is a crosscut of $U$ with endpoints $a$ and $b$ with diameter less than $\delta$, there is a family of paths $\gamma_t$ such that (1) $\gamma_t$ is a path in $U^t$ joining $a^t$ and $b^t$ for each $t \in [0,1]$, (2) $\gamma_0$ is homotopic to $Q$ in $U$ with endpoints fixed, (3) ${\mathrm{diam}}(\gamma_t([0,1])) < {\varepsilon}$ for all $t \in [0,1]$, and (4) the sets $\gamma_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric. In Section 4.1, we will transform the compactum $X$, so that the crosscut $Q$ becomes the straight line segment $[0,1]$ in the plane, to simplify the ensuing constructions and arguments. 
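Condition (4) refers to the Hausdorff metric $d_H$ on compact subsets of the plane. As an illustration only (not part of the construction), here is a minimal numerical sketch of $d_H$ on finite samples of two paths; the particular paths and sample size are arbitrary choices.

```python
import math

def hausdorff(P, Q):
    """Hausdorff distance between two finite point sets in the plane,
    with points encoded as complex numbers."""
    d_PQ = max(min(abs(p - q) for q in Q) for p in P)
    d_QP = max(min(abs(q - p) for p in P) for q in Q)
    return max(d_PQ, d_QP)

# Two nearby arcs joining 0 and 1: a straight segment and a small bump.
ts = [k / 200 for k in range(201)]
gamma0 = [complex(s, 0.0) for s in ts]
gamma1 = [complex(s, 0.05 * math.sin(math.pi * s)) for s in ts]

d = hausdorff(gamma0, gamma1)
assert abs(d - 0.05) < 1e-9   # the bump height dominates
```

A family $\gamma_t$ is continuous in the Hausdorff metric precisely when $d_H(\gamma_t([0,1]), \gamma_{t_0}([0,1])) \to 0$ as $t \to t_0$, which is the sense used throughout this section.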
We will refer to the transformed plane as the “normalized plane”, and the image of $X$ will be denoted by ${\widetilde{X}}$. In Section 4.2, we will lift the isotopy under an exponential covering map. The domain of the covering map will be called the “exponential plane”, and the preimage of ${\widetilde{X}}$ will be denoted by ${\mathbf{X}}$. In Sections 4.3 and 4.4 we will replace the lift of the crosscut $[0,1]$ of ${\widetilde{X}}$ by an equidistant set which varies continuously in $t$. The projection of this equidistant set to the original plane containing $X^t$ will be shown in Section 4.5 to be the desired path $\gamma_t$.

The normalized plane {#sec:norm plane}
--------------------

In the following sections, we will make use of a covering map (which we will refer to as the “exponential map”) of the plane minus the endpoints of a crosscut $Q$. In order to simplify the notation and work with a single exponential map below, we will normalize the compactum $X$ and the crosscut $Q$ of $X$ with endpoints $a$ and $b$ so that for all $t$, $a^t = 0$, $b^t = 1$, and $Q$ becomes the straight line segment $(0,1) \subset {\mathbb{R}}$. By composing with translations it is easy to see that given a crosscut $Q$ of $X$ with endpoints $a$ and $b$ we can always assume that the point $a$ is the origin $0$ and that this point remains fixed throughout the isotopy (i.e., $a^t = 0$ for all $t$). Let $Q$ be a crosscut of $U$ with endpoints $0$ and $b$ such that ${\mathrm{diam}}(Q) < \frac{1}{4}$. We will impose further restrictions on the diameter of $Q$ later. Since all arcs in the plane are tame, there exists a homeomorphism $\Theta: {\mathbb{C}}\to {\mathbb{C}}$ such that $\Theta(Q)$ is the straight line segment joining the points $0$ and $b$, $\Theta(0) = 0$, $\Theta(b) = b$ and $\Theta|_{{\mathbb{C}}{\setminus}B(0,2{\mathrm{diam}}(Q))} = {\mathrm{id}}_{{\mathbb{C}}{\setminus}B(0,2{\mathrm{diam}}(Q))}$.
Let $L^t: {\mathbb{C}}\to {\mathbb{C}}$ be the linear map of the complex plane defined by $L^t(z) = \frac{1}{\Theta(b^t)} \,z$. Define ${\widetilde{X}} = L^0 \circ \Theta(X)$ and define the isotopy ${\widetilde{h}}: {\widetilde{X}} \times [0,1] \to {\mathbb{C}}$ by $${\widetilde{h}}({\widetilde{x}},t) = L^t \circ \Theta \circ h((L^0 \circ \Theta)^{-1}({\widetilde{x}}),t) = L^t \circ \Theta(x^t) .$$ Here and below we adopt the notation that ${\widetilde{x}} = L^0 \circ \Theta(x)$ for all $x \in X$ and, hence, ${\widetilde{h}}^t({\widetilde{x}}) = {\widetilde{x}}^t = L^t \circ \Theta(x^t)$. As indicated above, we will use ordinary letters to denote objects in the plane containing $X$ and attach a tilde to the corresponding objects in the normalized plane (the plane containing ${\widetilde{X}}$). In the next lemma we establish some simple properties of the induced isotopy ${\widetilde{h}}$. \[sizes\] There exists $\delta > 0$ such that if the crosscut $Q$ of $X$ with endpoints $0$ and $b$ has diameter ${\mathrm{diam}}(Q) < \delta$, then the induced isotopy ${\widetilde{h}}: {\widetilde{X}} \times [0,1] \to {\mathbb{C}}$ has the following properties:

1.  ${\widetilde{h}}^0 = {\mathrm{id}}_{{\widetilde{X}}}$, ${\widetilde{X}}$ contains the points $0$ and $1$, the isotopy ${\widetilde{h}}$ fixes these points and the segment $(0,1) \subset {\mathbb{R}}$ in the complex plane is disjoint from ${\widetilde{X}}$;

2.  If ${\widetilde{x}}^s \in (0,1)$ for some $s \in [0,1]$, then for each $t \in [0,1]$, $|{\widetilde{x}}^t| < \frac{\nu}{|\Theta(b^t)|}$; and

3.  For every component ${\widetilde{C}}$ of ${\widetilde{X}}$ there exists a point ${\widetilde{c}} \in {\widetilde{C}}$ such that for all $t \in [0,1]$, $|{\widetilde{c}}^t| \geq \frac{1}{|\Theta(b^t)|}$.

It follows immediately that ${\widetilde{h}}^0 = {\mathrm{id}}_{{\widetilde{X}}}$, the isotopy ${\widetilde{h}}$ fixes the points $0$ and $1$ and that the interval $(0,1)$ is disjoint from ${\widetilde{X}}$.
Hence (i) holds. Since $h$ is uniformly continuous we can choose $0 < \delta < \frac{\nu}{4}$ so that if $x \in X$ and $|x^s| < 2\delta$ for some $s \in [0,1]$, then $|x^t| < \frac{\nu}{2}$ for all $t$. Suppose ${\widetilde{x}}^s \in (0,1)$ for some $s\in [0,1]$. Then $x^s \in Q$ and hence $|x^t| < \frac{\nu}{2}$ for all $t$. Then $|{\widetilde{x}}\,^t| < \frac{\nu}{2|\Theta(b^t)|} + \frac{2\delta}{|\Theta(b^t)|} \leq \frac{\nu}{|\Theta(b^t)|}$, using that $\Theta|_{{\mathbb{C}}{\setminus}B(0,2\delta)} = {\mathrm{id}}$, and so (ii) holds. By the standing assumption on $X$ stated after \[main2\], for every component $C$ of $X$ there exists a point $c \in C$ such that for all $t$, $|c^t| > 1$. Note that $\Theta(c^t) = c^t$ for all $t$. Hence, $|{\widetilde{c}}\,^t| \geq \frac{|c^t|}{|\Theta(b^t)|} \geq \frac{1}{|\Theta(b^t)|}$ for all $t$ and (iii) holds.

The exponential plane {#sec:exp plane}
---------------------

Define the covering map $${\widetilde{\mathrm{exp}}}: \; {\mathbb{C}}{\setminus}\{(2n+1)\pi i: n \in \mathbb{Z}\} \;\to\; {\mathbb{C}}{\setminus}\{0,1\}$$ by $${\widetilde{\mathrm{exp}}}(z) = \frac{e^z}{e^z + 1} .$$ The function ${\widetilde{\mathrm{exp}}}$ is periodic with period $2\pi i$, and satisfies $$\lim_{\mathrm{Re}(z) \rightarrow \infty} {\widetilde{\mathrm{exp}}}(z) = 1, \quad \lim_{\mathrm{Re}(z) \rightarrow -\infty} {\widetilde{\mathrm{exp}}}(z) = 0, \quad {\widetilde{\mathrm{exp}}}({\mathbb{R}}) = (0,1) ,$$ and has poles at each point $(2n+1)\pi i$, $n \in \mathbb{Z}$. Note that ${\widetilde{\mathrm{exp}}}$ is the composition of the maps $e^z$ and the Möbius transformation $f(w) = \frac{w}{w+1}$. Hence the vertical line through a point $x \in {\mathbb{R}}$ is first mapped (by the covering map $e^z$) to the circle with center $0$ and radius $e^x$ and, if $x \neq 0$, then mapped by $f$ to the circle with center $\frac{e^{2x}}{e^{2x} - 1}$ and radius $\left| \frac{e^{x}}{e^{2x} - 1} \right|$.
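The circle computation above is elementary but easy to get wrong; it can be checked numerically (illustration only; the sample line $x = 0.7$ and the sample points are arbitrary choices):

```python
import cmath, math

def texp(z):
    """The covering map exp~(z) = e^z / (e^z + 1)."""
    w = cmath.exp(z)
    return w / (w + 1)

# The vertical line Re(z) = x (with x != 0) maps onto the circle with
# center e^{2x}/(e^{2x} - 1) and radius |e^x / (e^{2x} - 1)|.
x = 0.7
center = math.exp(2 * x) / (math.exp(2 * x) - 1)
radius = abs(math.exp(x) / (math.exp(2 * x) - 1))
for k in range(60):
    z = complex(x, -9.0 + 0.3 * k)
    assert abs(abs(texp(z) - center) - radius) < 1e-9

# Periodicity with period 2*pi*i, and exp~ maps the real axis into (0,1).
z0 = complex(0.3, 1.1)
assert abs(texp(z0) - texp(z0 + 2j * math.pi)) < 1e-9
w = texp(0.5)
assert 0 < w.real < 1 and abs(w.imag) < 1e-12
```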
The imaginary axis is mapped to the vertical line through the point $x = \frac{1}{2}$ with the points at the poles $(2n+1)\pi i$ mapped to infinity. Denote by boldface ${\mathbf{X}}$ the preimage of ${\widetilde{X}}$ under the covering map ${\widetilde{\mathrm{exp}}}$, and in general we will use boldface letters to represent points and subsets of the exponential plane (the plane containing ${\mathbf{X}}$). The isotopy ${\widetilde{h}}$ of ${\widetilde{X}}$ lifts to an isotopy ${\mathbf{h}}$ of ${\mathbf{X}}$; that is, ${\mathbf{h}}: {\mathbf{X}} \times [0,1] \to {\mathbb{C}}$ is the map satisfying ${\mathbf{h}}^0 = {\mathrm{id}}_{{\mathbf{X}}}$ and ${\widetilde{\mathrm{exp}}}({\mathbf{h}}({\mathbf{x}}, t)) = {\widetilde{h}}({\widetilde{\mathrm{exp}}}({\mathbf{x}}), t)$ for every ${\mathbf{x}} \in {\mathbf{X}}$ and all $t \in [0,1]$. As above, given a point ${\mathbf{x}} \in {\mathbf{X}}$ (a subset ${\mathbf{A}} \subseteq {\mathbf{X}}$) and $t \in [0,1]$, denote ${\mathbf{x}}^t = {\mathbf{h}}({\mathbf{x}}, t)$ (respectively, ${\mathbf{A}}^t = {\mathbf{h}}({\mathbf{A}}, t)$). For each $n \in \mathbb{Z}$ and each $r > 0$, let ${\mathbf{E}}_n(r) = B((2n+1) \pi i, r)$ be the ball of radius $r$ centered at the point $(2n+1) \pi i$. \[ball-like\] There exists $0 < K < \pi$ such that for any $0 < r \leq K$,

1.  $\displaystyle {\widetilde{\mathrm{exp}}}\left( \bigcup_{n \in \mathbb{Z}} {\mathbf{E}}_n(r) \right) \subset {\mathbb{C}}{\setminus}B \left( 0,\frac{1}{2r} \right)$;

2.  $\displaystyle {\widetilde{\mathrm{exp}}}\left( {\mathbb{C}}{\setminus}\bigcup_{n \in \mathbb{Z}} {\mathbf{E}}_n(r) \right) \subset B \left( 0,\frac{2}{r} \right)$.

For any $n \in \mathbb{Z}$ and sufficiently small $|z|$, we have $$e^{(2n+1)\pi i + z} = -e^z \approx -1 - z$$ and hence ${\widetilde{\mathrm{exp}}}((2n+1)\pi i + z) \approx \frac{1 + z}{z}$.
In particular, there exists $0 < K < \pi$ such that for all $|z| \leq K$ $$\frac{1}{2|z|} \leq |{\widetilde{\mathrm{exp}}}((2n+1)\pi i + z)| \leq \frac{2}{|z|} .$$ Let $S_n = \partial B((2n+1)\pi i, r)$. Then, by the above inequality, $T = {\widetilde{\mathrm{exp}}}(\bigcup_n S_n)$ is an essential simple closed curve in the annulus centered around the origin $0$ with inner radius $\frac{1}{2r}$ and outer radius $\frac{2}{r}$. Since ${\widetilde{\mathrm{exp}}}$ is periodic, all $S_n$ have the same image $T$, and ${\widetilde{\mathrm{exp}}}^{-1}(T) = \bigcup_n S_n$. It follows that ${\widetilde{\mathrm{exp}}}(\bigcup_n B((2n+1)\pi i, r) {\setminus}(2n+1)\pi i)$ is contained in the unbounded complementary domain of $T$ and ${\widetilde{\mathrm{exp}}}({\mathbb{C}}{\setminus}\bigcup_n B((2n+1)\pi i, r))$ is contained in the bounded complementary domain of $T$. Hence, ${\widetilde{\mathrm{exp}}}(\bigcup_n B((2n+1)\pi i, r)) \subset {\mathbb{C}}{\setminus}B(0,\frac{1}{2r})$ and ${\widetilde{\mathrm{exp}}}({\mathbb{C}}{\setminus}\bigcup_n B((2n+1)\pi i, r)) \subset B(0,\frac{2}{r})$.

Components of ${\mathbf{X}}^t$ {#sec:exp components}
------------------------------

We say a component ${\mathbf{C}}$ of ${\mathbf{X}}^t$ ($t \in [0,1]$) is *unbounded to the right* (respectively *left*) if ${\mathrm{proj}}_{\mathbb{R}}({\mathbf{C}}) \subseteq {\mathbb{R}}$ is not bounded from above (respectively from below). For convenience we denote the horizontal strip $\{x + iy \in {\mathbb{C}}: x \in {\mathbb{R}},\; 2n\pi < y < 2(n+1)\pi\}$ simply by ${\mathbf{HS}}_n$. Observe that since ${\widetilde{X}} \cap (0,1) = \emptyset$ and ${\widetilde{\mathrm{exp}}}^{-1}((0,1)) = \bigcup_{n \in \mathbb{Z}} \{x + iy \in {\mathbb{C}}: x \in {\mathbb{R}},\; y = 2n\pi\}$, we have that ${\mathbf{X}} \subset \bigcup_{n \in \mathbb{Z}} {\mathbf{HS}}_n$.
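Both the two-sided bound from the proof of \[ball-like\] and the fact that the horizontal lines $y = 2n\pi$ are mapped into $(0,1)$ (so that they separate the strips ${\mathbf{HS}}_n$) can be confirmed numerically. This is an illustration only; the radius $0.1$ below is an arbitrary stand-in for a valid $K$, and the sampled indices are arbitrary.

```python
import cmath, math

def texp(z):
    """The covering map exp~(z) = e^z / (e^z + 1)."""
    w = cmath.exp(z)
    return w / (w + 1)

# Two-sided bound near the poles:
#   1/(2|z|) <= |exp~((2n+1)*pi*i + z)| <= 2/|z|   for small |z|.
for n in (-2, 0, 3):
    pole = (2 * n + 1) * math.pi * 1j
    for k in range(12):
        z = 0.1 * cmath.exp(2j * math.pi * k / 12)
        m = abs(texp(pole + z))
        assert 1 / (2 * abs(z)) <= m <= 2 / abs(z)

# The lines y = 2*n*pi are mapped into the real segment (0, 1).
for n in (-1, 0, 2):
    for x in (-3.0, 0.0, 5.0):
        w = texp(complex(x, 2 * n * math.pi))
        assert 0 < w.real < 1 and abs(w.imag) < 1e-9
```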
\[uniqueB\_n\] There exists $\delta > 0$ such that if the crosscut $Q$ of $X$ with endpoints $0$ and $b$ has diameter ${\mathrm{diam}}(Q) < \delta$, then the following holds for the induced isotopy ${\mathbf{h}}$ of ${\mathbf{X}}$: Given a component ${\mathbf{C}}$ of ${\mathbf{X}}$, let $n \in \mathbb{Z}$ be such that ${\mathbf{C}}$ is contained in the horizontal strip ${\mathbf{HS}}_n$. Let ${\widetilde{D}}$ be the component of ${\widetilde{X}}$ that contains ${\widetilde{\mathrm{exp}}}({\mathbf{C}})$. Then:

1.  if ${\widetilde{D}} \cap \{0,1\} = {\emptyset}$, then ${\mathbf{C}}^t \cap {\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right) \neq {\emptyset}$ for all $t \in [0,1]$;

2.  ${\mathbf{C}}^t \cap {\mathbf{E}}_m \left( \frac{|\Theta(b^t)|}{2\nu} \right) = {\emptyset}$ for all $m \neq n$ and all $t \in [0,1]$; and

3.  if ${\widetilde{D}} \cap \{0,1\} \neq {\emptyset}$, then ${\mathbf{C}}$ is unbounded to the left, to the right, or both.

Furthermore, there exist for each $k \in \mathbb{Z}$ components ${\mathbf{L}}_k$ and ${\mathbf{R}}_k$ of ${\mathbf{X}} \cap {\mathbf{HS}}_k$ such that for all $t \in [0,1]$, ${\mathbf{L}}_k^t$ is unbounded to the left and ${\mathbf{R}}_k^t$ is unbounded to the right. Moreover, these may be chosen so that either ${\mathbf{L}}_k^t \cap {\mathbf{E}}_k \left( \frac{|\Theta(b^t)|}{2} \right) \neq {\emptyset}$ for all $k \in \mathbb{Z}$ or ${\mathbf{R}}_k^t \cap {\mathbf{E}}_k \left( \frac{|\Theta(b^t)|}{2} \right) \neq {\emptyset}$ for all $k \in \mathbb{Z}$.

![An illustration of an example of the set ${\mathbf{X}}^t$ at $t = 0$ (above) and at a later moment $t > 0$ (below). The horizontal lines are the preimages of $(0,1)$ under ${\widetilde{\mathrm{exp}}}$, and the balls depicted are the sets ${\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right)$.[]{data-label="fig:exp plane"}](ExponentialPlane_0.pdf) ![An illustration of an example of the set ${\mathbf{X}}^t$ at $t = 0$ (above) and at a later moment $t > 0$ (below).
The horizontal lines are the preimages of $(0,1)$ under ${\widetilde{\mathrm{exp}}}$, and the balls depicted are the sets ${\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right)$.[]{data-label="fig:exp plane"}](ExponentialPlane_t.pdf) Adopt the notation introduced in the Lemma and assume ${\mathbf{C}}$ is contained in the horizontal strip ${\mathbf{HS}}_n$. Let $0 < K < \pi$ be as in \[ball-like\]. Choose $\delta > 0$ so small that $\frac{|\Theta(b^t)|}{\nu} < K$ for all $t$. Suppose that ${\widetilde{D}} \cap \{0,1\} = {\emptyset}$. Then ${\widetilde{\mathrm{exp}}}({\mathbf{C}}) = {\widetilde{D}}$. By \[sizes\](iii), we can choose ${\widetilde{c}} \in {\widetilde{D}}$ such that $|{\widetilde{c}}^t| \geq \frac{1}{|\Theta(b^t)|}$ for all $t$. By \[ball-like\](ii), ${\widetilde{\mathrm{exp}}}\left( {\mathbb{C}}{\setminus}{\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right) \right) \subset B(0,\frac{1}{|\Theta(b^t)|})$. Hence we can choose ${\mathbf{c}}^0 \in {\mathbf{E}}_n \left( \frac{|\Theta(b^0)|}{2} \right) \cap {\mathbf{C}}$ such that ${\widetilde{\mathrm{exp}}}({\mathbf{c}}^0) = {\widetilde{c}}^0$, and then ${\mathbf{c}}^t \in {\mathbf{C}}^t \cap {\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right)$ for all $t$. This completes the proof of (i). Note that for all $n \in \mathbb{Z}$, ${\widetilde{\mathrm{exp}}}({\mathbb{R}}\times \{2n\pi i\}) = (0,1) \subset {\mathbb{R}}$ and, hence, ${\mathbf{X}} \cap ({\mathbb{R}}\times \{2n\pi i\}) = {\emptyset}$ for all $n \in \mathbb{Z}$. To see that ${\mathbf{C}}^t \cap {\mathbf{E}}_m \left( \frac{|\Theta(b^t)|}{2\nu} \right) = {\emptyset}$ for $m \neq n$ and all $t$, note first that this is the case at $t = 0$ since ${\mathbf{C}}^0 = {\mathbf{C}} \subset {\mathbf{HS}}_n$. 
In order for a point ${\mathbf{x}}^s \in {\mathbf{C}}^s$ to enter a ball ${\mathbf{E}}_m \left( \frac{|\Theta(b^s)|}{2\nu} \right)$ with $n \neq m$ for some $s > 0$, it would first have to cross one of the horizontal boundary lines of ${\mathbf{HS}}_n$, say ${\mathbf{x}}^u \in {\mathbb{R}}\times \{2n\pi i\}$ for some $0 < u < s$. Then ${\widetilde{\mathrm{exp}}}({\mathbf{x}}^u) = {\widetilde{x}}^u \in (0,1) \subset {\mathbb{R}}$. Hence by \[sizes\](ii), $|{\widetilde{x}}\,^t| < \frac{\nu}{|\Theta(b^t)|}$ for all $t$. Since by \[ball-like\](i), ${\widetilde{\mathrm{exp}}}\left( {\mathbf{E}}_m \left( \frac{|\Theta(b^t)|}{2\nu} \right) \right) \subset {\mathbb{C}}{\setminus}B \left( 0, \frac{\nu}{|\Theta(b^t)|} \right)$ for all $t$, ${\mathbf{x}}^s \notin {\mathbf{E}}_m \left( \frac{|\Theta(b^s)|}{2\nu} \right)$, a contradiction. This completes the proof of (ii). Suppose next that ${\widetilde{D}} \cap \{0,1\} \neq {\emptyset}$. Then ${\widetilde{\mathrm{exp}}}({\mathbf{C}}) = {\widetilde{C}}$ is a component of ${\widetilde{D}} {\setminus}\{0,1\}$ such that $\overline{{\widetilde{C}}} \cap \{0,1\} \neq {\emptyset}$. Hence ${\mathbf{C}}$ is unbounded to the left or to the right (or both). This completes the proof of (iii). There must exist components ${\widetilde{L}}$ and ${\widetilde{R}}$ of ${\widetilde{X}} {\setminus}\{0,1\}$ such that $0$ is in the closure of ${\widetilde{L}}$ and $1$ is in the closure of ${\widetilde{R}}$. For each $k \in \mathbb{Z}$, let ${\mathbf{L}}_k$ be the lift of ${\widetilde{L}}$ under ${\widetilde{\mathrm{exp}}}$ which is contained in the strip ${\mathbf{HS}}_k$, and similarly define ${\mathbf{R}}_k$. Then since the closure of ${\widetilde{L}}\,^t$ contains $0$ and the closure of ${\widetilde{R}}\,^t$ contains $1$ for all $t \in [0,1]$, we have that for each $k \in \mathbb{Z}$, the lift ${\mathbf{L}}_k^t$ is unbounded to the left and the lift ${\mathbf{R}}_k^t$ is unbounded to the right for all $t \in [0,1]$. 
Finally, by \[sizes\](iii), there exists a component ${\widetilde{S}}$ of ${\widetilde{X}} {\setminus}\{0,1\}$ whose closure contains $0$ or $1$, which contains a point ${\widetilde{c}} \in {\widetilde{S}}$ such that $|{\widetilde{c}}\,^t| \geq \frac{1}{|\Theta(b^t)|}$. Then, as in the proof of (ii), the component ${\mathbf{S}}_k^t$ of ${\widetilde{\mathrm{exp}}}^{-1}({\widetilde{S}}^t)$ which contains the lift ${\mathbf{c}}^t_k \in {\mathbf{HS}}_k$ of ${\widetilde{c}}$ under ${\widetilde{\mathrm{exp}}}$ is unbounded to the left or to the right for all $k$ and $t$ and intersects ${\mathbf{E}}_k \left( \frac{|\Theta(b^t)|}{2} \right)$ as required. Let ${\mathbf{A}}$ denote the set of all points of ${\mathbf{X}}$ above ${\mathbb{R}}$ and ${\mathbf{B}}$ the set of all points of ${\mathbf{X}}$ below ${\mathbb{R}}$. Recall that ${\mathbf{X}} \cap {\mathbb{R}}= {\emptyset}$, so ${\mathbf{X}} = {\mathbf{A}} \cup {\mathbf{B}}$. For each $t \in [0,1]$, let $$\mathfrak{A}^t = {\mathbf{A}}^t \cup \bigcup_{n \geq 0} \overline{{\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right) } \quad\textrm{and}\quad \mathfrak{B}^t = {\mathbf{B}}^t \cup \bigcup_{n < 0} \overline{{\mathbf{E}}_n \left( \frac{|\Theta(b^t)|}{2} \right) } .$$ Then $\mathfrak{A}^t$ and $\mathfrak{B}^t$ are disjoint closed sets, and by \[uniqueB\_n\], each component of $\mathfrak{A}^t$ and of $\mathfrak{B}^t$ is either unbounded to the left or to the right. \[lem:vertical strip\] For each $r > 0$, there exists a lower bound $\ell \in {\mathbb{R}}$ (respectively upper bound $u \in {\mathbb{R}}$) such that for all $t \in [0,1]$, if $c + di \in {\mathbf{A}}^t$ (respectively ${\mathbf{B}}^t$) and $|c| \leq r$, then $d \geq \ell$ (respectively $d \leq u$). Let ${\mathbb{I}}$ denote the imaginary axis, so that $[-r,r] \times {\mathbb{I}}$ is the strip in the plane between the vertical lines through $r$ and $-r$. 
By uniform continuity of ${\widetilde{h}}$ and the fact that ${\widetilde{h}}$ leaves $0$ and $1$ fixed, there must exist for each $r > 0$ an $r' > r$ such that for all ${\mathbf{x}} \in {\mathbf{X}}$, if ${\mathbf{x}}^s \in ((-\infty,-r'] \cup [r',\infty)) \times {\mathbb{I}}$ for some $s \in [0,1]$ then for all $t \in [0,1]$, ${\mathbf{x}}^t \notin [-r,r] \times {\mathbb{I}}$. Given a point ${\mathbf{x}} \in {\mathbf{A}} \cap ([-r',r'] \times {\mathbb{I}})$, let ${\widetilde{x}} = {\widetilde{\mathrm{exp}}}({\mathbf{x}})$ be the corresponding point of ${\widetilde{X}}$. Every time ${\mathbf{x}}$ travels vertically within the strip $[-r',r'] \times {\mathbb{I}}$ a distance $2\pi$, the point ${\widetilde{x}}$ travels around a disk of fixed radius (depending on $r'$) centered at $0$ or at $1$. By uniform continuity and compactness of $X$, this can only happen a uniformly bounded number of times. The result follows. \[compact in strip\] Let ${\mathbf{C}}$ be any component of ${\mathbf{X}}$. Then for any $r > 0$ and any $t \in [0,1]$, the set ${\mathbf{C}}^t \cap \{x + yi: x \in [-r,r]\}$ is compact. Because the set ${\mathbf{X}}^t$ is periodic with period $2\pi i$, there exists an integer $k$ such that if ${\mathbf{D}}$ is the copy of ${\mathbf{C}}$ shifted vertically by $2\pi k$, then without loss of generality ${\mathbf{C}} \subset {\mathbf{A}}$ and ${\mathbf{D}} \subset {\mathbf{B}}$. Then by \[lem:vertical strip\], ${\mathbf{C}}^t$ is bounded below in the strip $\{x + yi: x \in [-r,r]\}$, and ${\mathbf{D}}^t$ is bounded above in this strip. By periodicity, it follows that ${\mathbf{C}}^t$ is also bounded above in this strip.
Given two distinct components ${\mathbf{C}},{\mathbf{D}}$ of ${\mathbf{X}}$ which are both unbounded to the right (respectively, to the left), we say that ${\mathbf{C}}$ *lies above* ${\mathbf{D}}$ if there is some $R > 0$ such that for all $x \in {\mathbb{R}}$ with $x \geq R$ (respectively, $x \leq -R$), $\max\{y \in {\mathbb{R}}: x + iy \in {\mathbf{C}}\} > \max\{y \in {\mathbb{R}}: x + iy \in {\mathbf{D}}\}$ and also $\min\{y \in {\mathbb{R}}: x + iy \in {\mathbf{C}}\} > \min\{y \in {\mathbb{R}}: x + iy \in {\mathbf{D}}\}$. Note that it follows immediately from the definition of $\mathfrak{A}^0$ and $\mathfrak{B}^0$ that if ${\mathbf{C}}$ and ${\mathbf{D}}$ are components of $\mathfrak{A}^0$ and $\mathfrak{B}^0$, respectively, which are unbounded on the same side, then ${\mathbf{C}}$ lies above ${\mathbf{D}}$. The following lemma follows from this fact. The proof, which is left to the reader, is very similar to the proof of Lemma 2.5 in [@ot10]. \[lem:dichotomy stable\] There exists $\delta > 0$ such that if the crosscut $Q$ of $X$ with endpoints $0$ and $b$ has diameter ${\mathrm{diam}}(Q) < \delta$, then the following holds for the induced isotopy ${\mathbf{h}}$ of ${\mathbf{X}}$: Let ${\mathbf{C}}$ and ${\mathbf{D}}$ be components of $\mathfrak{A}^0$ and $\mathfrak{B}^0$, respectively, which are both unbounded to the same side. Then ${\mathbf{C}}^t$ lies above ${\mathbf{D}}^t$ for all $t \in [0,1]$. Consequently, if ${\mathbf{E}}$ and ${\mathbf{F}}$ are components of ${\mathbf{A}}$ and ${\mathbf{B}}$, respectively, which are both unbounded to the same side, then ${\mathbf{E}}^t$ lies above ${\mathbf{F}}^t$ for all $t \in [0,1]$.

Equidistant set between ${\mathbf{A}}^t$ and ${\mathbf{B}}^t$ {#sec:equi path}
-------------------------------------------------------------

For the remainder of this section, we assume that $\delta > 0$ is chosen so that the conclusions of \[uniqueB\_n\] and \[lem:dichotomy stable\] hold.
We also assume that the crosscut $Q$ has diameter less than $\delta$. Recall that disjoint closed sets $A_1$ and $A_2$ in ${\mathbb{C}}$ are *non-interlaced* if whenever $B(c,r)$ is an open disk contained in the complement of $A_1 \cup A_2$, there are disjoint arcs $C_1,C_2 \subset \partial B(c,r)$ such that $A_1 \cap \partial B(c,r) \subset C_1$ and $A_2 \cap \partial B(c,r) \subset C_2$. We allow for the possibility that $C_1 = {\emptyset}$ in the case that $A_2 \cap \partial B(c,r) = \partial B(c,r)$, and vice versa. \[lem:noninterlaced\] ${\mathbf{A}}^t$ and ${\mathbf{B}}^t$ are non-interlaced for all $t \in [0,1]$. Fix $t \in [0,1]$. Let $B \subset {\mathbb{C}}{\setminus}({\mathbf{A}}^t \cup {\mathbf{B}}^t)$ be a round open ball, and suppose for a contradiction that there exist points ${\mathbf{a}}_1,{\mathbf{a}}_2 \in \partial B \cap {\mathbf{A}}^t$ and ${\mathbf{b}}_1,{\mathbf{b}}_2 \in \partial B \cap {\mathbf{B}}^t$ such that the straight line segment $\overline{{\mathbf{a}}_1 {\mathbf{a}}_2}$ separates ${\mathbf{b}}_1$ and ${\mathbf{b}}_2$ in $\overline{B}$. Let ${\mathbf{A}}_1$ and ${\mathbf{A}}_2$ be the components of ${\mathbf{a}}_1$ and ${\mathbf{a}}_2$, respectively, in $\mathfrak{A}^t$, and let ${\mathbf{B}}_1$ and ${\mathbf{B}}_2$ be the components of ${\mathbf{b}}_1$ and ${\mathbf{b}}_2$ in $\mathfrak{B}^t$. Then $[{\mathbf{A}}_1 \cup {\mathbf{A}}_2] \cap [{\mathbf{B}}_1 \cup {\mathbf{B}}_2] = {\emptyset}$ and by the remarks immediately following the definition of $\mathfrak{A}^t$ and $\mathfrak{B}^t$, each of these four components is either unbounded to the left or unbounded to the right. Consider an arc $S$ in $\overline{B} {\setminus}({\mathbf{B}}_1 \cup {\mathbf{B}}_2)$ joining ${\mathbf{a}}_1$ and ${\mathbf{a}}_2$. 
Then ${\mathbf{A}}_1 \cup {\mathbf{A}}_2 \cup S$ separates the plane into at least two components, and ${\mathbf{B}}_1$ and ${\mathbf{B}}_2$ must lie in different components of ${\mathbb{C}}{\setminus}({\mathbf{A}}_1 \cup {\mathbf{A}}_2 \cup S)$. It is then straightforward to see by considering cases that there exist $i,j \in \{1,2\}$ such that ${\mathbf{B}}_i$ lies above ${\mathbf{A}}_j$, a contradiction with \[lem:dichotomy stable\]. For each $t \in [0,1]$, let ${\mathbf{M}}_t = {\mathrm{Equi}}({\mathbf{A}}^t,{\mathbf{B}}^t)$. In light of \[lem:noninterlaced\], ${\mathbf{M}}_t$ is a $1$-manifold by \[thm:manifold\]. \[M-disjoint\] For each $t \in [0,1]$ and each $n \in \mathbb{Z}$, ${\mathbf{M}}_t \cap {\mathbf{E}}_n\!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right) = {\emptyset}$. In particular, ${\mathbf{M}}_t \cap {\mathbf{E}}_n \!\left( \frac{|\Theta(b^t)|}{2} \right) = {\emptyset}$. Let $n \in \mathbb{Z}$ and assume that $n \geq 0$ (the case $n < 0$ proceeds similarly). Since $0<\nu<\frac{1}{3}$, $\frac{(1-\nu) |\Theta(b^t)|}{4\nu}>\frac{|\Theta(b^t)|}{2}$, so ${\mathbf{E}}_n \!\left( \frac{|\Theta(b^t)|}{2} \right) \subset {\mathbf{E}}_n \!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right)$. By \[uniqueB\_n\], there is a component ${\mathbf{C}}$ of ${\mathbf{X}}$ such that ${\mathbf{C}}^t \cap {\mathbf{E}}_n \!\left( \frac{|\Theta(b^t)|}{2} \right) \neq {\emptyset}$ for all $t \in [0,1]$. Since $n \geq 0$, ${\mathbf{C}} \subset {\mathbf{A}}$. On the other hand, given any component ${\mathbf{D}}$ of ${\mathbf{B}}$, we have by \[uniqueB\_n\](i) that ${\mathbf{D}}^t \cap {\mathbf{E}}_n \!\left( \frac{|\Theta(b^t)|}{2\nu} \right) = {\emptyset}$ for all $t \in [0,1]$. Thus ${\mathbf{B}}^t \cap {\mathbf{E}}_n \!\left( \frac{|\Theta(b^t)|}{2\nu} \right) = {\emptyset}$ for all $t \in [0,1]$. 
It follows that for any point $x \in {\mathbf{E}}_n \!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right)$, the distance from $x$ to ${\mathbf{A}}^t$ is less than $\frac{|\Theta(b^t)|}{2} + \frac{(1-\nu) |\Theta(b^t)|}{4\nu} = \frac{(1+\nu) |\Theta(b^t)|}{4\nu}$, while the distance from $x$ to ${\mathbf{B}}^t$ is greater than $\frac{|\Theta(b^t)|}{2\nu} - \frac{(1-\nu) |\Theta(b^t)|}{4\nu} = \frac{(1+\nu) |\Theta(b^t)|}{4\nu}$. Thus ${\mathbf{M}}_t \cap {\mathbf{E}}_n \!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right) = {\emptyset}$ for all $n$. \[connM\] For each $t$ the set ${\mathbf{M}}_t$ is a connected 1-manifold. Moreover, the vertical projection of ${\mathbf{M}}_t$ to the real axis ${\mathbb{R}}$ is onto. Since by \[lem:noninterlaced\] ${\mathbf{A}}^t$ and ${\mathbf{B}}^t$ are non-interlaced, by \[thm:manifold\], ${\mathbf{M}}_t$ is a 1-manifold which separates ${\mathbf{A}}^t$ from ${\mathbf{B}}^t$. By \[M-disjoint\], ${\mathbf{M}}_t$ is disjoint from $\bigcup_n {\mathbf{E}}_n \!\left( \frac{|\Theta(b^t)|}{2} \right)$ and, hence, ${\mathbf{M}}_t$ separates $\mathfrak{A}^t$ from $\mathfrak{B}^t$ (recall that $\mathfrak{A}^t$ and $\mathfrak{B}^t$ were defined above \[lem:vertical strip\]). Since all components of $\mathfrak{A}^t$ and $\mathfrak{B}^t$ are unbounded, no component of ${\mathbf{M}}_t$ is a simple closed curve and every component is a copy of ${\mathbb{R}}$ with both ends converging to infinity. By \[lem:vertical strip\] each end of a component of ${\mathbf{M}}_t$ either converges to $-\infty$ or $+\infty$. Fix $t$ and let ${\mathbf{M}}'$ be a component of ${\mathbf{M}}_t$. Note that for all ${\mathbf{x}} \in {\mathbf{M}}'$ there exists a set of points ${\mathbf{A}}_x^t \subset {\mathbf{A}}^t$ closest to $x$ and ${\mathbf{B}}_x^t \subset {\mathbf{B}}^t$ closest to $x$ and that $\bigcup_{{\mathbf{x}} \in {\mathbf{M}}'} {\mathbf{A}}^t_x$ and $\bigcup_{x \in {\mathbf{M}}'} {\mathbf{B}}^t_x$ are separated by the line ${\mathbf{M}}'$. 
For $x \in {\mathbf{M}}'$, let $r_x$ denote the distance from $x$ to ${\mathbf{A}}_x^t$ (equivalently, to ${\mathbf{B}}_x^t$). If both ends of ${\mathbf{M}}'$ are unbounded to the same side, say on the left side, then ${\mathbb{C}}{\setminus}{\mathbf{M}}'$ has two complementary components $P$ and $Q$, with $P$ only unbounded to the left (see \[fig:Mt projection\]). Assume that $\bigcup_{{\mathbf{x}} \in {\mathbf{M}}'} {\mathbf{A}}^t_x \subset P$ (the case $\bigcup_{{\mathbf{x}} \in {\mathbf{M}}'} {\mathbf{B}}^t_x \subset P$ is similar). Note that since $P$ contains no components of ${\mathbf{A}}^t$ which are unbounded to the right, $P$ must contain components of ${\mathbf{A}}^t$ which are unbounded to the left. ![An illustration of the situation described in the proof of \[connM\].[]{data-label="fig:Mt projection"}](MtProjection.pdf) Let $z \in {\mathbf{M}}'$. Then ${\mathbf{M}}' {\setminus}\{z\}$ consists of two rays ${\mathbf{M}}^+$ and ${\mathbf{M}}^-$ and we may assume that ${\mathbf{M}}^+$ lies above ${\mathbf{M}}^-$. Choose $z_n \in {\mathbf{M}}^+$ monotonically converging to $-\infty$ and ${\mathbf{b}}_n \in {\mathbf{B}}^t_{z_n}$. Since the radii $r_{z_n}$ are uniformly bounded, ${\mathbf{b}}_n$ also converges to $-\infty$. Let ${\mathbf{H}}_n$ be the component of ${\mathbf{B}}^t$ that contains ${\mathbf{b}}_n$. If ${\mathbf{H}}_n$ is unbounded to the left, by \[lem:dichotomy stable\] it must lie below the unbounded components of ${\mathbf{A}}^t$ in $P$ and hence must “go around” ${\mathbf{M}}'$ as ${\mathbf{H}}_1$ does in Figure \[fig:Mt projection\]. If ${\mathbf{H}}_n$ is not unbounded to the left, then either it intersects some ${\mathbf{E}}_k(\frac{|\Theta(b^t)|}{2})$ for some $k < 0$ (as ${\mathbf{H}}_2$ does in \[fig:Mt projection\]), or it is unbounded to the right (as ${\mathbf{H}}_3$ is in \[fig:Mt projection\]). 
In any case it is clear that there exists $c \in {\mathbb{R}}$ such that every component ${\mathbf{H}}_n$ intersects the vertical line $x = c$. For each $n$ let $d_n$ be such that the point $(c,d_n) \in {\mathbf{H}}_n$. By \[lem:vertical strip\], the sequence $d_n$ is bounded and hence has an accumulation point $d_\infty$. By \[compact in strip\], the component of ${\mathbf{B}}^t$ which contains $(c,d_\infty)$ is unbounded to the left, and clearly it lies above the unbounded components of ${\mathbf{A}}^t$ in $P$, a contradiction with \[lem:dichotomy stable\]. Hence, the vertical projection of ${\mathbf{M}}'$ to the real axis ${\mathbb{R}}$ is onto. The proof that ${\mathbf{M}}_t = {\mathbf{M}}'$ is connected is similar and is left to the reader. \[pathM\] For each $t \in [0,1]$, the set ${\widetilde{\mathrm{exp}}}({\mathbf{M}}_t) \cup \{0,1\}$ is the image of a path ${\widetilde{\gamma}}_t$ in ${\widetilde{U}}^t$ joining $0$ and $1$. Let ${\mathbb{I}}$ denote the imaginary axis, so that $[-r,r] \times {\mathbb{I}}$ is the strip in the plane between the vertical lines through $r$ and $-r$. By \[lem:vertical strip\], for each $r > 0$, ${\widetilde{\mathrm{exp}}}(([-r,r] \times {\mathbb{I}}) \cap {\mathbf{M}}_t)$ is compact. Together with \[connM\], this implies that we can choose a parameterization $\alpha: (0,1) \to {\mathbf{M}}_t$ so that: $$\lim_{s \to 0^+} {\widetilde{\mathrm{exp}}}\circ \alpha(s) = \{0\}$$ and $$\lim_{s \to 1^-} {\widetilde{\mathrm{exp}}}\circ \alpha(s) = \{1\} .$$ Define the path ${\widetilde{\gamma}}_t: [0,1] \to {\widetilde{\mathrm{exp}}}({\mathbf{M}}_t) \cup \{0,1\}$ by ${\widetilde{\gamma}}_t(s) = {\widetilde{\mathrm{exp}}}\circ \alpha(s)$ for $s \in (0,1)$, and ${\widetilde{\gamma}}_t(0) = 0$ and ${\widetilde{\gamma}}_t(1) = 1$. Then ${\widetilde{\gamma}}_t$ is the required path. Proof of \[main2\] {#sec:proof main2} ------------------ In this section we complete the proof of \[main2\]. 
Recall that ${\varepsilon}> 0$ is a fixed arbitrary number, and $0 < \nu < \frac{1}{3}$ has been chosen so that $\frac{8 \nu}{1 - \nu} < \frac{{\varepsilon}}{2}$. Choose $0 < \delta < \frac{{\varepsilon}}{4}$ small enough so that the conclusions of \[uniqueB\_n\] and \[lem:dichotomy stable\] hold (and therefore the results from \[sec:equi path\] also hold). For each $t \in [0,1]$, let $\gamma_t = (L^t \circ \Theta)^{-1} \circ {\widetilde{\gamma}}_t$. This $\gamma_t$ is a path in $U^t$ joining $0$ and $b^t$. \[paths small\] ${\mathrm{diam}}(\gamma_t([0,1])) < {\varepsilon}$ for all $t \in [0,1]$. By \[M-disjoint\], for all $t \in [0,1]$ and $n \in \mathbb{Z}$, ${\mathbf{M}}_t \cap {\mathbf{E}}_n \!\!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right) = {\emptyset}$. By \[ball-like\](ii), we have ${\widetilde{\mathrm{exp}}}({\mathbf{M}}_t) \subset B \!\left( 0, \frac{8\nu}{(1-\nu) |\Theta(b^t)|} \right)$. Then\ $(L^t)^{-1}({\widetilde{\mathrm{exp}}}({\mathbf{M}}_t)) \subset B \!\left( 0, \frac{8\nu}{(1-\nu)} \right)$. By the choice of $\nu$, and since $\Theta$ is a homeomorphism of ${\mathbb{C}}$ which is the identity outside of $B(0, 2\delta) \subset B(0, \frac{{\varepsilon}}{2})$, it then follows that $\gamma_t([0,1]) = (L^t \circ \Theta)^{-1}({\widetilde{\mathrm{exp}}}({\mathbf{M}}_t)) \subset B(0, \frac{{\varepsilon}}{2})$. \[paths cont\] The sets $\gamma_t([0,1])$ vary continuously in the Hausdorff metric, and $\gamma_0$ is homotopic to $Q$ with endpoints fixed. By \[pathM\], ${\widetilde{\gamma}}_t$ is a path in ${\widetilde{U}}^t$ with endpoints $0$ and $1$. To see that ${\widetilde{\gamma}}_0$ is homotopic to ${\widetilde{Q}} = (0,1)$ note first that since ${\mathbf{A}}^0$ is above the real axis and ${\mathbf{B}}^0$ is below the real axis, for each $(x,y) \in {\mathbf{M}}_0$ the vertical segment from $(x,0)$ to $(x,y)$ is disjoint from ${\mathbf{X}}^0$. 
Hence we can construct a homotopy $k$ between ${\mathbf{M}}_0$ and ${\mathbb{R}}$ which fixes the $x$-coordinate of each point in ${\mathbf{M}}_0$ and decreases the absolute value of the $y$-coordinate to zero. Then ${\widetilde{\mathrm{exp}}}\circ k$ is the required homotopy between ${\widetilde{\gamma}}_0$ and ${\widetilde{Q}}$ with endpoints fixed. Hence, $\gamma_0 = (L^0 \circ \Theta)^{-1} \circ {\widetilde{\gamma}}_0$ is homotopic to $Q$ as required. Suppose $t_i \to t_\infty$. It is easy to see that $\limsup {\mathbf{M}}_{t_i} \subseteq {\mathbf{M}}_{t_\infty}$ by the definition of the equidistant sets ${\mathbf{M}}_t$. Since, by \[connM\], each ${\mathbf{M}}_{t_i}$ and ${\mathbf{M}}_{t_\infty}$ is a connected $1$-manifold whose vertical projection to the real axis ${\mathbb{R}}$ is onto, it follows that $\liminf {\mathbf{M}}_{t_i} \supseteq {\mathbf{M}}_{t_\infty}$. Thus $\lim {\mathbf{M}}_{t_i} = {\mathbf{M}}_{t_\infty}$. It follows that $\gamma_t([0,1]) = (L^t \circ \Theta)^{-1} \circ {\widetilde{\mathrm{exp}}}({\mathbf{M}}_t)$ is continuous in the Hausdorff metric. Combined with \[liftexist\], Claims \[paths small\] and \[paths cont\] complete the verification of condition (ii) of \[main1technical\]. Therefore, by \[main1technical\], the isotopy $h$ of the compactum $X$ can be extended to the entire plane ${\mathbb{C}}$. This completes the proof of \[main2\]. In \[main1\] we have given necessary and sufficient conditions for an isotopy of a uniformly perfect compact set to extend to an isotopy of the plane. These conditions involve the existence of an extension of the isotopy over sufficiently small crosscuts while controlling the size of the image. The following problem remains open. Are there intrinsic properties of $X$ and the isotopy $h$ of $X$, which do not involve the existence of extensions over small crosscuts, that characterize when an isotopy of $X$ can be extended over the plane? 
[^1]: The first named author was partially supported by NSERC grant RGPIN 435518 [^2]: The third named author was partially supported by NSERC grant OGP-0005616 [^3]: We are indebted to Professor R. D. Edwards who communicated a related problem to us.
--- abstract: 'Boolean nested canalizing functions (NCF) have important applications in molecular regulatory networks, engineering and computer science. In this paper, we study the certificate complexity of NCF. We obtain the formula for the $b$-certificate complexity, $C_0(f)$ and $C_1(f)$. Consequently, we get a direct proof of the certificate complexity formula of NCF. Symmetry is another interesting property of Boolean functions. We significantly simplify the proofs of some recent theorems about partial symmetry of NCF. We also describe the algebraic normal form of the $s$-symmetric nested canalizing functions. We obtain the general formula of the cardinality of the set of all $n$-variable $s$-symmetric Boolean NCF for $s=1,\cdots,n$. Particularly, we obtain the cardinality formula for the set of all the strongly asymmetric Boolean NCFs.' address: | [$^{1}$Department of Mathematics, Winston-Salem State University, NC 27110, USA]{}\ \ \ \ \ author: - 'Yuan Li$^1$, Frank Ingram$^2$ and Huaming Zhang$^3$' title: Results on nested canalizing functions --- [^1] Introduction {#sec-intro} ============ Nested Canalizing Functions (NCFs) were introduced in [@Kau2]. It was shown in [@Abd2] that the class of nested canalizing functions is identical to the class of so-called unate cascade Boolean functions, which have been studied extensively in engineering and computer science. It was shown in [@But] that this class produces the binary decision diagrams with the shortest average path length. Thus, a more detailed mathematical study of NCFs has applications to problems in engineering as well. Recently, canalizing and (partially) nested canalizing functions received a lot of attention [@He; @Shm; @Abd2; @Mor; @Yua4; @Cla2; @Lay; @Win; @Mur2; @Yua; @Yua1; @Yua3]. In [@Coo], Cook et al. 
introduced the notion of sensitivity as a combinatorial measure for Boolean functions by providing lower bounds on the time needed by a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine). It was extended by Nisan [@Nis] to block sensitivity. Certificate complexity was first introduced by Vishkin and Wigderson [@Vis]. In [@Yua1], a complete characterization of nested canalizing functions was obtained via their unique algebraic normal form. Based on the algebraic normal form of NCFs, explicit formulas for the number of nested canalizing functions and the average sensitivity of any NCF were provided. In [@Yua; @Yua3], the formula of the (maximal) sensitivity of any NCF was obtained based on a characterization of NCF from [@Yua1]. It was shown that the block sensitivity and the $l$-block sensitivity are the same as the sensitivity for NCFs. In [@Mor], the author proved that the sensitivity is the same as the certificate complexity for read-once functions. Hence, the certificate complexity of an NCF is the same as its sensitivity, since any NCF is read-once. In this paper, we obtain the formulas for the $b$-certificate complexities $C_0(f)$ and $C_1(f)$ of an NCF. Hence, as a by-product, we obtain a direct proof of the certificate complexity formula, which is still the same as the sensitivity formula of NCFs [@Yua; @Yua3]. Recently, Hao Huang proved the long-standing Sensitivity Conjecture [@Hao]. Actually, for any Boolean function $f$, Hao Huang proved that $bs(f)\leq 2 s(f)^4$, where $bs(f)$ is the block sensitivity of $f$ and $s(f)$ is the sensitivity of $f$. Symmetric Boolean functions have important applications in coding theory and cryptography and have been intensively studied in the literature. In section 4, based on Theorem 4.2 in [@Yua1], we study the properties of symmetric nested canalizing functions. We significantly simplify the proofs of some theorems in [@Dan]. We also investigate the relation between the layer number and the symmetric level for NCFs. 
For $1\leq s\leq n$, we obtain the explicit formula for the number of $n$-variable $s$-symmetric Boolean NCFs. When $s=n$, this number is the cardinality of the set of all the strongly asymmetric NCFs. Through an example, we find that the enumeration in Theorem 3.8 in [@Dan] is incomplete. Actually, we prove that the cardinality of the set of all the $n$-variable strongly asymmetric NCFs with maximal layer number is $n!2^{n-1}$. Hence, there are more than $n!2^{n-1}$ strongly asymmetric NCFs when $n\geq 4$. Preliminaries {#2} ============= In this section, we introduce the definitions and notations. Let $\mathbb{F}=\mathbb{F}_{2}=\{0,1\}$. If $f$: $\mathbb{F}^n\longrightarrow \mathbb{F}$, it is well known [@Lid] that $f$ can be expressed as a polynomial, called the algebraic normal form (ANF): $$f(x_{1},x_{2},\dots,x_{n})=\bigoplus_{0\leq k_i\leq 1,i=1,\dots,n}a_{k_{1}k_{2}\dots k_{n}}{x_{1}}^{k_{1}}{x_{2}}^{k_{2}}\dots{x_{n}}^{k_{n}}$$ where each $a_{k_{1}k_{2}\dots k_{n}}\in\mathbb{F}$. The symbol $\oplus$ stands for addition modulo 2. \[def2.1\] Let $f$ be a Boolean function in $n$ variables. Let $\sigma$ be a permutation on $\{1,2,\dots,n\}$. The function $f$ is nested canalizing (NCF) in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ with canalizing input values $a_{1},\dots,a_{n}$ and canalized values $b_{1},\dots,b_{n}$, if it can be represented in the form $f(x_{1},\dots,x_{n})=\left\{ \begin{array} [c]{ll}b_{1} & x_{\sigma(1)}=a_{1},\\ b_{2} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}=a_{2},\\ b_{3} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}}, x_{\sigma(3)}=a_{3},\\ \vdots & \\ b_{n} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}},\dots,x_{\sigma(n-1)}= \overline{ a_{n-1}}, x_{\sigma(n)}=a_{n},\\ \overline{b_{n}} & x_{\sigma(1)}= \overline{ a_{1}}, x_{\sigma(2)}= \overline{ a_{2}},\dots,x_{\sigma(n-1)}= \overline{ a_{n-1}}, x_{\sigma(n)}=\overline{ a_{n}}. \end{array} \right. 
$ where $\overline{a}=a\oplus 1$. The function $f$ is nested canalizing if $f$ is nested canalizing in the variable order $x_{\sigma(1)},\dots,x_{\sigma(n)}$ for some permutation $\sigma$. \[th1\][@Yua1] Given $n\geq2$, $f(x_{1},x_{2},\dots,x_{n})$ is nested canalizing iff it can be uniquely written as $$\label{eq2.1} f(x_{1},x_{2},\dots,x_{n})=M_{1}(M_{2}(\cdots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\cdots)\oplus 1)\oplus b,$$ where $M_{i}=\prod_{j=1}^{k_{i}}(x_{i_{j}}\oplus a_{i_{j}})$, $i=1,\dots,r$, $k_{i}\geq1$ for $i=1,\dots,r-1$, $k_{r}\geq2$, $k_{1}+ \dots + k_{r}=n$, $a_{i_{j}}\in\mathbb{F}_{2}$, $\{i_{j}\mid j=1,\dots,k_{i}, i=1,\dots,r\}=\{1,\dots,n\}$. Because each NCF can be uniquely written in this form and the number $r$ is uniquely determined by $f$, we can define the following. \[def2.2\] [@Cla2; @Yua1] The layer structure of an NCF $f$ written as above is defined as the vector $(k_1,k_2,\cdots, k_r)$, where $r$ is the number of layers and $k_i$ is the size of the $i$th layer, $i=1,2,\cdots,r$. Certificate Complexity of NCF ============================= Let $\mathbf{x}=(x_1,\dots,x_n)\in \mathbb{F}^n$, $[n]=\{1,\dots,n\}$. For any subset $S$ of $[n]$, we form $\mathbf{x}^S$ by complementing those bits in $\mathbf{x}$ indexed by elements of $S$. We write $\mathbf{x}^i$ for $\mathbf{x}^{\{i\}}$. [@Ken; @Rub] The sensitivity of $f$ at $\mathbf{x}$, $s(f;\mathbf{x})$, is the number of indices $i$ such that $f(\mathbf{x})\neq f(\mathbf{x}^i)$. The sensitivity of $f$, denoted $s(f)$, is $Max_{\mathbf{x}}s(f;\mathbf{x})$. In the above definition, $s(f)=Max_{\mathbf{x}}s(f;\mathbf{x})=Max_{\mathbf{x}\in \{0,1\}^n}s(f;\mathbf{x})$. [@Nis] The block sensitivity $bs(f;\mathbf{x})$ of $f$ at $\mathbf{x}$ is the maximum number of disjoint subsets $B_1,\dots,B_r$ of $[n]$ such that, for all $j$, $f(\mathbf{x})\neq f(\mathbf{x}^{B_j})$. We refer to such a set $B_j$ as a block. The block sensitivity of $f$, denoted $bs(f)$, is $Max_{\mathbf{x}}bs(f;\mathbf{x})$. 
[@Ken] The $l$-block sensitivity of $f$ at $\mathbf{x}$, $bs_{l}(f;\mathbf{x})$, is the maximum number of disjoint subsets $B_1,\dots,B_r$ of $[n]$ such that, for all $j$, $|B_j|\leq l$ and $f(\mathbf{x})\neq f(\mathbf{x}^{B_j})$. The $l$-block sensitivity of $f$, denoted $bs_{l}(f)$, is $Max_{\mathbf{x}}bs_l(f;\mathbf{x})$. Obviously, we have $0\leq s(f;\mathbf{x})\leq bs_l(f;\mathbf{x})\leq bs(f; \mathbf{x})\leq n$ and $0\leq s(f)\leq bs_l(f)\leq bs(f)\leq n$.\ Certificate complexity was first introduced by Vishkin and Wigderson [@Vis]. This measure was initially called sensitive complexity. In the following, we will slightly modify (actually, simplify) the definition of a certificate, but the definition of certificate complexity will be the same. Given a Boolean function $f(x_1,x_2,\cdots, x_n)$ and a word $\alpha=(a_1,a_2,\cdots,a_n)\in \mathbb{F}^n$, if $\{i_1,i_2,\cdots,i_k\}\subset [n]$ and the restriction $f(x_1,x_2,\cdots, x_n)|_{x_{i_1}=a_{i_1},\cdots,x_{i_k}=a_{i_k}}$ is the constant function with value $f(\alpha)$, then we call the subset $\{i_1,i_2,\cdots,i_k\}$ a certificate of the function $f$ on the word $\alpha$. The certificate complexity $C(f,\alpha)$ of $f$ on $\alpha$ is defined as the smallest cardinality of a certificate of $f$ on $\alpha$. The certificate complexity $C(f)$ of $f$ is defined as $max\{C(f,y)|y\in \mathbb{F}^n\}$. The $b$-certificate complexity $C_b(f)$ of $f$, $b\in \mathbb{F}$, is defined as $max\{C(f,y)|y\in \mathbb{F}^n, f(y)=b\}$. Obviously, $C(f)=max\{C_0(f), C_1(f)\}$. From the definition we know $1\leq C(f)\leq n$. Since a certificate for a word must contain at least one variable index from each sensitive block, we have $bs(f)\leq C(f)$. Let $f(x_1,x_2,x_3)=x_1x_2x_3\oplus x_1x_2\oplus x_3$ and $g(x_1,x_2,x_3)=x_1x_2x_3$. We list the certificate complexity of $f$ on every word in Table 1. It is easy to check $C(g,(1,1,1))=3$ and $C(g,\alpha)=1$, where $\alpha\neq (1,1,1)$. Hence, $C(g)=3$. 
  $\alpha$   $f(\alpha)$   $C(f,\alpha)$   Minimal certificates
  ---------- ------------- --------------- ----------------------
  (0,0,0)    0             2               {1,3},{2,3}
  (0,0,1)    1             1               {3}
  (0,1,0)    0             2               {1,3}
  (0,1,1)    1             1               {3}
  (1,0,0)    0             2               {2,3}
  (1,0,1)    1             1               {3}
  (1,1,0)    1             2               {1,2}
  (1,1,1)    1             1               {3}

  : $C(f)=C_0(f)=C_1(f)=2$, $f(x_1,x_2,x_3)=x_1x_2x_3\oplus x_1x_2\oplus x_3$[]{data-label="table_ex"}

\[lm3.1\] Let $f(x_{1},\dots,x_{n})$ be a Boolean function, $\sigma$ be a permutation on $[n]$, and $\beta=(b_1,\dots,b_n)\in \mathbb{F}^n$. If $g=f(x_{\sigma(1)},\dots,x_{\sigma(n)})$ and $h=f(x_1\oplus b_1,\dots, x_n\oplus b_n)$, then the certificate complexities of $f$, $f\oplus 1$, $g$, and $h$ are the same. Note that $f(x_1,x_2,\cdots, x_n)|_{x_{i_1}=a_{i_1},\cdots,x_{i_k}=a_{i_k}}$ is a constant function if and only if $f(x_{\sigma(1)},x_{\sigma(2)},\cdots, x_{\sigma(n)})|_{x_{\sigma(i_1)}=a_{i_1},\cdots,x_{\sigma(i_k)}=a_{i_k}}$ is a constant function. Hence, $C(f,\alpha)=C(g,\alpha)$ for any $\alpha=(a_1,\dots,a_n)\in \mathbb{F}^n$, and therefore $C(f)=C(g)$. Similarly, $f(x_1,x_2,\cdots, x_n)|_{x_{i_1}=a_{i_1},\cdots,x_{i_k}=a_{i_k}}$ is a constant function if and only if $h=f(x_1\oplus b_1,\dots, x_n\oplus b_n)|_{x_{i_1}=a_{i_1}\oplus b_{i_1} ,\cdots,x_{i_k}=a_{i_k}\oplus b_{i_k}}$ is a constant function. Hence, $C(f,\alpha)=C(h,\alpha\oplus\beta)$ for any $\alpha$ and given $\beta$. We get $C(f)=C(h)$ since $\alpha\longmapsto \alpha \oplus\beta$ is a bijection over $\mathbb{F}^n$. Because $f$ is a constant if and only if $f\oplus 1$ is a constant, we get $C(f)=C(f\oplus 1)$. Actually, $C_0(f)=C_1(f\oplus 1)$ and $C_1(f)=C_0(f\oplus 1)$. 
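The values in Table 1 can be double-checked by exhaustive search directly from the definition. The following Python sketch is our own illustration (the function name `certificate_complexity` is ad hoc, not from any cited reference); it tries all sets of fixed coordinates in order of increasing size:

```python
from itertools import combinations, product

def certificate_complexity(f, n, alpha):
    """Smallest k such that fixing some k coordinates of alpha
    makes the restriction of f the constant f(alpha)."""
    target = f(alpha)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            # check every word that agrees with alpha on the fixed set S
            if all(f(x) == target
                   for x in product((0, 1), repeat=n)
                   if all(x[i] == alpha[i] for i in S)):
                return k

# f = x1 x2 x3 + x1 x2 + x3 and g = x1 x2 x3, with 0-based indices
f = lambda x: (x[0] & x[1] & x[2]) ^ (x[0] & x[1]) ^ x[2]
g = lambda x: x[0] & x[1] & x[2]

C_f = max(certificate_complexity(f, 3, a) for a in product((0, 1), repeat=3))
C_g = max(certificate_complexity(g, 3, a) for a in product((0, 1), repeat=3))
print(C_f, C_g)  # 2 3
```

Since fixing all $n$ coordinates always forces the constant $f(\alpha)$, the search terminates; for the two functions above it reproduces $C(f)=2$ from Table 1 and $C(g)=3$ from the remark about $g$.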
In the following, we assume $$\label{eq3.1} f(x_{1},x_{2},\dots,x_{n})=f_r=M_{1}(M_{2}(\cdots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\cdots)\oplus 1)$$ and $M_1=x_1\cdots x_{k_1}$, $M_2=x_{k_1+1}\cdots x_{k_1+k_2}$,$\cdots$, $M_r=x_{k_1+\cdots+k_{r-1}+1}\cdots x_n$.\ Let $\mathbf{x}=(\mathbf{x}_1,\dots,\mathbf{x}_r)$, where $\mathbf{x}_1=(x_1,\cdots,x_{k_1})$, $\mathbf{x}_2=(x_{k_1+1},\cdots,x_{k_1+k_2})$, $\cdots$, $\mathbf{x}_r=(x_{k_1+\cdots+k_{r-1}+1},\cdots, x_n)$. First, using induction, we can rewrite the equation as follows: $$\label{eq3.2} f(x_{1},x_{2},\dots,x_{n})=f_r=M_{1}M_2\cdots M_r\oplus M_{1}M_2\cdots M_{r-1}\oplus\cdots \oplus M_1M_2\oplus M_1.$$ We have \[lm3.2\] If $f(x_{1},x_{2},\dots,x_{n})=x_1x_2\cdots x_n$, then $C_0(f)=1$ and $C_1(f)=n$. Hence, $C(f)=n$. Indeed, $C(f,(1,1,\cdots,1))=n$ with $f(1,1,\cdots,1)=1$, and $C(f,\alpha)=1$ with $f(\alpha)=0$ for every $\alpha\neq (1,1,\cdots,1)$. We have already obtained the certificate complexity of $f_r$ when $r=1$. We are ready to prove the following theorem. \[th3.1\] If $f(x_{1},x_{2},\dots,x_{n})=f_r=M_{1}(M_{2}(\cdots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\cdots)\oplus 1)$ and $M_1=x_1\cdots x_{k_1}$, $M_2=x_{k_1+1}\cdots x_{k_1+k_2}$,$\cdots$, $M_r=x_{k_1+\cdots+k_{r-1}+1}\cdots x_n$, $r\geq 2$, then $C_0(f_r)=\left\{ \begin{array} [c]{ll}k_2+k_4+\cdots +k_{r-1}+1, \quad 2\nmid r\\ k_2+k_4+\cdots +k_{r},\quad \quad \quad 2\mid r\\ \end{array} \right. $, $C_1(f_r)=\left\{ \begin{array} [c]{ll}k_1+k_3+\cdots+k_r, \quad \quad \quad 2\nmid r\\ k_1+k_3+\cdots+k_{r-1}+1, \quad 2\mid r\\ \end{array} \right. $ and $C(f_r)=\left\{ \begin{array} [c]{ll}\max \{k_1+k_3+\cdots+k_r,k_2+k_4+\cdots +k_{r-1}+1\}, 2\nmid r\\ \max \{k_1+k_3+\cdots+k_{r-1}+1, k_2+k_4+\cdots +k_{r}\}, 2\mid r\\ \end{array} \right. $. We will use induction on $r$ to prove the first formula; the proof of the second one is similar. If $r=2$, then $f_r=f_2=M_1M_2\oplus M_1=M_1(M_2\oplus 1)$. We will calculate $C(f_2,\alpha)$ for every $\alpha$ such that $f(\alpha)=0$. 
Since $f(\alpha)=M_1(M_2\oplus 1)(\alpha)=0$ if and only if $M_1=0$, or $M_1=1$ but $M_2=1$, we divide all the $\alpha$ into two disjoint groups. Group 1: $M_1=0$. In this case, at least one of the bits of $\alpha$ in the first layer must be 0. Obviously, for such $\alpha$, $C(f_2,\alpha)=1$. Group 2: $M_1=1$ and $M_2=1$. In this case, there is only one possibility, namely, $\alpha=(1,1,\cdots,1)$. It is easy to check $C(f_2,(1,1,\cdots,1))=k_2$ since $k_2$ is the number of the variables in $M_2$. Taking the maximal value, we get $C_0(f_2)=k_2$.\ If $r=3$, then $f_3=M_1(M_2(M_3\oplus 1)\oplus1)=0$ $\Longleftrightarrow$ $M_1=0$ or $M_2(M_3\oplus 1)=1$ $\Longleftrightarrow$ $M_1=0$, or $M_2=1$ but $M_3=0$. There are two disjoint groups. Group A: $M_1=0$. In this group, the certificate complexity for each word is 1. Group B: $M_1=1$, $M_2=1$ and $M_3=0$. In this group, $\alpha=(\overbrace{1,\cdots,1}^{k_1},\overbrace{1,\cdots,1}^{k_2},\overbrace{*,\cdots,*,0,*,\cdots,*}^{k_3})$. First of all, if we only assign the values of the variables in $M_1$ and $M_2$ (all of them in $\alpha$ are 1s), then because $f_3=M_1M_2M_3\oplus M_1M_2\oplus M_1$, the variables in $M_3$ will never disappear (which means the function is not constant). So, we must choose some variables in $M_3$ to assign. Obviously, choosing a 0 bit of $\alpha$ in $M_3$ reduces $f_3$ to $M_1(M_2\oplus 1)$, and then choosing all the bits of $M_2$ is necessary and sufficient to make $f_3$ zero. So, in this group, for any $\alpha$, we have $C(f_3,\alpha)=k_2+1$. In summary, taking the maximal value, we get $C_0(f_3)=k_2+1$. Now we assume the first formula is true for any NCF with no more than $r-1$ layers. Let us consider $f(x_{1},x_{2},\dots,x_{n})=f_r=M_{1}(M_{2}(\cdots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\cdots)\oplus 1)$= $M_{1}M_2\cdots M_r\oplus M_{1}M_2\cdots M_{r-1}\oplus\cdots \oplus M_1M_2\oplus M_1$. 
Let $g(x_{k_1+k_2+1},\cdots,x_n)=M_3\cdots M_r\oplus M_3\cdots M_{r-1}\oplus\cdots\oplus M_3M_4\oplus M_3$, so that $f_r=M_1(M_2(g\oplus 1)\oplus 1)=M_1M_2g\oplus M_1M_2\oplus M_1$. It is clear that $f_r=0$ $\Longleftrightarrow$ $M_1=0$, or $M_1=1$, $M_2=1$ and $g=0$. We will evaluate $C(f_r,\alpha)$ for all $\alpha\in \mathbb{F}^n$ with $f(\alpha)=0$ in the following. Case 1: $M_1=0$ (there is at least one 0 bit in the first layer of $\alpha$). In this case, the certificate complexity of the word is 1. Case 2: $M_1=1$, $M_2=1$ and $g=0$. In this case, $\alpha=(\overbrace{1,\cdots,1}^{k_1},\overbrace{1,\cdots,1}^{k_2},\alpha')$, where $\alpha'$ is a word of length $n-k_1-k_2$. Obviously, we have $f_r(\alpha)=0$ if and only if $g(\alpha')=0$. For a fixed $\alpha'$ (equivalently, a fixed $\alpha$), we try to reduce $f_r=M_1M_2g\oplus M_1M_2\oplus M_1$ to zero by assigning values of $\alpha$ to the variables of $f_r$. Since $M_1M_2$ will never be zero, we must first reduce $g$ to zero. Once $g$ is zero, we get $f_r=M_1(M_2\oplus 1)$. Hence, we have $C(f_r,\alpha)=k_2+C(g,\alpha')$, and therefore $\max \{C(f_r,\alpha)|\alpha, f_r(\alpha)=0\}=k_2+\max \{C(g,\alpha')|\alpha',g(\alpha')=0\}=k_2+C_0(g)$. Since $g$ is an NCF with $r-2$ layers (the first layer is $M_3$, the second layer is $M_4$, and so on), by the induction assumption, we have $C_0(g)=\left\{ \begin{array} [c]{ll}k_4+k_6+\cdots +k_{r-1}+1, \quad 2\nmid (r-2)\\ k_4+k_6+\cdots +k_{r},\quad \quad \quad 2\mid (r-2)\\ \end{array} \right. $ Hence, $\max \{C(f_r,\alpha)|\alpha, f_r(\alpha)=0\}=k_2+C_0(g)=$ $k_2+\left\{ \begin{array} [c]{ll}k_4+k_6+\cdots +k_{r-1}+1, \quad 2\nmid (r-2)\\ k_4+k_6+\cdots +k_{r},\quad \quad \quad 2\mid (r-2)\\ \end{array} \right.=$ $\left\{ \begin{array} [c]{ll}k_2+k_4+\cdots +k_{r-1}+1, \quad 2\nmid r\\ k_2+k_4+\cdots +k_{r},\quad \quad \quad 2\mid r\\ \end{array} \right. $. 
Since for any word in Case 1 the certificate complexity is only 1, in summary, we get $C_0(f_r)=\left\{ \begin{array} [c]{ll}k_2+k_4+\cdots +k_{r-1}+1, \quad 2\nmid r\\ k_2+k_4+\cdots +k_{r},\quad \quad \quad 2\mid r\\ \end{array} \right. $. Because $C(f)=max\{C_0(f), C_1(f)\}$, we get the third formula. Because of Lemma \[lm3.1\], we have \[Cor1\] If any NCF is written as the one in Theorem \[th1\], then $C(f_r)=\left\{ \begin{array} [c]{ll}\max \{k_1+k_3+\cdots+k_r,k_2+k_4+\cdots +k_{r-1}+1\}, 2\nmid r\\ \max \{k_1+k_3+\cdots+k_{r-1}+1, k_2+k_4+\cdots +k_{r}\}, 2\mid r\\ \end{array} \right. $. Hence, the certificate complexity of an NCF is uniquely determined by its layer structure $(k_1,k_2,\cdots,k_r)$. The above formula is the same as the sensitivity formula $s(f_r)$ in [@Yua; @Yua3], so we have \[Cor2\] We have $\lceil \frac{n+2}{2}\rceil \leq C(f_r)\leq n$ for any NCF $f$. Both the lower and upper bounds are tight. This is because in [@Yua; @Yua3] these bounds are tight for $s(f_r)$. Symmetric Properties of NCF =========================== In 1938, Shannon [@Sha] recognized that symmetric functions have particularly efficient switch network implementations. Since then, a lot of research has been done on symmetric or partially symmetric Boolean functions. Symmetry detection is important in logic synthesis, technology mapping, binary decision diagram minimization, and testing [@Arn; @Das; @Ala]. In [@Dan], the authors investigated the symmetric and partially symmetric properties of Boolean NCFs. They also presented an algorithm for testing whether a given partially symmetric function is an NCF. In this section, we will use a formula in [@Yua1] to give very simple proofs for several theorems in [@Dan]. We will also study the relation between the number of layers $r$ and the level of partial symmetry $s$ (the function is $s$-symmetric) of NCFs. Furthermore, we will obtain the formula of the number of $n$-variable $s$-symmetric NCF functions. 
In particular, we obtain a formula for the number of all strongly asymmetric NCFs. Through an explicit example, we show that the enumeration in Theorem 3.8 in [@Dan] is incomplete. We will start this section by providing some basic definitions and notations. A permutation over $[n]=\{1,2,\cdots,n\}$ is a bijection from $[n]$ to $[n]$. It is well known that a permutation can be written as the product of disjoint cycles. A $t$-cycle, $(i_1i_2\cdots i_t)$, $\{i_1,\cdots,i_t\}\subset [n]$, sends $i_k$ to $i_{k+1}$ for $k=1,2,\cdots,t-1$ and sends $i_t$ to $i_1$. Namely, $i_1\longmapsto i_2\longmapsto\cdots \longmapsto i_t\longmapsto i_1$. A $2$-cycle is called a transposition. Any permutation can be written as a product of (not necessarily disjoint) transpositions. In fact, $(12\cdots n)=(n-1 n)\cdots (2n)(1n)$. Given a Boolean function $f(x_1,\cdots, x_n)$, if there is a $2$-cycle $\sigma=(ij)$ such that $f(x_1,\cdots, x_n)=f(x_{\sigma (1)},\cdots, x_{\sigma (n)})$, namely, $f(\cdots, x_i,\cdots ,x_j,\cdots)=f(\cdots, x_j,\cdots, x_i,\cdots)$, then we say that variable $x_i$ is equivalent to $x_j$, written $i\stackrel{f}{\backsim}j$. Obviously, $\stackrel{f}{\backsim}$ is an equivalence relation over $[n]$. We call $\bar{i}=\{j|j\stackrel{f}{\backsim}i\}$ a symmetric class of $f$. Of course, we have $\bar{i}=\bar{j}\Longleftrightarrow i\stackrel{f}{\backsim}j$. Let $[n]/\stackrel{f}{\backsim}=\{\bar{i}|i\in [n]\}$ and let $s=|[n]/\stackrel{f}{\backsim}|$ be the cardinality of $[n]/\stackrel{f}{\backsim}$; we call $f(x_1,\cdots, x_n)$ $s$-symmetric. Note that $s$-symmetric in this paper is equivalent to properly $s$-symmetric in [@Dan]. Let $f(x_1,x_2,x_3,x_4,x_5,x_6)=x_1x_2x_3\oplus x_4x_5\oplus x_6$; then $\bar{1}=\bar{2}=\bar{3}=\{1,2,3\}$, $\bar{4}=\bar{5}=\{4,5\}$, $\bar{6}=\{6\}$. This function is $3$-symmetric. 
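The symmetric classes in this example can be computed mechanically. The following sketch (our own illustrative code, with hypothetical function names) partitions the variables of a Boolean function by transposition invariance, exactly as in the definition above; the greedy grouping against one representative per class is justified because $\stackrel{f}{\backsim}$ is an equivalence relation.

```python
from itertools import product

def symmetric_classes(f, n):
    """Partition {1,...,n} into symmetric classes: i ~ j iff swapping
    x_i and x_j leaves f unchanged on every input."""
    def swap_invariant(i, j):
        for x in product((0, 1), repeat=n):
            y = list(x)
            y[i], y[j] = y[j], y[i]
            if f(*x) != f(*y):
                return False
        return True
    classes = []
    for i in range(n):
        for cl in classes:
            if swap_invariant(i, cl[0]):   # compare with one representative
                cl.append(i)
                break
        else:
            classes.append([i])
    return [{i + 1 for i in cl} for cl in classes]   # 1-indexed variables

# the example function x1 x2 x3 + x4 x5 + x6 (over GF(2))
f = lambda x1, x2, x3, x4, x5, x6: x1 & x2 & x3 ^ x4 & x5 ^ x6
```

Running `symmetric_classes(f, 6)` recovers the classes $\{1,2,3\}$, $\{4,5\}$, $\{6\}$, so the function is $3$-symmetric.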
If there is an index $i$ such that $|\bar{i}|\geq 2$, i.e., $s=|[n]/ \stackrel{f}{\backsim}|\leq n-1$, then we call $f$ partially symmetric. If $s=1$, we call $f$ totally symmetric or symmetric. An $n$-symmetric function is also called not partially symmetric. For the application of $1$-symmetric (totally symmetric) Boolean functions in cryptography, Anne Canteaut and Marion Videau [@Ann] presented an extensive study in 2005; more results on (totally) symmetric Boolean functions can be found in [@Mit; @Sav; @Cai; @Mai; @Cus1; @Cus2; @Xia; @Fra; @Na]. A Boolean function $f(x_1,\cdots, x_n)$ is strongly asymmetric if $f(x_1,\cdots, x_n)=f(x_{\sigma (1)},\cdots, x_{\sigma (n)})$ implies that $\sigma$ is the identity. Obviously, if a Boolean function is strongly asymmetric then it is $n$-symmetric. Let $f(x_1,x_2,x_3,x_4,x_5,x_6)=x_1x_2\oplus x_2x_3\oplus x_3x_4\oplus x_4x_5\oplus x_5x_1\oplus x_6$; it is easy to check that $f$ is $6$-symmetric (not partially symmetric) but not strongly asymmetric, since $f(x_1,x_2,x_3,x_4,x_5,x_6)=f(x_{\sigma(1)},x_{\sigma(2)},x_{\sigma(3)},x_{\sigma(4)},x_{\sigma(5)},x_{\sigma(6)})$ for $\sigma=(12345)$. In the following, we will frequently use the uniqueness of the formula in Theorem \[th1\]. In the equation , we call $a_{i_j}$ the canalizing input of the variable $x_{i_j}$. \[prop1\](Theorem 3.1 in [@Dan]) Let $\bar{i}$ be a symmetric class for a Boolean NCF $f$; then $\{x_j|j\in \bar{i}\}$ must be in the same layer with the same canalizing input. This follows immediately from the uniqueness of the equation . As a matter of fact, in each layer $M_j$, for $j=1,\cdots,r$, there are either one or two symmetric classes: one class has canalizing input 0, the other has canalizing input 1. Obviously, if one layer has more than 2 variables, then there are at least two variables with the same canalizing input. Hence, this layer has a symmetric class with at least 2 variables. 
On the other hand, all the variables from different layers must belong to different symmetric classes, and each layer contributes at most two symmetric classes. From equation , the last layer has at least two variables, so we have $r\leq n-1$. In summary, we have \[prop2\] For $n\geq2$, let $(k_1,\cdots,k_r)$ be the layer structure of a Boolean NCF $f$. If $k_j\geq3$ for some $j$, then $f$ is partially symmetric. Besides, if the NCF $f$ is $s$-symmetric, then $\lceil \frac{s}{2}\rceil \leq r\leq \min\{n-1,s\}$. Let $r$ be the number of layers of an $s$-symmetric NCF $f$; then $r\leq s\leq \min\{2r,n\}$. The following property is also a straightforward application of the uniqueness of equation . (Theorem 3.2 in [@Dan]) If a Boolean NCF $f$ contains $r_1$ layers with only one canalizing input and $r_2$ layers with two distinct canalizing inputs, then $f$ is ($r_1+2r_2$)-symmetric. (Theorem 3.7 in [@Dan]) An $n$-variable Boolean NCF is strongly asymmetric iff it is $n$-symmetric. We already know that strong asymmetry implies $n$-symmetry. If an NCF $f$ is $n$-symmetric, i.e., not partially symmetric, then each layer has at most two variables by Proposition \[prop2\]. If there is a permutation $\sigma$ such that $f(x_{\sigma (1)},\cdots, x_{\sigma (n)})=f(x_1,\cdots, x_n)$, let $\sigma=\sigma_1\cdots\sigma_m$ be a product of disjoint cycles; then we have $f(x_{\sigma_j (1)},\cdots, x_{\sigma_j (n)})=f(x_1,\cdots, x_n)$ for $j=1,\cdots,m$. Because of the uniqueness of the equation , we know $x_{\sigma_j(i)}$ and $x_i$ must be in the same layer. Since each layer has at most two variables, we know that the $\sigma_j$, $j=1,\cdots,m$, are all identities or transpositions (cycles of length 1 or 2). But if there were a transposition, then $f$ would be partially symmetric. Hence, all the $\sigma_j$ are identities. Therefore, $\sigma$ is the identity and $f$ is strongly asymmetric. There are $240$ $4$-variable strongly asymmetric NCFs. 
Let $n=4$, and let $f(x_1,x_2,x_3,x_4)$ be a $4$-symmetric NCF, or equivalently, a strongly asymmetric NCF. By Proposition \[prop2\], the layer number $r$ is either $2$ or $3$. Case 1: $r=2$. Let $(k_1,k_2)$ be the layer structure. First, we know $k_2\geq 2$ since $M_2$ is the last layer. Second, $f$ is $n$-symmetric, so $k_2\leq 2$ by Proposition \[prop2\]. Therefore, $k_2=2$, hence $k_1=2$, and we get $f=M_1(M_2\oplus 1)\oplus a$, $M_1=(x_i\oplus b)(x_j\oplus b\oplus 1)$ and $M_2=(x_k\oplus c)(x_l\oplus c\oplus 1)$, where $\{i,j,k,l\}=[4]=\{1,2,3,4\}$. So, obviously, there are $\binom{4}{2}\binom{2}{2}2^3=48$ distinct $4$-variable strongly asymmetric NCFs of this form. Case 2: $r=3$. Let $(k_1,k_2,k_3)$ be the layer structure. We have $k_3=2$, $k_1=k_2=1$ and $f=M_1(M_2(M_3\oplus 1)\oplus 1)\oplus a$, $M_1=x_i\oplus b$, $M_2=x_j\oplus c$ and $M_3=(x_k\oplus d)(x_l\oplus d\oplus 1)$. Obviously, there are $\binom{4}{1}\binom{3}{1}\binom{2}{2}2^4=192$ such $4$-variable strongly asymmetric NCFs. In total, there are $240$ $4$-variable strongly asymmetric NCFs. In Theorem 3.8 in [@Dan], it was claimed that the number of $n$-variable strongly asymmetric NCFs is $n!2^{n-1}$; when $n=4$, this number is $192$. Since $192<240$, it is clear from the above example that the enumeration in [@Dan] is incomplete. The function in Example 4 of [@Dan] can be written as $f(x_1,\ldots,x_6)=M_1(M_2\oplus1)$, where $M_1=(x_1\oplus 1)x_2x_3$ and $M_2=(x_4\oplus 1)(x_5\oplus 1)x_6$. It is clear that this function has two layers, since the last layer must have at least two variables. In the following, we will count the number of $s$-symmetric NCFs for $s=1,\cdots,n$. Let $N(n,s)$ be the cardinality of the set of all the $n$-variable $s$-symmetric Boolean NCFs. First, we have (Proposition 3.9 in [@Dan]) If $n\geq 2$, then $N(n,1)=4$. Since $f$ is 1-symmetric, i.e., totally symmetric, the layer number $r$ must be one and all the canalizing inputs must be the same. 
So $f$ must be one of the following functions: $x_1\cdots x_n$, $x_1\cdots x_n\oplus 1$, $(x_1\oplus 1)\cdots (x_n\oplus 1)$, $(x_1\oplus 1)\cdots (x_n\oplus 1)\oplus 1$. We have For $n\geq 2$, the number of all the $n$-variable $n$-symmetric NCFs (strongly asymmetric NCFs) is $$N(n,n)=2\sum_{\substack{\lceil \frac{n}{2}\rceil \leq r\leq n-1}}\sum_{\substack{k_{1}+\cdots+k_{r}=n\\1\leq k_{i}\leq 2,i=1,\dots,r-1, k_{r}=2}}\frac{n!}{k_{1}!k_{2}!\cdots k_{r}!}2^r.$$ If $n\geq 3$, then it can be simplified to $$N(n,n)=\sum_{\substack{\lceil \frac{n}{2}\rceil \leq r\leq n-1}}\sum_{\substack{k_{1}+\cdots+k_{r-1}=n-2\\1\leq k_{i}\leq 2,i=1,\dots,r-1, }}\frac{n!}{k_{1}!k_{2}!\cdots k_{r-1}!}2^r.$$ By Theorem \[th1\], we have $$f(x_{1},x_{2},\dots,x_{n})=M_{1}(M_{2}(\cdots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\cdots)\oplus 1)\oplus b.$$ 1\. It is clear that $b$ has two choices. 2\. By Proposition \[prop2\], we get $\lceil \frac{n}{2}\rceil \leq r\leq n-1$. 3\. For each layer structure $(k_1,\cdots,k_r)$, $k_1+\cdots+k_r=n$, $1\leq k_i\leq 2$ (Proposition \[prop2\]) for $i=1,\cdots,r-1$ and $k_r=2$, there are $$\binom{n}{k_1}\binom{n-k_1}{k_2}\binom{n-k_1-k_2}{k_3}\cdots\binom{n-k_1-\cdots- k_{r-1}}{k_r}=\frac{n!}{k_{1}!k_{2}!\cdots k_{r}!}$$ ways to distribute the $n$ variables to the layers $M_j$, $j=1,\cdots,r$. 4\. Each layer $M_j$, $j=1,\cdots,r$, is either $x_i\oplus a$ or $(x_k\oplus a)(x_l\oplus a\oplus 1)$; in either case, there are two choices. Hence, there are $2^r$ choices in total. Combining all the information above, we obtain the formula for $N(n,n)$. When $n=2,3,4$, we simplify the above formula and get $N(2,2)=4$, $N(3,3)=24$ and $N(4,4)=240$. We have obtained the formula for $N(n,1)$ and the formula for $N(n,n)$. In the following, we will find the formula for $N(n,s)$ for $n\geq 3$ and $2\leq s\leq n-1$. 
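Before turning to the general formula, the value $N(4,4)=240$ can be confirmed by exhaustive search over all $2^{16}$ Boolean functions of four variables, using the recursive characterization of nested canalizing functions (restricting the canalizing variable to its canalizing input yields a constant, and the other restriction is again an NCF). The sketch below is our own illustrative code; the total count $736$ it produces for four-variable NCFs matches the enumeration formula from [@Yua1].

```python
from itertools import permutations, product
from functools import lru_cache

N = 4

def restrict(tt, i, a):
    """Truth table with variable i (0-indexed, bit i of the input) fixed to a."""
    n = len(tt).bit_length() - 1
    out = []
    for m in range(1 << (n - 1)):
        low, high = m & ((1 << i) - 1), (m >> i) << (i + 1)
        out.append(tt[high | (a << i) | low])
    return tuple(out)

@lru_cache(maxsize=None)
def is_ncf(tt):
    n = len(tt).bit_length() - 1
    if len(set(tt)) == 1:
        return False                      # constant functions are not NCFs
    if n == 1:
        return True                       # x or x + 1
    for i in range(n):
        for a in (0, 1):
            if len(set(restrict(tt, i, a))) == 1 and is_ncf(restrict(tt, i, 1 - a)):
                return True
    return False

def permuted(tt, sigma):
    """Truth table of g(x_1,...,x_n) = f(x_{sigma(1)},...,x_{sigma(n)})."""
    n = len(sigma)
    out = [0] * (1 << n)
    for m in range(1 << n):
        ym = sum(((m >> sigma[j]) & 1) << j for j in range(n))
        out[m] = tt[ym]
    return tuple(out)

ncfs = [tt for tt in product((0, 1), repeat=1 << N) if is_ncf(tt)]
nonid = [s for s in permutations(range(N)) if s != tuple(range(N))]
strong = sum(1 for tt in ncfs if all(permuted(tt, s) != tt for s in nonid))
```

Here `strong` counts the functions fixed by no non-identity permutation of the variables, i.e., the strongly asymmetric NCFs.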
Let $n\geq 3$ and $2\leq s\leq n-1$. Then $N(n,s)$, the number of all the $n$-variable $s$-symmetric NCFs, is $$2\sum_{\substack{\lceil \frac{s}{2}\rceil \leq r\leq s}}\sum_{\substack{k_{1}+\cdots+k_{r}=n\\1\leq k_{i},i=1,\dots,r-1, k_{r}\geq 2}}\frac{n!}{k_{1}!k_{2}!\cdots k_{r}!}\sum_{\substack{t_{1}+\cdots+t_{r}=s\\1\leq t_{i}\leq \min\{2,k_i\},1\leq i\leq r}}\prod_{\substack{1\leq i\leq r}}((t_i-1)(2^{k_i}-2)+1-(-1)^{t_i}).$$ By Theorem \[th1\], we have $$f(x_{1},x_{2},\dots,x_{n})=M_{1}(M_{2}(\cdots(M_{r-1}(M_{r}\oplus 1)\oplus 1)\cdots)\oplus 1)\oplus b.$$ 1\. It is clear that $b$ has two choices. 2\. By Proposition \[prop2\], we get $\lceil \frac{s}{2}\rceil \leq r\leq s$. 3\. For each layer structure $(k_1,\cdots,k_r)$, $k_1+\cdots+k_r=n$, $1\leq k_i$ for $i=1,\cdots,r-1$ and $k_r\geq 2$, there are $$\frac{n!}{k_{1}!k_{2}!\cdots k_{r}!}$$ ways to distribute the $n$ variables to the layers $M_j$, $j=1,\cdots,r$. 4\. Each layer $M_i$, $i=1,\cdots,r$, contributes $t_i$ symmetric classes, where $1\leq t_i\leq \min\{2,k_i\}$ for $i=1,\cdots,r$ and $t_1+\cdots+t_r=s$, since $f$ is $s$-symmetric. 5\. For each fixed layer $M_i$ with fixed variable set $\{x_{i_j}|j=1,\cdots,k_i\}$, $i=1,\cdots, r$, we know $M_{i}=\prod_{j=1}^{k_{i}}(x_{i_{j}}\oplus a_{i_{j}})$, so in total there are $2^{k_i}$ choices for $M_i$. Two of them contribute one symmetric class (all canalizing inputs $a_{i_j}$ equal) and $2^{k_i}-2$ of them contribute two symmetric classes. Since $(t_i-1)(2^{k_i}-2)+1-(-1)^{t_i}=\left\{ \begin{array} [c]{ll}2, \quad \quad \quad t_i=1\\ 2^{k_i}-2,\quad t_i=2\\ \end{array} \right. $ we know there are $(t_i-1)(2^{k_i}-2)+1-(-1)^{t_i}$ choices of $M_i$ contributing $t_i$ symmetric classes, for $t_i=1,2$. Combining all the information above, we obtain the formula for $N(n,s)$. 
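The formula can be evaluated directly. The sketch below (our own illustrative code) reproduces $N(4,1)=4$ and $N(4,4)=240$, and checks that $\sum_s N(n,s)$ agrees with the total number of $n$-variable NCFs given by the enumeration formula from [@Yua1].

```python
from itertools import product
from math import factorial

def layer_structures(n, r):
    """All (k_1, ..., k_r) with k_i >= 1 for i < r and k_r >= 2, summing to n."""
    def gen(total, parts):
        if parts == 1:
            if total >= 2:
                yield (total,)
            return
        for k in range(1, total + 1):
            for rest in gen(total - k, parts - 1):
                yield (k,) + rest
    yield from gen(n, r)

def N(n, s):
    """Number of n-variable s-symmetric NCFs, by the closed formula."""
    total = 0
    for r in range((s + 1) // 2, s + 1):          # ceil(s/2) <= r <= s
        for ks in layer_structures(n, r):
            multi = factorial(n)
            for k in ks:
                multi //= factorial(k)            # multinomial coefficient
            tsum = 0
            for ts in product(*[range(1, min(2, k) + 1) for k in ks]):
                if sum(ts) == s:
                    p = 1
                    for t, k in zip(ts, ks):
                        p *= (t - 1) * (2 ** k - 2) + 1 - (-1) ** t
                    tsum += p
            total += multi * tsum
    return 2 * total

def total_ncfs(n):
    """Total number of n-variable NCFs: 2^(n+1) times the sum of
    multinomial coefficients over all layer structures."""
    t = 0
    for r in range(1, n):
        for ks in layer_structures(n, r):
            m = factorial(n)
            for k in ks:
                m //= factorial(k)
            t += m
    return t << (n + 1)
```

Summing over $t_i$ recovers $\prod_i 2^{k_i}=2^n$ per layer structure, which is why the totals match.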
We have $$\sum_{j=1}^{n}N(n,j)=2^{n+1}\sum_{r=1}^{n-1}\sum_{\substack{k_{1}+\cdots+k_{r}=n\\1\leq k_{i},i=1,\dots,r-1, k_r\geq 2 }}\frac{n!}{k_{1}!k_{2}!\cdots k_{r}!}.$$ The right-hand side is the cardinality of the set of all the $n$-variable Boolean NCFs, according to [@Yua1]. When $n\geq 2$, it is clear that $N(n,s)\geq 1$ for $s=1,\cdots,n$, so there exists an $s$-symmetric NCF for any $s$. Consequently, for any $s$, there exists an NCF which is not $s$-symmetric. In particular, there exists an $n$-variable NCF that is not $(n-1)$-symmetric (Corollary 3.3 in [@Dan]). From Corollary 4.9 in [@Yua1], the number of NCFs with layer number $r$ is $$2^{n+1}\sum_{\substack{k_{1}+\cdots+k_{r}=n\\1\leq k_{i},i=1,\dots,r-1, k_r\geq 2 }}\frac{n!}{k_{1}!k_{2}!\cdots k_{r}!}.$$ When $r$ attains its maximal value $n-1$, the above number simplifies to $n!2^n$. The number of all the $n$-variable strongly asymmetric NCFs with maximal layer number is $n!2^{n-1}$. Hence, the number of all the $n$-variable partially symmetric NCFs with maximal layer number is also $n!2^{n-1}$. Since $r=n-1$, we have $k_1=\cdots=k_{n-2}=1$, $k_{n-1}=2$, and the result follows from equation . This proposition implies $N(n,n)>n!2^{n-1}$ when $n\geq 4$, since there are other strongly asymmetric NCFs with layer number less than $n-1$. Conclusion ========== In this paper, we obtained the formula of the $b$-certificate complexity, and hence the formula of the certificate complexity, of any NCF. For symmetric or partially symmetric NCFs, we significantly simplified some proofs in [@Dan] and studied the relation between the layer number $r$ and the symmetry level $s$. We obtained a formula for the number of all the $n$-variable $s$-symmetric Boolean NCFs. In particular, we obtained the number of all the $n$-variable strongly asymmetric Boolean NCFs, and we pointed out that the number of all the strongly asymmetric NCFs is more than $n!2^{n-1}$ when $n\geq 4$. 
[99]{} Anne Canteaut and Marion Videau, Symmetric Boolean Functions, IEEE Transactions on Information Theory, Vol. 51, No. 8, August 2005, pp. 2791-2811. R. F. Arnold and M. A. Harrison, Algebraic Properties of Symmetric and Partially Symmetric Boolean Functions, IEEE Transactions on Electronic Computers, Vol. EC-12, No. 3, June 1963, pp. 244-251. S. R. Das and C. L. Sheng, On Detecting Total or Partial Symmetry of Switching Functions, IEEE Transactions on Computers, Vol. C-20, No. 3, March 1971, pp. 352-355. Alan Mishchenko, Fast Computation of Symmetries in Boolean Functions, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 22, No. 11, November 2003, pp. 1588-1593. J. T. Butler, T. Sasao, and M. Matsuura, Average path length of binary decision diagrams, IEEE Transactions on Computers, 54 (2005), pp. 1041-1053. S. A. Cook, C. Dwork, R. Reischuk, Upper and lower time bounds for parallel random access machines without simultaneous writes, SIAM J. Comput., 15 (1986), pp. 87-89. C. J. Mitchell, Enumerating Boolean Functions of Cryptographic Significance, J. Cryptology, Vol. 2, No. 3, pp. 155-170, 1990. P. Savicky, On the Bent Boolean Functions That Are Symmetric, Europ. J. Combin., Vol. 15, pp. 407-410, 1994. J.-Y. Cai, F. Green, and T. Thierauf, On the Correlation of Symmetric Functions, Math. Syst. Theory, Vol. 29, No. 3, pp. 245-258, 1996. S. Maitra and P. Sarkar, Maximum Nonlinearity of Symmetric Boolean Functions on Odd Number of Variables, IEEE Transactions on Information Theory, Vol. 48, No. 9, pp. 2626-2630, Sep. 2002. T. W. Cusick, Yuan Li, $k$-th Order Symmetric SAC Boolean Functions and Bisecting Binomial Coefficients, Discrete Applied Mathematics, 149 (2005), pp. 73-86. T. W. Cusick, Yuan Li, Pantelimon St$\breve{a}$nic$\breve{a}$, Balanced Symmetric Functions over $GF(p)$, IEEE Transactions on Information Theory, Vol. 54, pp. 1304-1307, 2008. 
Yuan Li, Zhaohong Xiang, To Determine Symmetric $PC(k)$ Boolean Functions by Its Definition, Sichuan Daxue Xuebao, 44 (2007), No. 2, pp. 209-212. Francis N. Castro, Oscar E. Gonz$\acute{a}$lez, and Luis A. Medina, Diophantine Equations With Binomial Coefficients and Perturbation of Symmetric Boolean Functions, IEEE Transactions on Information Theory, Vol. 64, No. 2, pp. 1347-1360, Feb. 2018. Na Li, Wen-Feng Qi, Symmetric Boolean Functions Depending on an Odd Number of Variables With Maximum Algebraic Immunity, IEEE Transactions on Information Theory, Vol. 52, No. 5, pp. 2271-2273, May 2006. Daniel J. Rosenkrantz, Madhav V. Marathe, S. S. Ravi, Richard E. Stearns, Symmetric Properties of Nested Canalyzing Functions, Discrete Mathematics and Theoretical Computer Science, DMTCS Vol. 21:4, 2019, $\#$19. Q. He, M. Macauley, Stratification and enumeration of Boolean functions by canalizing depth, Physica D, 314 (2016), pp. 1-8. Ilya Shmulevich and Stuart A. Kauffman, Activities and Sensitivities in Boolean Network Models, Physical Review Letters, Vol. 93, No. 4, 23 July 2004, 048701. A. Jarrah, B. Raposa and R. Laubenbacher, Nested Canalizing, Unate Cascade, and Polynomial Functions, Physica D, 233 (2007), pp. 167-174. H. Morizumi, Sensitivity, Block Sensitivity, and Certificate Complexity of Unate Functions and Read-Once Functions, in: Diaz J., Lanese I., Sangiorgi D. (eds), Theoretical Computer Science, TCS 2014, Lecture Notes in Computer Science, Vol. 8705, Springer, Berlin, Heidelberg. Claus Kadelka, Yuan Li, Jack Kuipers, John O. Adeyeye, Reinhard Laubenbacher, Multistate Nested Canalizing Functions and Their Networks, Theoretical Computer Science, 675 (2017), pp. 1-14. Claus Kadelka, Jack Kuipers, Reinhard Laubenbacher, The influence of canalization on the robustness of Boolean networks, Physica D, 353-354 (2017), pp. 39-47. Lori Layne, Elena Dimitrova, Matthew Macauley, Nested Canalyzing Depth and Network Stability, Bull. Math. Biol., 74 (2012), pp. 422-433. 
Winfried Just, The steady state system problem is NP-hard even for monotone quadratic Boolean dynamical systems, Preprint, 2006. S. A. Kauffman, C. Peterson, B. Samuelsson, C. Troein, Random Boolean Network Models and the Yeast Transcription Network, Proc. Natl. Acad. Sci., 100 (25) (2003), pp. 14796-14799. Claire Kenyon and Samuel Kutin, Sensitivity, block sensitivity, and $l$-block sensitivity of Boolean functions, Information and Computation, 189 (2004), pp. 43-53. D. Murrugarra and R. Laubenbacher, The number of multistate nested canalizing functions, Physica D: Nonlinear Phenomena, 241 (2012), pp. 929-938. Yuan Li, John O. Adeyeye, Sensitivity and Block Sensitivity of Nested Canalyzing Functions, arXiv:1209.1597v1 \[cs.DM\], 7 Sep 2012. Yuan Li, John O. Adeyeye, David Murrugarra, Boris Aguilar and Reinhard Laubenbacher, Boolean Nested Canalizing Functions: A Comprehensive Analysis, Theoretical Computer Science, 481 (2013), pp. 24-36. Yuan Li, John O. Adeyeye, Maximal sensitivity of Boolean Nested Canalizing Functions, Theoretical Computer Science, 791 (2019), pp. 116-122. R. Lidl and H. Niederreiter, Finite Fields, Cambridge University Press, New York (1977). N. Nisan, CREW PRAMs and decision trees, SIAM J. Comput., 20 (6) (1991), pp. 999-1070. David Rubinstein, Sensitivity vs. block sensitivity of Boolean functions, Combinatorica, 15 (2) (1995), pp. 297-299. C. E. Shannon, A Symbolic Analysis of Relay and Switching Circuits, AIEE Trans., 57: 713-723, 1938. U. Vishkin and A. Wigderson, Trade-offs between depth and width in parallel computation, SIAM J. Comput., 14 (1985), pp. 303-314. Hao Huang, Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture, Annals of Mathematics, Vol. 190, No. 3, November 2019, pp. 945-955. [^1]: Mathematics Subject Classification: 05A05, 05A15
--- abstract: 'We define and study a notion of discrete homology theory for metric spaces. Instead of working with simplicial homology, our chain complexes are given by Lipschitz maps from an $n$-dimensional cube to a fixed metric space. We prove that the resulting homology theory satisfies a discrete analogue of the Eilenberg-Steenrod axioms, and prove a discrete analogue of the Mayer-Vietoris exact sequence. Moreover, this discrete homology theory is related to the discrete homotopy theory of a metric space through a discrete analogue of the Hurewicz theorem. We study the class of groups that can arise as discrete homology groups and, in this setting, we prove that the fundamental group of a smooth, connected, metrizable, compact manifold is isomorphic to the discrete fundamental group of a ‘fine enough’ rectangulation of the manifold. Finally, we show that this discrete homology theory can be coarsened, leading to a new non-trivial coarse invariant of a metric space.' author: - Hélène Barcelo - Valerio Capraro - 'Jacob A. White' title: Discrete Homology Theory for Metric Spaces --- Introduction {#se:introduction} ============ Discrete homotopy theory is a discrete analogue of homotopy theory, associating a bigraded sequence of groups to a simplicial complex, capturing its combinatorial structure, rather than its topological structure. Originally called A-theory, it was developed in [@Kr-La98], [@Ba-Kr-La-We01], [@Ba-La05], and [@Ba-Ba-Lo-La06], which built on the work of Atkin [@At74],[@At76]. Discrete homotopy theory can be equivalently defined for finite connected graphs, resulting in an algebraic invariant of finite connected graphs and graph homomorphisms, in the same way that classical homotopy theory gives invariants of topological spaces and continuous maps: it associates a sequence $A_i(G)$ of groups to a finite connected graph. 
Discrete homotopy theory has been applied to different areas, including subspace arrangements [@Ba-Se-Wh11], amenability of graphs [@Ca12a], and pattern recognition [@Ca12b]. Discrete homotopy theory can also be generalized to arbitrary metric spaces, depending on a parameter $r$. The rationale for generalizing to arbitrary metric spaces will be given after Theorem \[th:lastintro\]. Is there a discrete homology theory that is related to discrete homotopy theory in the same way that classical homology is related to classical homotopy theory? The goal of this paper is to answer this question. We construct a collection of functors from the category of metric spaces and Lipschitz maps to the category of abelian groups. We define a discrete analogue of the Eilenberg-Steenrod axioms. Our first theorem is: The discrete homology theory satisfies the discrete analogue of the Eilenberg-Steenrod axioms. In the process, we define a notion of discrete cover, and prove an analogue of the Mayer-Vietoris exact sequence. Continuing our analogy with classical homology, we prove the Hurewicz theorem in dimension 1. The abelianization of the discrete fundamental group of a metric space (at scale $r$) is isomorphic to its first discrete homology group (at scale $r$). Then we study the class of groups that can arise as discrete homology groups of a metric space, and we obtain the following proposition. For any abelian group $G$ and $\bar n\in\mathbb N$, there is a finite connected graph $\Gamma$ such that the $\bar n$-dimensional discrete homology group of $\Gamma$ (at scale $1$) is $G$ and all others are trivial. It is worth mentioning that, in order to prove the previous proposition, we obtain a new result in discrete homotopy theory: \[th:lastintro\] If $R$ is a ‘fine enough’ rectangulation of a compact, metrizable, smooth and path-connected manifold $M$, then the classical fundamental group of $M$ is isomorphic to the discrete fundamental group of $R$ at scale $1$. 
A second motivation for the present paper comes from coarse geometry. Informally speaking, coarse geometry is the study of a metric space from a large scale point of view. Coarse algebraic topology is useful for approaching problems from different areas. For instance, Roe’s coarse cohomology [@Ro93] was invented to perform index theory on non-compact manifolds, and Higson and Roe’s coarse homology [@Hi-Ro95] was used to formulate coarse versions of Baum-Connes’s and Novikov’s conjectures [@Yu95]. Other coarse homology theories have been discovered over the last two decades, including Block-Weinberger’s *uniformly finite homology* [@Bl-We92] and Nowak-Špakula’s *controlled coarse homology* [@No-Sp10]. The second goal of this paper is to construct a new coarse invariant obtained from the discrete homology groups. Hence, instead of restricting ourselves to finite graphs, we work in the more general setting of metric spaces $X$ and we construct discrete homology groups depending on a scaling parameter $r$. Since a typical coarse object is a sequence of discrete objects, we consider $\widetilde{DH}_{n,r}(X)$, the countably infinite direct product of copies of ${{{DH_{n, r}(X)}}}$: The direct limit of $\widetilde{DH}_{n,r}(X)$, as $r\to\infty$, is a coarse invariant of $X$. This coarse homology theory is a relevant new object with many interesting open questions, which we will study in the sequel to this paper.\ The paper is organized as follows: in the next section, we review some terminology from discrete homotopy theory, and extend discrete homotopy theory to metric spaces. We also introduce discrete homology theory at scale $r$. In Section \[se:eilenberg-steenrod\], we give discrete analogues of the Eilenberg-Steenrod axioms, and prove the discrete analogue of the Mayer-Vietoris exact sequence. 
The Eilenberg-Steenrod axioms allow us to relate discrete homology with discrete homotopy equivalence, in the same way that classical homology is related to classical homotopy equivalence. The results in this section also justify referring to our theory as ‘discrete homology’ theory. We then study the discrete analogue of the Hurewicz map in Section \[se:hurewicz\], followed by several constructions in Section \[se:constructions\]. In Section \[se:coarse\], we define the discrete coarse homology theory. Finally, we end the paper with some conclusions, and open problems.\ **Acknowledgements.** We thank Eric Babson, Dennis Dreesen, Antoine Gournay, Thibault Pillon and Nick Wright for helpful discussions. Discrete homotopy theory and Cubical discrete homology theory {#se:homology} ============================================================= First, we recall definitions from the discrete homotopy theory of graphs, and extend them to metric spaces. Given two metric spaces $(X,d_X)$, $(Y,d_Y)$ and $r>0$, a function $f:X\to Y$ is *$r$-Lipschitz* if $d_Y(f(x_1),f(x_2))\leq rd_X(x_1,x_2)$, for all $x_1,x_2\in X$. When $X$ is the vertex set of a connected graph $G$, there is a well-defined metric $d_G$, the length of the shortest path connecting two vertices. For connected graphs, the definition of graph homomorphism $f: G \to H$ appearing in [@Ba-Kr-La-We01] is equivalent to saying that the map $f$ is $1$-Lipschitz between the corresponding metric spaces $(V(G), d_G)$ and $(V(H), d_H)$. In the following definition, $\{0,\ldots, m\}$ is equipped with the metric $d(a,b)=|a-b|$ and the cartesian product $X\times \{0, \ldots, m \}$ is equipped with the $\ell^1$-metric. Let $X, Y$ be metric spaces, and let $f,g:X \to Y$ be $r$-Lipschitz maps. Then $f$ and $g$ are $r$-discrete homotopic if there exists a non-negative integer $m$ and an $r$-Lipschitz map $F: X \times \{0, \ldots, m \} \to Y$ such that $F(-, 0) = f$ and $F(-, m) = g$. 
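A minimal sketch of this definition (our own illustrative code, not from the paper): we verify that two constant maps from an edge into a $4$-cycle are $1$-discrete homotopic by exhibiting an explicit $1$-Lipschitz homotopy, and that collapsing the homotopy into a single step fails the Lipschitz condition.

```python
def is_r_homotopy(F, f, g, dX, dY, X, m, r):
    """Check that F : X x {0,...,m} -> Y is an r-Lipschitz homotopy from f
    to g, where the product carries the l^1 metric dX(x,y) + |t - u|."""
    if any(F(x, 0) != f(x) or F(x, m) != g(x) for x in X):
        return False
    pts = [(x, t) for x in X for t in range(m + 1)]
    return all(dY(F(x, t), F(y, u)) <= r * (dX(x, y) + abs(t - u))
               for (x, t) in pts for (y, u) in pts)

# X = a single edge {0, 1}; Y = the 4-cycle with its graph metric
dX = lambda a, b: abs(a - b)
dC4 = lambda a, b: min((a - b) % 4, (b - a) % 4)
f = lambda x: 0          # constant map at vertex 0
g = lambda x: 2          # constant map at the antipodal vertex 2
F = lambda x, t: t       # walk 0 -> 1 -> 2, one step per time unit
```

Here `F` moves one cycle edge per unit of "time", so it is $1$-Lipschitz; the map `(x, t) -> 2t` with $m=1$ jumps distance $2$ in one step and is not.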
This definition generalizes the notion of discrete homotopic maps appearing in [@Ba-Kr-La-We01]. Since our definition of two $r$-Lipschitz maps being ‘homotopic’ involves $\{0, \ldots, m \}$, a finite discrete metric space, we continue to call the resulting theory discrete homotopy theory. Now we define the discrete fundamental group of a metric space (at scale $r$). Let $(X,d)$ be a metric space. An *$r$-path connecting $x$ to $y$* is a finite sequence of points $x_0x_1\ldots x_nx_{n+1}$ such that $x_0=x$, $x_{n+1}=y$ and $d(x_i,x_{i+1})\leq r$, for all $i$. Fix a base point $p\in X$; an *$r$-loop based at $p$* is an $r$-path such that $x_0=x_{n+1}=p$. Denote by $\mathcal C_r(X,p)$ the set of $r$-loops based at $p$. The set $\mathcal C_r(X,p)$ is a group under the operation of concatenation of $r$-loops, that is, $$(x_0x_1\ldots x_{m}x_{m+1})(y_0y_1\ldots y_{n}y_{n+1}):=x_0x_1\ldots x_{m}x_{m+1}y_0y_1\ldots y_{n}y_{n+1}.$$ \[defin:r-homotopy\] Every $r$-loop $x_0 x_1 \ldots x_n x_{n+1}$ is $r$-homotopy equivalent to $x_0 x_1 \ldots x_{n+1} p$. Moreover, given two $r$-loops $x_0 x_1 \ldots x_n x_{n+1}$ and $y_0y_1\ldots y_n y_{n+1}$ of the same length, the two $r$-loops are $r$-homotopy equivalent if there is a formal matrix $$\left( \begin{array}{ccccc} x_0 & x_1 & \ldots & x_n & x_{n+1} \\ z_0^1 & z_1^1 & \ldots & z_{n}^1 & z_{n+1}^1 \\ \vdots & \vdots & \ldots & \vdots & \vdots \\ z_0^t & z_1^{t} & \ldots & z_{n}^{t} & z_{n+1}^t \\ y_0 & y_1 & \ldots & y_{n}& y_{n+1} \\ \end{array} \right)$$ where - every row is an $r$-loop based at $p$, - every column is an $r$-path based at $p$. The $r$-homotopy equivalence, denoted by $\sim_r$, is an equivalence relation on $\mathcal C_r(X,p)$ which is compatible with the operation of concatenation. 
\[defin:fundamental group at scale r\] The discrete fundamental group at scale $r$ is $$A_{1,r}(X,p):=\mathcal C_{r}(X,p)/\sim_r.$$ A metric space $(X,d)$ is *connected at scale $r>0$* if for all $x,y\in X$, there are $x_0,x_1,\ldots,x_{n},x_{n+1}\in X$ such that $x_0=x$, $x_{n+1}=y$, and $d(x_i,x_{i+1})\leq r$, for all $i$. If $(X,d)$ is connected at scale $r$, then this group does not depend on $p$, so it will be denoted by $A_{1,r}(X)$. Observe that if $X$ is a finite connected graph and $r=1$, then $A_{1,1}(X)$ is the discrete fundamental group of the graph as defined in [@Ba-Kr-La-We01]. There are also higher homotopy groups $A_{n,r}(X)$; for $r = 1$ and $X$ a graph, they are defined in [@Ba-Kr-La-We01]. Alternatively, discrete homotopy theory is defined as a cubical homotopy theory in [@Ba-Ba-Lo-La06]. Now we define our discrete homology theory. Note that discrete homotopy theory involves formal matrices, which can be thought of as finite subspaces of the metric space $\mathbb{Z}^2$. Higher discrete homotopy groups also involve finite subspaces of $\mathbb{Z}^n$, which can be thought of as cubical subdivisions of a cube. With this motivation, we define our discrete homology groups in terms of Lipschitz maps from discrete cubes. Let $Q_n$ denote the $n$-dimensional cube, represented by the set of points $\{(a_1,\ldots,a_n)\in\mathbb R^n : a_i\in\{0,1\}\}$, equipped with the Hamming distance [@Ha50]. A *singular $(n,r)$-cube* is an $r$-Lipschitz map $\sigma : Q_n\to X$. Let $L_{n,r}(X)$ be the free abelian group generated by all singular $(n,r)$-cubes. Let $\sigma : Q_n\to X$ be an $(n,r)$-cube, with $n\geq1$, and let $i\in\{1,\ldots,n\}$. 
- The $i$-th front face of $\sigma$ is the singular $(n-1,r)$-cube $$(A_i^n\sigma)(a_1,\ldots,a_{n-1})=\sigma(a_1,\ldots,a_{i-1},0,a_i,\ldots,a_{n-1}),$$ - The $i$-th back face of $\sigma$ is the singular $(n-1,r)$-cube $$(B_i^n\sigma)(a_1,\ldots,a_{n-1})=\sigma(a_1,\ldots,a_{i-1},1,a_i,\ldots,a_{n-1}).$$ A singular $(n,r)$-cube $\sigma$ is *degenerate* if $A_i^n \sigma = B_i^n \sigma$ for some $i$. Let $D_{n,r}(X)$ be the free abelian group generated by all degenerate $(n,r)$-cubes and let $C_{n,r}(X) = L_{n,r}(X) / D_{n,r}(X)$, whose elements are called $(n,r)$-chains in $X$. The *boundary* of a singular $(n,r)$-cube $\sigma : Q_n\to X$ is $$\partial \sigma:=\sum_{i=1}^n(-1)^{i}\left(A_i^n\sigma-B_i^n\sigma\right).$$ The boundary operator extends to a group homomorphism $\partial_n : L_{n,r}(X)\to L_{n-1,r}(X)$. Since $\partial_n(D_{n,r}(X)) \subseteq D_{n-1,r}(X)$, we obtain a map $\partial_n: C_{n,r}(X)\to C_{n-1,r}(X)$ and a chain complex $(\mathcal{C}, \partial)$. Let ${{DH_{\ast, r}(X)}}$ denote the homology groups of the resulting chain complex; these are the *discrete homology groups at scale $r$*. More precisely, define $B_{n,r}(X):=\text{Im}(\partial_{n+1})$, the *group of $(n,r)$-boundaries*, and $Z_{n,r}(X):=\text{ker}(\partial_{n})$, the *group of $(n,r)$-cycles*. Since $\partial_n\circ\partial_{n+1}=0$, one has $B_{n,r}(X)\subseteq Z_{n,r}(X)$ and $${{{DH_{n, r}(X)}}}:=Z_{n,r}(X)/B_{n,r}(X).$$ Similarly, let $A \subset X$ be a subspace of $X$. Then one can show that $\partial_n(C_{n,r}(A)) \subseteq C_{n-1,r}(A)$, so that there is a map $\partial_n: C_{n,r}(X, A) \to C_{n-1, r}(X, A)$, where $C_{n,r}(X,A) = C_{n,r}(X) / C_{n,r}(A)$. The resulting homology of this chain complex is the *relative* homology ${{DH_{\ast, r}(X, A)}}$. 
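The identity $\partial_{n-1}\circ\partial_n=0$ behind this chain complex can be checked numerically. Below is a small sketch (our own illustrative code, not from the paper) that builds a singular $(3,1)$-cube in $\mathbb{Z}^2$ with the $\ell^1$ metric, verifies the Lipschitz condition on adjacent vertices of $Q_3$, and confirms that the boundary of its boundary vanishes; cubes are stored as tuples of images, with formal sums as integer-valued counters.

```python
from itertools import product
from collections import Counter

def verts(n):
    """Vertices of Q_n in a fixed order."""
    return list(product((0, 1), repeat=n))

def face(sigma, i, val):
    """i-th front (val=0) / back (val=1) face of an n-cube sigma, where
    sigma is a dict from the vertices of Q_n to points of the space."""
    n = len(next(iter(sigma)))
    return {a: sigma[a[:i] + (val,) + a[i:]] for a in verts(n - 1)}

def boundary(chain, n):
    """Boundary of a formal sum of n-cubes (keys: tuples of images)."""
    out = Counter()
    for tau, c in chain.items():
        sigma = dict(zip(verts(n), tau))
        for i in range(1, n + 1):
            A = tuple(face(sigma, i - 1, 0)[a] for a in verts(n - 1))
            B = tuple(face(sigma, i - 1, 1)[a] for a in verts(n - 1))
            out[A] += c * (-1) ** i
            out[B] -= c * (-1) ** i
    return Counter({k: v for k, v in out.items() if v})

# a singular (3,1)-cube in Z^2 with the l^1 metric: (a1,a2,a3) -> (a1+a3, a2)
sigma = {a: (a[0] + a[2], a[1]) for a in verts(3)}
l1 = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])

# 1-Lipschitz on adjacent vertices of Q_3 (Hamming distance 1)
assert all(l1(sigma[a], sigma[b]) <= 1
           for a in verts(3) for b in verts(3)
           if sum(x != y for x, y in zip(a, b)) == 1)

c3 = Counter({tuple(sigma[a] for a in verts(3)): 1})
```

Applying `boundary` twice to `c3` returns the empty counter, i.e., $\partial_2(\partial_3\sigma)=0$.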
Discrete Analogue of the Eilenberg-Steenrod Axioms {#se:eilenberg-steenrod} ================================================== The purpose of this section is to show that the discrete homology theory ${{DH_{*, r}(X)}}$ satisfies a discrete analogue of the classical Eilenberg-Steenrod axioms [@Ei-St45] and a discrete analogue of the classical Mayer-Vietoris exact sequence [@Ma29],[@Vi30]. In order to prove these results, we need a notion of ‘discrete open cover’. As motivation for our definition, observe that, for the classical results, the notion of open set is not important in itself, but rather the following geometric property: a simplex can be replaced with a sum of simplices, each of which lies in the interior of some open set of the cover. This is our motivation for the following discrete analogue of a cover. \[defin:n-dimensional cover\] An *$n$-dimensional discrete cover* at scale $r>0$ of a metric space $X$ is a family $\mathcal U=\{U_i\}$ of subsets of $X$ such that: 1. $X=\bigcup U_i$, 2. every non-degenerate $(n,r)$-cube $\sigma$ is a singular cube in $U_i$, for some $i$. Two-dimensional discrete covers (for graphs and with $r=1$) have already appeared in [@Ba-Kr-La-We01] to prove the Seifert-van Kampen Theorem for discrete fundamental groups. \[rem:2-dimension does not imply n-dimension\] [Two-dimensional discrete covers are not necessarily $3$-dimensional discrete covers. Take the 3-dimensional Hamming cube $Q_3$, and consider the cover made of its six 2-dimensional faces. This is a 2-dimensional cover, but not a 3-dimensional cover.]{} \[defin:discrete open cover\] A *discrete cover* at scale $r$ is an $n$-dimensional discrete cover, for all $n$. Given subsets $A,B$ of $X$, consider the canonical inclusions $i^1:A \cap B \to A$, $i^2:A \cap B \to B$, $j^1:A \to A \cup B$, $j^2: B \to A \cup B$.\ Let $X$ be a metric space and let $\{A, B\}$ be a discrete cover of $X$ at scale $r > 0$. 
Then we have the following long exact sequence: $$\xymatrix@1{ \ldots \ar[r] & {{{DH_{n, r}(A\cap B)}}} \ar[r]^-{\mbox{diag}} & {{{DH_{n, r}(A)}}} \oplus {{{DH_{n, r}(B)}}} \ar[r]^-{\mbox{diff}} & {{{DH_{n, r}(A\cup B)}}} \ar[r]^-{\partial_*} & {{DH_{n-1, r}(A\cap B)}} \ar[r] & \ldots }$$ where diag $= (i^1_*, i^2_*)$ and diff $= j^1_* - j^2_*$. Consider the following sequence of chain complexes: $$\xymatrix@1{ 0 \ar[r] & C_{*,r}(A \cap B)\ar[r]^-{\mbox{diag}} & C_{*,r}(A) \oplus C_{*,r}(B) \ar[r]^-{\mbox{diff}} & C_{*,r}(A \cup B) \ar[r] & 0 }$$ By the zig-zag lemma, if the above sequence is short exact, then we obtain a long exact sequence in homology. Injectivity of diag, and exactness at $C_{n,r}(A) \oplus C_{n,r}(B)$, are both clear. To prove surjectivity of diff, consider a singular $(n,r)$-cube $\sigma$ in $A \cup B$. Since $\{A, B\}$ is a discrete cover, $\sigma$ lies in $A$ or in $B$, and hence in the image of diff. Now we give a counterexample to the Mayer-Vietoris Theorem when $\{A, B\}$ is not a discrete cover. \[ex:no MV\] [Consider the graph $C_5$, with vertex set $\{x_1,\ldots,x_5\}$, where $x_i$ and $x_{i+1}$, for $i=1,\ldots,4$, and $x_1$ and $x_5$ are adjacent. Let $A=\{x_1\}$ and $B=\{x_2,\ldots,x_5\}$. Then $DH_{n,1}(A)=DH_{n,1}(B)=DH_{n,1}(A\cap B)=\{0\}$, for all $n\geq1$; hence, if the Mayer-Vietoris Theorem held for $\{A, B \}$, we would have $DH_{1,1}(C_5)=\{0\}$. However, Theorem \[th:abelianization1\] can be used to show that $DH_{1,1}(C_5) = \mathbb{Z}$. ]{} Let $Met$ be the category whose objects are triples $(X,Y, r)$ where $X$ is a metric space, $Y$ is a subspace of $X$, and $r$ is a real number. The morphisms are pairs $(f,k): (A,B,r) \to (X,Y,kr)$ where $f: A \to X$ is $k$-Lipschitz and $f(B) \subseteq Y$. We can now formulate the discrete analogue of the Eilenberg-Steenrod axioms. A *discrete homology theory at scale $r$* consists of: - A sequence ${{{DH_{n, r}(-)}}}$ of functors from $Met$ to the category of abelian groups. - A natural transformation $\partial: {{{DH_{n, r}(X,A)}}} \to {{DH_{n-1, r}(A)}}$. These functors are subject to the following axioms: 1. (homotopy) Discrete homotopic maps induce the same map in discrete homology.
That is, if $f:(X,A)\to(Y,B)$ is discrete homotopic to $g:(X,A)\to(Y,B)$, then their induced maps are the same. 2. (excision) Let $\{A, B\}$ be a discrete cover of $X$. Then the inclusion $(B, A\cap B)\to (X, A)$ induces isomorphisms ${{{DH_{n, r}(B,A\cap B)}}} \to {{{DH_{n, r}(X,A)}}}$, for all $n$. 3. (dimension) Let $P$ be the one-point space; then ${{{DH_{n, r}(P,\emptyset)}}}=\{0\}$, for all $n\geq1$ and for all $r>0$. 4. (long exact sequence) Each pair $(X,A)$ induces a long exact sequence via the inclusion maps $i:A\to X$ and $j:(X,\emptyset)\to(X,A)$: $$\xymatrix@1{ \ldots \ar[r] & {{{DH_{n, r}(A)}}}\ar[r]^-{i_*} & {{{DH_{n, r}(X)}}} \ar[r]^-{j_*} & {{{DH_{n, r}(X,A)}}} \ar[r]^-{\partial_*} & {{DH_{n-1, r}(A)}}\ar[r] & \dots }$$ We have defined the functors ${{DH_{\ast, r}(X,A)}}$. The natural transformation $\partial: {{{DH_{n, r}(X,A)}}} \to {{DH_{n-1, r}(A)}}$ is defined as follows: Let $\sigma$ be a representative of a class in ${{{DH_{n, r}(X,A)}}}$, and let $\tau$ be an $n$-chain in $X$ which maps onto $\sigma$ (under the quotient map $C_{n,r}(X) \to C_{n,r}(X) / C_{n,r}(A)$). Then $\partial([\sigma]) = [ \partial(\tau)]$. One can check that $\partial(\tau) \in C_{n-1,r}(A)$, and that the map is well-defined. \[th:eilenbergsteenrod\] The functors ${{DH_{\ast, r}(X,A)}}$ form a discrete homology theory. We prove the result item-by-item. 1. (homotopy) Suppose $f$ and $g$ are $k$-Lipschitz, and discrete homotopic. Let $F: X \times [m] \to Y$ be a discrete homotopy from $f$ to $g$. For $\sigma \in C_{n,r}(X)$, let $\Phi(\sigma) := \sum_{i=0}^{m-1} \sigma_i \in C_{n+1,kr}(Y)$, where $\sigma_i$ is the $(n+1,kr)$-cube such that $A_1 \sigma_i = F(\sigma, i)$ and $B_1 \sigma_i = F(\sigma, i+1)$. The map $\Phi$ is a chain homotopy between the chain maps $C_{\star, r}(X)\to C_{\star, kr}(Y)$ induced by $f$ and $g$; the result follows from homological algebra. 2.
(Excision) Let $\mathcal U=\{A,B\}$ and let $C_{n,r}^{\mathcal U}(X)$ be the subgroup of $C_{n,r}(X)$ generated by singular $(n,r)$-cubes with image in $A$ or in $B$. Then $C_{n,r}^{\mathcal U}(X)=C_{n,r}(X)$, for all $n$, since $\mathcal U$ is a discrete cover. Hence we have an isomorphism in homology $$\begin{aligned} \label{eq:iso} H_{n,r}^{\mathcal U}(X)\simeq {{{DH_{n, r}(X)}}}.\end{aligned}$$ Now observe that the map $$C_{n,r}(B)/C_{n,r}(A\cap B)\to C_{n,r}^{\mathcal U}(X)/C_{n,r}(A),$$ induced by inclusion, is obviously an isomorphism, since both quotient groups are free with basis the singular $(n,r)$-cubes in $B$ which do not lie in $A$. Hence, using (\[eq:iso\]), we have an isomorphism in homology $${{{DH_{n, r}(B,A\cap B)}}}\simeq {{{DH_{n, r}(X,A)}}}.$$ 3. (dimension) Observe that, for $n\geq1$ and $r>0$, every singular $(n,r)$-cube is degenerate, and so we have $C_{n,r}(P)=\{0\}$. Consequently, ${{{DH_{n, r}(P)}}}$ is trivial for all $n\geq1$ and for all $r>0$. 4. (long exact sequence) As in the classical case, the maps $i$ and $j$ give rise to a short exact sequence of chain complexes; the result follows from the zig-zag lemma. Note that the zig-zag lemma gives the existence and uniqueness of the natural transformation $\partial: {{{DH_{n, r}(X,A)}}} \to {{DH_{n-1, r}(A)}}$. Hurewicz theorem in dimension one {#se:hurewicz} ================================= The aim of this section is to prove an analogue of the Hurewicz theorem for discrete homology theory: ${{DH_{1, r}(X)}}$ is isomorphic to the abelianization of the discrete fundamental group of $X$ at scale $r$. Definitions \[defin:r-homotopy\] and \[defin:fundamental group at scale r\] are used in this section. \[th:abelianization1\] Let $(X,d)$ be a metric space which is connected at scale $r$. Then ${{DH_{1, r}(X)}}$ is isomorphic to the abelianization of $A_{1,r}(X)$. We construct a map $$\phi:A_{1,r}(X,p)\to {{DH_{1, r}(X)}}$$ such that 1. $\phi$ is a group homomorphism, 2. $\phi$ is surjective, 3.
$\text{Ker}(\phi)=[A_{1,r}(X,p),A_{1,r}(X,p)]$, the commutator subgroup of $A_{1,r}(X,p)$.\ **Step 1. Definition of $\phi$.** Let $[\gamma]\in A_{1,r}(X,p)$ and choose a representative $\gamma=x_0x_1\ldots x_nx_{n+1}$, with $x_0=x_{n+1}=p$, such that $x_i\neq x_{i+1}$, for all $i$. This condition and the fact that $d(x_i,x_{i+1})\leq r$ imply that the maps $\sigma_i:Q_1\to X$, defined as $\sigma_i(0)=x_i$, $\sigma_i(1)=x_{i+1}$, are non-degenerate $(1,r)$-cubes, for all $i$. Therefore $\sum_i\sigma_i\in C_{1,r}(X)$. Now observe that $$\partial\left(\sum_i\sigma_i\right)=(x_1-x_0)+(x_2-x_1)+\ldots+(x_{n+1}-x_n)=0,$$ since $x_0=x_{n+1}$. Consequently $\sum_i\sigma_i\in Z_{1,r}(X)$. Denote by $\overline\gamma$ the image of $\sum_i\sigma_i$ in ${{DH_{1, r}(X)}}$ and define $\phi([\gamma])=\overline\gamma$.\ **Step 2. $\phi$ is well defined.** Let $\gamma_1=px_1x_2\ldots x_sp$ and $\gamma_2=py_1y_2\ldots y_sp$ be two $r$-homotopy equivalent $r$-loops (padding with repeated points, we may assume that they have the same length). Then we have a homotopy matrix $$\left( \begin{array}{ccccc} p & x_1 & \ldots & x_s & p \\ p & z_1^1 & \ldots & z_{s}^1 & p \\ \vdots & \vdots & \ldots & \vdots & \vdots \\ p & z_1^{t} & \ldots & z_{s}^{t} & p \\ p & y_1 & \ldots & y_{s}& p \\ \end{array} \right)$$ Observe that all *squares* $\{p,p,z_1^1,x_1\}, \{x_1,z_1^1,z_2^1,x_2\}, \ldots$ forming this homotopy matrix are in fact singular $(2,r)$-cubes. Consider the $(2,r)$-chain $\tau$ given by summing over all the squares appearing in the matrix. To compute the boundary, observe that the sides lying inside the matrix are traversed once in each direction, so that the boundary of $\tau$ equals the chain associated to the boundary of the matrix, that is, the chain of $\gamma_1$ minus the chain of $\gamma_2$. In other words, $\overline\gamma_1=\overline\gamma_2$.\ **Step 3. $\phi$ is a group homomorphism.** This follows by direct computation. Indeed, let $\gamma_1=px_1x_2\ldots x_mp$ and $\gamma_2=py_1y_2\ldots y_np$.
One has $$\overline\gamma_1=\sum_i\sigma_i,\qquad\qquad\overline\gamma_2=\sum_i\mu_i,$$ where $\sigma_i(0)=x_i, \sigma_i(1)=x_{i+1}, \mu_i(0)=y_i, \mu_i(1)=y_{i+1}$. Consequently, $$\begin{aligned} \label{eq:first} \overline\gamma_1+\overline\gamma_2=\sum_i\sigma_i+\sum_i\mu_i.\end{aligned}$$ On the other hand, $\gamma_1\gamma_2=px_1\ldots x_mpy_1\ldots y_np$. Therefore, $$\overline{\gamma_1\gamma_2}=\sum_i\nu_i,$$ where $\nu_i=\sigma_i$, for $i\in\{0,\ldots,m\}$, and $\nu_{m+1+i}=\mu_i$, for $i\in\{0,\ldots,n\}$. This is clearly equal to (\[eq:first\]).\ **Step 4. $\phi$ is surjective.** Let $\lambda$ be a singular $(1,r)$-cycle, say $\lambda=\sum_{i=1}^kn_i\sigma_i$, where $\sigma_i:\{0,1\}\to X$ are such that $d(\sigma_i(0),\sigma_i(1))\leq r$, for all $i$. Since $\partial\lambda=0$, one has $$\begin{aligned} \label{eq:boundary} \sum_{i=1}^kn_i\left(\sigma_i(1)-\sigma_i(0)\right)=0.\end{aligned}$$ Let $S:=\{\sigma_i(0),\sigma_i(1) : i=1,\ldots,k\}$. For a given $q\in S$, - let $m_q$ be the sum of the coefficients of $q$ in (\[eq:boundary\]); observe that $m_q=0$, by Equation (\[eq:boundary\]); - let $\beta_q$ be an $r$-path connecting $p$ and $q$. For all $i=1,\ldots,k$, define the $r$-loop $\eta_i:=\beta_{\sigma_i(0)}\sigma_i\beta_{\sigma_i(1)}^{-1}$ and define $$\gamma:=\eta_1^{n_1}\eta_2^{n_2}\ldots\eta_k^{n_k}.$$ We now show that $\phi(\gamma)=\lambda$. Indeed, $$\begin{aligned} \phi(\gamma)&=\sum_{i=1}^kn_i\sigma_i-\sum_{i=1}^kn_i(\beta_{\sigma_i(1)}-\beta_{\sigma_i(0)})\\ &=\sum_{i=1}^kn_i\sigma_i-\sum_{q\in S}m_q\beta_q\\ &=\sum_{i=1}^kn_i\sigma_i\\ &=\lambda.\end{aligned}$$ **Step 5. The kernel of $\phi$ is equal to the commutator subgroup of $A_{1,r}$.** Since ${{DH_{1, r}(X)}}$ is abelian, $[A_{1,r},A_{1,r}]\subseteq \text{ker}(\phi)$. To prove the other inclusion, let $\gamma$ be an $r$-loop based at $p$ whose homotopy class $[\gamma]$ belongs to $\text{ker}(\phi)$.
Since $\overline\gamma=0$, $\gamma$ is the boundary of a $(2,r)$-chain $\sigma=\sum_{i=1}^kn_i\sigma_i$, where $$\sigma_i:\{(0,0),(1,0),(0,1),(1,1)\}\to X$$ are maps such that the distance between the images of adjacent vertices is at most $r$. Now, write $\partial\sigma_i$ as the sum of its faces; that is, $$\partial\sigma_i=a_i^++b_i^++a_i^-+b_i^-,$$ where $$\begin{aligned} \label{eq:faces1} a_i^+(0)=b_i^-(1)=\sigma_i(0,0),\qquad\qquad a_i^+(1)=b_i^+(0)=\sigma_i(1,0),\end{aligned}$$ $$\begin{aligned} \label{eq:faces2} a_i^-(0)=b_i^+(1)=\sigma_i(1,1),\qquad\qquad a_i^-(1)=b_i^-(0)=\sigma_i(0,1).\end{aligned}$$ We have $$\begin{aligned} \label{eq:identity} \gamma=\partial\left(\sum_in_i\sigma_i\right)=\sum_in_i(a_i^++b_i^++a_i^-+b_i^-).\end{aligned}$$ Fix the following notation. Let $$L:=\{a_i^+,\,b_i^+,\,a_i^-,\,b_i^- : i=1,\ldots,k\},$$ and let $S$ be the set of endpoints of the $(1,r)$-cubes $a_i^+,b_i^+,a_i^-$, and $b_i^-$. For all $q\in S$, let $\beta_q$ be an $r$-path joining $p$ and $q$. For all $\theta\in L$, let $m_\theta$ be the sum of the coefficients of $\theta$ in Equation (\[eq:identity\]). Now, for each singular $(2,r)$-cube, consider the following four $r$-loops based at $p$: $$\beta_{a_i^+(0)}a_i^+\beta_{a_i^+(1)}^{-1},\qquad\qquad \beta_{b_i^+(0)}b_i^+\beta_{b_i^+(1)}^{-1},$$ $$\beta_{a_i^-(0)}a_i^-\beta_{a_i^-(1)}^{-1},\qquad\qquad \beta_{b_i^-(0)}b_i^-\beta_{b_i^-(1)}^{-1}.$$ Denote by $\eta_i$ the concatenation of these four loops. Using Equations (\[eq:faces1\]) and (\[eq:faces2\]), we can show that $\eta_i$ is $r$-homotopy equivalent to the constant loop.
Indeed $$\begin{aligned} \eta_i & = & \beta_{a_i^+(0)}a_i^+\beta_{a_i^+(1)}^{-1}\beta_{b_i^+(0)}b_i^+\beta_{b_i^+(1)}^{-1}\beta_{a_i^-(0)}a_i^-\beta_{a_i^-(1)}^{-1}\beta_{b_i^-(0)}b_i^-\beta_{b_i^-(1)}^{-1}\\ & = & \beta_{a_i^+(0)}a_i^+\beta_{a_i^+(1)}^{-1}\beta_{a_i^+(1)}b_i^+\beta_{b_i^+(1)}^{-1}\beta_{b_i^+(1)}a_i^-\beta_{a_i^-(1)}^{-1}\beta_{a_i^-(1)}b_i^-\beta_{b_i^-(1)}^{-1}\\ & = & \beta_{a_i^+(0)}a_i^+b_i^+a_i^-b_i^-\beta_{a_i^+(0)}^{-1}.\end{aligned}$$ To see that this latter loop is homotopy equivalent to the constant loop, write first $\beta_{a_i^+(0)}=px_1x_2\ldots x_m\sigma_i(0,0)$, so that we have $$\begin{aligned} \eta_i & = & px_1x_2\ldots x_m\sigma_i(0,0)\sigma_i(1,0)\sigma_i(1,1)\sigma_i(0,1)\sigma_i(0,0)x_mx_{m-1}\ldots x_2x_1p \\ & \equiv & px_1x_2\ldots x_m\sigma_i(0,0)\sigma_i(1,0)\sigma_i(1,0)\sigma_i(0,0)\sigma_i(0,0)x_mx_{m-1}\ldots x_2x_1p\\ & \equiv & px_1x_2\ldots x_m\sigma_i(0,0)\sigma_i(0,0)\sigma_i(0,0)\sigma_i(0,0)\sigma_i(0,0)x_mx_{m-1}\ldots x_2x_1p\\ & \equiv & p.\end{aligned}$$ Now write $\gamma = (\gamma_0, \ldots, \gamma_k, \gamma_0)$. Let $\tau_i$ be the singular $(1,r)$-cube such that $\tau_i(0) = \gamma_i$ and $\tau_i(1) = \gamma_{i+1}$. Let $\omega$ be the concatenation $\prod_{i=0}^k\beta_{\tau_i(0)} \tau_i \beta_{\tau_i(1)}^{-1}$, which is $r$-homotopy equivalent to $\gamma$. Now set $\delta=\eta_1^{n_1}\eta_2^{n_2}\ldots\eta_k^{n_k}$; since each $\eta_i$ is $r$-homotopy equivalent to the constant loop, $\delta \omega^{-1}$ is $r$-homotopy equivalent to $\gamma^{-1}$. However, for each $\theta \in L$, the loop $\beta_{\theta(0)} \theta \beta_{\theta(1)}^{-1}$ appears with total exponent $m_{\theta}$ in $\delta$ and $-m_{\theta}$ in $\omega^{-1}$. Thus each such loop appears in $\delta \omega^{-1}$ with exponents summing to $0$; hence $\delta \omega^{-1}$, and therefore also $\gamma^{-1}$, maps to the identity in the abelianization of $A_{1,r}(X)$. Thus $[\gamma]$ lies in the commutator subgroup of $A_{1,r}(X)$.
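Theorem \[th:abelianization1\] makes low-dimensional computations feasible. The following brute-force Python sketch (ours, not from the paper) computes the rank of $DH_{1,1}$ for a finite graph with the hop metric: it enumerates the non-degenerate singular $(1,1)$- and $(2,1)$-cubes, assembles the boundary matrices in the quotient by degenerate cubes, and compares ranks over $\mathbb{Q}$. For the $5$-cycle it returns rank $1$, as claimed in the counterexample to Mayer-Vietoris above, while for the $4$-cycle the fundamental square bounds and the rank is $0$.

```python
from fractions import Fraction
from itertools import product

def hop_metric(n, edges):
    """All-pairs shortest-path (hop) distances, via Floyd-Warshall."""
    INF = float('inf')
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def rank(rows):
    """Rank over Q by Gaussian elimination with exact arithmetic."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                t = rows[i][c] / rows[r][c]
                rows[i] = [a - t * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def betti1(n, edges):
    """Rank of DH_{1,1} = dim ker(d_1) - rank(d_2), degenerate cubes quotiented."""
    d = hop_metric(n, edges)
    E = [(u, v) for u in range(n) for v in range(n) if u != v and d[u][v] <= 1]
    idx = {e: i for i, e in enumerate(E)}
    d1 = [[0] * n for _ in E]                 # boundary of the edge (u,v) is v - u
    for (u, v), i in idx.items():
        d1[i][v] += 1
        d1[i][u] -= 1
    d2 = []                                   # boundaries of all (2,1)-cubes
    for v00, v10, v01, v11 in product(range(n), repeat=4):
        if max(d[v00][v10], d[v00][v01], d[v10][v11], d[v01][v11]) > 1:
            continue                          # not a singular (2,1)-cube
        if (v00, v01) == (v10, v11) or (v00, v10) == (v01, v11):
            continue                          # degenerate
        row = [0] * len(E)
        for (a, b), sign in (((v00, v01), -1), ((v10, v11), 1),   # -(A_1 - B_1)
                             ((v00, v10), 1), ((v01, v11), -1)):  # +(A_2 - B_2)
            if a != b:                        # constant edges die in C_{1,1}
                row[idx[(a, b)]] += sign
        d2.append(row)
    return (len(E) - rank(d1)) - rank(d2)

edges5 = [(i, (i + 1) % 5) for i in range(5)]
edges4 = [(i, (i + 1) % 4) for i in range(4)]
assert betti1(5, edges5) == 1     # DH_{1,1}(C_5) has rank 1
assert betti1(4, edges4) == 0     # in C_4 the fundamental square bounds
```

This only detects the free rank (torsion would require Smith normal form over $\mathbb{Z}$), which already suffices to distinguish $C_5$ from $C_4$ at scale $1$.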
Groups appearing as homology groups {#se:constructions} =================================== In this section, we describe groups which appear as discrete homology groups (and discrete homotopy groups). Our main result is Corollary \[cor:groups appearing as homology groups\], which is based on a fact of intrinsic interest: the classical fundamental group of a compact, smooth, metrizable, path-connected manifold is the discrete fundamental group of a metric space (and thus its first homology group is also a discrete homology group). We also give a construction which is an analogue of the suspension. Let $M$ be a compact, metrizable, smooth, and path-connected manifold. Let $\ell>0$ be the infimum of the lengths of all non-contractible loops on $M$. We say that a rectangulation $R$ of the manifold is ‘fine enough’ if each side of the rectangulation has length at most $\ell/5$. \[th:discretization\] Let $R$ be a ‘fine enough’ rectangulation of a compact, metrizable, smooth, and path-connected manifold $M$. Let $X$ be the natural graph associated to the rectangulation $R$. Then $\pi_1(M) \simeq A_{1,1}(X)$. Let $p\in X$ be a base point. We can associate to every loop in $M$ based at $p$ a discrete loop in $X$ by the following procedure. Let $\gamma:[0,1]\rightarrow M$ be a continuous loop in $M$ based at $p$. For every $t\in[0,1]$, associate to $\gamma(t)$ one of its closest points in $X$. By continuity of $\gamma$, one can do this in such a way as to obtain a 1-loop $\tilde\gamma=x_0x_1\ldots x_n$ in $X$. This procedure preserves homotopy equivalence. Indeed, let $H:[0,1]\times[0,1]\rightarrow M$ be a homotopy between loops $\gamma_0$ and $\gamma_1$ in $M$. For $i=0,\ldots,N$, let $\widetilde H_{\frac{i}{N}}$ be the discrete path associated, as above, to the loop $t\mapsto H\left(\frac{i}{N},t\right)$.
Since $M$ is a compact metric space, by the Heine-Cantor theorem, one can find $N>0$ such that $\widetilde{H}_\frac{i}{N}$ and $\widetilde {H}_{\frac{i+1}{N}}$ are consecutive steps of a discrete homotopy in $R$, for all $i$. The sequence $\widetilde\gamma_0\widetilde {H}_{\frac 1N}\widetilde {H}_{\frac 2N}\ldots \widetilde {H}_{1-\frac1N}\widetilde\gamma_1$ then gives rise to a discrete homotopy between $\widetilde{\gamma}_0$ and $\widetilde{\gamma}_1$. Note that the converse does not hold a priori for an arbitrary rectangulation, and this is why we need a *fine enough* rectangulation. For such rectangulations, we have that if $\tilde\gamma_0=px_1\ldots x_sp$ and $\tilde\gamma_1=py_1\ldots y_sp$ are discrete homotopy equivalent, then $\gamma_0$ and $\gamma_1$ are homotopy equivalent. Indeed, let $\widetilde{\gamma}_0$ and $\widetilde{\gamma}_1$ be two loops in $X$ that are discrete homotopy equivalent, and consider a homotopy matrix between them. $$\left( \begin{array}{ccccc} p & x_1 & \ldots & x_s & p \\ p & z_1^1 & \ldots & z_{s}^1 & p \\ \vdots & \vdots & \ldots & \vdots & \vdots \\ p & z_1^{t} & \ldots & z_{s}^{t} & p \\ p & y_1 & \ldots & y_{s}& p \\ \end{array} \right)$$ Now, to every row and every column of the homotopy matrix, associate the path in $M$ obtained by connecting adjacent points of the homotopy matrix via the edge of the rectangulation joining them. Because no rectangle of the rectangulation can contain a non-contractible loop, the loop associated in this way to the first row is homotopy equivalent to $\gamma_0$, and the loop associated to the last row is homotopy equivalent to $\gamma_1$. Moreover, none of the *squares* $\{p,p,z_1^1,x_1\}$, $\{x_1,z_1^1,z_2^1,x_2\}$, etc. contains a non-contractible loop in $M$, because of our choice of the rectangulation. In other words, all these squares, seen as loops in $M$ as above, are contractible and can therefore be filled by a standard homotopy.
Gluing all these homotopies, one obtains a standard homotopy between the loop associated to the first row (which is homotopy equivalent to $\gamma_0$) and the loop associated to the last row (which is homotopy equivalent to $\gamma_1$). In conclusion, $\gamma_0$ and $\gamma_1$ are homotopy equivalent in $M$. Since it is easily seen that the correspondence we defined between paths in $M$ and discrete paths in $X$ is surjective and preserves concatenation, it gives rise to an isomorphism between the fundamental group of $M$ and the discrete fundamental group of $X$, as claimed. Next we give a construction that is similar to the suspension of a topological space. \[defin:suspension\] Let $X$ be a metric space, and $r > 0$. Take $Y = X \times \{0, 1, 2, 3 \}$, equipped with the $\ell^1$-metric, and then quotient $Y$ by identifying all points whose second coordinate is $0$, and identifying all points whose second coordinate is $3$. Denote this metric space by $X_s$. Let $0$ ($t$, respectively) denote the class in $X_s$ corresponding to the points of $Y$ whose second coordinate is $0$ ($3$, respectively). \[prop:suspension\] For all $n$, we have ${{{DH_{n, r}(X)}}} \simeq {{DH_{n+1, r}(X_s)}}$. Let $A = X_s \setminus \{0 \}$, and $B = X_s \setminus \{t \}$. We claim that $\{A, B \}$ forms a discrete cover. Note that $A \cap B$ is $r$-homotopy equivalent to $X$, and that $A$ and $B$ are both contractible. The result then follows from the Mayer-Vietoris exact sequence. \[cor:groups appearing as homology groups\] For every abelian group $G$ and every $\bar n\in\mathbb N$, there is a finite connected simple graph $\Gamma$ such that $${DH_{n, 1}(\Gamma)}= \begin{cases} G,\qquad\text{if }n=\bar n \\ 0,\qquad\text{if }n<\bar n \end{cases}$$ \[cor:discrete sphere\] There is a graph $S^n$ with the property $${DH_{k, 1}(S^n)}= \begin{cases} \mathbb{Z},\qquad\text{if }k= n \\ 0,\qquad\text{if }k\neq n \end{cases}$$ Let $X, Y$ be *extended* metric spaces.
This means that $d_X: X \times X \to [0, \infty]$, and similarly for $d_Y$. Then we can define an extended metric on $X \sqcup Y$: $$d_{X \sqcup Y}(a,b) = \left \{ \begin{array}{cl} d_X(a,b) & a, b \in X \\ d_Y(a,b) & a,b \in Y \\ \infty & \text{otherwise} \end{array} \right.$$ Then clearly ${{{DH_{n, r}(X \sqcup Y)}}} \simeq {{{DH_{n, r}(X)}}} \times {{{DH_{n, r}(Y)}}}$. For any sequence $G_i$ of abelian groups, there is an extended metric space $X$ such that ${{{DH_{n, r}(X)}}} \simeq G_n$. The construction uses disjoint union (which is why we must work with extended metric spaces). A new coarse homology theory {#se:coarse} ============================ As mentioned in the Introduction, the main motivation to work with metric spaces is to construct a new coarse invariant. Since a typical coarse object is a sequence of finite objects, the idea is to take the direct limit, as $r$ goes to infinity, of the countably infinite direct product of the discrete homology theory. To make this idea formal, we first recall some basic facts about direct limits of groups. A direct system of groups is a net $(G_r, \phi_{r,s})_{r,s\in R, r\leq s}$, where $R$ is a directed set, the $G_r$ are groups, and $\phi_{r,s}:G_r\to G_s$ are group homomorphisms such that $\phi_{s,t}\circ\phi_{r,s}=\phi_{r,t}$, for all $r\leq s\leq t$, and $\phi_{r,r}=Id_{G_r}$, for all $r$. In our case, the directed set $R$ is an unbounded interval of positive real numbers $R=[r_0,\infty)$. \[defin:homomorphism of direct systems\] Let $(G_r,\phi_{r,s})$ and $(H_r,\psi_{r,s})$ be two direct systems of groups. A homomorphism $\alpha:(G_r,\phi_{r,s})\to(H_r,\psi_{r,s})$ of direct systems is a net of group homomorphisms $\alpha_r:G_r\to H_{\rho_r}$ such that - the function $r\to\rho_r$ is increasing, - for all $r\leq s$, one has $$\psi_{\rho_r,\rho_s}\circ\alpha_r=\alpha_s\circ\phi_{r,s}.$$ Let $(G_r,\phi_{r,s})$ be a direct system of groups.
The direct limit $G=\underrightarrow\lim G_r$ is the quotient of the disjoint union of all the $G_r$’s by the following equivalence relation: $g\in G_r$ is equivalent to $h\in G_s$ if there is $t\geq r,s$ such that $\phi_{r,t}(g)=\phi_{s,t}(h)$. \[rem:directsystemhomomorphism\] [Let $(G_r,\phi_{r,s})$ and $(H_r,\psi_{r,s})$ be two direct systems of groups and let $G$ and $H$ be, respectively, their direct limits. A homomorphism $\alpha:(G_r,\phi_{r,s})\to(H_r,\psi_{r,s})$ of direct systems induces a canonical group homomorphism $\alpha_*:G\to H$ as follows. Given $[g]_{\sim}\in G$, assume that the representative $g$ belongs to $G_r$, and define $\alpha_*([g]_{\sim})=[\alpha_r(g)]_{\sim}$. One easily checks that $\alpha_*$ is well defined and is in fact a group homomorphism from $G$ to $H$.]{} Given two homomorphisms $\alpha:(G_r,\phi_{r,s})\to(H_r,\psi_{r,s})$ and $\beta:(H_r,\psi_{r,s})\to(K_r,\chi_{r,s})$, the composition $\beta\circ\alpha:(G_r,\phi_{r,s})\to(K_r,\chi_{r,s})$ is defined through the net of homomorphisms $(\beta\circ\alpha)_r:=\beta_{\rho_r}\circ\alpha_r$, and is a homomorphism of direct systems of groups. The following also holds: $$\begin{aligned} \label{eq:composition} (\beta\circ\alpha)_*=\beta_*\circ\alpha_*.\end{aligned}$$ We need the following lemma. \[lem:directlimitisomorphism\] Let $\alpha, \beta:(G_r,\phi_{r,s})\to(H_r,\psi_{r,s})$ be two homomorphisms and suppose there exists $r \in R$ such that, for all $s \geq r$, we have $\alpha_s = \beta_s$. Then $\alpha_* = \beta_*$. Let $G, H$ be the direct limits of $(G_r,\phi_{r,s})$ and $(H_r,\psi_{r,s})$ respectively, and let $[g]_\sim \in G$. Without loss of generality, suppose we have a representative $g \in G_s$ with $s \geq r$. Then $\alpha_s(g) = \beta_s(g)$, so we have $\alpha_*[g]_\sim = [\alpha_s(g)]_\sim = [\beta_s(g)]_\sim = \beta_*[g]_\sim$. A metric space is *proper* if closed balls are compact. \[def:UBP\] Let $S:(0,\infty)\to(0,\infty)$ be a function.
A map $f:X\to Y$ between proper metric spaces is *uniformly S-bornologous* if for all $r>0$ one has $$d_X(x_1,x_2)<r\qquad\qquad\text{implies}\qquad\qquad d_Y(f(x_1),f(x_2))<S(r).$$ A map $f:X\to Y$ is called uniformly bornologous if there is a function $S$ such that $f$ is uniformly $S$-bornologous. We recall that a map $f:X\to Y$ between proper metric spaces is called *proper* if inverse images of compact sets are compact. Let us introduce some acronyms: - $UBP(X,Y)$ is the set of uniformly bornologous and proper maps $f:X\to Y$. - $UBP_S(X,Y)$ is the set of uniformly $S$-bornologous and proper maps $f:X\to Y$. \[defin:bornotopic\] Let $f,g\in UBP(X,Y)$. They are *bornotopy equivalent* if there is $R>0$ such that $$d_Y(f(x),g(x))<R\qquad\qquad\text{for all }x\in X.$$ \[defin:bornotopicequivalence\] A map $f\in UBP(X,Y)$ is a *bornotopy equivalence* if there is $g\in UBP(Y,X)$ such that $f\circ g$ is bornotopy equivalent to $\operatorname{Id}_Y$ and $g\circ f$ is bornotopy equivalent to $\operatorname{Id}_X$. Two spaces are called bornotopy equivalent if there is a bornotopy equivalence between them. We said at the beginning of this section that coarse geometry is, informally speaking, the study of a metric space from a large-scale point of view. Formally, coarse geometry is the study of the category whose objects are proper metric spaces and whose morphisms are uniformly bornologous proper maps. Therefore, to define a coarse homology theory, we need a homology theory for *coarsely connected* proper metric spaces that is invariant under bornotopy equivalence. \[defin:coarsely connected\] [A metric space is coarsely connected if it is connected at some scale $r>0$.]{} Throughout the rest of this section, $X$ will always be assumed to be coarsely connected, and the index $r$ will always be taken large enough that $X$ is connected at scale $r$.
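As a toy illustration of these definitions (ours, not from the paper), the map $f(x)=2x$ from $\mathbb Z$ to itself is uniformly $S$-bornologous with $S(r)=2r$, and integer halving is a coarse inverse, so $\mathbb Z$ and its image $2\mathbb Z$ are bornotopy equivalent. The sketch below checks the definitions on a finite window of points.

```python
# f: Z -> Z, x -> 2x (image 2Z), with candidate coarse inverse g(y) = y // 2.
S_f = lambda r: 2 * r
f = lambda x: 2 * x
g = lambda y: y // 2

pts = range(-100, 100)

# f is uniformly S_f-bornologous: d(x1, x2) < r implies d(f(x1), f(x2)) < S_f(r).
assert all(abs(f(a) - f(b)) < S_f(r)
           for r in (1, 5, 17)
           for a in pts for b in pts if abs(a - b) < r)

# Bornotopy equivalence: g o f is exactly Id_Z, while f o g moves each point
# by at most 1, so f o g is bornotopy equivalent to Id with constant R = 2.
assert all(g(f(x)) == x for x in pts)
assert all(abs(f(g(y)) - y) <= 1 for y in pts)
```

Both maps are also proper (preimages of bounded sets are bounded), so they are morphisms of the coarse category described above.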
The idea for coarsening the discrete homology theory is very simple: a coarse chain will be a sequence of discrete chains, modded out by bounded sets. Formally, we first take the direct product of countably many copies of ${{{DH_{n, r}(X)}}}$ and then the direct limit over $r$. So, first set ${{{\widetilde{DH}_{n, r}(X)}}}:=\Pi_{k=1}^{\infty}{{{DH_{n, r}(X)}}}$. For $r\leq s$, denote by $\iota_{n,r,s}:{{{DH_{n, r}(X)}}}\to {DH_{n, s}(X)}$ the map induced by the inclusion of chains, and let $\tilde\iota_{n,r,s}=(\iota_{n,r,s})_k: {{{\widetilde{DH}_{n, r}(X)}}}\to {\widetilde{DH}_{n, s}(X)}$ be the component-wise homomorphism. Clearly, for every fixed $n$, $({{{\widetilde{DH}_{n, r}(X)}}},\tilde\iota_{n,r,s})$ is a direct system of groups, and so we can define $$\begin{aligned} \label{eq:coarse homology} {{DH_{n, \infty}(X)}}:=\underrightarrow\lim {{{\widetilde{DH}_{n, r}(X)}}}.\end{aligned}$$ 1. Every bounded metric space $X$ has trivial ${{DH_{n, \infty}(X)}}$, consistent with the fact that bounded metric spaces are bornotopy equivalent to a one-point space and so are trivial from a coarse point of view. 2. An example of a metric space $X$ with non-trivial ${DH_{1, \infty}(X)}$ is the coarse Hawaiian earring. This space, whose suggestive name was suggested by Nick Wright, is the following coarse analogue of the standard Hawaiian earring. Consider the space $X=\bigcup X_n$, where $X_n$ is the circle in the Euclidean plane with center at the point $(0,n^2)$ and passing through the origin of the plane. The space $X$, equipped with the subspace metric inherited from $\mathbb R^2$, is easily seen to have non-trivial ${DH_{1, \infty}(X)}$. Our aim is to prove the following \[th:invariance\] If $X,Y$ are coarsely connected proper metric spaces which are bornotopy equivalent through the map $f:X\to Y$, then $${{DH_{n, \infty}(X)}} \simeq {{DH_{n, \infty}(Y)}},\qquad\qquad\text{for all n,}$$ and the isomorphism is given by a canonical map $f_*$. We start with a simple observation.
Let $f$ be a uniformly $S$-bornologous map from $X$ to $Y$. Fix $r$ large enough that $X$ is connected at scale $r$. Since $f$ is uniformly $S$-bornologous, the image under $f$ of any singular $(n,r)$-cube in $X$ is a singular $(n,S(r))$-cube in $Y$. Also, $f$ maps degenerate cubes to degenerate cubes, and therefore $f$ descends to a homomorphism $\overline f_{n,r}:C_{n,r}(X)\to C_{n,S(r)}(Y)$. This homomorphism maps $\ker(\partial_n)\subseteq C_{n,r}(X)$ to $\ker(\partial_n)\subseteq C_{n,S(r)}(Y)$ and $\text{Im}(\partial_{n+1})\subseteq C_{n,r}(X)$ to $\text{Im}(\partial_{n+1})\subseteq C_{n,S(r)}(Y)$. Therefore, for large $r$ we have group homomorphisms $f_{n,r}:{{{DH_{n, r}(X)}}}\to {DH_{n, S(r)}(Y)}$, which induce component-wise homomorphisms $\tilde{f}_{n,r}:{{{\widetilde{DH}_{n, r}(X)}}}\to{\widetilde{DH}_{n, S(r)}(Y)}$. This net of homomorphisms is clearly a homomorphism of direct systems, and therefore we can apply Remark \[rem:directsystemhomomorphism\] and get the following proposition. \[prop:UBPinduceshomo\] Let $f:X\to Y$ be a UBP map between coarsely connected metric spaces. Then, for all $n$, the natural map $f_{n,*}:{{DH_{n, \infty}(X)}} \to {{DH_{n, \infty}(Y)}}$ is a group homomorphism. Now let $f\in UBP_{S_f}(X,Y)$ and $g\in UBP_{S_g}(X,Y)$ be bornotopy equivalent maps. Let $r>0$ and denote $M_r=\max\{S_f(r),S_g(r)\}$. As before, we have systems of group homomorphisms $$f_{n,r}:{{{DH_{n, r}(X)}}} \to {DH_{n, M_r}(Y)}\qquad\qquad\text{and}\qquad\qquad g_{n,r}: {{{DH_{n, r}(X)}}} \to {DH_{n, M_r}(Y)}$$ induced by $f$ and $g$. (Notice that one of the two may have image inside some ${DH_{n, s}(Y)}$ with $s<M_r$; in this case we may pass to ${DH_{n, M_r}(Y)}$ by composing with $\iota_{n,s,M_r}$.) Observe that, for large enough $r$, the maps $f_{n,r}$ and $g_{n,r}$ are the same: since $f$ and $g$ are bornotopy equivalent, their images are pointwise at uniformly bounded distance, say at most $R$; once $M_r\geq R$, the maps $f$ and $g$ are discrete homotopic at scale $M_r$ and hence induce the same map in homology.
Thus, by Lemma \[lem:directlimitisomorphism\], we have the following: \[prop:samehomology\] Let $f, g: X \to Y$ be bornotopy equivalent maps. Then they induce the same map on homology. We can now prove Theorem \[th:invariance\]. Assume $f:X\to Y$ is a bornotopy equivalence and let $g:Y\to X$ be its coarse inverse, that is, $g\circ f\sim\operatorname{Id}_X$ and $f\circ g\sim\operatorname{Id}_Y$. By Proposition \[prop:samehomology\], $f \circ g$ and $\operatorname{Id}_Y$ induce the same map on homology, and likewise $g \circ f$ and $\operatorname{Id}_X$; that is, $(f \circ g)_* = \operatorname{Id}_{{{DH_{n, \infty}(Y)}}}$ and $(g \circ f)_* = \operatorname{Id}_{{{DH_{n, \infty}(X)}}}$. By Equation \[eq:composition\], we then get $f_* \circ g_* = \operatorname{Id}_{{{DH_{n, \infty}(Y)}}}$ and $g_* \circ f_* = \operatorname{Id}_{{{DH_{n, \infty}(X)}}}$, which shows that $f_*$ and $g_*$ are both isomorphisms. Conclusions and Open Problems {#se:conclusion} ============================= In this paper, we discussed a discrete homology theory for metric spaces, related to discrete homotopy theory. We proved some analogues of classical results. We also constructed a new coarse homology theory and showed its relationship to bornotopy equivalence. However, many directions remain open. We list a few open questions below. 1. We proved the Hurewicz theorem in dimension 1. It would be interesting to see if the theorem also holds in higher dimensions. 2. We proved the *discretization theorem* (Theorem \[th:discretization\]) in dimension 1 and only for the fundamental group. Also in this case, it would be interesting to see if an analogous theorem holds in higher dimensions and for both homotopy and homology groups. 3. Let $(X,d)$ be a path-connected metric space. Observe that, consequently, $X$ is connected at scale $r$, for all $r>0$. Therefore, we may consider both the standard fundamental group of $X$, $\pi_1(X)$, and the net of discrete fundamental groups $\{A_{1,r}(X)\}_{r>0}$.
Is there any relation between $\pi_1(X)$ and some sort of limit of the $A_{1,r}(X)$’s, as $r\to0$? Of course, one should first clarify which notion of limit would be suitable. Note that, as $r$ decreases, the net $A_{1,r}(X)$ forms an inverse system of groups. So, perhaps, the most natural notion of limit would be that of inverse limit. What makes this problem more interesting is the fact that $\pi_1(X)$ is not, in general, isomorphic to the inverse limit of $\{A_{1,r}(X)\}_{r > 0}$. Consider $\mathbb R^2\setminus\{(0,0)\}$ equipped with the Euclidean metric. Then $A_{1,r}(X)=\{0\}$, for all $r>0$, but $\pi_1(X)=\mathbb Z$. 4. An interesting problem would be that of finding a coarse analogue of the excision theorem. 5. Many other open problems would come from comparing our coarse homology theory with previously known coarse homology theories or with other known asymptotic concepts, such as asymptotic dimension, asymptotic cones, etc. [9]{} Atkin R. *An algebra of patterns on a complex, I*, Intern. J. Man-Machine Studies 6 (1974) 285-307. Atkin R. *An algebra of patterns on a complex, II*, Intern. J. Man-Machine Studies 8 (1976) 448-483. Babson E., Barcelo H., de Longueville M., Laubenbacher R. *Homotopy theory of graphs*, J. Alg. Comb. 24 (2006) 31-44. Barcelo H., Kramer X., Laubenbacher R., Weaver C. *Foundations of a connectivity theory for simplicial complexes*, Adv. in Appl. Math. 26 (2001) 97-128. Barcelo H., Laubenbacher R. *Perspectives in $A$-homotopy theory and its applications*, Discrete Mathematics 298 (2005) 39-61. Barcelo, H., Severs, C. and J. A. White, *$k$-parabolic subspace arrangements*, Transactions of the American Mathematical Society 363 (11) (2011) 6063-6083. Block, J. and S. Weinberger, *Aperiodic Tilings, Positive Scalar Curvature, and Amenability of Spaces*, Journal of the American Mathematical Society, Volume 5, Issue 4, (1992), 907-918. Capraro, V. *Amenability, locally finite spaces, and bi-Lipschitz embeddings*, Expositiones Mathematicae.
--- abstract: 'The rare $B\rightarrow K^{*}\nu\bar{\nu}$ decay, with the $K^{*}$ meson longitudinally or transversely polarized, is analysed in the context of the fourth generation model. A significant enhancement of the missing energy spectrum over the SM is found.' author: - | \ [T. BARAKAT]{} [^1]\ [Engineering Faculty, Cyprus International University]{}\ [Lefkoşa, Mersin 10 - Turkey ]{} --- Introduction ============= The theoretical and experimental investigation of rare decays has been a subject of continuous interest in the literature. The experimental observation of the inclusive $b\rightarrow X_{s}\gamma$ \[1\] and exclusive $B\rightarrow K^{*}\gamma$ \[2\] decays, together with the recent CLEO \[3\] upper limits on the exclusive decays $B \rightarrow K^{*}\ell^{+}\ell^{-}$, which are less than one order of magnitude above the SM predictions, stimulated the study of rare B meson decays on a new footing. These decays take place via flavor-changing neutral currents (FCNC), which are absent in the Standard Model (SM) at tree level and appear only at the loop level. The inclusive $B \rightarrow X_{s}\nu \bar{\nu}$ decay rate is very sensitive to extensions of the SM, and provides a unique source of constraints on ‘new physics’ scenarios which predict a large enhancement of this decay mode. Therefore, the study of $b \rightarrow s\nu \bar{\nu}$, together with the search for $b \rightarrow s\ell^{+} \ell^{-}$ and $b\rightarrow s$ gluon processes, and with a refinement of the measurement of $B \rightarrow X_{s}\gamma$, will allow a complete program of testing the SM at the loop level and constraining various new physics scenarios. The first experimental access to the decay $b \rightarrow s\nu \bar{\nu}$ will be through the exclusive modes, which are best investigated at B-factories. Among such modes, the channel $B \rightarrow K^{*}\nu\bar{\nu}$ is of special interest.
The experimental search for $B \rightarrow K^{*}\nu\bar{\nu}$ decays can be performed through the large missing energy associated with the two neutrinos, together with an opposite-side fully reconstructed B meson. Within the SM, the branching ratio of this decay is expected to be of order $\sim 10^{-5}$, which should be measurable at the upcoming KEK and SLAC B-factories. However, the SM contains three generations, and yet there is no theoretical argument explaining why there should be three and only three; there is neither experimental evidence for a fourth generation, nor does any experiment exclude such extra generations. On this basis, serious attempts to study the effects of a fourth generation on rare B meson decays have been made by many authors. For example, the effects of the fourth generation on the branching ratios of the $B \rightarrow X_{s}\ell^{+}\ell^{-}$ and $B \rightarrow X_{s}\gamma$ decays are analysed in \[4\]. In \[5\] the fourth generation effects on the rare exclusive $B \rightarrow K^{*}\ell^{+}\ell^{-}$ decay are studied. In \[6\] the contributions of the fourth generation to the $B_{s}\rightarrow \nu \bar{\nu}\gamma$ decay are investigated. Recently, in \[7\] the effects of the fourth generation on the rare $B\rightarrow K^{*}\nu\bar{\nu}$ decay are discussed. In this work, the missing energy spectrum and the branching ratio of $B \rightarrow K^{*}\nu\bar{\nu}$ will be investigated, with the $K^{*}$ meson longitudinally or transversely polarized, in a sequential fourth generation extension of the SM, which we shall call SM4 hereafter for simplicity. This model is a natural extension of the SM: the fourth generation is introduced in the same way as the first three, so no new operators appear, and the full operator set is exactly the same as in the SM.
Hence, the fourth generation changes only the values of the Wilson coefficients, via virtual exchange of an up-type quark $\acute{t}$. As we shall see, the missing energy spectrum and the branching ratio of $B \rightarrow K^{*}\nu\bar{\nu}$ are then enhanced significantly, a result which should help the experimental search for $B \rightarrow K^{*}\nu\bar{\nu}$ constrain $m_{\acute{t}}$, and vice versa. This paper is organized as follows: in Section 2, the relevant effective Hamiltonian for the decay $B\rightarrow K^{*}\nu\bar{\nu}$ in a sequential fourth generation model (SM4) is presented; in Section 3, the dependence of the missing energy spectrum and branching ratio of $B \rightarrow K^{*}\nu\bar{\nu}$ on the fourth generation parameters is studied, with the $K^{*}$ meson longitudinally or transversely polarized, using light-cone QCD sum rule results for the form factors; finally, a brief discussion of the results is given. Effective Hamiltonian ===================== In the Standard Model (SM), the process $B\rightarrow K^{*}\nu\bar{\nu}$ is described at the quark level by the $b\rightarrow s\nu\bar{\nu}$ transition, and receives contributions from Z-penguin and box diagrams, where the dominant contributions come from intermediate top quarks. The effective Hamiltonian responsible for the $b\rightarrow s\nu\bar{\nu}$ decay is described by only one Wilson coefficient, namely $C^{(SM)}_{11}$, and its explicit form is \[8\]: $$\begin{aligned} H_{eff}&=&\frac{G_{F} \alpha}{2\pi\sqrt{2}\sin^{2}\theta_{w}}C^{(SM)}_{11} V^{*}_{ts}V_{tb}\bar{s}\gamma_{\mu}(1-\gamma_{5})b\bar{\nu}\gamma^{\mu} (1-\gamma_{5})\nu,\end{aligned}$$ where $G_{F}$ is the Fermi coupling constant, $\alpha$ is the fine structure constant (at the Z mass scale), and $V^{*}_{ts}V_{tb}$ is the product of the relevant Cabibbo-Kobayashi-Maskawa matrix elements.
In Eq.(1), the Wilson coefficient $C^{(SM)}_{11}$ in the context of the SM has the following form, including $O(\alpha_{s})$ corrections \[9\]: $$\begin{aligned} C^{(SM)}_{11}=\left[X_{0}(x_{t})+\frac{\alpha_{s}}{4\pi}X_{1}(x_{t})\right],\end{aligned}$$ with $$\begin{aligned} X_{0}(x_{t})= \frac{x_{t}}{8}\left[\frac{x_{t} +2}{x_{t}-1}+\frac{3(x_{t}-2)} {(x_{t}-1)^{2}}\ln(x_{t})\right],\end{aligned}$$ where $x_{t}=\frac{m^{2}_{t}}{m^{2}_{W}}$, and $$\begin{aligned} X_{1}(x_{t})&=&\frac{4x_{t}^{3}-5x_{t}^{2}-23x_{t}}{3(x_{t}-1)^{2}}- \frac{x_{t}^{4}+x_{t}^{3}-11x_{t}^{2}+x_{t}}{(x_{t}-1)^{3}}\ln(x_{t})+ \frac{x_{t}^{4}-x_{t}^{3}-4x_{t}^{2}-8x_{t}}{2(x_{t}-1)^{3}}\ln^{2}(x_{t}) \nonumber \\ &+&\frac{x_{t}^{3}-4x_{t}}{(x_{t}-1)^{2}}Li_{2}(1-x_{t})+8x_{t}\frac{\partial X_{0}(x_{t})}{\partial x_{t}} \ln(x_{\mu}).\end{aligned}$$ Here $Li_{2}(1-x_{t})=\int_{1}^{x_{t}}\frac{\ln t}{1-t}\,dt$ is the dilogarithm function, and $x_{\mu}=\frac{\mu^{2}}{m_{w}^{2}}$ with $\mu=O(m_{t})$. At $\mu=m_{t}$, the QCD correction from the $X_{1}(x_{t})$ term is very small (around $3\%$). From the theoretical point of view, the transition $b\rightarrow s\nu\bar{\nu}$ is a very clean process, since it is practically free of scale dependence and of any long-distance effects. In addition, the presence of a single operator governing the inclusive $b \rightarrow s \nu\bar{\nu}$ transition is an appealing property.
As mentioned in the introduction, no new operators appear, and the full operator set is exactly the same as in the SM; thus the fourth generation fermion changes only the value of the Wilson coefficient $C^{(SM)}_{11}$, via virtual exchange of the fourth generation up-type quark $\acute{t}$, i.e.: $$\begin{aligned} C_{11}^{SM4}(\mu)&=&C^{(SM)}_{11}(\mu)+\frac{V^{*}_{\acute{t}s}V_{\acute{t}b}} {V^{*}_{ts}V_{tb}}C^{(new)}(\mu),\end{aligned}$$ where $C^{(new)}(\mu)$ is obtained from $C^{(SM)}_{11}(\mu)$ by substituting $m_{t}\rightarrow m_{\acute{t}}$, and the last term describes the contribution of the $\acute{t}$ quark to the Wilson coefficient. $V_{\acute{t}s}$ and $V_{\acute{t}b}$ are the two corresponding elements of the $4\times 4$ Cabibbo-Kobayashi-Maskawa (CKM) matrix. In deriving Eq.(5) we factored out the term $V^{*}_{ts}V_{tb}$ in the effective Hamiltonian given in Eq.(1). As a result, we obtain a modified effective Hamiltonian, which represents the $b \rightarrow s \nu\bar{\nu}$ decay in the presence of the fourth generation fermion: $$\begin{aligned} H_{eff}=\frac{G_{F}\alpha}{2\pi\sqrt{2}\sin^{2}\theta_{w}}V^{*}_{ts}V_{tb} [C_{11}^{(SM4)}] \bar{s}\gamma_{\mu}(1-\gamma_{5})b\bar{\nu}\gamma^{\mu} (1-\gamma_{5})\nu.\end{aligned}$$ However, in spite of such theoretical advantages, it would be a very difficult task to detect the inclusive $b \rightarrow s \nu\bar{\nu}$ decay experimentally, because the final state contains two missing neutrinos and many hadrons. Therefore, only the exclusive channels, namely $B \rightarrow K^{*}(\rho) \nu\bar{\nu}$, are well suited to searching for, and constraining, possible “new physics” effects. In order to compute the $B \rightarrow K^{*} \nu\bar{\nu}$ decay, we need the matrix elements of the effective Hamiltonian Eq.(6) between the initial and final meson states. This problem belongs to the non-perturbative sector of QCD, and can be solved only by using non-perturbative methods.
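As a minimal numerical sketch of the short-distance input of Eqs.(2), (3) and (5), the leading-order function $X_{0}$ and the SM4 combination can be coded as follows. The masses and the CKM ratio below are illustrative placeholders, not fitted values, and the $O(\alpha_{s})$ piece $X_{1}$ is omitted:

```python
import math

def X0(x):
    """Leading-order function X0(x) of Eq. (3), with x = m_q^2 / m_W^2."""
    return (x / 8.0) * ((x + 2.0) / (x - 1.0)
                        + 3.0 * (x - 2.0) / (x - 1.0) ** 2 * math.log(x))

def C11_SM4(m_t, m_tprime, ckm_ratio, m_w=80.4):
    """Eq. (5) at leading order: the SM coefficient plus the t'
    contribution weighted by (V*_t's V_t'b) / (V*_ts V_tb).
    Masses in GeV; the O(alpha_s) term X1 is neglected."""
    c_sm = X0((m_t / m_w) ** 2)
    c_new = X0((m_tprime / m_w) ** 2)
    return c_sm + ckm_ratio * c_new

# Illustrative inputs: m_t = 170 GeV, m_t' = 350 GeV, CKM ratio = 0.05
c11 = C11_SM4(170.0, 350.0, 0.05)
```

Since $X_{0}$ grows roughly like $m^{2}_{\acute{t}}$ for a heavy $\acute{t}$, the new term in Eq.(5) quickly dominates as $m_{\acute{t}}$ increases, which is the origin of the strong mass dependence discussed below.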
The matrix element $<K^{*} \mid H_{eff}\mid B>$ has been investigated in the framework of different approaches, such as chiral perturbation theory \[10\], three-point QCD sum rules \[11\], the relativistic quark model in the light front formalism \[12\], effective heavy quark theory \[13\], and light-cone QCD sum rules \[14,15\]. To begin with, let us denote by $P_{B}$ and $P_{K^{*}}$ the four-momenta of the initial and final mesons, define $q=P_{B}-P_{K^{*}}$ as the four-momentum of the $\nu\bar{\nu}$ pair, and $x\equiv E_{miss}/M_{B}$ as the missing energy fraction, which is related to the squared four-momentum transfer $q^{2}$ by $q^{2}=M^{2}_{B}[2x-1+r^{2}_{K^{*}}]$, where $r_{K^{*}}\equiv M_{K^{*}}/M_{B}$, with $M_{B}$ and $M_{K^{*}}$ being the initial and final meson masses. The hadronic matrix element for $B \rightarrow K^{*} \nu\bar{\nu}$ can be parameterized in terms of five form factors: $$\begin{aligned} <K^{*}_{h} \mid \bar{s}\gamma_{\mu}(1-\gamma_{5})b\mid B> = \frac{2V(q^{2})}{M_{B}+M_{K^{*}}} \epsilon_{\mu\nu\alpha\beta}\epsilon^{*\nu}(h) P_{B}^{\alpha}P^{\beta}_{K^{*}}\nonumber \\ -i \left[\epsilon_{\mu}^{*}(h)(M_{B}+M_{K^{*}})A_{1}(q^{2}) -[\epsilon^{*}(h)\cdot q](P_{B}+P_{K^{*}})_{\mu}\frac{A_{2}(q^{2})}{M_{B}+M_{K^{*}}} \right. \nonumber \\ - \left. q_{\mu}[\epsilon^{*}(h)\cdot q]\frac{2M_{K^{*}}}{q^{2}} [A_{3}(q^{2})-A_{0}(q^{2})] \right],\end{aligned}$$ where $\epsilon(h)$ is the polarization 4-vector of the $K^{*}$ meson. The form factor $A_{3}(q^{2})$ can be written as a linear combination of the form factors $A_{1}$ and $A_{2}$: $$\begin{aligned} A_{3}(q^{2})=\frac{1}{2M_{K^{*}}}\left[(M_{B}+M_{K^{*}})A_{1}(q^{2})- (M_{B}-M_{K^{*}})A_{2}(q^{2})\right],\end{aligned}$$ with the condition $A_{3}(q^{2}=0)=A_{0}(q^{2}=0)$.
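The kinematic map between the missing-energy fraction and $q^{2}$ is simple enough to state in a few lines of code (a sketch; the default masses are the values quoted later in the numerical section):

```python
def q2_from_x(x, M_B=5.28, M_Kst=0.892):
    """Squared four-momentum transfer q^2 = M_B^2 (2x - 1 + r^2),
    with r = M_K*/M_B and x = E_miss/M_B (masses in GeV)."""
    r2 = (M_Kst / M_B) ** 2
    return M_B ** 2 * (2.0 * x - 1.0 + r2)
```

At the lower endpoint $x=(1-r^{2}_{K^{*}})/2$ this gives $q^{2}=0$, while at the upper endpoint $x=1-r_{K^{*}}$ it gives $q^{2}=(M_{B}-M_{K^{*}})^{2}$, the zero-recoil point.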
From these form factors it is easy to derive the missing energy distributions corresponding to the helicities $h=0,\pm 1$ of the $K^{*}$ meson: $$\begin{aligned} \frac{d\Gamma(B \rightarrow K^{*}_{h=0} \nu\bar{\nu})}{dx}= \frac{G_{F}^{2}\alpha^{2}M^{5}_{B}\mid V^{*}_{ts}V_{tb}\mid^{2}}{64\pi^{5}\sin^{4}\theta_{w}} \mid C^{SM4}_{11} \mid^{2}\frac{\sqrt{(1-x)^{2}-r^{2}_{K^{*}}}}{r^{2}_{K^{*}}(1+r^{2}_{K^{*}})^{2}} \cdot \nonumber\\ \mid (1+r^{2}_{K^{*}})^{2}(1-x-r^{2}_{K^{*}})A_{1}(q^{2})- 2[(1-x)^{2}-r^{2}_{K^{*}}]A_{2}(q^{2}) \mid^{2},\end{aligned}$$ $$\begin{aligned} \frac{d\Gamma(B \rightarrow K^{*}_{h=\pm 1} \nu\bar{\nu})}{dx}= \frac{G_{F}^{2}\alpha^{2}M^{5}_{B}\mid V^{*}_{ts}V_{tb}\mid^{2}}{64\pi^{5}\sin^{4}\theta_{w}} \mid C^{SM4}_{11} \mid^{2}\sqrt{(1-x)^{2}-r^{2}_{K^{*}}} \cdot\nonumber\\ \frac{2x-1+r^{2}_{K^{*}}}{(1+r^{2}_{K^{*}})^{2}} \mid 2 \sqrt{(1-x)^{2}-r^{2}_{K^{*}}}V(q^{2})\mp (1+ r^{2}_{K^{*}})^{2}A_{1}(q^{2}) \mid^{2}. \end{aligned}$$ From Eqs.(9,10), we see that the missing energy spectrum for $B \rightarrow K^{*} \nu\bar{\nu}$ contains three form factors: $V$, $A_{1}$, and $A_{2}$. In this work, in estimating the missing energy spectrum, we have used the results of \[16\]: $$\begin{aligned} F(q^{2})=\frac{F(0)}{1-a_{F}(q^{2}/M^{2}_{B})+b_{F}(q^{2}/M^{2}_{B})^{2}},\end{aligned}$$ and the relevant values of the form factors at $q^{2}=0$ are: $$\begin{aligned} A_{1}^{B \rightarrow K^{*}}(q^{2}=0)=0.34\pm 0.05,\quad\mbox{with}\quad a_{F}=0.6 \quad\mbox{and}\quad b_{F}=-0.023,\end{aligned}$$ $$\begin{aligned} A_{2}^{B \rightarrow K^{*}}(q^{2}=0)=0.28\pm 0.04,\quad\mbox{with}\quad a_{F}=1.18 \quad\mbox{and}\quad b_{F}=0.281,\end{aligned}$$ and $$\begin{aligned} V^{B \rightarrow K^{*}}(q^{2}=0)=0.46\pm 0.07,\quad\mbox{with}\quad a_{F}=1.55 \quad\mbox{and}\quad b_{F}=0.575.\end{aligned}$$ Note that all quoted errors, arising from the uncertainties in the b-quark mass, the variation of the Borel parameter, the wave functions, and the radiative corrections, are added in quadrature.
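To make the ingredients of Eqs.(9) and (11)-(14) concrete, here is a sketch of the longitudinal spectrum using the central form-factor values. The input constants are the ones listed in the numerical section, and $C_{11}$ is treated as a plain number (its SM-like default is illustrative), so this is an illustration rather than a full SM4 prediction:

```python
import math

def form_factor(q2, F0, aF, bF, M_B=5.28):
    """Parametrization of Eq. (11): F(q^2) = F(0)/(1 - a s + b s^2),
    with s = q^2 / M_B^2."""
    s = q2 / M_B ** 2
    return F0 / (1.0 - aF * s + bF * s ** 2)

def dGamma_dx_long(x, C11=1.57, M_B=5.28, M_Kst=0.892,
                   GF=1.17e-5, alpha=1.0 / 137.0, Vts_Vtb=0.045,
                   sin2_thw=0.23):
    """Missing-energy spectrum of Eq. (9) for a longitudinal K* (h = 0),
    in GeV, using the central form-factor values of Eqs. (12)-(13)."""
    r2 = (M_Kst / M_B) ** 2
    q2 = M_B ** 2 * (2.0 * x - 1.0 + r2)
    A1 = form_factor(q2, 0.34, 0.60, -0.023, M_B)
    A2 = form_factor(q2, 0.28, 1.18, 0.281, M_B)
    pref = (GF ** 2 * alpha ** 2 * M_B ** 5 * Vts_Vtb ** 2
            / (64.0 * math.pi ** 5 * sin2_thw ** 2))   # sin^4(theta_w)
    kin = math.sqrt(max((1.0 - x) ** 2 - r2, 0.0)) / (r2 * (1.0 + r2) ** 2)
    amp = ((1.0 + r2) ** 2 * (1.0 - x - r2) * A1
           - 2.0 * ((1.0 - x) ** 2 - r2) * A2)
    return pref * C11 ** 2 * kin * amp ** 2
```

The spectrum vanishes at the endpoint $x=1-r_{K^{*}}$, where the phase-space square root closes.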
Finally, to obtain quantitative results we need the value of the fourth generation CKM factor $V^{*}_{\acute{t}s}V_{\acute{t}b}$. To this aim, following \[17\], we use the experimental results for $BR(B \rightarrow X_{s}\gamma)$ together with $BR(B \rightarrow X_{c}e\bar{\nu_{e}})$ to determine the fourth generation CKM factor $V^{*}_{\acute{t}s}V_{\acute{t}b}$. However, in order to reduce the uncertainties arising from the b-quark mass, we consider the following ratio: $$\begin{aligned} R_{quark}=\frac{BR(B \rightarrow X_{s}\gamma)}{BR(B \rightarrow X_{c}e\bar{\nu_{e}})}.\end{aligned}$$ In the leading logarithmic approximation this ratio can be written in the compact form \[18\]: $$\begin{aligned} R_{quark}=\frac{\mid V^{*}_{ts}V_{tb} \mid ^{2}}{\mid V_{cb} \mid ^{2}}\frac{6\alpha}{\pi f(z)} \mid C^{SM4}_{7}(m_{b}) \mid ^{2},\end{aligned}$$ where $$\begin{aligned} f(z)=1-8z+8z^{3}-z^{4}-12z^{2}\ln(z), \quad\mbox{with}\quad z=\frac{m^{2}_{c,pole}}{m^{2}_{b,pole}},\end{aligned}$$ is the phase space factor of $BR(B \rightarrow X_{c}e\bar{\nu_{e}})$, and $\alpha= e^{2}/4\pi$. In the case of four generations there is an additional contribution to $B \rightarrow X_{s}\gamma$ from the virtual exchange of the fourth generation up-type quark $\acute{t}$. The Wilson coefficients of the dipole operators are given by: $$\begin{aligned} C^{SM4}_{7,8}(m_{b})=C^{SM}_{7,8}(m_{b})+\frac{ V^{*}_{\acute{t}s}V_{\acute{t}b}} {V^{*}_{ts}V_{tb}}C^{new}_{7,8}(m_{b}),\end{aligned}$$ where $C^{new}_{7,8}(m_{b})$ represent the contributions of $\acute{t}$ to the Wilson coefficients, and $V^{*}_{\acute{t}s}V_{\acute{t}b}$ is the fourth generation CKM factor which we need.
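The phase-space factor of Eq.(17) is straightforward to evaluate; the mass ratio below is an illustrative pole-mass value, not a fit:

```python
import math

def f_phase_space(z):
    """Phase-space factor of Eq. (17) entering BR(B -> X_c e nu):
    f(z) = 1 - 8z + 8z^3 - z^4 - 12 z^2 ln z."""
    return 1.0 - 8.0 * z + 8.0 * z ** 3 - z ** 4 - 12.0 * z ** 2 * math.log(z)

# Illustrative pole-mass ratio m_c/m_b ~ 1.4/4.8 gives f ~ 0.5
f = f_phase_space((1.4 / 4.8) ** 2)
```

As expected, $f(z)\to 1$ in the massless-charm limit and decreases as the charm mass grows.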
With these Wilson coefficients and the experimental results for the decays $BR(B \rightarrow X_{s}\gamma)=2.66 \times 10^{-4}$ and the semileptonic $BR(B \rightarrow X_{c}e\bar{\nu_{e}})$=$0.103\pm 0.01$ \[19,20\], one can obtain the fourth generation CKM factor $ V^{*}_{\acute{t}s}V_{\acute{t}b}$, for which there exist two solutions, a positive and a negative one \[17\]: $$\begin{aligned} ( V^{*}_{\acute{t}s}V_{\acute{t}b}) ^{\pm}=\biggl[\pm \sqrt{ \frac{R_{quark} \mid V_{cb}\mid ^{2}\pi f(z)}{6\alpha \mid V^{*}_{ts}V_{tb}\mid ^{2}}}-C^{(SM)}_{7}(m_{b}) \biggr] \frac{ V^{*}_{ts}V_{tb}}{C^{(new)}_{7}(m_{b})}.\end{aligned}$$ The values of $V^{*}_{\acute{t}s}V_{\acute{t}b}$ are listed in Table 1 \[7\]. A few comments about the numerical values of $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{\pm}$ are in order. From the unitarity condition of the CKM matrix we have $$\begin{aligned} V^{*}_{us}V_{ub}+V^{*}_{cs}V_{cb}+V^{*}_{ts}V_{tb}+V^{*}_{\acute{t}s}V_{\acute{t}b}=0.\end{aligned}$$ If the average values of the CKM matrix elements in the SM are used \[19\], the sum of the first three terms in Eq.(20) is about $7.6\times 10^{-2}$. Substituting the value of $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{(+)}$ from Table 1 \[7\], we observe that the sum of the four terms on the left-hand side of Eq.(20) is closer to zero than in the SM case, since $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{(+)}$ is very close to the sum of the first three terms, but with opposite sign. On the other hand, the value of $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{-}$ is about $10^{-3}$, an order of magnitude smaller than in the previous case, while the error on the sum of the first three terms in Eq.(20) is about $\pm 0.6\times 10^{-2}$; it is therefore easy to see that the value of $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{-}$ lies within this error range.
In summary, both $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{+}$ and $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{-}$ satisfy the unitarity condition of the CKM matrix; moreover, $\mid (V^{*}_{\acute{t}s}V_{\acute{t}b})^{-}\mid \leq 10^{-1}\times \mid (V^{*}_{\acute{t}s}V_{\acute{t}b})^{+}\mid$. Therefore, from our numerical analysis one cannot escape the conclusion that the $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{-}$ contribution to the physical quantities should be practically indistinguishable from the SM results, and our numerical analysis confirms this expectation. We now go on to put the above points in perspective. Numerical Analysis ================== In order to investigate the sensitivity of the missing-energy spectra and branching ratios of the rare $B \rightarrow K_{L}^{*} \nu\bar{\nu}$ and $B \rightarrow K_{T}^{*} \nu\bar{\nu}$ decays (where $K_{L}^{*}$ and $K_{T}^{*}$ stand for longitudinally and transversely polarized $K^{*}$-mesons, respectively) in SM4, the following values have been used as input parameters:\ $G_{F}=1.17\times 10^{-5}~ GeV^{-2}$, $\alpha =1/137$, $m_{b}= 5.0$ GeV, $M_{B}= 5.28$ GeV, $\mid V^{*}_{ts}V_{tb}\mid$=0.045, $M_{K^{*}}=0.892$ GeV, and the lifetime is taken as $\tau(B_{d})=1.56\times 10^{-12}$ s \[20\]; we have also run the calculations of Eqs.(9,10) adopting the two sets of $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{\pm}$ in Table 1 \[7\]. We present our numerical results for the missing-energy spectra and branching ratios in a series of graphs. In figures (1-4), we show the missing energy distributions $dBR(B \rightarrow K_{L}^{*} \nu\bar{\nu})/dx$ and $dBR(B \rightarrow K_{T}^{*} \nu\bar{\nu})/dx$ as functions of $x$, with $ \frac{1-r^{2}_{K^{*}}}{2}\leq x \leq 1-r_{K^{*}}$, for $m_{\acute{t}}$= 250 GeV and $m_{\acute{t}}$= 350 GeV. It can be seen there that, when $V^{*}_{\acute{t}s}V_{\acute{t}b}$ takes positive values, i.e. $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{-}$, the missing energy spectrum almost overlaps with that of the SM.
That is, the results in SM4 are the same as those in the SM. But in the second case, when the values of $V^{*}_{\acute{t}s}V_{\acute{t}b}$ are negative, i.e. $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{+}$, the curve of the missing energy spectrum is quite different from that of the SM. This can be clearly seen from figures (1-4). The enhancement of the missing energy spectrum increases rapidly, and the missing energy spectrum of the $K^{*}$ meson is almost symmetrical. In figures (5,6), the branching ratios $BR(B \rightarrow K_{L}^{*} \nu\bar{\nu})$ and $BR(B \rightarrow K_{T}^{*} \nu\bar{\nu})$ are depicted as functions of $m_{\acute{t}}$. Figures (5,6) show that for all values of $m_{\acute{t}}\geq 210$ GeV the branching ratios become greater than in the SM. The enhancement of the branching ratio grows rapidly with increasing $m_{\acute{t}}$. In this case, the fourth generation effects show up clearly. The reason is that $(V^{*}_{\acute{t}s}V_{\acute{t}b})^{+}$ is 2-3 times larger than $V^{*}_{ts}V_{tb}$, so that the last term in Eq.(5) becomes important, and it depends strongly on the $\acute{t}$ mass. Thus the effect of the fourth generation is significant. By contrast, in our approach the predictions for the ratio $BR(B \rightarrow K_{L}^{*} \nu\bar{\nu})/BR(B \rightarrow K_{T}^{*} \nu\bar{\nu})$, as well as for the transverse asymmetry $A_{T}$, $$\begin{aligned} A_{T}\equiv \frac{Br(B \rightarrow K_{h=-1}^{*} \nu\bar{\nu})-Br(B \rightarrow K_{h=+1}^{*} \nu\bar{\nu})}{Br(B \rightarrow K_{h=-1}^{*} \nu\bar{\nu})+Br(B \rightarrow K_{h=+1}^{*} \nu\bar{\nu})}\end{aligned}$$ are model-independent. In conclusion, the missing-energy spectra and branching ratio of the rare exclusive semileptonic $B \rightarrow K^{*} \nu\bar{\nu}$ decay have been investigated in the fourth generation model.
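As a trivial helper, the asymmetry of Eq.(21) can be computed directly from the two transverse rates (a sketch; the inputs are placeholder numbers):

```python
def transverse_asymmetry(br_h_minus, br_h_plus):
    """A_T of Eq. (21): the normalized difference between the h = -1
    and h = +1 transverse branching ratios."""
    return (br_h_minus - br_h_plus) / (br_h_minus + br_h_plus)
```

By construction $-1 \leq A_{T} \leq 1$, and any overall normalization (and hence the Wilson coefficient) cancels in the ratio, which is why the observable is model-independent here.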
The effect of the mass of a possible fourth generation fermion, the $\acute{t}$ quark, has been considered, and the sensitivity of the branching ratio and of the missing-energy spectra to the $\acute{t}$ quark mass is observed. Finally, note that the results for the $B\rightarrow \rho\nu\bar{\nu}$ decay can easily be obtained from $B\rightarrow K^{*}\nu\bar{\nu}$ with the following replacements in all equations: $V_{tb}V^{*}_{ts}\rightarrow V_{tb}V^{*}_{td}$ and $m_{K^{*}}\rightarrow m_{\rho}$. In obtaining these results, one must keep in mind that the values of the form factors for the $B\rightarrow \rho$ transition generally differ from those of the $B\rightarrow K^{*}$ transition. However, these differences must be within the range of $SU(3)$ violation, namely of order $(15-20)\%$. [99]{} M. S. Alam et al., CLEO Collaboration, Phys. Rev. Lett. 74 (1995) 2885. R. Ammar et al., CLEO Collaboration, Phys. Rev. Lett. 71 (1993) 674. S. Anderson et al., CLEO Collaboration, hep-ex/0106060 (2001). C.-S. Huang, W.-J. Huo, and Y.-L. Wu, Mod. Phys. A14 (1999) 2453, \[hep-ph/9911203\]. T. M. Aliev, A. Özpineci, and M. Savci, Nucl. Phys. B (2000) 275, \[hep-ph/0002061\]. Y. Dinçer, Phys. Lett. B505, (2001) 89, \[hep-ph/0012135\]. T. Barakat, hep-ph/0105116 (2001). T. Barakat, J. Phys. G: Nucl. Part. Phys. 24 (1998) 1903. G. Buchalla, and A. J. Buras, Nucl. Phys. B400 (1993) 225; G. Buchalla, A. J. Buras, and M. E. Lautenbacher, Rev. Mod. Phys. 68 (1996) 1125. R. Casalbuoni et al., Phys. Reports 281 (1997) 145. P. Colangelo, F. De Fazio, P. Santorelli, and E. Scrimieri, Phys. Rev. D53 (1996) 3672. W. Jaus, and D. Wyler, Phys. Rev. D41 (1991) 3405; D. Melikhov, N. Nikitin, and S. Simula, Phys. Lett. B410, (1997) 290, \[hep-ph/9704268\]. W. Roberts, Phys. Rev. D54 (1996) 863. T. M. Aliev, A. Özpineci, and M. Savci, Phys. Rev. D5 (1996) 4260. P. Ball, and V. M. Braun, Phys. Rev. D55 (1997) 5561. P. Ball, Fermilab-Conf-98/098-T, \[hep-ph/9803501\]; P. Ball, and V. M. Braun, Phys. Rev.
D58 (1998) 094016. W.-J. Huo, \[hep-ph/0006110\]. A. J. Buras, TUM-hep-316/98, \[hep-ph/9806471\]. M. S. Alam, Phys. Rev. Lett. 74 (1995) 2885. C. Caso et al., Particle Data Group, Eur.Phys. J. C3 (1998) 1. ![[]{data-label="fig 1"}](l1.ps){width="70.00000%"} ![[]{data-label="fig 2"}](l2.ps){width="70.00000%"} ![[]{data-label="fig 3"}](tr.ps){width="70.00000%"} ![[]{data-label="fig 4"}](tr1.ps){width="70.00000%"} ![[]{data-label="fig 5"}](l4.ps){width="70.00000%"} ![[]{data-label="fig 6"}](tr4.ps){width="70.00000%"} [^1]: electronic address: barakat@ciu.edu.tr
--- abstract: | Style transfer methods produce a transferred image which is a rendering of a content image in the manner of a style image. There is a rich literature of variant methods. However, evaluation procedures are qualitative, mostly involving user studies. We describe a novel quantitative evaluation procedure. One plots effectiveness (a measure of the extent to which the style was transferred) against coherence (a measure of the extent to which the transferred image decomposes into objects in the same way that the content image does) to obtain an EC plot. We construct EC plots comparing a number of recent style transfer methods. Most methods control within-layer gram matrices, but we also investigate a method that controls cross-layer gram matrices. These EC plots reveal a number of intriguing properties of recent style transfer methods. The style used has a strong effect on the outcome, for all methods. Using large style weights does not necessarily improve effectiveness, and can produce worse results. Cross-layer gram matrices easily beat all other methods, but some styles remain difficult for all methods. Ensemble methods show real promise. It is likely that, for current methods, each style requires a different choice of weights to obtain the best results, so that automated weight setting methods are desirable. Finally, we show evidence comparing our EC evaluations to human evaluations. author: - 'Mao-Chuang Yeh' - Shuai Tang - Anand Bhattad - | D. A. Forsyth\ University of Illinois at Urbana Champaign\ {myeh2, stang30, bhattad2, daf}@illinois.edu title: Quantitative Evaluation of Style Transfer --- Introduction ============ Style transfer methods apply the [*style*]{} from one example image to the [*content*]{} of another; for instance, one might render a camera image (the content) as a watercolor painting (the style).
Recent work has shown that highly effective style transfer can be achieved by searching for an image such that early layers of its CNN representation match the early layers of the style image and later layers match the later layers of a content image [@gatys2016image]. Content matching is achieved by comparing activations at each location of a feature map. Style matching is achieved by comparing summary statistics – in particular, the gram matrix – of the layers individually. Comparing gram matrices of individual layers ensures that small, medium and large patterns that are common in the style image appear with about the same frequency in the synthesized image, and that spatial co-occurrences between these patterns are about the same in the synthesized and style images. The current evaluation of style transfer methods is done primarily by visual inspection on a small set of style and content image pairs. To our knowledge, there are no quantitative protocols to evaluate the competence of style transfer apart from user studies [@li2018closed]. This may be because [*styles*]{} are subjective and more subtle to define than textures, so an effectiveness metric is hard to choose. Furthermore, quickly adjusting a method using user studies is difficult in practice. A quantitative evaluation, namely the edge coherence between [*content*]{} and stylized images, is investigated in [@li2018closed]. Novak and Nikulin noticed that cross-layer gram matrices reliably produce improvements in style transfer [@novak2016improving]. However, their work was an exploration of variants of style transfer rather than a thorough study of style summary statistics. Their primary suggestion is to add more layers for more features; they do not pursue cross-layer gram matrices, nor do they quantitatively compare variant modifications. In this paper, we offer a comprehensive quantitative evaluation procedure for style transfer methods.
We evaluate style transfers on two criteria. [**Effectiveness**]{} measures whether transferred images have the desired style, using the divergence between convolutional feature layer distributions of the synthesized and style images. [**Coherence**]{} measures whether the synthesized images respect the underlying decomposition of the content image into objects, using established procedures together with the Berkeley segmentation dataset BSDS500 [@arbelaez2011contour], and also using a novel measure of segment divergence. We use our measures to compare several style transfer methods quantitatively. In particular, we show that controlling cross-layer, rather than within-layer, gram matrices produces quantitative improvements in style transfer over the original method, due to an instability in the method proposed by Gatys [[*et al.* ]{}]{}(henceforth Gatys) [@gatys2016image], as described by Risser [[*et al.* ]{}]{} [@risser2017stable]. We construct explicit models of the symmetry groups for Gatys’ style loss and the cross-layer style loss (improving over Risser [[*et al.* ]{}]{}, who could not construct the groups). We discuss this in detail in section \[sec:symmetry\]. We show experimental evidence that the quantitative improvement over Gatys’ method is due to the difference in symmetry groups. We show qualitative evidence suggesting that these quantitative improvements manifest in real images. Related work {#sec:gatys} ============ Bilinear models are capable of simple image style transfer [@Tenenbaum2000] by factorizing style and content representations, but non-parametric methods like patch-based texture synthesis can deal with much more complex texture fields [@Efros2001]. Image analogies use a rendering of one image in two styles to infer a mapping from a content image to a stylized image [@Hertzmann2001]. Researchers have been looking for versatile parametric methods to control the style patterns transferred at different scales.
Adjusting filter statistics is known to yield texture synthesis [@debonet; @simoncelli]. Gatys [[*et al.* ]{}]{}demonstrated that producing neural network layers with particular summary statistics (i.e. Gram matrices) yielded effective texture synthesis [@NIPS2015_5633]. In a subsequent paper, Gatys [[*et al.* ]{}]{}achieved style transfer by searching for an image that satisfies both style texture summary statistics and content constraints [@gatys2016image]. This work has been much elaborated. The search can be replaced with a regression (at one scale [@Johnson2016Perceptual]; at multiple scales [@wang2016multimodal]; with cached [@chen2017stylebank] or learned [@dumoulin2016learned] style representations) or a decoding process that allows efficient adjusting of statistics  [@DBLP:journals/corr/UlyanovVL16; @huang2017arbitrary; @UST; @li2018closed]. Search can be sped up with local matching methods [@chen2016fast]. Methods that produce local maps (rather than pixels) result in photorealistic style transfer [@Shih2014; @Luan2017]. Style transfer can be localized to masked regions [@gatys2016controlling]. The criterion of matching summary statistics is a Maximum Mean Discrepancy condition [@li2017demystifying]. Style transfer has also been used to enhance sketches [@champandard2016semantic]. There is a comprehensive review in [@jing2017neural]. Gupta [[*et al.* ]{}]{} [@gupta2017characterizing] study instability in style losses for videos, where they use prior video frames to stabilize the current video frame by enforcing a temporal consistency loss. They demonstrate theoretically that the instability in Gatys’ method is linked to the size of the trace of the gram matrix. They support this argument with experimental evidence that larger traces result in higher instability. Gatys Method ------------ We review the original work of Gatys [[*et al.* ]{}]{}[@gatys2016image] in detail to introduce notation.
Gatys finds an image whose lower CNN layers match the corresponding layers of the style image and whose higher layers match the higher layers of the content image. Write $I_{s}$ (resp. $I_{c}$, $I_{n}$) for the style (resp. content, new) image, and $\alpha$ for some parameter balancing style and content losses ($L_s$ and $L_c$ respectively). Occasionally, we will write $I_n^m(I_c, I_s)$ for the image resulting from style transfer using method $m$ applied to the arguments. We obtain $I_{n}$ by finding $${\operatorname*{argmin}_{I_n}} L_c(I_{n}, I_{c})+\alpha L_s(I_{n}, I_{s})$$ Losses are computed on a network representation, with $L$ convolutional layers, where the $l$’th layer produces a feature map $f^l$ of size $H^l \times W^l \times C^l$ (resp. height, width, and channel number). We partition the layers into three groups (style, content and target). Then we reindex the spatial variables (height and width) and write $f^l_{k,p}$ for the response of the $k$’th channel at the $p$’th location in the $l$’th convolutional layer. The content loss $L_c$ is $$L_c(I_{n}, I_{c}) = \frac{1}{2}\sum_{c} \sum_{k,p} \norm{f^c_{k,p}(I_{n}) - f^c_{k,p}(I_{c})}^2$$ (where $c$ ranges over content layers). The [*within-layer gram matrix*]{} for the $l$’th layer is $$G_{ij}^l(I) = \sum_p \left[f_{i,p}^l(I)\right]\left[f_{j,p}^l(I)\right]^{T}.$$ Write $w_l$ for the weight applied to the $l$’th layer. Then $$L_s^l(I_{n}, I_{s}) = \frac{1}{4{N^l}^2{M^l}^2}\sum_{s}w_l \sum_{i,j}\norm{G^s_{ij}(I_{n})-G^s_{ij}(I_{s})}^2$$ where $s$ ranges over style layers. Gatys [[*et al.* ]{}]{}use Relu1\_1, Relu2\_1, Relu3\_1, Relu4\_1, and Relu5\_1 as style layers, and layer Relu4\_2 for the content loss, and search for $I_{n}$ using L-BFGS [@liu1989limited]. From now on, we write R51 for Relu5\_1, etc. Quantitative Evaluation of Style Transfer {#effcoh} ========================================= A style transfer method should meet two basic tests.
The first is [**effectiveness**]{} – does the method produce images in the desired style? The second is [**coherence**]{} – do the resulting images respect the underlying decomposition of the content image into objects? While final judgment should belong to the artist, we construct numerical proxies that can be used to disqualify methods from a final user study. It is essential to test both properties (excellent results on coherence can be obtained by simply not transferring style at all). In this paper, we offer one possible effectiveness statistic and two possible coherence statistics; however, we expect other reasonable choices could apply. [**Effectiveness:**]{} Assume that a style is applied to a content image. We would like to measure the extent to which the result reflects the style. There is good evidence that the distribution of features within lower feature layers of a CNN representation is an effective proxy to capture styles  [@bau2017network]. We expect that individual transferred images might need to have small biases in the distribution of feature layers to account for the content, but over many images the distribution of features should reflect the style distribution. In turn, a strong measure of effectiveness of style transfer for a particular image is the extent to which the distribution of feature layer values produced by the transferred image matches the corresponding distribution for the style image. In notation, write ${{\bf {f}}}^{l}_{p}(I)$ for the vector of responses of all channels at the $p$’th location in the $l$’th convolutional layer for image $I$. Now choose the $i$’th content image, the $j$’th style, and some method $m$. The distribution $P_{t, m}$ of ${{\bf {f}}}^{l}(I^m_n(I_c^i,I_s^j))$ should be similar to the distribution $P_s$ of ${{\bf {f}}}^{l}(I_s^j)$, with perhaps some smoothing resulting from the need to meet content demands. 
Testing whether two datasets come from the same, unknown, distribution in high dimensions remains tricky (the method of [@gretton2012kernel] is the current best alternative). We do not expect the distributions to be exactly the same; instead, we want to identify obvious (and so suspicious) large differences. The symmetry analysis below suggests that Gatys’ method will massively increase the variance of ${{\bf {f}}}^{l}(I^g_n(I_c^i,I_s^j))$. Observing major differences is straightforward with relatively crude tools. However, dimension is a problem. Even assuming that each distribution is normal, computing KL divergences is impractical, because the distributions are large and so the estimates of the covariance matrices are unreliable. However, we seek a statistic that is large with high probability when $P_{t,m}$ and $P_s$ are strongly different, and small with high probability when they are similar. A straightforward construction, after [@DeshpandeCVPR2018], is as follows. Write ${{\bf {v}}}_k$ for a random unit vector. We then compute $p_p^m={{\bf {v}}}_k^T{{\bf {f}}}_p^{l}(I^m_n(I_c^i,I_s^j))$ and $p_p^s={{\bf {v}}}_k^T{{\bf {f}}}_p^{l}(I_s^j)$. We assume that these scalar datasets are normally distributed, and compute the KL divergence $d({{\bf {v}}}_k)$ from the style distribution to the transferred distribution. We now average over $R$ random unit vectors and form $$E=-\log \left(\frac{1}{R} \sum_{k}d({{\bf {v}}}_k)\right)$$ Large values of this statistic are obtained if there are few random directions in which the two distributions differ; small values suggest there are many such directions and so that the style transfer may not have succeeded. For all our analysis, we choose a single set of 128 random unit vectors that is reused for all methods.
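A minimal NumPy sketch of this construction, with random arrays standing in for the feature vectors and helper names of our own choosing (each 1-D projection is fit with a Gaussian, as assumed above):

```python
import numpy as np

def kl_normal_1d(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for scalar Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def effectiveness(F_transfer, F_style, R=128, seed=0):
    """E = -log( (1/R) sum_k d(v_k) ): project features (rows = locations,
    columns = channels) onto R random unit vectors, fit 1-D normals, and
    average the style-to-transfer KL divergences."""
    rng = np.random.default_rng(seed)  # fixed seed: the same vectors are reused
    d = []
    for _ in range(R):
        v = rng.normal(size=F_style.shape[1])
        v /= np.linalg.norm(v)
        ps, pt = F_style @ v, F_transfer @ v
        d.append(kl_normal_1d(ps.mean(), ps.var(), pt.mean(), pt.var()))
    return -np.log(np.mean(d))

rng = np.random.default_rng(1)
style = rng.normal(size=(1000, 64))
good = rng.normal(size=(1000, 64))             # matches the style distribution
bad = 3.0 * rng.normal(size=(1000, 64)) + 1.0  # rescaled and shifted features
print(effectiveness(good, style) > effectiveness(bad, style))  # True
```

As the last line illustrates, a transfer whose features are rescaled and mean-shifted relative to the style (the failure mode predicted by the symmetry analysis) scores a much lower E than one matching the style distribution.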
[**Coherence:**]{} A style transfer method that eliminates object boundaries would make it hard for humans to interpret the output images, so a reasonable measure of a style transfer method is the extent to which it preserves object boundaries. We have two measures of coherence. Our [*boundary preservation*]{} measure computes the extent to which a boundary prediction algorithm produces true object boundaries for a given method, using the Berkeley segmentation dataset tests BSDS500 [@arbelaez2011contour]. Our [*object coherence*]{} measure computes the extent to which textures are (a) coherent within object boundaries and (b) distinct from object to object. [*Boundary preservation*]{} is treated as a straightforward application of existing methods to evaluate image boundaries. We choose a boundary predictor (we used the contour detection of [@arbelaez2011contour]); we apply the style transfer methods to images from the BSDS500, using multiple style images, to obtain synthesized images; we apply the boundary predictor to the synthesized images; and we compute the area under curve (AUC) of the probability of boundary (Pb) precision-recall curve for every synthesized image. A higher AUC suggests better boundary preservation. As section \[results\] shows, this measure is highly variable depending on the style that is transferred, and so we compute a per-transferred image AUC. This evaluation method is not perfect. Heavily textured styles may confuse the Pb evaluation without confusing human viewers, because the contour detector was not built with very aggressive texture fields in mind (compare typical style transfer images with the “natural” textures used to build BSDS500). In particular, we might have texture fields that are strongly coherent within each object region and different from region to region, but where the contour detector has great difficulty identifying object boundaries. 
An *object coherence* measure is easy to obtain using the BSDS500 dataset, because each image comes with a ground truth contour mask. We choose some layer $l$, and write ${{\bf {f}}}_{S,i}={{\bf {f}}}_{i}^{l}(I^g_n(I_c^i,I_s^j), S)$ (for brevity) for a feature vector in that layer within some segment $S$, and $\left\{{{\bf {f}}}_{S,i}\right\}$ for all such feature vectors. Write $\mu_S={{{\mathsf{mean}}\left( \left\{{{{\bf {f}}}_S}\right\}\right)}}$, and $\Sigma_b={{ {\small {\mathsf{Covmat}} \left( \left\{ {{\mu_S}} \right\} \right) }}}$ for the between class covariance matrix. Assume that each segment has the same covariance (homoscedasticity, a tolerable assumption given that the method tries to impose a gram matrix on the layer), and construct the within-class covariance for all locations in a segment $\Sigma_w={{ {\small {\mathsf{Covmat}} \left( \left\{ {{{{\bf {f}}}_{1,1}-\mu_1, \ldots {{\bf {f}}}_{n_S,n_f(S)}-\mu_{n_S}}} \right\} \right) }}}$. Now the largest generalized eigenvalue $\lambda^{\mbox{max}}$ of $(\Sigma_b, \Sigma_w)$ measures the dispersion of the region textures. Notice that $\lambda^{\mbox{max}}\geq 0$, and simple plots (supplementary materials) suggest this has a log-normal distribution over multiple style/content pairs. We therefore use $L_m=\log \lambda^{\mbox{max}}_{m}$ as a score to evaluate a method. Larger values suggest more successful separation of regions.\ [**Summarizing data with the EC plot.**]{} Comparing style transfer methods requires a summary of: the expected effectiveness of a method at any coherence; the effect of style and of weight choice on performance; and the extent to which evidence supports a difference between methods. We compare methods using an effectiveness-coherence (EC) plot, which plots: (a) a scatter plot of EC pairs obtained for various style/content/weight triplets; (b) a Loess regression curve of E regressed against C for these triplets; and (c) standard error regions for the regression.
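The object-coherence score $L_m$ above can be sketched in NumPy; this is our own implementation under the stated homoscedasticity assumption (a small ridge keeps $\Sigma_w$ invertible, and the synthetic data are illustrative stand-ins for segment features):

```python
import numpy as np

def object_coherence(features, labels, ridge=1e-8):
    """L_m = log(largest generalized eigenvalue of (Sigma_b, Sigma_w)).
    features: (n_locations, C); labels: a segment id per location.
    Larger values suggest better separated region textures."""
    segs = np.unique(labels)
    mus = np.stack([features[labels == s].mean(axis=0) for s in segs])
    Sigma_b = np.cov(mus.T)  # between-class covariance of the segment means
    centered = np.concatenate(
        [features[labels == s] - features[labels == s].mean(axis=0) for s in segs])
    Sigma_w = np.cov(centered.T)  # pooled within-class covariance
    C = features.shape[1]
    lam = np.linalg.eigvals(np.linalg.solve(Sigma_w + ridge * np.eye(C), Sigma_b))
    return np.log(lam.real.max())

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 500)
base = rng.normal(size=(1500, 8))                         # three segments, one texture
separated = base + 5.0 * rng.normal(size=(3, 8))[labels]  # distinct per-segment textures
print(object_coherence(separated, labels) > object_coherence(base, labels))  # True
```

When segment textures are well separated the score is large; when all segments share one texture distribution it is strongly negative.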
Effectiveness is measured per layer and we show layer 1 plots in section \[ExpPro\] (with others in the supplementary material). Coherence is measured either using per-image AUC of Pb (which does not depend on layers) or using $L_m$, *object coherence*; this depends on the layer (more plots in supplementary material). Cross-layer Style Transfer {#sec:Cross} ========================== Cross-layer style loss ---------------------- We consider a style loss that takes into account between layer statistics. The [**cross-layer, additive (ACG)**]{} loss is obtained as follows. Consider layers $l$ and $m$, both style layers, with decreasing spatial resolution. Write $\uparrow f^{m}$ for an upsampling of $f^m$ to $H^l\times W^l \times C^m$, and consider $$G_{ij}^{l,m}(I) = \sum_{p} \left[ f_{i,p}^l(I)\right]\left[\uparrow {f}_{j,p}^{m}(I)\right]^{T}.$$ as the cross-layer gram matrix. We can form a style loss $$L_s(I, I_{s}) = \sum_{(l, m)\in {\cal L}} w^{l}\sum_{ij} \norm{G^{l,m}_{ij}(I)-G^{l,m}_{ij}(I_s)}^2$$ (where ${\cal L}$ is a set of pairs of style layers). We can substitute this loss into the original style loss, and minimize as before. All results here used a [*pairwise descending*]{} strategy, where one constrains each layer and its successor (i.e. (R51, R41); (R41, R31); etc). Alternatives include an [*all distinct pairs*]{} strategy, where one constrains all pairs of distinct layers. Carefully controlling weights for each layer’s style loss is not necessary in the cross-layer gram matrix scenario. [*Constraint counts*]{} for cross-layer gram matrix methods are much lower than for within-layer methods. For a pairwise descending strategy, we have four cross-layer gram matrices, leading to control of $64\times 128+128\times 256+256\times 512+512\times 512 = 434176 $ parameters; compare within-layer gram matrices, which control $64^2+128^2+256^2+2\times512^2 = 610304$ parameters.
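A NumPy sketch of the cross-layer gram matrix (nearest-neighbor upsampling; the helper names and random stand-in feature maps are ours), together with a check of the constraint counts quoted above:

```python
import numpy as np

def upsample(f, H, W):
    """Nearest-neighbor upsampling of a (C, h, w) map to (C, H, W)."""
    C, h, w = f.shape
    return f[:, (np.arange(H) * h) // H][:, :, (np.arange(W) * w) // W]

def cross_layer_gram(f_l, f_m):
    """G^{l,m}_{ij} = sum_p f^l_{i,p} * (upsampled f^m)_{j,p}."""
    C_l, H, W = f_l.shape
    up = upsample(f_m, H, W)
    return f_l.reshape(C_l, -1) @ up.reshape(f_m.shape[0], -1).T

rng = np.random.default_rng(0)
f_l = rng.normal(size=(64, 32, 32))   # stand-in for a higher-resolution layer
f_m = rng.normal(size=(128, 16, 16))  # stand-in for its lower-resolution successor
print(cross_layer_gram(f_l, f_m).shape)  # (64, 128)

# Constraint counts: pairwise descending cross-layer vs within-layer.
chans = [64, 128, 256, 512, 512]  # channel widths at the five style layers
cross = sum(a * b for a, b in zip(chans, chans[1:]))
within = sum(c * c for c in chans)
print(cross, within)  # 434176 610304
```

With both layers at the same resolution the cross-layer gram matrix reduces to the within-layer one, so the sketch covers both losses.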
The experimental results suggest that the number of constraints is a poor way of evaluating a method. Symmetries and Stability {#sec:symmetry} ------------------------ Symmetries in a style transfer loss function occur when there is a transformation available that changes the style transferred image without changing the value of the loss function. Risser [[*et al.* ]{}]{}note instability in Gatys’ method; symptoms are poor and good style transfers of the same style to the same content with about the same loss value [@risser2017stable]. They supply evidence that this behavior can be controlled by adding a histogram loss, which breaks the symmetry. They do not write out the symmetry group, regarding it as too complicated ([@risser2017stable], pp. 4-6). Gupta [[*et al.* ]{}]{} [@gupta2017characterizing] make a strong experimental argument that instability in Gatys’ method is linked to the size of the trace of the gram matrix (larger trace is linked to more instability). One portion of the symmetry group is easy to construct. In particular, we consider affine maps acting on a feature layer, and consider the effect on that layer’s gram matrix and on the gram matrix of the next layer. Notice this does not exhaust the available symmetries (for example, a spatial permutation of features would not change the gram matrix). We have no construction currently for spatial symmetries. The supplementary materials give a construction for all affine maps that fix the gram matrix for a layer and its parent (deeper networks follow the same lines). It is necessary to assume the map from layer to layer is linear. This is not as restrictive as it may seem; the analysis yields a local construction about any generic operating point of the network. In summary, we have: [**Symmetry group, within layer gram matrices, two layers:**]{} Assume that the between-layer map is affine, with matrix ${{\cal {M}}}$ representing the linear component.
With various assumptions about the spatial statistics of layer 1 (supplementary materials), an element of the symmetry group is obtained by: choose ${{\bf {b}}}$ [*not*]{} of unit length, and such that ${{\cal {M}}}{{\bf {b}}}=0$; now factor ${{\cal {I}}}-{{\bf {b}}}{{\bf {b}}}^T={{\cal {A}}}{{\cal {A}}}^T$; choose ${{\cal {U}}}$ orthonormal. Then $({{\bf {b}}}, {{\cal {A}}}{{\cal {U}}})$ is a symmetry of the gram matrices in [*both layers*]{} (i.e., the action of this element on layer 1 fixes [*both*]{} gram matrices). In particular, mapping all feature vectors ${{\bf {f}}}_p^1$ to ${{\cal {A}}}{{\cal {U}}}{{\bf {f}}}_p^1+{{\bf {b}}}$ will result in no change in the gram matrix at either layer 1 or layer 2; but the underlying image may change a lot, because ${{\cal {A}}}$ can rescale features and features are shifted. [**Symmetry group, between layer gram matrix, two layers:**]{} Assume that the between-layer map is affine, with matrix ${{\cal {M}}}$ representing the linear component. With various assumptions about the spatial statistics of layer 1 (supplementary materials), the symmetry group is obtained by: choose ${{\cal {U}}}$ orthonormal. Then $({{\cal {U}}})$ is a symmetry of the between layer gram matrix (i.e., the action of this element on layer 1 fixes the between layer gram matrix). In particular, mapping all feature vectors ${{\bf {f}}}_p^1$ to ${{\cal {U}}}{{\bf {f}}}_p^1$ will result in no change in the gram matrix at either layer 1 or layer 2; we expect much less change in the underlying image. The between-layer gram matrix loss has very different symmetries from Gatys’ (within-layer) method. In particular, the symmetry of Gatys’ method rescales features while shifting the mean (because in this case ${{\cal {A}}}$ can contain strong rescalings with the right choice of ${{\bf {b}}}$). For the cross-layer loss, the symmetry cannot rescale, and cannot shift the mean.
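The within-layer symmetry can be checked numerically. The demo below is our own construction under one concrete reading of the assumptions (features spatially white with zero mean, and $\|{{\bf {b}}}\| < 1$ so that ${{\cal {I}}}-{{\bf {b}}}{{\bf {b}}}^T$ factors); it verifies that ${{\bf {f}}}_p \mapsto {{\cal {A}}}{{\cal {U}}}{{\bf {f}}}_p + {{\bf {b}}}$ fixes the layer's gram matrix while moving the features. The layer-2 condition ${{\cal {M}}}{{\bf {b}}}=0$ is not exercised here.

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 8, 64  # channels, spatial locations

# A feature map with the assumed spatial statistics: zero spatial mean and
# white unit covariance, i.e. sum_p f_p = 0 and sum_p f_p f_p^T = N * I.
Q, _ = np.linalg.qr(np.column_stack([np.ones(N), rng.normal(size=(N, C))]))
F = np.sqrt(N) * Q[:, 1:].T  # shape (C, N); every row is orthogonal to ones

def gram(f):
    return f @ f.T

# A symmetry element: b with |b| < 1, A = (I - b b^T)^(1/2), U orthonormal.
b = rng.normal(size=C)
b *= 0.9 / np.linalg.norm(b)
beta = b @ b
A = np.eye(C) - ((1.0 - np.sqrt(1.0 - beta)) / beta) * np.outer(b, b)
U, _ = np.linalg.qr(rng.normal(size=(C, C)))

G_before = gram(F)
G_after = gram(A @ U @ F + b[:, None])
print(np.allclose(G_before, G_after))          # True: the gram matrix is fixed
print(np.allclose(F, A @ U @ F + b[:, None]))  # False: the features moved
```

The closed form for ${{\cal {A}}}$ uses the fact that ${{\cal {I}}}-{{\bf {b}}}{{\bf {b}}}^T$ has a symmetric square root that shrinks only the ${{\bf {b}}}$ direction, which is exactly the rescaling the text warns about.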
This implies that, if one constructs numerous style transfers with the same style using Gatys’ method, the variance of the layer features should be much greater than that observed for the between layer method. Furthermore, increasing style weights in Gatys’ method should result in poor style transfers, by exaggerating the effects of the symmetry. Finally, our construction casts light on part of Gupta [[*et al.* ]{}]{}’s observation linking large trace to instability. A small trace in the gram matrix implies many small eigenvalues. In turn, rescaling directions with small eigenvalues will change little unless very large scales are applied; but these correspond to very large shifts in the mean, which are difficult to obtain with current random start methods. However, a large trace in the gram matrix implies that there are many directions in which a small shift in the mean, combined with a small (but visible, because the eigenvalue is big) rescale from ${{\cal {A}}}$, will lead to real changes, and so there is greater instability. Note that this analysis is limited by the fact that strong scales and shifts will likely cause ReLUs to change state, by the fact that it takes no account of the content loss, and by the absence of spatial symmetries. But the analysis exposes the fact that quite large changes in early layers will leave the style loss unchanged. Since we expect that at least some large changes in early layers will produce very little change in content layers (otherwise image classification applications would not work), the analysis is a fair rough guide. Experimental observations are consistent with the symmetry theory (figure \[aggweight\]; and section \[results\]). Experimental Procedures\[ExpPro\] ================================= [**Comparison data:**]{} It is important to do comparisons on a wide range of styles and contents. We have built two datasets, using 50 style images (see supplementary) and the 200 content images from the BSDS500 test set.
The [*main set*]{} is used for most experiments, and was obtained by: take 20 evenly spaced weight values in the range 50-2000; then, for each weight value, choose 15 style/content pairs uniformly and at random. The [*aggressive weighting set*]{} is used to investigate the effect of extreme weights on Gatys’ method and the ACG method. This was built by taking 20 weight values sampled uniformly and at random between 2000 and 10000; then, for each weight value, choosing 15 style/content pairs uniformly and at random. For each method, we then produced 300 style transfer images, one for each weight-style-content triplet. For UST [@UST], since the maximum weight is one, we linearly map *main set* weights to the zero-one range. Our samples are sufficient to produce clear differences in standard error bars and evaluate different methods  [@Forsyth2018]. [**Methods.**]{} We compare the following methods: [*Gatys*]{} ([@gatys2016image] and described above); we use the implementation by Gatys [@leongatys]. [*ACG:*]{} We used a [*pairwise descending*]{} strategy with a pre-trained VGG-16 model. We use R11, R21, R31, R41, and R51 for style loss, and R42 for the content loss for style transfer.\ [*Cross-layer, multiplicative (MCG):*]{} A natural alternative to combine style and content losses is to multiply them; we form $$L^m(I_n) = L_c(I_n, I_c) * L_s(I_n, I_s).$$ This provides a dynamic weighting between content loss and style loss during optimization. Although this loss function may seem unreasonable, we find it performs competitively on a wide range of our EC plots (see supplementary). [*Gatys, with histogram loss:*]{} as advocated by [@risser2017stable], we attach a histogram loss to Gatys’ method.
[*ACG, with histogram loss:*]{} We use the implementation of  [@abhiskk] for histogram adjustment.\ [*Universal style transfer (UST):*]{} from [@UST], and its PyTorch implementation [@SunshineAtNoon];\ [*Ensemble Q:*]{} for each weight-style-content triple, we choose the result that produces the best $Q=E*C$ over all methods.\ [*Ensemble E:*]{} for each weight-style-content triple, we choose the result that produces the best $E$ over all methods. Results\[results\] ================== [**ACG is better.**]{} Figure \[ecxvg\] shows an EC plot of layer 1 comparing style transfers using the cross-layer additive loss (ACG) with transfers using five other non-ensemble losses. Note that the cross-layer loss achieves much higher average E for a given value of C. In various parts of the AUC range, the additive cross-layer loss is somewhat outperformed by the multiplicative version, but in the high AUC regime it is much better. The difference to every other method ranges from one to four standard errors over the range, hence our sample size of 300 is large enough [@Forsyth2018]; the ACG method is clearly significantly better. [**Control.**]{} Figure \[control\] shows an EC plot of two controls. In the first, the resized style image is reported as a transfer; this results, as expected, in high values of E but low values of C. There is significant variance in E, an effect due to resizing. However, the range of E’s shows the size of the effect of resizing on E; on average, E is slightly greater than 4. In the second, the content image is reported as a transfer; the results suggest that obtaining high C values (though not uniformly; some images remain hard to segment) may come at the cost of E values near zero (compare the E values for the two controls). This shows that investigating E is necessary for all methods. [**Histogram losses.**]{} These improve Gatys’ method (compare green/light green on figure \[ecxvg\]) only at extreme weights and low C.
This may be an effect of the loss of symmetry, explained below. They also weaken the performance of cross-layer style transfer (compare red/pink on figure \[ecxvg\], about three standard errors). [**Ensemble methods.**]{} do not outperform cross-layer style transfer (see figure \[ensmethl\]). As comparing the red and cyan curves suggests, choosing the result with the best E is essentially the same as using the cross-layer style transfer result. The yellow curve shows the ensemble Q method, which works somewhat better than cross-layer style transfer in low C regimes, and somewhat worse in high C regimes. This suggests that more sophisticated ensemble methods might yield even better performance. [**Aggressive weights.**]{} One might speculate that Gatys’ method underperforms because the weight regime is inappropriate. Figure \[aggweight\] compares Gatys’ method to cross-layer style transfer. Notice that large weights cause serious trouble for Gatys’ method. We believe that this is because large weights on the style loss cause the symmetry in Gatys’ loss to manifest itself, resulting in significant rescaling of features. In particular, Gatys’ method cannot achieve high E values, because symmetry in the style loss produces feature distributions that are quite different from those desired. Furthermore, larger weights on the style loss [*do not*]{} produce better style transfers (large diamonds toward the bottom left of the plot). Instead, by exaggerating the effect of the symmetry, large weights produce transfers that both have low E (poor transfer) [*and*]{} low C (do not respect the original segmentation). **User Study.** We obtained preference data from Amazon Mechanical Turk (AMT) workers. We use 300 *main set* image pairs from ACG and Gatys results; each image pair is annotated by 10 workers in total, split into two groups.
Detailed worker-task statistics are presented in the supplementary materials. Mechanical Turk worker data is extremely noisy, and so difficult to plot helpfully. We distinguish between 44 master workers (who are known to be quite reliable) and 49 generic workers (about whom we know nothing). We analyze using a logistic regression of preference for cross-layer (resp. Gatys) against E values for Gatys and for cross-layer and C values for Gatys and for cross-layer. Our analysis supports the idea that E and C predict worker preferences, but that there are other likely sources of preference, too (or that better measures of E and C would help). We have two datasets: one using master workers only, and the other using workers of any type.\ [*Analysis, master workers:*]{} all four regression coefficients and the intercept are different from zero with strong statistical significance (for each coefficient, $p<0.025$). Weights produced by this regression are: Intercept: 0.3409; $E_{Gatys}=-0.1484$; $E_{ACG}=0.1015$; $C_{Gatys}=-3.4369$; $C_{ACG}=3.8982$.\ [*Conclusion, master workers:*]{} master workers slightly prefer cross-layer images over Gatys images, whatever E and C; worker preference can be predicted by looking at E and C; in particular, master workers tend to prefer transfers with higher E and C values (if cross-layer has higher E and C, it will tend to be preferred, etc). The difference in weight size is roughly proportional to the relative scales (a factor of about 10), but one measure may be more important to workers than others. The regression has relatively high deviance (and the cross-validated AUC of predictions by this regression is approximately 0.57, depending on the regularization constant), meaning that other factors may explain preferences, too.\ [*Analysis, generic workers:*]{} three of four regression coefficients are different from zero with strong statistical significance (for each coefficient, $p<0.02$, but the intercept could be zero).
Significant weights produced by this regression are: $E_{ACG}=0.076$, $C_{Gatys}=-3.38$, $C_{ACG}=3.77$. However, a cross-validated L1 regularized logistic regression obtains an average AUC on held-out predictions of about 0.85 (depending on the regularization coefficient) using only the $E_{Gatys}$ and $C_{ACG}$ coefficients; this suggests that a preference for cross-layer images is predicted by large values of E and C on Gatys images.\ [*Conclusion, generic workers:*]{} worker preference can be predicted by looking at C, and checking whether $E_{ACG}$ is large; in particular, workers tend to prefer transfers with a higher C value (if cross-layer has higher C, it will tend to be preferred, etc). The effect of E is small. The regression has relatively high deviance, and there is good evidence of odd behavior by workers (who prefer cross-layer images when E and C are larger for Gatys), meaning there may be workers who are not attending to the task. [**The experimental effects of symmetry:**]{} Our experimental evidence suggests the symmetries manifest themselves in practice. Gatys’ method significantly underperforms the cross-layer method by producing a lower E statistic for any C statistic. This suggests that the variance implied by the larger symmetry group is actually appearing. In particular, Gatys’ symmetry group allows rescaling of features and shifting of their mean, which will cause the feature distribution of the transferred image to move away from the feature distribution of the style, causing the lower E statistic. Furthermore, Gatys’ method has a strong tendency to produce very poor transfers when offered aggressive weighting of the style loss. We believe this is likely because large rescaling effects are suppressed when the style loss has a smaller weight, because large rescaling will eventually lead to a change in the content loss.
But when the style loss has a high weight, the changes in the content loss are of small significance, and very significant variations can appear in the early layers, forcing down the E value; the C value goes down because little weight is placed on the content loss. This effect does not appear for the cross-layer method, because rescaling isn’t possible for those symmetries. Conclusion ========== In this paper we present a novel approach to quantitatively evaluate style transfer methods. Our metrics are built with two factors in mind. Effectiveness: a good style transfer should preserve the desired characteristics of the original style. Coherence: a style transfer method should respect the content’s underlying decomposition into object segments. We applied various style transfer methods built on either within-layer or cross-layer gram matrices, and compared the stylized images both quantitatively, using the proposed EC plots, and qualitatively, by showing their results as well as conducting a user study. Using this analytical framework, we confirm that Gatys’ method is troubled by its symmetry group, especially under aggressive style weights. The cross-layer method, which has a very different symmetry group, is less compromised and thus achieves higher EC scores. This conclusion is supported by the preferences of master AMT workers in our user study. References {#references .unnumbered} ==========
--- author: - Ying Zhang date: 'October 3, 2006' title: 'Representing Primes as $x^2 + 5y^2$: An Inductive Proof that Euler Missed' --- HISTORICAL INTRODUCTION {#s:1} ======================= In this note we present an elementary inductive proof of Euler’s assertion that every prime of the form $20n+1$ or $20n+9$ is a sum $x^2 + 5y^2$; Euler himself could have obtained it had he refined a bit his proof of Fermat’s theorem that every prime of the form $4n+1$ is a sum of two squares. Here and throughout this note all letters are assumed to stand for [*nonnegative*]{} integers, unless otherwise specified. It is our pleasure to start by briefly reviewing the story told by Cox in the nice book [@cox1989book]. Pierre de Fermat (1601–1665), who had done pioneering work on representing primes as $x^2 + ny^2$, stated, but did not write down a proof, that he had proved by his favorite method of infinite descent the following: \(i) Every prime $p \equiv 1 \,\,{\rm mod}\,\, 4$ is a sum $x^2 + y^2$. \(ii) Every prime $p \equiv 1, 3 \,\,{\rm mod}\,\, 8$ is a sum $x^2 + 2y^2$. \(iii) Every prime $p \equiv 1 \,\,{\rm mod}\,\, 3$ is a sum $x^2 + 3y^2$. He also conjectured but could not prove that \(iv) The product of two primes, each of which is $\equiv 3, 7 \,\,{\rm mod}\,\, 20$, is a sum $x^2 + 5y^2$. Leonhard Euler (1707–1783) heard of Fermat’s results and spent 40 years proving (i)–(iii) and considering their generalizations. This finally led him to the discovery of quadratic reciprocity, although he could not provide a solid proof for it. By working out numerous examples on representing primes as $x^2+ny^2$ for various $n$, he discovered more patterns. Some of his discoveries which he could not prove are: \(v) Every prime $p \equiv 1, 9 \,\,{\rm mod}\,\, 20$ is a sum $x^2 + 5y^2$. \(vi) For every prime $p \equiv 3, 7 \,\,{\rm mod}\,\, 20$, $2p$ is a sum $x^2 + 5y^2$.
\(vii) A prime $p=x^2 + 27y^2 $ if and only if $ p \equiv 1 \,\,{\rm mod}\,\, 3$ and 2 is a cubic residue modulo $p$. \(viii) A prime $p=x^2 + 64y^2 $ if and only if $ p \equiv 1 \,\,{\rm mod}\,\, 4$ and 2 is a biquadratic residue modulo $p$. Joseph-Louis Lagrange (1736–1813) and Adrien-Marie Legendre (1752–1833) later developed the form theory as well as the genus theory to prove (iv)–(vi). Indeed, they could prove (v) and that (v$'$) Every prime $p \equiv 3, 7 \,\,{\rm mod}\,\, 20$ is a sum $2x^2 + 2xy + 3y^2$ (where one of $x,\, y$ may be negative). Then (iv) and (vi) follow immediately from the following two identities: $$\begin{aligned} (2x^2 + 2xy +3y^2)(2a^2 + 2ab + 3b^2)\!\!\!\!\!&=&\!\!\!\! (2ax + bx +ay +3by)^2 + 5(bx - ay)^2; \\ 2(2x^2 + 2xy +3y^2)\!\!\!\!\!&=&\!\!\!\!(2x + y)^2 + 5y^2.\end{aligned}$$ Both Legendre and Lagrange, however, could prove neither (vii) nor (viii). It was Carl Friedrich Gauss (1777–1855) who finally tackled (vii) as well as (viii) using his cubic and biquadratic reciprocities. And, years before this, it was also Gauss who gave the first rigorous proof of quadratic reciprocity. The interested reader is referred to [@cox1989book] to enjoy the rest of the story. What we shall show is that in fact Euler [*could*]{} have proved (iv)–(vi) had he just refined his proof of (i)–(iii), and hence the above told story would be somewhat different. A REVIEW OF EULER’S PROOF {#s:2} ========================= Let us first briefly review Euler’s proof. According to \[1\], a version of Euler’s proof of (i) goes as follows. For prime $p \equiv 1 \,\,{\rm mod}\,\, 4$, there is an $r < p$ such that $pr = x^2 + 1$. For each prime factor $q$ of $r$, since $-1$ is a quadratic residue modulo $q$, it follows that either $q \equiv 1 \,\,{\rm mod}\,\, 4 $ or $q=2$. We assume by induction that each such $q$ is a sum of two squares. Then a cancelation lemma, Lemma \[lem:1\] below, enables one to cancel the prime factors of $r$ one by one.
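Both the Lagrange–Legendre identities and Euler's claims (v) and (vi) are easy to check by machine for small values; a brute-force sketch (the helper names are ours, and this is a sanity check, not a proof):

```python
def reps_1_5(n):
    """All (x, y) with x, y >= 0 and x^2 + 5 y^2 == n."""
    out = []
    y = 0
    while 5 * y * y <= n:
        x = int((n - 5 * y * y) ** 0.5)
        for xx in (x, x + 1):  # guard against float rounding of the sqrt
            if xx * xx + 5 * y * y == n:
                out.append((xx, y))
        y += 1
    return out

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# The two identities behind (iv) and (vi), checked on a grid of integers.
for x in range(-6, 7):
    for y in range(-6, 7):
        for a in range(-6, 7):
            for b in range(-6, 7):
                q1 = 2 * x * x + 2 * x * y + 3 * y * y
                q2 = 2 * a * a + 2 * a * b + 3 * b * b
                assert q1 * q2 == (2*a*x + b*x + a*y + 3*b*y) ** 2 + 5 * (b*x - a*y) ** 2
        assert 2 * (2 * x * x + 2 * x * y + 3 * y * y) == (2 * x + y) ** 2 + 5 * y * y

# Claims (v) and (vi), checked for all primes below 2000.
for p in primes_up_to(2000):
    if p % 20 in (1, 9):
        assert reps_1_5(p)          # (v): p = x^2 + 5 y^2
    if p % 20 in (3, 7):
        assert reps_1_5(2 * p)      # (vi): 2p = x^2 + 5 y^2
print("identities and claims (v), (vi) verified")
```

For instance, `reps_1_5(29)` finds $29 = 3^2 + 5\cdot 2^2$ (note $29 \equiv 9 \,\,{\rm mod}\,\, 20$).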
As a result, one obtains a representation of $p$ as a sum of two squares. By a $(1, n)$-[*representation*]{} we mean an expression of the form $x^2 + ny^2$. The following lemma appears as Lemma 1.4 in [@cox1989book]. Here, for the convenience of the reader, we include the proof given in [@cox1989book] in a slightly shortened form. \[lem:1\] Suppose $p$ and $pr$ each has a $(1,n)$-representation, where $p$ is a prime. Then $r$ has a $(1,n)$-representation. Suppose $p = a^2 + nb^2$, and $pr = x^2 + ny^2$. Then $$\begin{aligned} p^{2}r = (ax \pm nby)^2 + n(ay \mp bx)^2.\end{aligned}$$ Note that $p \,\mid \,(ay - bx)(ay + bx)$ since $$\begin{aligned} (ay - bx)(ay + bx) = (a^2 + nb^2)y^2 - b^2(x^2 + ny^2).\end{aligned}$$ It follows that either $p \mid ay - bx$ or $p \mid ay + bx$; correspondingly, $p \mid ax + nby$ or $p \mid ax - nby$. Consequently, we have one of the following holds: $$\begin{aligned} && r = ((ax + nby)/p)^2 + n ((ay - bx)/p)^2, \\ && r = ((ax - nby)/p)^2 + n ((ay + bx)/p)^2. \end{aligned}$$ This proves Lemma \[lem:1\]. It is not hard to see that Euler’s proof also applies to the cases of $x^2 + 2y^2$ and $x^2 + 3y^2$ after minor modifications. This is because in the representation $pr = x^2 + 2$ (resp. $pr = x^2 + 3$) all of the prime factors of $r$ (with one or two exceptions which are easy to deal with) are of the desired type, so we can again use Lemma \[lem:1\] and the inductive hypothesis to cancel them one by one. However, it is [*not*]{} the case for $x^2 + 5y^2$. Note that for $pr = x^2 + 5$, where we may assume that $5 \nmid r$, each prime factor $q$ of $r$ is such that either $q \equiv 1, 9, 3, 7 \,\,{\rm mod}\,\, 20$ or $q=2$, hence Lemma \[lem:1\] is not enough to cancel all prime factors $q$ of $r$; we have to deal with those $q \equiv 3, 7 \,\,{\rm mod}\,\, 20$ and $q=2$.
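Lemma 1's cancelation step is fully effective; a sketch of it as an algorithm (our own helper, following the proof's case split; $u$ or $v$ may come out negative):

```python
def cancel(p, a, b, x, y, n):
    """Lemma 1 as an algorithm: given a prime p = a^2 + n b^2 and
    p*r = x^2 + n y^2, return (u, v) with r = u^2 + n v^2."""
    assert p == a * a + n * b * b
    r, rem = divmod(x * x + n * y * y, p)
    assert rem == 0
    if (a * y - b * x) % p == 0:
        u, v = (a * x + n * b * y) // p, (a * y - b * x) // p
    else:
        assert (a * y + b * x) % p == 0  # the other case of the lemma
        u, v = (a * x - n * b * y) // p, (a * y + b * x) // p
    assert r == u * u + n * v * v
    return u, v

# Example with n = 5: p = 29 = 3^2 + 5*2^2 and 29*21 = 609 = 22^2 + 5*5^2,
# so the lemma yields a (1,5)-representation of r = 21.
print(cancel(29, 3, 2, 22, 5, 5))  # → (4, -1), i.e. 21 = 4^2 + 5*(-1)^2
```

The divisions are exact precisely because of the divisibility argument in the proof.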
Indeed we [*do*]{} have such a cancelation lemma (Lemma \[lem:2\] in §\[s:3\]) which enables us, with the help of a small trick, to cancel such factors pair by pair under the inductive hypothesis that (iv)–(vi) hold for all primes $q < p$ such that $q \equiv 1, 9, 3, 7 \,\,{\rm mod}\,\, 20$ or $q = 2$. It turns out that we must prove (iv)–(vi) [*simultaneously*]{} by induction. The rest of this note consists of the detailed statements and proofs. A SECOND CANCELATION LEMMA {#s:3} ========================== For the convenience of further exposition, we make the following definition. [**Definition.**]{} A $(1, n)$-representation is said to be [*nontrivial*]{} if both of $x$ and $y$ are nonzero; it is [*proper*]{} if $x$ and $y$ are relatively prime. [**Remarks.**]{} The following items ([**a**]{})–([**f**]{}) can be easily checked. ([**a**]{}) A proper $(1, n)$-representation is automatically nontrivial unless it equals $1$ or $n$. ([**b**]{}) A $(1, n)$-representation of a prime $p$, where $p \neq n$, is automatically proper and nontrivial. ([**c**]{}) A nontrivial $(1, n)$-representation of the product of two primes is always proper. ([**d**]{}) If $p$ is a prime such that $p \nmid r$ and $p \nmid n$, then any $(1, n)$-representation of $pr$ is nontrivial. ([**e**]{}) There is the following very useful [*Euler identity*]{} which expresses the product of two $(1, n)$-representations as a $(1, n)$-representation in two ways: $$\begin{aligned} \label{eqn:eulerid} (a^2 + nb^2)(x^2 + ny^2) = (ax \pm nby)^2 + n(ay \mp bx)^2.\end{aligned}$$ ([**f**]{}) If an odd $s$, where $5 \nmid s$, has a nontrivial, proper $(1,5)$-representation $s=a^2 + 5b^2$, then $$\begin{aligned} s^2 = (a^2 - 5b^2)^2 + 5(2ab)^2\end{aligned}$$ is a nontrivial, proper $(1,5)$-representation, since every prime common factor of $a^2 - 5b^2$ and $2ab$ is a common factor of $a$ and $b$. 
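These remarks and the cancelation step of Lemma \[lem:1\] are easy to exercise numerically. The following Python sketch (the function names are ours, not part of the note) checks the Euler identity (\[eqn:eulerid\]), Remark ([**f**]{}) on a small example, and one instance of Lemma \[lem:1\]:

```python
from math import gcd

N = 5  # the case x^2 + 5 y^2 studied in this note

# Euler identity (e): both sign choices represent the product
for (a, b, x, y) in [(1, 2, 3, 4), (2, 3, 1, 5), (3, 2, 13, 1)]:
    lhs = (a*a + N*b*b) * (x*x + N*y*y)
    assert lhs == (a*x + N*b*y)**2 + N*(a*y - b*x)**2
    assert lhs == (a*x - N*b*y)**2 + N*(a*y + b*x)**2

# Remark (f): 21 = 1^2 + 5*2^2 gives 21^2 = (1^2 - 5*2^2)^2 + 5*(2*1*2)^2, still proper
assert 21**2 == (1 - 5*4)**2 + N*(2*1*2)**2 and gcd(abs(1 - 5*4), 2*1*2) == 1

def cancel_prime(p, a, b, x, y, n):
    """Lemma 1: from p = a^2 + n b^2 and p*r = x^2 + n y^2, return (u, v) with r = u^2 + n v^2."""
    for s in (1, -1):
        u, v = a*x + s*n*b*y, a*y - s*b*x
        if u % p == 0 and v % p == 0:
            return abs(u) // p, abs(v) // p
    raise AssertionError("one of the two sign choices must work")

# 29 = 3^2 + 5*2^2 and 29*6 = 174 = 13^2 + 5*1^2, so 6 must be u^2 + 5 v^2
assert cancel_prime(29, 3, 2, 13, 1, N) == (1, 1)  # 6 = 1^2 + 5*1^2
```

Only one of the two sign choices need be divisible by $p$, exactly as in the proof of the lemma.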
Our second cancelation lemma, Lemma \[lem:2\] below, enables us to cancel $q^2$, where $q$ is an odd prime and $q^2$ has a nontrivial $(1, n)$-representation, from a given $(1, n)$-representation of $q^2r$ and obtain a $(1, n)$-representation of $r$. \[lem:2\] Suppose $q^2$ has a nontrivial $(1, n)$-representation, where $q$ is an odd prime. If $q^2 r$ has a $(1, n)$-representation, then $r$ has a $(1, n)$-representation. Moreover, if $q^2 r$ has a nontrivial, proper $(1, n)$-representation and $r \neq 1,n$, then $r$ has a nontrivial, proper $(1, n)$-representation. Suppose $q^2 = a^2 + nb^2$ is nontrivial. Then it is proper by Remark ([**c**]{}). Let $q^2 r = x^2 + ny^2$. From the Euler identity (\[eqn:eulerid\]), we have $$\begin{aligned} q^4 r = (ax \pm nby)^2 + n(ay \mp bx)^2.\end{aligned}$$ Then $q^2 \mid (ay - bx)(ay + bx)$ as in the proof of Lemma \[lem:1\]. First, suppose $q^2 \nmid ay - bx$ and $q^2 \nmid ay + bx $. Then we must have $q \mid ay - bx$ and $q \mid ay + bx$. Hence $q \mid 2ay$. Since $q$ is odd, $q \mid ay$. We then have $q \mid y$ since $q \, \nmid \, a$ (otherwise $a = 0$, or $a = q$ and $b = 0$, a contradiction). Similarly, $q \mid x$. Consequently, we have $$r = (x/q)^2 + n(y/q)^2.$$ In this case $q^2 r = x^2 + ny^2$ is not proper. Now we may suppose $q^2 \mid ay - bx$ or $q^2 \mid ay + bx$. Then $q^2 \mid ax + nby$ or $q^2 \mid ax - nby$ accordingly. Consequently, one of the following two holds: $$\begin{aligned} && r = ((ax + nby)/{q^2})^2 + n ((ay - bx)/{q^2})^2, \label{eqn:case1} \\&& r = ((ax - nby)/{q^2})^2 + n ((ay + bx)/{q^2})^2. \label{eqn:case2} \end{aligned}$$ [Claim I.]{}  The $(1, n)$-representation of $r$ obtained above is nontrivial and proper if so is $q^2 r = x^2 + ny^2$.
[Proof of Claim I.]{}  In the case where $q^2 \mid ay - bx$ and $q^2 \mid ax + nby$, it follows from the identities $$\begin{aligned} &&x = a ((ax + nby)/{q^2}) - nb\, ((ay - bx)/{q^2}), \\&&y = b\, ((ax + nby)/{q^2}) + a ((ay - bx)/{q^2}) \end{aligned}$$ that $(ax + nby)/{q^2}$ and $(ay - bx)/{q^2}$ are relatively prime since so are $x$ and $y$. Hence the $(1, n)$-representation (\[eqn:case1\]) is proper. In the case where $q^2 \mid ay + bx$ and $q^2 \mid ax - nby$, it follows from the identities $$\begin{aligned} &&x = a ((ax - nby)/{q^2}) + nb\, ((ay + bx)/{q^2}), \\&&y = -b\, ((ax - nby)/{q^2}) + a ((ay + bx)/{q^2}) \end{aligned}$$ that $(ax - nby)/{q^2}$ and $(ay + bx)/{q^2}$ are relatively prime since so are $x$ and $y$. Hence the $(1, n)$-representation (\[eqn:case2\]) is proper. Since $r \neq 1,n$, in either case the $(1, n)$-representation of $r$ is nontrivial by Remark ([**a**]{}). Claim I is thus proved. This completes the proof of Lemma \[lem:2\]. The case where $q=2$ and $n=5$ is simple and is considered in the following [**Addendum to Lemma \[lem:2\].**]{} If $2^2r$ has a $(1, 5)$-representation $2^2r=x^2 + 5y^2$, then $x$ and $y$ must be both even (by a simple modulo $4$ argument), and hence $r = (x/2)^2 + 5(y/2)^2$. Furthermore, if $x^2 + 5y^2$ is nontrivial, so is $(x/2)^2 + 5(y/2)^2$. THE PROOF THAT EULER MISSED {#s:4} =========================== For convenience of later reference, we restate the assertions (iv)–(vi) in §\[s:1\] as [**Theorem.**]{} *[(1)]{} Every prime $p \equiv 1, 9 \,\,{\rm mod}\,\, 20$ has a $(1, 5)$-representation.* [(2)]{} For every pair of primes $q,q'$ such that $q \equiv 3,7 \,\,{\rm mod}\,\, 20$ and either $q'\equiv 3,7 \,\,{\rm mod}\,\, 20$ or $q'=2$, their product $q q'$ has a nontrivial $(1, 5)$-representation. It is the following inductive proof that Euler missed. Suppose by induction that (1) and (2) hold for all primes $p, q, q'$ which are less than a certain prime $\pi$ where $\pi \equiv 1,9,3,7 \,\,{\rm mod}\,\, 20$.
We need to show that $(1)_{\pi}$  If $\pi \equiv 1,9 \,\,{\rm mod}\,\, 20$ then $\pi$ has a $(1, 5)$-representation. $(2)_{\pi}$  If $\pi \equiv 3,7 \,\,{\rm mod}\,\, 20$ then for every prime $q \le \pi$ such that $q \equiv 3,7$ $\,\,{\rm mod}\,\, 20$ or $q = 2$, $\pi q$ has a nontrivial (hence proper) $(1, 5)$-representation. To start, we have from quadratic reciprocity that for a prime $p \neq 2, 5$, $-5$ is a quadratic residue mod $p$ $\Longleftrightarrow$ $p \equiv 1,9,3,7 \,\,{\rm mod}\,\, 20$. Hence there is a $(1,5)$-representation $$\begin{aligned} \label{eqn:pir=} \pi r =\, x^2 + 5y^2.\end{aligned}$$ Here (\[eqn:pir=\]) initially holds for some $x \le (\pi - 1)/2$ and $y=1$; it follows that $x^2 + 5y^2 < {\pi}^2$ and hence $r < \pi$. After reduction if necessary, it can be assumed that $5 \nmid r$ and that (\[eqn:pir=\]) is a nontrivial, proper $(1,5)$-representation. [Claim II.]{} When $r$ in (\[eqn:pir=\]) is minimized, we have either $r = 1$ or $r = q'$, where $q'$ is a prime such that either $q' \equiv 3,7 \,\,{\rm mod}\,\, 20$ or $q'= 2$. [Proof of Claim II.]{} Since (\[eqn:pir=\]) is proper, $-5$ is a quadratic residue mod each prime factor $q$ of $r$; hence we have $q< \pi$ and either $q\equiv 1,9,3,7 \,\,{\rm mod}\,\, 20$ or $q= 2$. Our idea is to cancel the prime factors of $r$ one by one for those congruent to $1,9$ modulo $20$, and pair by pair for those congruent to $3,7$ modulo $20$ or equal to $2$. If $r$ has a prime factor $q$ such that $q \equiv 1,9 \,\,{\rm mod}\,\, 20$, then, by the inductive hypothesis, $q$ has a $(1, 5)$-representation. By Lemma \[lem:1\], $\pi r'$, where $r' = r/q$, has a $(1, 5)$-representation. Hence, by minimizing $r$ in (\[eqn:pir=\]), we may assume that $r$ has no prime factors congruent to $1,9$ modulo $20$. Now each prime factor $q$ of $r$ satisfies either $q \equiv 3,7 \,\,{\rm mod}\,\, 20$ or $q = 2$.
If the number of such prime factors of $r$, counted with multiplicity, is at least $2$, let $q, q'$ be two of them and set $r' = r/(q q')$. If $q = q'$ then $qq'$ has a nontrivial $(1, 5)$-representation by the inductive hypothesis, hence we can apply Lemma \[lem:2\] and its addendum directly to cancel $q^2$ from $\pi r=(\pi r')(q^2)$ and obtain a $(1, 5)$-representation of $\pi r'$. If $q \neq q'$ then $q q'$ has a nontrivial $(1, 5)$-representation by the inductive hypothesis again. Now $q^2 (q')^2 \pi r' = (q q')(\pi r)$ has a $(1, 5)$-representation by the Euler identity (\[eqn:eulerid\]), and applying Lemma \[lem:2\] and its addendum [*twice*]{} implies that $\pi r'$ has a $(1, 5)$-representation—here is the trick used. This finishes the proof of Claim II. We proceed to prove the inductive step. By minimizing $r$ in (\[eqn:pir=\]), we are in one of the alternatives described in Claim II. First, we prove [$(1)_{\pi}$]{}. In this case $\pi \equiv 1,9 \,\,{\rm mod}\,\, 20$. One must have $r = 1$ and hence $\pi$ has a $(1, 5)$-representation; otherwise, $r = q' \equiv 3,7 \,\,{\rm mod}\,\, 20$ or $r=q'=2$, but then $\pi q' = x^2 + 5y^2 \equiv 3,7,\pm 2 \,\,{\rm mod}\,\, 20$ and consequently $x^2 \equiv \pm 2 \,\,{\rm mod}\,\, 5$, a contradiction. This proves [$(1)_{\pi}$]{}. To prove [$(2)_{\pi}$]{}, suppose $\pi \equiv 3,7 \,\,{\rm mod}\,\, 20$. One must have $r = q'$ as described in Claim II; otherwise $r = 1$, which implies that $\pi r = x^2 + 5y^2 \equiv 3,7 \,\,{\rm mod}\,\, 20$, a contradiction. Thus $\pi q'$ has a $(1, 5)$-representation, which is automatically nontrivial and proper, for some prime $q' < \pi$ such that either $q'\equiv 3,7 \,\,{\rm mod}\,\, 20$ or $q'=2$. Then Remark ([**f**]{}) implies that ${\pi}^2 (q')^2 = (\pi q')(\pi q')$ has a nontrivial, proper $(1, 5)$-representation. By the inductive hypothesis, either $q' = 2$ or $(q')^2$ has a nontrivial $(1, 5)$-representation. 
Lemma \[lem:2\] and its addendum then give a nontrivial, proper $(1, 5)$-representation of ${\pi}^2$. To prove the remaining part of [$(2)_{\pi}$]{}, let $q < \pi$ be any prime such that either $q \equiv 3,7 \,\,{\rm mod}\,\, 20$ or $q=2$. Then either $q = q' = 2$ or, by the inductive hypothesis, $q q'$ has a nontrivial $(1, 5)$-representation. Thus $\pi q(q')^2 = (\pi q')(q' q)$ has a $(1, 5)$-representation by the Euler identity (\[eqn:eulerid\]). On the other hand, by the inductive hypothesis, either $q' = 2$ or $(q')^2$ has a nontrivial $(1, 5)$-representation. Now Lemma \[lem:2\] and its addendum give a $(1, 5)$-representation for $\pi q$, which is automatically nontrivial. This proves [$(2)_{\pi}$]{}. The theorem is thus proved by induction. Note that we have proved (iv)–(vi) without reference to (v$'$). More interesting is that in fact (v$'$) follows from (vi). To see this, for any prime $p \equiv 3,7 \,\,{\rm mod}\,\, 20$, let $2p = x^2 + 5y^2$. It follows that both $x$ and $y$ are odd. Hence $x = 2x' + y$, where $x'$ may be negative. Then $p = 2{x'}^2 + 2x'y +3y^2$ gives a desired representation. Among many other existing elementary proofs of Fermat’s theorem (i), we cannot help but mention Zagier’s beautiful “one-sentence proof” (see [@zagier1990amm], or as explained in [@aigner-ziegler2004book]) to conclude this note. [**ACKNOWLEDGEMENTS.**]{} The author would like to thank H. Y. Loke for teaching him Number Theory years ago and for encouragement. Thanks are also due to the referees whose constructive suggestions helped improve the exposition of this note. The author is supported by a CNPq-TWAS postdoctoral fellowship and partially by NSFC grant No.10671171. [9]{} M. Aigner and G. M. Ziegler, *Proofs from the Book*, 3rd ed., Springer-Verlag, Berlin, 2004. D. A. Cox, *Primes of the Forms $x^2 + ny^2$: Fermat, Class Field Theory, and Complex Multiplication*, John Wiley, New York, 1989. D. 
Zagier, A one-sentence proof that every prime $p\equiv 1(\rm{mod}\,\, 4)$ is a sum of two squares, *Amer. Math. Monthly*, [**97**]{} (1990) 144. [Department of Mathematics, Yangzhou University, Yangzhou 225002, CHINA]{} E-mail: `yingzhang@yzu.edu.cn` [[Current Address:]{} IMPA, Estrada Dona Castorina 110, Rio de Janeiro 22460, BRAZIL E-mail: `yiing@impa.br`]{}
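As a postscript of our own (not part of the original note), the two statements of the Theorem in §\[s:4\] are easy to confirm by brute force for small primes, which makes a handy sanity check on the induction. A Python sketch, with our own helper names:

```python
from math import isqrt

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def rep15(m, nontrivial=False):
    """Return (x, y) with m = x^2 + 5 y^2 (x, y > 0 if nontrivial), else None."""
    y = 1 if nontrivial else 0
    while 5*y*y <= m:
        x = isqrt(m - 5*y*y)
        if x*x == m - 5*y*y and (x > 0 or not nontrivial):
            return (x, y)
        y += 1
    return None

ps = primes_upto(1000)
# Theorem (1): every prime p = 1, 9 mod 20 is x^2 + 5 y^2
for p in ps:
    if p % 20 in (1, 9):
        assert rep15(p) is not None
# Theorem (2): q q' nontrivially represented for q = 3, 7 mod 20 and q' = 3, 7 mod 20 or q' = 2
qs = [p for p in ps if p % 20 in (3, 7)]
for q in qs:
    for q2 in qs + [2]:
        assert rep15(q * q2, nontrivial=True) is not None
```

For instance, `rep15(29)` returns `(3, 2)` since $29 = 3^2 + 5\cdot 2^2$, and `rep15(9, nontrivial=True)` returns `(2, 1)` since $3\cdot 3 = 2^2 + 5\cdot 1^2$.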
--- address: - | Laboratoire Kastler Brossel [^1],\ 24, rue Lhomond, F-75231 Paris Cedex 05, France - | Laboratoire de Physique Théorique de l’Ecole Normale Supérieure [^2]\ 24, rue Lhomond, F-75231 Paris Cedex 05, France author: - 'M. A. BOUCHIAT' - 'C. BOUCHIAT' title: | AN ATOMIC LINEAR STARK SHIFT VIOLATING P BUT NOT T ARISING FROM THE ELECTROWEAK\ NUCLEAR ANAPOLE MOMENT --- PACS. 11.30.Er - 21.90 +f - 67.80.Mg - 31.15.Ct Introduction {#introduction .unnumbered} ============ It has been well demonstrated that parity violation in atomic transitions can be used to test electroweak theory [@bou97]. In this way, the Standard Model has been confirmed convincingly in the domain of low energies. At present, refinements in experiments and theory allow more precise measurements to look for a breakdown of the Standard Model predictions and hence, new physics [@ben99; @mor00; @fay99; @ram99]. The essential parameter extracted from atomic parity violation (PV) measurements is the weak nuclear charge $Q_W$. This electroweak parameter appears in the definition of the dominant electron-nucleus PV potential induced by a $Z_0$ exchange: $$V_{pv}^{(1)}(r) = G_F/\sqrt{2} \cdot Q_W/2 \cdot \gamma_5 \cdot P_V(r) \;,$$ where the $Z_0$ couples to the nucleus as a vector particle, just as the photon does in the Coulomb interaction. In this $Z_0$ exchange, $Q_W$ plays the same role as the electric charge in the Coulomb interaction. $\gamma_5$ is the Dirac matrix which reduces to the electron helicity, $ \vec \sigma \cdot \vec p/ m_e c $, in the non-relativistic limit. The distribution $ P_V(r)$ normalized to unity represents the weak charge distribution inside the nucleus. The physical quantity measured in atomic PV experiments is a transition electric dipole moment, $E_1^{pv}$ between states with the same parity, like the $nS_{1/2} \rightarrow n'S_{1/2} $ transitions. 
In particular the $6S_{1/2} \rightarrow 7S_{1/2}$ transition in cesium has been the subject of several experiments, the accuracy of which has been steadily increasing with time [@bou82; @bou862; @gil85; @noe88; @woo97]. In addition, the PV electron-nucleus interaction also involves a nuclear spin-dependent contribution which can provide valuable and original information regarding Nuclear Physics. It is generated by an interaction of the current-current type with a vector coupling for the electron and an axial coupling for the nucleus. The associated PV potential $V_{pv}^{(2)}$ is given by the following expression: $$V_{pv}^{(2)} = G_F/ \sqrt{2} \cdot A_W /(2 I) \cdot \vec {\alpha} \cdot \vec I \cdot P_A(r)\;,$$ where $\vec \alpha$ is the Dirac matrix associated with the electron velocity operator, $\vec I$ the nuclear spin and $P_A(r) $ a nuclear spin distribution normalized to unity. The weak axial moment of the nucleus, $A_W$, receives several contributions. The most obvious one comes from the weak neutral vector boson $Z_0$ with axial coupling to the nucleons. However, in the standard electroweak model the coupling constants involved nearly cancel accidentally. As first pointed out by Flambaum [*et al.*]{} [@fla80], a sizeable contribution to $A_W$ is induced by the contamination of the atom by the PV interactions between the nucleons which take place [*inside*]{} the nucleus. The concept relevant to describe this interaction is the nuclear anapole moment [@zel57]. In fact the interaction can be interpreted simply in terms of a chiral contribution to the nuclear spin magnetization [@bou911; @bou99], as illustrated in Fig. 1. In other words, one can say that the PV nuclear forces inside any stable nucleus are responsible for the nuclear anapole moment or, equivalently, for a nuclear helimagnetism.
The present paper addresses the problem of [*how to detect directly this unique static nuclear property characteristic of parity violation in stable nuclei.*]{} Up to now there has been only one experimental demonstration of the nuclear anapole moment, namely that obtained very recently by the Boulder group [@woo97]. In their experiment which gives a high precision determination of parity violation in the atomic $6S_{1/2}\rightarrow 7S_{1/2}$ Cs transition, this effect appears as a small relative difference, actually $ \sim 5\%$, between the $E_1^{pv}$ transition dipole amplitudes measured on two different hyperfine lines belonging to that same transition. In this case the dominant source of P without T violation comes from the electron-nucleon $Z_0$ exchange associated with the weak charge $Q_W$ of the nucleus. This makes the extraction of the nuclear spin-dependent part a most delicate matter. In view of the importance of this result for the determination of the PV pion-nucleon coupling constant, $f_{\pi}^1$ (see [@fla84]), a totally independent determination is highly desirable. It is well known that T reversal invariance forbids the manifestation of $V_{pv}^{(1)}$ in an atomic stationary state. However, we shall show in the following sections that in such a state T reversal invariance does not forbid the manifestation of $V_{pv}^{(2)}$, hence that of the nuclear helimagnetism. For a free atom, the rotation symmetry of the Hamiltonian leads to an exact cancellation of the diagonal matrix elements. This property still holds true if the rotation symmetry is broken by the application of static uniform electric and magnetic fields. However, if the symmetry is broken by the application of a static potential of quadrupolar symmetry, for instance by trapping the atoms inside a crystal of hexagonal symmetry, then, the stationary atomic states are endowed with a permanent electric dipole moment which can give rise to a linear Stark shift. 
This offers a novel possibility of detecting the nuclear helimagnetism having a twofold advantage: i\) In a stationary state it is [*the sole cause of P without T violation*]{}. ii\) It manifests itself by a modification of the atomic transition frequencies in an applied electric field, [*i.e. a linear Stark shift, providing for the first time an opportunity for demonstrating the static character of this unusual nuclear property.*]{} There exist in the literature other proposals for a [*direct*]{} detection in atoms of the nuclear spin-dependent effect, [*i.e.*]{} without any participation from $V_{pv}^{(1)}$: i\) One is based on the difference between the selection rules of the potentials $V_{pv}^{(1)}$ and $V_{pv}^{(2)}$. While the former acts as a scalar in the total angular momentum space and mixes only states of identical angular momentum (and opposite parity), the latter acts like a vector and mixes states of different total angular momentum. Consequently, one can find atomic transitions between states of the same parity which are allowed for the nuclear spin dependent contribution but remain forbidden for the nuclear spin independent one [@bou75]. One such example is the $(6p^2) ^3P_0 \to (6p^2) ^1S_0$ lead transition at 339.4 nm, strictly forbidden for even isotopes, which acquires a non-vanishing matrix element $E_1^{pv}$ in odd isotopes owing to the PV interaction involving the nuclear spin, which mixes the $(6p^2) ^1S_0$ state to the $(6p 7s) ^3P_1$ state of opposite parity [@bou75]. ii\) A second approach, invoked by several groups in the past and now under serious consideration [@bud98], consists in the detection of an $E_1^{pv}$ amplitude via a right-left asymmetry appearing in hfs transition probabilities for the ground state of potassium in the presence of a strong magnetic field (magnetic and hyperfine splittings of comparable magnitude). 
iii\) There is also the possibility of detecting the energy difference in the NMR spectrum of enantiomer molecules [@bar86]. In view of the extreme difficulty of these other projects, we believe that, over and above its intrinsic scientific interest, the linear Stark shift discussed in this paper deserves careful consideration. The first section of this paper recalls the main angular momentum properties of the permanent nuclear spin-dependent PV electric dipole operator arising from the nuclear anapole moment. In addition we compute its magnitude for the cesium atom using recent empirical data relative to the Cs $6S \rightarrow 7S$ transition. The next section (sec. 2) shows that this dipole can manifest itself via a linear Stark shift only if the free atom symmetry is broken. After this we consider the case where the atom is perturbed by a crystal field of uniaxial symmetry. Here, the crystal axis $\vec n$ and the applied electric and magnetic fields create a chiral environment permitting the existence of a linear Stark shift, the explicit expression for which is given. In section 3, we examine a realistic experimental situation where its observation looks reasonably feasible: this deals with Cs atoms trapped inside a $^4$He crystal matrix of hexagonal symmetry. We have investigated quantitatively how, by breaking the atomic symmetry, the matrix-induced perturbation manages to generate a linear Stark shift. Moreover, we evaluate both the matrix-induced anisotropy and the shift. The details of the necessary calculation based on a semi-empirical method are given in the Appendix. In the final section we suggest another experimental approach in which the atoms are no longer subjected to a crystal field, but are instead perturbed by an intense nonresonant radiation field.
The permanent nuclear spin-dependent PV electric dipole ======================================================== Symmetry considerations ------------------------- The space-time symmetry properties of the atomic electric dipole induced by the nuclear spin dependent PV interaction have been presented before in many review papers (see for instance[@san86]). We recall them here for completeness, since they constitute the starting point of the linear Stark shift calculation developed in the present paper. First, we wish to stress that the existence of the anapole moment interaction not only implies the existence of a transition dipole proportional to the nuclear spin, but also that of an electric dipole operator having diagonal matrix elements between stationary atomic states. This electric dipole is found to be proportional to the operator $\vec s \wedge\vec I$. Therefore it does not undergo the same transformation under P as does an ordinary dipole, since it is a pseudovector instead of a vector. We also note that it is even under T-reversal, so that the quantity $(\vec s \wedge \vec I) \cdot \vec E$, associated with a linear Stark shift, violates P, but does not violate T invariance. It is convenient to define $ \vec{d}_{pv} (n^{\prime},n) $ as the effective pv electric dipole moment operator acting in the tensor product ${\cal E}_S \bigotimes {\cal E}_I$ of the electronic and nuclear angular momentum spaces, which describes the transition between two $ S_{1/2}$ subspaces corresponding to given radial quantum numbers $n $ and $n^{\prime}$. This effective dipole operator includes both contributions from potentials $ V_{pv}^{(1)}$ and $ V_{pv}^{(2)}$. 
Rotation invariance together with the fact that $ V_{pv}^{(2)}$ is linear in $ \vec{I} $ implies that $ \vec{d}_{pv} (n^{\prime},n) $ can be written under the following general form: $$\vec{d}_{pv} (n,n^{\prime}) =- i Im \, E_{1pv}^{(1)}(n,n^{\prime}) \ \vec{\sigma} +i \, a (n,n^{\prime}) \, \vec{I}+ b (n,n^{\prime})\, \vec{s} \wedge \vec{I} \, , \label{dpv}$$ where the real quantities $ a(n,n^{\prime}) $ and $b(n, n^{\prime}) $ parametrize the contribution of the nuclear spin-dependent pv potential. Time reversal invariance of $ V_{pv}^{(1)}$ and $ V_{pv}^{(2)}$ implies the following relations under the exchange $ n \leftrightarrow n^{\prime} $: $$\begin{aligned} Im \, E_{1pv}^{(1)}(n,n^{\prime}) &=&-Im \, E_{1pv}^{(1)}(n^{\prime},n) \,,\nonumber \\ a(n,n^{\prime})&=&-a(n^{\prime},n) \,,\nonumber \\ b(n,n^{\prime})&=&b(n^{\prime},n) \,.\end{aligned}$$ The effective pv static dipole moment $ \vec{ \cal D} _{pv}= \vec{d}_{pv} (6,6) $ relative to the ground state is then given by : $$\vec{ \cal D} _{pv}= b (6,6)\, \vec{s} \wedge \vec{I}= d_I \, \vec{s} \wedge \vec{I} \, . \label{Dpv}$$ If we introduce the total angular momentum $\vec F= \vec s + \vec I$, using simple relations of angular momentum algebra, one can derive the useful identity: $$\vec s \wedge\vec I \equiv [\vec F^2 \;,\; \frac{-i}{2} \vec s] \;.$$ It then becomes obvious that, in low magnetic fields and without external perturbation, the dipole operator $\vec {\cal D}^{pv}$ has no diagonal matrix elements between atomic eigenstates. In fact, as demonstrated in the next section of this paper, a manifestation of this dipole requires special conditions for breaking the free-atom rotational symmetry. Magnitude of the permanent dipole. ---------------------------------- The magnitude, $d_I$, of the permanent dipole will play a decisive role in the assessment of the feasibility of an experiment. We are now going to perform the evaluation of $d_I $ in the interesting case of cesium. 
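The operator identity above can be verified directly with explicit spin matrices. A short numerical sketch of our own (electron spin $s=1/2$ and, as for cesium, $I=7/2$; $\hbar = 1$):

```python
import numpy as np

def spin_ops(j):
    """Angular momentum matrices (Jx, Jy, Jz) for spin j, basis ordered m = j..-j."""
    m = np.arange(j, -j - 1.0, -1.0)
    Jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), 1)  # raising operator J+
    return (Jp + Jp.T)/2, (Jp - Jp.T)/2j, np.diag(m)

s, I = 0.5, 3.5                       # electron and Cs nuclear spins
ds, dI = int(2*s + 1), int(2*I + 1)
S = [np.kron(op, np.eye(dI)) for op in spin_ops(s)]
J = [np.kron(np.eye(ds), op) for op in spin_ops(I)]
F2 = sum((a + b) @ (a + b) for a, b in zip(S, J))   # F^2 = (s + I)^2

# s ^ I, component by component, against the commutator [F^2, -i/2 s]
cross = [S[1]@J[2] - S[2]@J[1], S[2]@J[0] - S[0]@J[2], S[0]@J[1] - S[1]@J[0]]
for sj, cj in zip(S, cross):
    comm = F2 @ (-0.5j*sj) - (-0.5j*sj) @ F2
    assert np.allclose(comm, cj)
```

All three components agree to machine precision, which also makes explicit that $\vec s \wedge \vec I$ is strictly off-diagonal between eigenstates of $\vec F^2$.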
We proceed in two steps : first we compute directly $ b(6,7)$ from experimental data, then we give a theoretical evaluation of the ratio $ b(6,6)/b(6,7)$. It is convenient to use the notations of ref [@bou862] and [@bou911] and to rewrite $ \vec{d}_{pv} (6,7) $ as: $$\vec{d}_{pv} (6,7)= -i \,Im E_1^{pv} ( 6,7) \(( \vec{\sigma } + \eta \frac{ \vec{I} } {I} + i\, {\eta}^{\prime} \vec{\sigma} \wedge \frac{ \vec{I} } {I} \)) \,.$$ The nuclear spin dependent potential $V_{pv}^{(2)}$ induces a specific dependence of the pv transition dipole on the initial and final hyperfine quantum numbers, $ F$ and $F^\prime$. In order to isolate the $V_{pv}^{(2)}$ contribution, we are led, following ref [@bou862], to the introduction of the reduced amplitudes $ d_{F \,F^\prime} $: $$d_{F \, F^\prime}( \eta , {\eta}^{\prime})= \frac{\langle 7S, F^\prime \, M^\prime \vert \vec d_{pv} \vert 6S, F\, M \rangle} {\langle F^\prime \, M^\prime \vert \vec{\sigma} \vert F\, M \rangle}.$$ The amplitudes $ d_{F \,F^\prime}(\eta,{\eta}^{\prime}) $ are tabulated in Table XXII of ref[@bou862] and reduce to $-i\, Im E_1^{pv}(6,7)$ for vanishing $\eta$ and ${\eta}^{\prime}$. The quantity of interest here is the ratio $ r_{hf}= d_{4\,3} / d_{3\,4}$ which is given, to second order in $\eta$ and $ {\eta}^{\prime} $, by : $$r_{hf} \simeq1-\frac{2 I+1}{I}\, {\eta}^{\prime} .$$ Using the empirical value for the ratio $ r_{hf} $ given by the last Boulder experiment [@woo97]: $ r_{hf}-1= ( 4.9 \pm 0. 
7) \times 10^{-2} $, we obtain: $${\eta}^{\prime}= -\frac{7}{16} \, ( r_{hf}-1) = (-2.1 \pm 0.3)\times 10^{-2} .$$ We deduce $ b(6,7) $ by a simple identification: $$b(6,7)=Im\,E_1^{pv} ( 6,7)\, \frac{2}{I}\,{\eta}^{\prime} = (1.04 \pm 0.15)\times 10^{-13} \,\vert e \vert\, a_0 \,,\label{b67}$$ where we have used for $Im\,E_1^{pv} ( 6,7) $ the empirical value obtained in ref [@woo97]: $$Im\,E_1^{pv} ( 6,7) =(-0.837 \pm 0.003) \times 10^{-11 } \vert e \vert a_0 .$$ To compute the ratio $ b(6,6)/b(6,7) $, we are going to use an approximate relation, derived in ref [@bou911], which relates the potential $ V_{pv}^{(2)} $ to $ V_{pv}^{(1)}$: $$V_{pv}^{(2)}(\vec{r} )= K_A \,\frac{ A_W}{ Q_W} 2 \vec{j} \cdot \frac{ \vec{I} } {I}\, V_{pv}^{(1)}(\vec{r} ). \label{relVpv2Vpv1}$$ Here $K_A $ is a constant very close to unity which depends weakly upon the shape of the nuclear distributions $ P_{V}(r) $ and $ P_{A}(r) $; $\vec{j} $ is the single electron angular momentum and since, as we shall see, only single particle states with $ j=1/2$ are involved, we can write hereafter $ 2 \vec{j} =\vec{ \sigma} $. This relation, valid for high Z atoms like cesium, hinges upon the fact that the matrix elements $ \langle n^{\prime} p_{3/2} \vert V_{pv}^{(2)} \vert n s_{1/2} \rangle $ involving $p_{3/2}$ states are much smaller (by a factor $ 2 \times 10^{-3} $) than those which involve $p_{1/2}$ states, $ \langle n^{\prime} p_{1/2} \vert V_{pv}^{(2)} \vert ns_{1/2}\rangle$. This is easily verified in the one-particle approximation since the radial wave functions at the surface of the nucleus are very close to Dirac Coulomb wave functions for an unscreened charge $ Z $. It is argued in ref [@bou911] that this property remains true, to the level of a few $ \% $, when $ V_{pv}^{(2)} (\vec{r}) $ is replaced by the non local potential $ U_{pv}^{(2)}(\vec{r}, {\vec{r}}^{\prime} )$, which describes the core polarization effects within the R.P.A.
approximation[^3]. The contributions of $ V_{pv}^{(i)} $ to the effective dipole operator $\vec{d}_{pv} (n,n^{\prime})$ are given as the sum of the two operators: $$\begin{aligned} {\vec{A} }^{(i)} &= & P( n^{\prime} S_{1/2}) \,V_{pv}^{(i)} \, G(E_{n^{\prime}} )\, \vec{d} \, P( n S_{1/2}) , \nonumber \\ {\vec{B} }^{(i)} &= & P( n^{\prime} S_{1/2}) \, \vec{d} \, G(E_{n}) \,V_{pv}^{(i)}\, P( n S_{1/2}) , \label{AB}\end{aligned}$$ where $ \vec{d} $ is the electric dipole operator, $ G(E_{n})=(E_{n}-H_{atom} )^{-1}$ the Green function operator relative to the atomic Hamiltonian; $P( n S_{1/2})$ and $P( n^{\prime} S_{1/2})$ stand for the projectors upon the subspaces associated with the configurations $ n S_{1/2}$ and $ n^{\prime} S_{1/2}$; $ E_{n}$ and $E_{n^{\prime}}$ are the corresponding binding energies. It follows immediately from the Wigner-Eckart theorem that the operators $ {\vec{A} }^{(1)} $ and $ {\vec{B} }^{(1)} $ can be written as: $${\vec{A} }^{(1)}=i h(n,n^{\prime} ) \,\vec{ \sigma } \; ; \;\;\; {\vec{B} }^{(1)}=i k(n,n^{\prime} ) \,\vec{ \sigma }\,. \label{hk}$$ Using now the relation given in equation (\[relVpv2Vpv1\]) and the commutation of $\vec{\sigma} \cdot \vec{I} $ with the pseudoscalar $V_{pv}^{(1)}$, one gets the following expressions for $ {\vec{A} }^{(2)} $ and $ {\vec{B} }^{(2)} $: $$\begin{aligned} {\vec{A} }^{(2)}&= & i\, K_A \,\frac{ A_W}{ Q_W} \, h(n,n^{\prime} ) \,\((\vec{\sigma} \cdot \frac{ \vec{I} } {I}\))\,\vec{\sigma} , \nonumber \\ {\vec{B} }^{(2)}&= & i\, K_A \,\frac{ A_W}{ Q_W} \, k(n,n^{\prime} ) \,\vec{\sigma} \,\((\vec{\sigma} \cdot \frac{ \vec{I} } {I}\)) .
\end{aligned}$$ We arrive finally at an expression for $\vec{d}_{pv} (n,n^{\prime})$ which can be used to compute the ratio $ b(6,6)/b(6,7) $: $$\begin{aligned} \vec{d}_{pv} (n,n^{\prime})&=& i ( \,\vec{\sigma}+ K_A \,\frac{ A_W}{ Q_W }\frac{ \vec{I} } {I} )\,\(( h(n,n^{\prime})+k(n,n^{\prime}) \))+ \nonumber \\ & & \,\vec{\sigma}\wedge\frac{ \vec{I} } {I} \, K_A \,\frac{ A_W}{ Q_W } \,\(( h(n,n^{\prime})-k(n,n^{\prime}) \)) \,.\end{aligned}$$ Time reversal invariance implies $ h(n,n)= -k(n,n) $ so that we can write the sought for ratio $ b(6,6)/b(6,7) $ as: $$\frac{b(6,6)}{b(6,7)}= \frac{ 2\, h(6,6)}{h(6,7)-k(6,7)} \; .$$ The amplitudes $ h(6,6)\, , \, h(6,7)$ and $ k(6,7)$ can be computed from the formulas given in Eqs. (\[AB\]) and (\[hk\]). We have used the explicit values of the radial matrix elements (parity mixing and allowed electric dipole amplitudes) for the intermediate states[^4] $6P_{1/2}-9P_{1/2}$ and the energy differences involved, which are tabulated in ref. [@blu92] (Table IV)[^5]. We obtain in this way: $$b(6,6)/ b(6,7) = 4.152 / 1.86=2.27\,.$$ Combining the above result with the value of $ b(6,7)$ given by equation (\[b67\]) we obtain the following estimate for $ d_I$: $$d_I \simeq 2.36 \times 10^{-13} \vert e \vert a_0\; , \label{dI}$$ believed to be about 15 $\%$ accurate. It is of interest to compare $ d_I$ with the P-odd T-odd EDM of the Cs atom obtained from a theoretical evaluation using the latest experimental upper bound for the electron EDM [@com94] : $$\vert d_e \vert \leq 7.5 \times \, 10^{-19} \,\vert e \vert \, a_0.$$ Using for the cesium anti-screening factor the theoretical value [@joh86]: $ 120 \pm10,$ one gets the following upper bound for the cesium EDM, namely the experimental sensitivity to be reached for improving the existing bound on $ \vert d_e \vert$ : $$\vert d_{CsEDM} \vert \, \leq 9.0 \times \, 10^{-17} \,\vert e \vert \, a_0\,. 
\label{dCsEDM}$$ We are going to use Eqs. (\[Dpv\]) and (\[dI\]) for calculating the linear Stark shift. It is interesting to note here that these equations also predict the magnitude of the pv transition dipole involved in a possible Cs project which would be based on the observation of hyperfine transitions in the Cs ground state, analogous to the potassium project mentioned in the introduction (see also [@bud98]). Therefore both a project of this kind and the linear Stark shift discussed here aim at the determination of the same physical parameter, $d_I$, but only the observation of a dc Stark shift would prove its static character. The linear Stark shift induced by $V_{pv}^{(2)}$ ================================================ Need for breaking the rotation symmetry of the atomic Hamiltonian ----------------------------------------------------------------- The parity conserving spin Hamiltonian in the presence of a static magnetic field $\vec B_0$ is: $$H_{spin}= A\;\vec s \cdot \vec I -g_s \mu_B \vec s \cdot \vec B_0 - \gamma_I \vec I \cdot \vec B_0 \, .$$ In section 1, we have seen that, to first order in the electric field, the effect of $V_{pv}^{(2)}$ in the presence of an applied electric field can be described by the following Stark Hamiltonian: $$H_{pv}^{st} = d_I\; \vec s \wedge \vec I \cdot \vec E \equiv - d_I\; \frac{i}{2} [\vec F^2\;,\;\vec s \cdot \vec E] \,. \label{HStark}$$ We have noted that the above identity implies the vanishing of the average value of $H_{pv}^{st}$ in the low magnetic field limit. We are going to show that this null result still remains valid [*for arbitrary values and orientations of the magnetic field.*]{} To do this we consider the transformation properties of both $H_{spin}$ and $H_{pv}^{st}$ under the symmetry $\Theta$, defined as the product of $T $ reversal by a rotation of $\pi $ around the unit vector $\hat u= \vec E \wedge \vec B_0/ {\vert E B_0\vert} $, the rotation $ R(\hat u, \pi)$.
It should be stressed that the rotation $ R(\hat u, \pi)$ and the symmetry $\Theta$ considered here are quantum mechanical transformations acting only on the spin states. The external fields are considered as real $c$-numbers and are not affected. One sees immediately that $H_{spin}$ is invariant under the symmetry $\Theta= T\,R(\hat u, \pi) $, while $H_{pv}^{st}$ changes sign. We conclude that, in order to evade the cancellation of the linear Stark shift, we have to break the $\Theta$ symmetry.

This symmetry breaking can be achieved, for instance, by perturbing the atomic $S_{1/2}$ state with a crystal field compatible with uniaxial symmetry along the unit vector $\vec n $. A practical realization looks feasible, since it has been demonstrated that Cs atoms can be trapped in a solid matrix of helium having a hexagonal symmetry [@kan98] (see also section 3). In this case the alkali S state is perturbed by the Hamiltonian[^6]:

$$H_b( \vec{n} )={\lambda}_b \cdot (\frac{e^2}{ 2 a_0} ) \left( (\vec{\rho}\cdot \vec{n})^2-\frac{1}{3} \ {\rho}^2 \right)\,, \label{Hbub}$$

where both $\lambda_b$ and $\rho =r/a_0$ are expressed in atomic units. The perturbed atomic state is now a mixture of S and D states, with no component of the orbital angular momentum along the $\vec n$ axis. The spin Hamiltonian is modified and an anisotropic hyperfine interaction is induced by the D state admixture. The new spin Hamiltonian reads:

$${\widetilde H_{spin}}= A_{\perp}\;\vec s \cdot \vec I+ (A_{\parallel}-A_{\perp})(\vec s\cdot \vec n)(\vec I\cdot \vec n) -g_s \mu_B \vec s \cdot \vec B_0 - \gamma_I \vec I \cdot \vec B_0 \,.$$

It is easily verified that, if $ \vec{n} $ lies in the $(\vec B, \hat u)$ plane, with non-zero components along both $\vec B$ and $\hat u$, this perturbed atomic Hamiltonian is no longer invariant under the transformation $\Theta$. Another possible method for breaking the symmetry of $H_{spin}$ will be presented in section 4.
Strong magnetic field limit ($\gamma_s B_0 \gg A_{\perp}, A_{\parallel}$)
-------------------------------------------------------------------------

The anisotropy axis is defined as:

$$\vec n = \cos{\psi} \;\hat z\;+ \sin{\psi} \; \hat x \,.$$

Let us consider the nuclear spin Hamiltonian associated with the restriction of $H_{spin}$ to the electronic eigenstate ${\cal E}(\tilde{ n_s}, m_s) $ perturbed by the quadrupolar potential $H_b(\vec n)$:

$$\begin{aligned} H^{eff}_{(m_s)} &= & A_{\perp} m_s I_z+m_s (A_{\parallel}-A_{\perp}) (\sin{\psi} \cos{\psi} I_x + \cos^2{\psi} I_z) +\gamma_s B_0 m_s - \gamma_I B_0 I_z \nonumber \\ &=& m_s \lbrack \gamma_s B_0 + I_z (A_{\perp} \sin^2{\psi}+A_{\parallel} \cos^2{\psi} -\frac{\gamma_I B_0}{m_s}) + I_x (A_{\parallel}-A_{\perp}) \sin{\psi} \cos{\psi}\rbrack \nonumber\end{aligned}$$

$H^{eff}_{(m_s)}$ is identical to the Hamiltonian seen by an isolated nucleus coupled to an effective magnetic field, $\vec B^{eff}(m_s)$, having the following components:

$$\begin{aligned} B_x^{eff} &=& -(A_{\parallel}-A_{\perp}) \sin{\psi} \cos{\psi} \;\frac{m_s}{\gamma_I} \nonumber\\ B_y^{eff} &=& 0 \nonumber \\ B_z^{eff} &=& B_0 - (A_{\perp}\sin^2{\psi} + A_{\parallel} \cos^2{\psi}) \; \frac{m_s}{\gamma_I}\,, \end{aligned}$$

or equivalently:

$$B_x^{eff}= B^{eff} \sin{\alpha}, \;\;\; B_y^{eff} = 0, \;\;\; B_z^{eff} = B^{eff} \cos{\alpha},$$

where

$$\tan{\alpha}= \frac{ (A_{\parallel}-A_{\perp}) \sin{\psi} \cos{\psi} \;m_s}{-\gamma_I B_0 + (A_{\perp} \sin^2{\psi} + A_{\parallel} \cos^2{\psi})\; m_s} \;.$$

In other words, the direction $\hat z^{eff}$ of $\vec B^{eff} $ can be deduced from the $\hat z$ axis by a rotation ${\cal R}(\hat y,\alpha)$ by an angle $\alpha$ around the $ \hat y$ axis. Hence, the eigenstates of $H^{eff}_{(m_s)}$ are $\vert m_s \; \tilde {m_I} \rangle$, where $\tilde{m_I} $ now stands for the [*z*]{}-component of the spin $\widetilde{\vec I~}$ resulting from $\vec I$ through the rotation ${\cal R}(\hat y,-\alpha)$.
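As a consistency check on the expressions above, the quoted formula for $\tan\alpha$ is simply the ratio $B_x^{eff}/B_z^{eff}$; a short sketch (illustrative parameter values only, in arbitrary units) verifies this numerically:

```python
import math

def b_eff(A_par, A_perp, gamma_I, B0, m_s, psi):
    """Components of the effective field B_eff(m_s) seen by the nucleus."""
    s, c = math.sin(psi), math.cos(psi)
    Bx = -(A_par - A_perp) * s * c * m_s / gamma_I
    Bz = B0 - (A_perp * s**2 + A_par * c**2) * m_s / gamma_I
    return Bx, Bz

def tan_alpha(A_par, A_perp, gamma_I, B0, m_s, psi):
    """tan(alpha) as quoted in the text."""
    s, c = math.sin(psi), math.cos(psi)
    num = (A_par - A_perp) * s * c * m_s
    den = -gamma_I * B0 + (A_perp * s**2 + A_par * c**2) * m_s
    return num / den
```

Multiplying numerator and denominator of $B_x^{eff}/B_z^{eff}$ by $-\gamma_I$ reproduces the quoted form, for either sign of $m_s$.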
$$\widetilde I_z = \widetilde{\vec I~} \cdot \hat z = \vec I \cdot {\cal R}(\hat y,\alpha) \hat z= \cos{\alpha}\; I_z + \sin{\alpha } \; I_x \,.$$

We can now compute the linear Stark shift associated with the Hamiltonian $H_{st}^{pv}$ given by Eq. (\[HStark\]), supposing the $\vec E$ field directed along the $\hat y$ axis:

$$\begin{aligned} \Delta E_{st} &=& \langle m_s \; \tilde{m_I} \vert d_I E \; s_z I_x \vert m_s \; \tilde{m_I} \rangle \nonumber \\ &=& d_I E\; m_s \langle \tilde{m_I} \vert I_x \vert \tilde{m_I} \rangle \nonumber \\ &=& d_I E\; m_s \langle m_I \vert \vec I \cdot {\cal R}(\hat y,\alpha) \hat x \vert m_I \rangle \nonumber \\ &=& - d_I E\; m_s \; m_I \sin{\alpha} \,. \end{aligned}$$

If we suppose $\gamma_I B_0 \ll A_{\parallel},\;A_{\perp} \ll \vert \gamma_s \vert B_0 $ and $\vert A_{\parallel}-A_{\perp} \vert \ll A_{\parallel}+A_{\perp} $, we obtain:

$$\tan{\alpha} \approx \frac{A_{\parallel}-A_{\perp}}{\frac{1}{2}(A_{\parallel}+A_{\perp})} \sin{ \psi} \cos{\psi}\;\approx \sin {\alpha} \,,$$

which yields the simplified expression:

$$\Delta E_{st} = - d_I E \; m_s m_I \frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+A_{\perp}} \sin{2 \psi} \; .$$

In this approximation, $\Delta E_{st} $ can be considered as a modification of the hyperfine constant linear in the applied electric field. To exhibit the transformation properties of $\Delta E_{st}$, it is useful to express this last result in terms of the two fields, $\vec E$ and $\vec B$, and the unit vector $\hat n$ which defines the anisotropy axis:

$$\Delta E_{st}=- 2 d_I m_s m_I \frac{(\hat n \cdot \vec B_0 )(\hat n \cdot \vec E \wedge \vec B_0)}{\vec B_0^2} \; \frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+A_{\perp}} \,. \label{Delst}$$

From this expression, it is clearly apparent that the linear shift breaks space reflection symmetry but preserves time reversal invariance.
It differs from the P and T violating linear Stark shift arising from an electron EDM by the fact that it cancels out when the quadrupolar anisotropy of the ground state vanishes. It is also obvious from Eq. (\[Delst\]) that, in the strong field limit, the size of the Stark shift depends only on the orientation of $\vec B$ relative to $\vec E$ and $\vec n$ and not on the strength of the magnetic field. Figure 2 represents two mirror-image configurations of the experiment.

Limit of low magnetic fields and small anisotropy
-------------------------------------------------

We now consider the limit $ \vert A_{\parallel}-A_{\perp} \vert \ll \gamma_s B_0 \ll A_{\perp}, A_{\parallel} .$ The linear Stark shift can be computed by using second order perturbation theory. $H_{spin}$ is perturbed by both $H_{pv}^{st}$ and $H_b( \vec{n} )$, the latter being responsible for the anisotropy contribution to $ H_{spin}$, i.e. $ ( A_{\parallel}-A_{\perp}) (\vec s\cdot \hat n) (\vec I\cdot \hat n) $. The fields $\vec B_0 $ and $\vec E$ are still taken parallel to $\hat z $ and $\hat y$ respectively. We find:

$$\Delta E_{st}(F, M) = 2 (A_{\parallel}-A_{\perp}) d_I E \cos{\psi} \sin{\psi} \times \sum_{F'\not=F, M' }{ \frac{ \langle F \; M \vert s_z I_x + s_x I_z\vert F' \; M' \rangle \langle F' \; M' \vert s_z I_x - s_x I_z \vert F\; M \rangle}{ E_{FM} - E_{F'M'}}} \,.$$

Since the operator $\vec s\wedge \vec I$ is identical to the commutator $[\vec F^2, -\frac{i }{2} \vec s]$, we see that only the hyperfine states $F'\not= F$ with $M'=M \pm 1$ contribute to the sum. Therefore, in the energy denominator we can neglect the Zeeman contribution, which is small compared to the hyperfine splitting, and, in the sum, we can factorize out the energy denominator $2(F-I) A_{\parallel}(I+ \frac{1}{2}) $.
Since $F^\prime = F $ does not contribute, the resulting sum can be performed using a closure relation:

$$\Delta E_{st}(F, M)=\frac{(A_{\parallel}-A_{\perp}) d_I E}{2(F-I) A_{\parallel}(I+ \frac{1}{2})} \sin{2 \psi} \langle F \; M \vert (s_z I_x + s_x I_z) (s_z I_x - s_x I_z) \vert F\; M \rangle\,.$$

Using standard properties of spin-$1/2$ matrices, we can transform the diagonal matrix element above into $\frac{1}{4} \langle F\;M\vert (I_x^2 - I_z^2) \vert F\; M\rangle $. Once the axial symmetry of the unperturbed atomic state is taken into account, this simplifies further to $\frac{1}{8} \langle F\;M\vert { (\vec I}^{\,2} - 3 I_z^{\, 2}) \vert F\;M \rangle $. We arrive at the final expression:

$$\Delta E_{st}(F, M)= k(F, M)\; \frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+ A_{\perp}} d_I E \sin{2 \psi} \;, \label{delE}$$

where

$$k(F,M)= 2 (F-I) \langle F\;M\vert \frac{1}{2} ({\vec I}^{\,2} - 3 I_z^{\, 2}) \vert F\;M \rangle / (2I + 1) \,.$$

The Stark shift coefficients $k(F, M) $ for $ ^{133}_{~55}$Cs (I=7/2) are listed in Table 1. We note that $\Delta E_{st}$ depends only on $M^2$, so the linear Stark shifts of the Zeeman splittings $E(F, M)-E(F, M-1)$ have opposite signs for $M > 0$ and $M < 0$ (see figure 3):

$$E(4, M)-E(4, M-1) = \hbar \omega_s M / (2 I + 1) + \Delta E_{st}(4, \vert M \vert ) - \Delta E_{st}(4, \vert M-1 \vert) \; .$$

As expected, once again the pseudoscalar

$${\cal P}=\frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+A_{\perp}} \frac{(\hat n \cdot \vec B_0 )(\hat n \cdot \vec E \wedge \vec B_0)}{B_0^2} \; ,$$

plays an essential role. If ${\cal P}> 0 $, there is a contraction of the Zeeman splittings belonging to the F=4 hyperfine state for positive values of $M$ and a dilatation for negative ones, as shown by Figure 3. The situation is reversed when the sign of ${\cal P}$ is changed. In the F=3 hyperfine state, splitting contraction also occurs for $M > 0$ with ${\cal P} > 0$ and for $M < 0$ with ${\cal P} < 0$.
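(The spin-$1/2$ reduction invoked above can be made explicit. Using $s_x^2 = s_z^2 = \frac{1}{4}$ and $s_z s_x = -\,s_x s_z = \frac{i}{2}\, s_y$, one obtains

```latex
(s_z I_x + s_x I_z)(s_z I_x - s_x I_z)
  = \frac{1}{4}\left( I_x^2 - I_z^2 \right) - \frac{i}{2}\, s_y \left\{ I_x , I_z \right\} ,
```

and the diagonal matrix elements of the $s_y$ term vanish in the coupled basis $\vert F\,M\rangle$, the Clebsch-Gordan coefficients being real, which leaves $\frac{1}{4} \langle F\;M\vert (I_x^2 - I_z^2) \vert F\; M\rangle$.)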
This behavior could help to discriminate the linear Stark shift induced by the nuclear helimagnetism from spurious effects. The largest shift between two contiguous sublevels is expected to occur for the pair of states $F=3, \vert M \vert = 3 \rightarrow F=3, \vert M \vert = 2$. From Table 1 and Eqs. (\[dI\]) and (\[delE\]) we predict:

$$\Delta E_{st}(3, 3)-\Delta E_{st}(3, 2) = \frac{75}{64} \; \frac{A_{\parallel}-A_{\perp}}{A_{\parallel} + A_{\perp}} \; \sin{2 \psi} \; d_I \cdot E \; , \label{Starkshift}$$

with $ \frac{75}{64} d_I \simeq 2.76 \times 10^{-13} \vert e \vert a_0\;.$ As in the strong field limit, we note that the size of $\Delta E_{st}$ depends only on the direction of $\vec B$.

  $\vert M \vert $           0         1         2       3         4
  ------------------------ --------- --------- ------- --------- ---------
  k(4, $\vert M \vert $)    15/16     51/64     3/8     -21/64    -21/16
  k(3, $\vert M \vert $)    -15/16    -45/64    0       75/64     

  : Linear Stark shift coefficients k(4, M) and k(3, M) of the different F, M substates of the natural cesium ground state

Analogy between this shift and the PV energy shift searched for in enantiomer molecules
---------------------------------------------------------------------------------------

We would like to stress that, from the point of view of symmetry considerations, there exists a close analogy between the linear Stark shift induced by the anapole moment and the energy shift which is searched for in enantiomer molecules [@dau99]. Indeed, in the present configuration the three non-coplanar vectors $\vec E, \vec B$ and $\vec n$ are sufficient to place the atom in a chiral environment similar to that experienced by an atomic nucleus inside a chiral molecule. An energy difference is predicted between two mirror-image environments, exactly as between two mirror-image molecules.
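The entries of Table 1 can be reproduced directly from the definition of $k(F,M)$, evaluating $\langle I_z^2\rangle$ in the coupled states $\vert F\,M\rangle$ with the standard $s=1/2$ Clebsch-Gordan weights; a minimal sketch in exact rational arithmetic:

```python
from fractions import Fraction as Fr

I = Fr(7, 2)          # 133Cs nuclear spin
s = Fr(1, 2)          # electron spin

def Iz2_mean(F, M):
    """<F M| I_z^2 |F M> for F = I +/- 1/2, from squared CG coefficients."""
    eps = 1 if F == I + s else -1
    w_up = (I + Fr(1, 2) + eps * M) / (2 * I + 1)   # weight of m_I = M - 1/2
    w_dn = (I + Fr(1, 2) - eps * M) / (2 * I + 1)   # weight of m_I = M + 1/2
    return w_up * (M - Fr(1, 2))**2 + w_dn * (M + Fr(1, 2))**2

def k(F, M):
    """k(F,M) = 2(F-I) <F M| (I^2 - 3 I_z^2)/2 |F M> / (2I+1)."""
    mean = (I * (I + 1) - 3 * Iz2_mean(F, M)) / 2
    return 2 * (F - I) * mean / (2 * I + 1)
```

For example, `k(3, 3)` returns the fraction 75/64 quoted above.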
Experimental considerations and order of magnitude estimate
===========================================================

We now consider an experimental situation which looks like a possible candidate for the observation of the linear Stark shift discussed in the previous sections. It has been demonstrated experimentally [@wei95] that cesium atoms can be trapped in solid matrices of $^4$He. At low pressures, solid helium crystallizes in an isotropic body-centered cubic (bcc) phase, but also in a uniaxial hexagonal close packed (hcp) phase. Optically detected magnetic resonance has proved to be a sensitive tool for investigating the symmetry of the trapping sites. The group of A. Weis has reported the observation in the hexagonal phase of zero-field magnetic resonance spectra and magnetic dipole-forbidden transitions which they interpret in terms of a quadrupolar distortion of the atomic bubbles [@wei98]. Particularly relevant here is their observation of the matrix-induced lifting of the Zeeman degeneracies in zero field. This is attributed to the combined effect of two interactions: on the one hand, the quadrupolar interaction of the form $ H_b( \vec{n} )={\lambda}_b \left( (\vec{\rho}\cdot \vec{n})^2 - \frac{1}{3} \ {\rho}^2 \right) $ between the cesium atom and the He matrix and, on the other, the hyperfine interaction in the Cs atom.
Provided that $\vec F^2$ is still a good quantum number, it is easily shown from general symmetry considerations that the anisotropy of the hyperfine interaction induced by the (hcp) crystal potential can be represented, within a given hyperfine multiplet, by the effective perturbation:

$$H_{eff}= C_{eff}(F) \cdot ((\vec F \cdot \hat n)^2 - \frac{1}{3} \vec F^2) \,.$$

The constants $C_{eff}(F) $ can be easily related to the anisotropic hyperfine constants appearing in the spin Hamiltonian ${\widetilde H_{spin}} $ introduced in Eq.(\[Hbub\]) of the previous section:

$$C_{eff}(F = I \pm 1/2 )= \pm (A_{\parallel} - A_{\perp})/8\;.$$

In a uniaxial crystal, when the atoms are optically polarized along the crystal axis in the absence of external magnetic fields, the lifting of the degeneracy between Zeeman sublevels induced by $ H_{eff}$ should make it possible to drive magnetic resonance transitions between these levels. One would expect to deduce the hyperfine anisotropy from the observed spectra. At first sight, the zero-field magnetic resonance spectra observed by Weis [*et al.*]{} would seem to match this prediction. However, their experiment has been performed in a polycrystalline (hcp) sample. The effects observed in this situation result from averaging over the distribution of the microcrystal axes. For each microcrystal, there exists a quantization axis, $ \hat{z}$, which diagonalizes the hyperfine level density matrix. Immediately a question arises as to the direction of the quantization axis $ \hat{z} $ with respect to the microcrystal symmetry axis $\vec{n}$. If the population differences resulted, say, from the Boltzmann factor, then $ \hat{z}$ would be along $\vec{n}$, since in the zero magnetic field limit there is no other preferred direction. In such a situation, there would be no difference between the spectra for a polycrystal and a monocrystal.
But in the experimental situation considered here, the population differences are induced by an optical pumping mechanism which provides a second preferred direction: the direction of the photon angular momentum along $ \vec{k} $. The microcrystal density matrix is then expected to keep some memory of the direction of $ \vec{k} $. So, two directions $\vec{n}$ and $ \vec{k} $ compete in the determination of the quantization axis $ \hat{z} $. To proceed further, we consider the extreme case where $ \hat{z} $ is taken along $\vec{k}$, together with an assumed isotropic distribution of microcrystal axes. It is then easily seen that the lines associated with the hyperfine anisotropy $ H_{eff}$ collapse into a single asymmetric line when the average is performed over the polycrystal. Clearly, at least one of the two preceding assumptions is too drastic, most likely the isotropy of the $\vec{n}$ distribution. It is indeed likely that the optical pumping process is more efficient for microcrystals having a preferred orientation with respect to the photon angular momentum. Such a selection mechanism would then lead to an effective anisotropic distribution of $\vec n$, and in this way a spectrum of separated lines can be recovered. From the above qualitative considerations, it clearly follows that the final interpretation of the zero-field resonances requires a detailed analysis of the optical pumping process for Cs atoms trapped inside deformed bubbles of arbitrary orientation. The corresponding theoretical investigation is currently underway in A. Weis’s group. Meanwhile, to plan any experiment, we still need to know the physical origin and the magnitude of the ratio $ \frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+ A_{\perp}}$, which governs the magnitude of the electroweak linear Stark shift.
We now present the result of an investigation which has led us both to a physical understanding and to a reasonably accurate estimate of the sought-after parameter. We have chosen to devote an appendix to a detailed description of our semi-empirical approach, which consists in relating the hyperfine anisotropy to another measured physical quantity. Here we shall give a brief summary of our procedure and present the final result. We start from the remark that there does exist a mechanism able to generate a hyperfine anisotropy to first order in the “bubble” Hamiltonian $ H_b( \vec{n} )$. The $nD_{3/2}$ state is indeed mixed with the $6S_{1/2} $ state under the effect of $H_b( \vec{n} )$, and the hyperfine interaction has non-zero off-diagonal matrix elements between $S_{1/2}$ and $D_{3/2}$ states. In fact, it has been shown previously [@bou881] that the $\langle nS_{1/2}\vert H_{hf}\vert n^{\,\prime }D_{3/2}\rangle$ matrix elements are not easy to calculate, because they are dominated by the contribution coming from many-body effects, due to the existence of an approximate selection rule which suppresses the single particle matrix element. However, as we show in the appendix, the variation of the matrix elements $\langle n^{\prime} S_{1/2} \vert H_{hf} \vert n^{\prime \prime} D_{3/2} \rangle $ with respect to the binding energies ${ \cal E }_{n^{\prime} S_{1/2} }$ and ${ \cal E }_{n^{\prime \prime} D_{3 /2} }$ (expressed in Rydbergs) can be reasonably well predicted in the limit $\vert { \cal E }_{n^{\prime} S_{1/2} } \vert \, , \, \vert{ \cal E }_{n^{\prime \prime} D_{3 /2} }\vert \ll 1 $. In this way, we are left with a single parameter which can be deduced from the empirical knowledge of another physical quantity involving the same matrix elements.
We have in mind the quadrupolar amplitude $E_2^{hf} $ induced by the hyperfine interaction which is present in the cesium $ 6S \rightarrow 7S$ transition in the absence of a static electric field [@bou882]. In order to show the relation between the quantities $A_{\parallel}-A_{\perp}$ and $E_2^{hf}$, we express them explicitly in terms of the matrix elements $M( n^{\prime },n ) $ given by:

$$M( n^{\prime },n ) =\sum_{n^{\prime\prime} } \frac{\langle n^{\prime } S_{1/2} \vert H_{hf} \vert n^{\prime\prime} D_{3/2}\rangle \langle n^{\prime\prime} D_{3/2}\vert { \rho }^2 \vert n \, S_{1/2}\rangle } { { \cal E }_{n^{\prime} S_{1/2} }- { \cal E }_{n^{\prime \prime} D_{3 /2} } }\, .$$

The basic formula used in our numerical evaluation of $A_{\parallel}-A_{\perp}$ can be cast in a very compact form:

$$A_{\parallel}-A_{\perp} = - \frac{4\,{\lambda}_b}{ \Delta {\cal{E} }} \; \frac{ 2 M(6,6) }{M(7,6)+M(6,7) } \; a_3(7,6) \; Ry$$

where $\Delta {\cal{E} }$ is the energy of the $ 6S\rightarrow 7S$ transition and $a_3 \propto \, E_2^{hf} / \mu_B $ is the empirical quadrupolar amplitude (see Eq. (A.\[M1E2hh\]) for a precise definition). A second empirical input is used to determine the coupling constant $\lambda_b$: this is the $S-D$ mixing coefficient, which is obtained from the hyperfine frequency shifts measured by Weis [*et al.*]{} for Cs atoms trapped either in the (bcc) or the (hcp) phase [@wei98] in the low magnetic field limit. The ratio involving the matrix elements $M( n^{\prime },n )$ is evaluated in the appendix, using the approximation scheme sketched above. Its absolute value is found to lie close to unity. Let us now quote the final result of our semi-empirical method[^7] described in the appendix: $\vert \frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+A_{\perp}} \vert = 1.07 \times 10^{-3}$. The uncertainty is believed not to exceed $20\%$.
For observing the electroweak linear Stark shift discussed in the present paper, it is important to work with a uniaxial hexagonal crystal. Indeed, in a polycrystalline phase, where the individual crystals are oriented totally at random, the average value of the pseudoscalar ${\cal P}$ taken over the isotropic distribution of $\hat n$ is expected to be suppressed, and so is the Stark shift computed in the previous section. Although trapping of cesium atoms has not yet been achieved in a monocrystalline hexagonal phase, the prospect does not look unfeasible [@bal], and a determination of the magnitude of the hyperfine anisotropy appears to be the first step to be achieved. Using $\vert\frac{A_{\parallel}-A_{\perp}}{A_{\parallel}+ A_{\perp}}\vert = 1.07 \times 10^{-3}$ and Eq.(\[Starkshift\]), we find that the effective P-odd T-even electric dipole moment of the trapped cesium atoms associated with the nuclear anapole moment reaches $ 2.96 \times 10^{-16} \vert e \vert a_0$. For comparison, it is interesting to note that this is about three times as large as the Cs EDM limit (Eq.\[dCsEDM\]) to be measured on unperturbed Cs atoms for improving our present knowledge about a possible P-odd T-odd EDM of the electron.

Breaking the free atom symmetry by application of a nonresonant radiation field
===============================================================================

In this last section we just want to mention another possibility for breaking the atomic Hamiltonian rotation symmetry by means other than static uniform electric and magnetic fields. We have in mind the application of a strong nonresonant radiation field which generates an anisotropic electron gyromagnetic ratio.
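The anisotropy just mentioned is governed by the zeroth-order Bessel function: in the dressed-atom description [@lan70] the longitudinal and transverse gyromagnetic ratios differ by the factor $g_{\parallel}/g_{\perp} = J_0(\omega_1/\omega)$, so the longitudinal $g$ factor can even be tuned through zero at the first zero of $J_0$, $\omega_1/\omega \approx 2.405$. A small self-contained numerical illustration ($J_0$ evaluated from its power series, illustrative only):

```python
import math

def J0(x, terms=40):
    # zeroth-order Bessel function from its power series:
    # J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2
    return sum((-1)**k * (x / 2)**(2 * k) / math.factorial(k)**2
               for k in range(terms))

def g_ratio(omega1, omega):
    # g_parallel / g_perp for an atom dressed by a nonresonant field
    # of Rabi angular frequency omega1 and angular frequency omega
    return J0(omega1 / omega)
```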
In the presence of an external magnetic field $\vec B$, it has been shown [@lan70] that the effect of the nonresonant radiation field can be described by the introduction of an effective magnetic field:

$$\vec B' = \left( g_{\perp} \vec B + (g_{\parallel} - g_{\perp}) \hat n \cdot \vec B \;\hat n \; \right)/ \sqrt{(g_{\parallel}^2 + g_{\perp}^2)}\; ,$$

where $\hat n$ defines the direction of polarization of the radiation field, $g_{\perp}=g_F $ and $ g_{\parallel}= g_F J_0(\omega_1/ \omega)$, $J_0$ is the zeroth-order Bessel function, and $\omega_1 $ is the Rabi angular frequency associated with the radiation field. The above formula suggests the existence of a uniaxial symmetry, but it is valid only within an atomic hyperfine multiplet. It is clear that the “dressing” by a nonresonant radiation field offers new possibilities for placing the atoms in a quadrupolar environment. However, it is important to bear in mind that at least two stringent requirements must be satisfied if one wants to detect an electroweak Stark shift in the ground state. First, the uniaxial perturbation has to mix the two hyperfine substates, otherwise the matrix element of $H_{pv}^{st}$ cancels. Second, it is imperative to avoid a broadening of the transition lines, in order to allow precise frequency measurements. We are currently investigating how to achieve the proper conditions in a realistic way.

Conclusion {#conclusion .unnumbered}
==========

This paper investigates a way to get around the well known no-go theorem:\
[*no linear Stark shift can be observed in a stationary atomic state unless T reversal invariance is broken.* ]{} The perturbation of an atom by the nuclear spin-dependent parity-odd potential generated by the nuclear anapole moment leads to a static electric dipole moment $ d_I \,\vec s \wedge \vec I $, which clearly is T-even.
However, if one considers an atom placed in arbitrarily oriented uniform static electric and magnetic fields $ {\vec E}_0 $ and $ {\vec B}_0 $, the quantum average $ {\vec E}_0 \cdot \langle \vec s \wedge \vec I\rangle $ is found to vanish. This can be understood by noting that $ \vec s \wedge \vec I \cdot {\vec E}_0 $ is odd under the quantum symmetry transformation $\Theta$, defined as the product of the time reflection $ T$ and a space rotation of $ \pi $ about an axis normal to a plane parallel to the fields $ {\vec B}_0 , {\vec E}_0$, while the atomic Hamiltonian stays even. Our strategy to obtain a linear Stark shift is to break the $\Theta$ symmetry while keeping T invariance. As a possible practical realization of such a situation, we have studied the case of ground state Cs atoms trapped in a uniaxial (hcp) phase of solid $^4$He, which has recently been the subject of detailed spectroscopic studies [@wei98]. The required breaking of space rotation symmetry is provided by the uniaxial crystal field. As a result of the deformation of the atomic spatial wave function, the hyperfine interaction acquires an anisotropic part, which plays an essential role in the determination of the size of the linear Stark shift. We have performed a numerical estimate of the hyperfine anisotropy, believed to be accurate to the $ 20\% $ level, using a semi-empirical method. We use as input the recent experimental measurement of the $E_2$ amplitude of the $ 6S_{1/2}\rightarrow 7S_{1/2}$ transition induced in cesium by the hyperfine interaction. We arrive in this way at a numerical evaluation of the linear Stark shift induced by the nuclear anapole moment: the expected effect is found to be about three times the experimental upper limit to be set on the T-odd Stark shift of free Cs atoms for improving the present limit on the electron EDM.
Besides the obvious remark that the T-even Stark shift studied here could be a possible source of systematic uncertainty in EDM experiments designed to reach unprecedented sensitivity [@hun91; @com94; @hin97; @for00], we believe that there are strong physical motivations for measuring the Stark shift itself. First, it would lead to a direct measurement of the nuclear anapole moment in the absence of any contribution coming from the dominant PV potential due to the weak nuclear charge. It would also provide evidence for a truly static manifestation of the electroweak interaction, something which is still lacking. Second, this experiment would rely on the measurement of frequency shifts rather than transition amplitudes. While transition probabilities are difficult to measure very accurately, high precision measurements of frequency shifts have already been achieved.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Ph. Jacquier for his continuous interest in the subject of this work and his encouragement. We acknowledge many stimulating discussions with A. Weis and S. Kanorsky. We are grateful to M. Plimmer and J. Guéna for careful reading of the manuscript. This work has been supported by INTAS (96-334).

APPENDIX: Semi-empirical calculation of the hyperfine anisotropy of Cs atoms trapped inside a $^4$He hexagonal matrix {#appendix-semi-empirical-calculation-of-the-hyperfine-anisotropy-of-cs-atoms-trapped-inside-a-4he-hexagonal-matrix .unnumbered}
=====================================================================================================================

In this appendix we present our evaluation of the hyperfine structure anisotropy $\frac{A_{\parallel} -A_{\perp}}{A_{\parallel}+ A_{\perp}}$ resulting from the matrix-induced bubble deformation of quadrupolar symmetry, a quantity frequently referred to in this paper.

### 1. Two processes induced by hyperfine mixing {#two-processes-induced-by-hyperfine-mixing .unnumbered}

Our approach is based on the fact that hyperfine mixing plays quite similar roles in two different processes. The first process concerns the Cs $6S \rightarrow 7S $ quadrupolar transition amplitude in zero electric field, while the second process deals with the parameter $\frac{A_{\parallel} -A_{\perp}}{A_{\parallel}+ A_{\perp}}$. We start by rewriting the standard mixed $M_1-E_2$ transition operator in atomic units:

$$T_{ M_1+E_2}= (\vec{\epsilon }\wedge \vec{k} )\cdot \frac{\vec{\cal{M}}}{\mu_B} -i \frac {1}{2} \Delta {\cal{E} } ( \vec{\rho} \cdot \vec{\epsilon } )( \vec{\rho} \cdot \vec{k} ) \,, \label{M1E2}$$

where $\vec{\rho} $ is the electron coordinate in units of the Bohr radius and $ \Delta {\cal{E} } $ is the transition energy expressed in Rydberg units[^8]. We first study the perturbation of $ T_{ M_1+E_2} $ caused by the hyperfine interaction $ H_{hf} $. This phenomenon has been observed experimentally in the forbidden $ 6S_{1/2} \rightarrow 7S_{1/2}$ transition. It provides a useful calibration amplitude in cesium parity violation experiments. To analyse the experimental results, it was found convenient to introduce the effective transition operator $T_{hf} $ acting upon the tensor products of the electron spin and nuclear spin states:

$$T_{hf}= i a_2( n^{\prime},n ) \, ( \vec{s}\wedge \vec{I})\cdot (\vec{\epsilon }\wedge \vec{k} )+ i a_3( n^{\prime},n) \, ( (\vec{s} \cdot \vec{k}) (\vec{I}\cdot\vec{\epsilon } )+(\vec{s} \cdot\vec{\epsilon } ) (\vec{I}\cdot \vec{k})) \,. \label{M1E2hh}$$

The second physical process to be analysed in this section is not at first sight closely connected, but it happens to be described by the same formalism. This will allow us to establish a very useful connection between measurements coming from rather different experimental situations.
Recently, optical pumping has been observed with cesium atoms trapped inside a hexagonal matrix of solid helium [@wei98]. Among the new effects to be expected, we have seen earlier in this paper that the existence of an anisotropic hyperfine structure opens the possibility of observing a linear Stark shift induced by the nuclear anapole moment, an effect which cannot exist for an atom in a spherically symmetric environment. It is known that in the bubble enclosing the cesium atom there is a small overlap between the cesium and the helium orbitals [@kan98]. As a consequence, the axially symmetric crystal potential inside the bubble can be well approximated by a regular solution of the Laplace equation:

$$H_b( \vec{n} )={\lambda}_b (\frac{e^2}{ 2 a_0} )\left( (\vec{\rho}\cdot \vec{n})^2-\frac{1}{3} \ {\rho}^2 \right) \,. \label{hbub}$$

The perturbation of the hyperfine interaction by the bubble quadrupole potential $H_b( \vec{n} ) $ induces an anisotropic hyperfine structure for cesium $ nS_{1/2} $ states. This is described by the effective Hamiltonian:

$$H_{hf}^{anis}= (A_{\parallel}-A_{\perp} ) \left( (\vec{s}\cdot \vec{n} )(\vec{I}\cdot\vec{n}) - \frac{1}{3} \vec{s}\cdot\vec{I} \right)\,. \label{hfanis}$$

We now present the basic formulas which allow the computation of the parameters relevant for the two physical problems at hand. They will be given in such a way as to exhibit their close analogy. We have chosen to use the Dirac equation formalism. Besides the fact that the formulas are more compact, it is well known that relativistic corrections play an important role in cesium hyperfine structure computations. Neglecting the contribution of the quadrupole nuclear moment of the Cs nucleus[^9], the hyperfine Hamiltonian is written as:

$$H_{hf}= \vec{I}\cdot\vec{\cal{A}}\, ,$$

$$\vec{\cal{A}}=C_{hf}\frac{\vec{\alpha} \wedge \vec{\rho} }{ {\rho}^3} + \delta {\vec{\cal{A}} }^{(1)}(\vec{\rho},{\vec{\rho}^{\;\prime}} ) + \cdots \;.$$

The first term gives the hyperfine interaction of the valence electron treated as a Dirac particle; the second represents the non-local modification of the hyperfine interaction induced by the excitation of core electron-hole pairs to lowest order, and the dots stand for higher order contributions[^10]. It has been shown in reference [@bou881] that the off-diagonal matrix element $ \langle n^{ \prime \prime} D_{3/2} \vert \frac{\vec{\alpha}\wedge \vec{\rho} }{ {\rho}^3} \vert n S_{1/2} \rangle $ is strongly suppressed by an approximate selection rule which does not apply to the many-body non-local operator $\delta {\vec{\cal{A}} }^{(1)}(\vec{\rho},{\vec{\rho}^{\;\prime}} )$. An evaluation of the latter contribution led to a semi-quantitative agreement with the experimental measurements of the ratio $ a_3(7,6)/a_2( 7,6)$, while the single particle result is too small by about two orders of magnitude. To obtain an estimate of the ratio $ (A_{\parallel}-A_{\perp} ) / a_3( 7,6)$ it is convenient to introduce the cartesian tensor operator $ T_{i_1i_2i_3}(E)$. This object appears naturally in the lowest-order perturbation expressions for the quantities of interest:

$$T_{i_1i_2i_3}(E) = {\cal A}_{i_1} \, G_{3/2}^{+}(E)\,( {\rho}_{i_2} {\rho}_{i_3}-\frac{1}{3}\, \delta_{i_2,i_3}\, {\rho}^2) \,,$$

where the indices $ i_1 \, ,\, i_2 \, ,\, i_3 $ take any value between 1 and 3. The scalar operator $ G_{3/2}^{+}(E)$ is the atomic Green function operator restricted to the subspace of $ D_{3/2}$ configurations (total atomic angular momentum J=3/2 and positive parity). We now proceed to isolate in $T_{i_1i_2i_3}(E)$ the part transforming as a vector; this is the only part to survive after the operator is sandwiched between the projectors $P(n^{\prime} S_{1/2}) $ and $ P(n S_{1/2} ) $.
This operation is achieved by a decomposition of $T_{i_1i_2i_3}(E)$ into a traceless tensor $\bar{T}_{i_1i_2i_3}(E)$ and a remainder [@group]: $$\begin{aligned} T_{i_1i_2i_3}(E) &= &\bar{T}_{i_1i_2i_3}(E)+\frac{3}{10}\,\left( {\delta}_{i_1,i_2}\, T_{\alpha \alpha i_3}(E) + {\delta}_{i_1,i_3}\, T_{\alpha \alpha i_2 } (E) \right) \nonumber \\ & &-\frac{2}{10} \, {\delta}_{i_2,i_3}\,T_{\alpha \alpha i_1}(E) \,, \label{irreduc}\end{aligned}$$ where we have used the fact that $T_{\alpha \alpha i } = T_{\alpha i \alpha } $ and $T_{i \alpha \alpha }= 0 $. It is a simple matter to verify from the above equation that we have indeed $\bar{T}_{\alpha \alpha i_3} =\bar{T}_{\alpha i_2\alpha } =\bar{T}_{i_1\alpha \alpha } =0$. The fully symmetric part $\bar{T}_{i_1i_2i_3}^S (E)$ of the traceless tensor is easily identified with an octupole spherical tensor having seven independent components. By a simple counting argument, the leftover term is seen to have five components; it is to be identified with the quadrupole tensor which appears in the full decomposition of $ T_{i_1i_2i_3}(E) $ into irreducible representations of the rotation group $ O(3)$.
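The tracelessness of $\bar{T}$ can be checked numerically. The sketch below (our illustration, not part of the original derivation) builds a tensor with the structure $T_{i_1i_2i_3}=v_{i_1}S_{i_2i_3}$, where $S$ is symmetric and traceless as inherited from the quadrupole factor ${\rho}_{i_2}{\rho}_{i_3}-\frac{1}{3}\delta_{i_2,i_3}{\rho}^2$, and verifies that the $\bar{T}$ obtained by solving Eq. (A.\[irreduc\]) has vanishing traces on all three index pairs; the vector $v$ and matrix $S$ are arbitrary illustrative values:

```python
# Numerical check that the decomposition of Eq. (A.irreduc) yields a tensor
# with vanishing traces on all index pairs.  T[i1][i2][i3] = v[i1]*S[i2][i3]
# with S symmetric and traceless, as inherited from the quadrupole factor.

def delta(i, j):
    return 1.0 if i == j else 0.0

# arbitrary vector and symmetric traceless matrix (illustrative values)
v = [0.3, -1.2, 0.7]
S = [[0.5, 0.2, -0.4],
     [0.2, -0.9, 0.1],
     [-0.4, 0.1, 0.4]]          # trace = 0.5 - 0.9 + 0.4 = 0

T = [[[v[i1] * S[i2][i3] for i3 in range(3)] for i2 in range(3)]
     for i1 in range(3)]

# contracted vector V_i = T_{alpha alpha i}
V = [sum(T[a][a][i] for a in range(3)) for i in range(3)]

# traceless part, obtained by solving Eq. (A.irreduc) for T-bar
Tbar = [[[T[i1][i2][i3]
          - 0.3 * (delta(i1, i2) * V[i3] + delta(i1, i3) * V[i2])
          + 0.2 * delta(i2, i3) * V[i1]
          for i3 in range(3)] for i2 in range(3)] for i1 in range(3)]

trace_12 = max(abs(sum(Tbar[a][a][i] for a in range(3))) for i in range(3))
trace_13 = max(abs(sum(Tbar[a][i][a] for a in range(3))) for i in range(3))
trace_23 = max(abs(sum(Tbar[i][a][a] for a in range(3))) for i in range(3))
```

All three traces come out at machine precision, as the symmetry properties $T_{\alpha\alpha i}=T_{\alpha i\alpha}$ and $T_{i\alpha\alpha}=0$ guarantee.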
Let us have a look at the vector operator $ \vec{V} $, the components of which appear on the right-hand side of Eq. (A.\[irreduc\]): $$\vec{V}= (\vec{\cal{A}}\, G_{3/2}^{+}(E)\cdot \vec{\rho}) \, \vec{\rho}- \frac{1}{3} \vec{\cal{A}}\, G_{3/2}^{+}(E) {\rho}^2 \,.$$ The second term in the above expression does not contribute when it acts upon an $ n S_{1/2} $ state, so we are led to introduce the vector operator $ \vec{{\cal T}}( n^{\prime},n) $ $$\begin{aligned} \vec{{\cal T}}( n^{\prime},n) &= &\frac{3}{10} P(n^{\prime} S_{1/2} ) \left( (\vec{\cal{A}}\, G_{3/2}^{+}(E_f ) \cdot \vec{\rho}) \, \vec{\rho} +( {\rm h.c.} , E_f \rightarrow E_i )\right) P(n S_{1/2}) \nonumber \\ & =& \gamma(n^{\prime},n) \,\vec{s} \,, \label{gamma}\end{aligned}$$ where $ E_f$ and $ E_i$ are respectively the binding energies of the $ n^{\prime} S_{1/2}$ and $ n S_{1/2} $ atomic states. The second line of the above equation follows directly from the Wigner-Eckart theorem applied to a vector operator. In order to calculate $ a_3 (n^{\prime},n) $ we have to perform the contraction of $ I_{i_1} {\epsilon}_{i_2} k_{i_3} $ with the tensor: $$F_{i_1i_2i_3}= P(n^{\prime} S_{1/2} ) \left( T_{i_1i_2i_3}(E_f)+( {\rm h.c.} , E_f \rightarrow E_i ) \right) P(n S_{1/2}) \,.$$ Using Eqs. (A.\[irreduc\]) and (A.\[gamma\]), $ F_{i_1i_2i_3} $ can be cast into the simple form: $$F_{i_1i_2i_3}= \gamma(n^{\prime},n) \left( {\delta}_{i_1,i_2}\, s_{i_3}+ {\delta}_{i_1,i_3}\, s_{i_2}-\frac{2} {3}{\delta}_{i_2,i_3}\, s_{i_1} \right) \,.$$ The required index contraction with the tensor $I_{i_1} \epsilon_{i_2} k_{i_3}$ is now easily performed and one obtains directly $ a_3(n^{\prime} ,n)$, up to a prefactor whose value is found by identification with Eq. (A.\[M1E2\]): $$a_3(n^{\prime} ,n)= -\frac {1}{2} \Delta {\cal{E} }\,\gamma(n^{\prime},n)\,.
\label{E2hf}$$ To calculate the hyperfine anisotropy $A_{\parallel}-A_{\perp}$, we follow the same lines, but this time the contraction involves the tensor $ I_{i_1} \left( {n}_{i_2} n_{i_3} -\frac{1}{3} {\delta}_{i_2,i_3} \right)$, the prefactor is fixed by comparison with Eq. (A.\[hbub\]), and the exchange $i_2 \leftrightarrow i_3 $ leads to two identical contributions. Hence, $$\begin{aligned} A_{\parallel}-A_{\perp} &= & 2 \,{\lambda}_b (\frac{e^2}{ 2 a_0} ) \,\gamma(n,n) \\ &=& - (\frac{e^2}{ 2 a_0} ) \, \frac{4\,{\lambda}_b}{ \Delta {\cal{E} }} \, \frac{\gamma(n,n)}{\gamma(n^{\prime},n)}\, a_3 (n^{\prime},n) \,. \label{anisA}\end{aligned}$$ Expression (A.\[anisA\]) appears to us to be a good starting point for a numerical evaluation of $A_{\parallel}-A_{\perp} $: besides the fact that several sources of uncertainty in the evaluation of ${\gamma(n^{\prime},n)}$ cancel in the ratio $\frac{\gamma(n,n)}{\gamma(n^{\prime},n)}$, it lends itself to the use of empirical information. One may note here a certain similarity with Eq. (7) of Sec. 1.2, used for the evaluation of the permanent dipole $d_I$. ### 2. Numerical evaluation {#numerical-evaluation .unnumbered} We proceed now to a numerical evaluation of $A_{\parallel}-A_{\perp} $ in three steps, starting from formula (A.\[anisA\]). The numerical value of the $ 6S_{1/2}\rightarrow 7S_{1/2}$ quadrupole amplitude $a_3 (7,6) $ is readily obtained from measurements [@bou882; @ben99] of the ratio $$a_3 (7,6)/a_2 (7,6)= E_2/M_1^{hf} = ( 5.3 \pm 0.3) \times 10^{-2}\, ,$$ combined with a precise theoretical evaluation of the magnetic dipole amplitude[^11] $ a_2 (7,6) = -\frac{ M_1^{hf}}{2\mu_B} = -0.4047 \pm 4 \times 10^{-4}$. We finally obtain: $$a_3 (7,6)= ( 2.14 \pm 0.12)\times 10^{-7}\, .$$ The second step is the numerical estimate of the ratio $ \gamma(6,6) / \gamma(7,6) $. This is more delicate and requires an assumption which has been shown to work in similar situations.
To begin with, we have addressed the question[^12] of the origin and size of the variations of the off-diagonal matrix elements $ \langle nS_{1/2}\vert H_{hf}\vert n^{\prime \prime} D_{3/2}\rangle $ with the radial quantum numbers $ n$ and $ n^{\prime \prime}$. It is worth recalling that a very precise answer to this question has already been obtained in the case of cesium single-particle matrix elements $ \langle nL_{J}\vert H_{hf}^{sp}\vert n^{\prime } L_{J}\rangle $ with $ L= 0 \;{\rm or}\;1 $ and with $n \;{\rm and} \; n^{\prime }$ referring to the radial quantum numbers of any pair of valence states. For simplicity, we are going to express the answer within a non-relativistic formalism, but it should be borne in mind that all of what is said holds true within a relativistic framework. It is convenient to introduce the notion of overlined matrix elements, namely those computed with radial wave functions $\overline{ R_{nlj} }(\rho) $ which have a starting coefficient at the origin equal to unity instead of a unit norm[^13]. More explicitly, we can write: $$\overline {\langle nL_{J} \vert H_{hf}^{sp} \vert n^{\prime } L_{J}\rangle } = \frac{ \langle nL_{J} \vert H_{hf}^{sp} \vert n^{\prime } L_{J}\rangle }{ A_{ n l_{j} } \, A_{ n^{\prime } l_{j} } } \,,$$ where $ A_{ n l_{j} }= \lim_{ \rho \to 0} {\rho}^{- l} R_{nlj} (\rho) $ is the starting coefficient of the space-normalized wave function. (In the relativistic case the above condition is replaced by energy-independent boundary conditions imposed on the Dirac radial wave functions at the nuclear radius.) It was found in Refs. [@bou881; @bou86] that the overlined matrix elements are independent of the valence orbital radial quantum numbers $ n$ and $n^{\prime }$ to better than $ 10^{-4}$ for $ S_{1/2} $ states and better than $ 10^{-3}$ for $ P_{1/2} $ states.
This result is understood by noting that, in the domain of the $ \rho $ values relevant for the evaluation of the matrix elements of $ \frac{\vec{\alpha} \wedge \vec{\rho} }{ {\rho}^3} $ for $ S_{1/2}$ and $ P_{1/2} $ states, the potential energy is larger than the valence binding energies by more than three orders of magnitude. This implies that, [*in this domain, the overlined radial wave functions have no dependence upon the binding energy or, equivalently, upon the radial quantum numbers of the valence orbitals*]{}. The above argument has to be reconsidered for the lowest-order many-body correction involving the matrix element of the non-local operator $ \delta {\vec{\cal{A}}}^{(1)}(\vec{\rho},{\vec{\rho}^{\;\prime}} ) $. The relevant domain of $ \rho $ values is now determined by the “radii” of the core outer orbitals involved in the computation, which in the case of $ S_{1/2} $ and $ P_{1/2} $ matrix elements are $ 5s\, , \, 5p \; $, while in the case of the off-diagonal matrix element $ \langle nS_{1/2} \vert \delta {\vec{\cal{A}}}^{(1)}\vert n^{\prime } D_{3/2}\rangle$ only $ 5p\, $ is relevant. We measure the variation of the overlined matrix elements $ \overline{ \langle nL_{J}\vert H_{hf}^{mb}\vert n^{\prime } L^{ \,\prime}_{J^{\prime}} \rangle } $ with the valence-state binding energies $ { \cal E }_{nL_{J} } $ by the parameters $ {\delta }_{L_{J} }$, defined as their logarithmic derivative with respect to $ { \cal E }_{nL_{J} } $ (here $H_{hf}^{mb}$ stands for the many-body modification of the hyperfine interaction). From the results of Refs. [@bou881; @bou86], we can infer the relative variation of $\overline{ \langle n L_{1/2} \vert \delta {\vec{\cal{A}}}^{(1)}\vert n^{\prime } L_{1/2}\rangle } $ for $ L=0,1$ and we arrive at the values ${\delta }^{(1)}_{S_{1/2} }= -0.12 $ and ${\delta }^{(1)}_{P_{1/2} }= -0.30 $.
The fact that $-{\delta }^{(1)}_{P_{1/2} } $ is about three times larger than $-{\delta }^{(1)}_{S_{1/2} }$ stems from the fact that $ P $-state binding energies have to be compared with the potential energy minus the centrifugal energy. Let us now consider the more difficult case of the S-D off-diagonal matrix elements $ \overline{ \langle n^{\prime }S_{1/2} \vert \delta {\vec{\cal{A}}}^{(1)}\vert n^{\prime\prime} D_{3/2}\rangle }$. The corresponding parameter ${\delta }^{(1)}_{S_{1/2} }$ is expected to be somewhat larger in absolute value, due to the fact that the relevant $ 5p $ orbital is less tightly bound than the $5s$ orbital which gives the dominant contribution to the $ S_{1/2} $ diagonal matrix element. The relative variation versus the $ D_{3/2} $ energy is expected to be of the order of a few units, since the centrifugal barrier is three times higher than in the case of $ P $ states. This expectation is borne out by a preliminary estimate which gives ${\delta }^{(1)}_{D_{3/2} } \sim -3$. We proceed now to a numerical evaluation of the ratio $ r_{anis}=\gamma(6,6)/\gamma(7,6) $, leaving, for the moment, ${\delta }_{ S_{1/2}}$ and $ {\delta }_{ D_{3/2} }$ as free parameters. As an intermediate step, we compute the quantities $ M( n^{\prime },n ) $, written as sums over the intermediate $ n^{\prime\prime} D_{3/2} $ states: $$M( n^{\prime },n ) =\sum_{n^{\prime\prime} } \frac{\langle n^{\prime } S_{1/2} \vert H_{hf} \vert n^{\prime\prime} D_{3/2}\rangle \langle n^{\prime\prime} D_{3/2}\vert { \rho }^2 \vert n \; S_{1/2}\rangle } { { \cal E }_{n^{\prime} S_{1/2} }- { \cal E }_{n^{\prime \prime} D_{3 /2} } }\,.$$ The ratio $ r_{anis}$ is given in terms of $M( n^{\prime },n )$ by the following formula: $$r_{anis}=\gamma(6,6)/\gamma(7,6)= \frac{ 2 M(6,6) }{M(7,6)+M(6,7) } \, .$$ An explicit numerical computation of $ r_{anis}$ has been performed according to the following procedure.
First, any binding-energy-independent factor appearing in $ M( n^{\prime },n ) $ is dropped, since it cancels in the ratio. This is indicated below by the symbol $\propto $. The sum $ \sum_{n^{\prime\prime} } $ appearing in $M( n^{\prime },n )$ is limited to $ 5 \leq n^{\prime\prime} \leq 8 $. The set of quadrupole matrix elements $\langle n^{\prime\prime} D_{3/2}\vert { \rho }^2 \vert n \; S_{1/2}\rangle $ was obtained by a relativistic version of the Norcross model. In order to test the sensitivity of the result to the quadrupole amplitudes, we have also used a set calculated by an extension of the Bates-Damgaard method. The energy denominators are taken from experiment. The hyperfine matrix elements $ \langle n^{\prime } S_{1/2} \vert H_{hf} \vert n^{\prime\prime} D_{3/2}\rangle$ are given, to second order in the binding energies, by the following formulas: $$\begin{aligned} \langle n^{\prime } S_{1/2} \vert H_{hf} \vert n^{\prime\prime} D_{3/2}\rangle & \propto & A_{n^{\prime\prime} D_{3/2} }\, A_{ n^{\prime } S_{1/2} } \times \label{hfSDme} \nonumber \\ & & \left( 1+ {\delta }_{ S_{1/2} } { \cal E }_{n^{\prime} S_{1/2} } + {\delta }_{ D_{3/2} } {\cal E }_{n^{\prime \prime} D_{3/2} } \right) \,, \\ A_{ n L_{J} } & \propto & \, ( -{\cal E }_{n L_{J} })^{\frac{3}{4} }\,. \label{startcoef}\end{aligned}$$ In formula (\[hfSDme\]), we have dropped, according to the above prescription, the zero-energy limit of the overlined matrix element $ \overline{\langle n^{\prime } S_{1/2} \vert H_{hf} \vert n^{\prime\prime} D_{3/2}\rangle}$. Equation (A.\[startcoef\]) follows from a result obtained in [@bou75], where the Fermi-Segrè formula was extended to arbitrary orbital angular momentum states. For simplicity, we have ignored a factor involving the derivative of the quantum defects, which in the present context would introduce corrections of a few percent.
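To make Eq. (A.\[startcoef\]) concrete, here is a small sketch (our illustration). The binding energies are standard cesium values in Rydberg units, inserted by us and not quoted in the text; their difference reproduces the $6S$-$7S$ interval $\Delta {\cal E}=0.169$ used in the final evaluation:

```python
# Illustrative use of Eq. (A.startcoef): A_{nL_J} ~ (-E_{nL_J})^(3/4).
# The binding energies below (in Rydberg) are standard cesium values
# inserted here for illustration; they are not quoted in the text.
E_6S = -0.2862   # 6S_{1/2} binding energy (Ry)
E_7S = -0.1177   # 7S_{1/2} binding energy (Ry)

# consistency with the 6S-7S interval Delta_E = 0.169 Ry used in the text
delta_E = E_7S - E_6S

# ratio of starting coefficients entering M(n', n)
A_ratio = (-E_6S) ** 0.75 / (-E_7S) ** 0.75   # about 1.95
```

The $6S$ starting coefficient is thus roughly twice the $7S$ one, which illustrates why the binding energies, rather than the radial shapes, control the $n$-dependence of the matrix elements.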
We now have all the elements needed to calculate the sought-after ratio: $$r_{anis} =\frac{ \gamma(6,6)}{\gamma(7,6)}=-0.8173-0.0255\, {\delta }_{ D_{3/2} }+ 0.1456\, {\delta }_{ S_{1/2} }\,.$$ The negative sign of $ r_{anis} $ can be traced back to the fact that the $7S_{1/2}$ level lies just in between the $5D_{3/2}$ and $6D_{3/2}$ levels. If we adopt the rough estimates given above, a few tenths for $ -{\delta }_{ S_{1/2} } $ and a few units for $ -{\delta }_{ D_{3/2} }$, the first-order energy correction remains well below the $10 \% $ level, due in part to a cancellation between the two correcting terms. To reduce the absolute value of $ r_{anis} $ by more than $ 10 \%$ would require unrealistic values of $-{\delta }_{ D_{3/2} }$, so we believe the estimate $ r_{anis}= -0.82 \pm 0.10$ to be reasonably safe[^14]. The final step in our evaluation of the hfs anisotropy is devoted to the empirical determination of the coupling constant $ \lambda_b$ appearing in front of the crystal electronic potential. As experimental input we are going to use the hyperfine energy shift which is observed for trapped cesium atoms when one passes from the cubic to the hexagonal phase. This shift is attributed to the effect of the anisotropic bubble potential $ H_b( \vec{n} )$. We shall ignore, for the moment, the possible contribution of the anisotropic hyperfine interaction $H_{hf}^{anis}$ and assume that the shift is essentially due to the renormalization of the $6 S_{1/2} $ component of the atomic wave function by the admixtures $ {\alpha}_{n D_{3/2}} $ of the $n D_{3/2}$ states. The corresponding variation of the hyperfine splitting $ \delta W $ is then given by: $$\frac{\delta W}{ W}=-\sum_{n,J}{ { \vert {\alpha}_{n D_{J} } \vert }^2}= -\lambda_b^2 {\cal J }_{SD}\,,$$ where we have isolated $\lambda_b^2 $ by introducing the purely atomic quantity ${\cal J }_{SD} $.
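The size of the first-order energy correction to $r_{anis}$, and the partial cancellation between the two correcting terms, can be checked directly; the sketch below uses $\delta_{S_{1/2}}=-0.3$ and $\delta_{D_{3/2}}=-3$ as representative values of the rough estimates quoted above:

```python
# First-order energy correction to r_anis = gamma(6,6)/gamma(7,6),
# evaluated for representative values of the free parameters
# (delta_S of a few tenths, delta_D of a few units, both negative).
def r_anis(delta_S, delta_D):
    return -0.8173 - 0.0255 * delta_D + 0.1456 * delta_S

r0 = r_anis(0.0, 0.0)            # zeroth-order value, -0.8173
r = r_anis(-0.3, -3.0)

# the two correcting terms have opposite signs and partially cancel
term_D = -0.0255 * (-3.0)        # positive contribution
term_S = 0.1456 * (-0.3)         # negative contribution
correction = abs(r - r0) / abs(r0)   # relative first-order correction
```

With these inputs the relative correction is about 4%, comfortably below the 10% level quoted in the text.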
Let us write down the explicit expression of ${\cal J }_{SD} $, neglecting spin-orbit coupling and assuming that $\vec{n}$ lies along the quantization axis: $${\cal J }_{SD} = \sum_{n} {\vert \langle 6 S \vert ( \cos ^2\theta-1/3)\rho^2\vert n D \rangle / ( {\cal E}_{6 S}- {\cal E}_{n D} ) \vert}^2 \,.$$ We limit the sum to $n$ values ranging from 5 to 8. With the same radial quadrupole matrix elements as before, we obtain the numerical value ${\cal J }_{SD}=9512 $. Using the empirical number given in Ref. [@wei98], $\sqrt{- \delta W/W }=0.035 $, we arrive at the following absolute value of the coupling constant $\lambda_b $ (in Ry): $$\vert\lambda_b \vert = \sqrt{ \frac{-\delta W}{ {\cal J }_{SD} \, W} }= 0.000359 \,. \label{lambda}$$ It should be pointed out that if the crystal axis $\vec{n}$ is not aligned along the quantization axis, one obtains values of ${\cal J }_{SD}$ smaller than the one quoted above, so the value of $ \vert\lambda_b \vert $ should be considered, strictly speaking, as a lower bound. We now have in hand all the ingredients needed to perform a numerical evaluation of $ \vert A_{\parallel}-A_{\perp} \vert$ from formula (A.\[anisA\]), since $ \Delta {\cal E } =0.169$ is taken directly from experiment: $$\begin{aligned} \vert A_{\parallel}-A_{\perp} \vert &= &\, \frac{4\,\vert{\lambda}_b\vert}{ \Delta {\cal{E} }} \, \frac{\vert\gamma(6,6)\vert}{\vert\gamma(7,6)\vert}\,\vert a_3 (7,6) \vert \; {\rm Rydberg\,(MHz) } = 4.9 \; {\rm MHz } \\ &=& 1.07 \times 10^{-3} \times ( \vert A_{\parallel}+A_{\perp} \vert ) \,. \label{finalresult}\end{aligned}$$ As a final topic, we should discuss the effect of the anisotropic hyperfine interaction itself on the empirical splitting $ \delta W $, since this could modify the value of $\vert{\lambda}_b\vert $ and so play a role in the assessment of the uncertainty affecting the result given by Eq. (A.\[finalresult\]). Due to this effect, the constant $\lambda_b$ is no longer given by Eq.
(A.\[lambda\]) but rather by a second-order equation whose linear term is associated with the anisotropic hyperfine interaction. It is convenient to introduce the variable $ x= \lambda_b/ \lambda_b^0$ with $ \lambda_b^0 = \sqrt{ -\delta W /( {\cal J }_{SD} \, W) }$. The equation giving $\lambda_b$ then takes the simple form $ x^2-2\,b\,x -1=0 $, where the coefficient $b$ is given by the following formula: $$b= \frac{ (A_{\parallel}-A_{\perp} )^{(0)} }{ A_{\parallel}+A_{\perp} } \frac{W}{\delta W} \frac { \Delta_F \langle \; s_z I_z -\frac{1}{3 } \vec{s}\cdot\vec{I} \; \rangle }{ 2\,I+1}\,.$$ The superscript $^{(0)}$ indicates that the hf anisotropy is given, up to a well-defined sign, by Eq. (A.\[finalresult\]). The symbol $ \Delta_F $ means that one should take the difference, between the two hyperfine states, of the quantum average to which it is applied. To obtain an overestimate of $b $ we have assumed that optical pumping works at its maximum efficiency, so that the microwave transition takes place between the hyperfine levels (4,4) and (3,3). In this case we obtain $b=0.20$ and the two possible solutions for $\lambda_b$ are: $$\lambda_b^{(\pm )}=\pm 3.6 \times 10^{-4} \,(1\pm 0.2) \, .$$ The actual experimental situation is expected to lie far from the extreme case considered here, so the difference between the two absolute values is certainly smaller than the upper limit given by the above calculation. In conclusion, including all sources of uncertainty, we consider our evaluation of Eq. (A.\[finalresult\]), $ \vert \frac{ A_{\parallel}-A_{\perp}}{ A_{\parallel}+A_{\perp} } \vert = 1.07 \times 10^{-3} $, as reliable within uncertainty limits of about 20$\%$. However, if, during the hyperfine shift measurements, $\vec n$ is not aligned along the quantization axis, the central value of $\lambda_b $, and therefore that of $\frac{ A_{\parallel}-A_{\perp}}{ A_{\parallel}+A_{\perp} }$, may be pushed upwards. [50]{} M. A. Bouchiat and C. Bouchiat, [*Rep. Prog.
Phys.*]{} [**60**]{}, 1351 (1997). S. C. Bennett and C. E. Wieman, [*Phys. Rev. Lett.*]{} [**82**]{}, 2484 (1999). M. A. Bouchiat, in [*Electroweak Interactions and Unified Theories*]{}, (Proceedings of the $XXXV^{th}$ Rencontres de Moriond, March 11-18 2000 Tran Than Van ed., éditions Frontière). P. Fayet and G. Altarelli, in[*Parity Violation in Atoms and Electron Scattering*]{}, B. Frois and M. A. Bouchiat eds. (World Scientific, 1999). M. J. Ramsey-Musolf,[*Phys. Rev.*]{} C [**60**]{}, 015501 (1999). M. A. Bouchiat, J. Guéna, L. Hunter and L. Pottier, [*Phys. Lett.*]{} B [**117**]{}, 358 (1982); [*Phys. Lett.*]{} B [**134**]{}, 463 (1984). M. A. Bouchiat, J. Guéna, L. Hunter and L. Pottier, [*J. Phys. (France)*]{} [**47**]{}, 1709 (1986). S. L. Gilbert, M. C. Noecker, R. N. Watts, and C. E. Wieman, [*Phys. Rev. Lett.*]{} [**55**]{}, 2680 (1985). M. C. Noecker, B. P. Masterson, and C. E. Wieman, [*Phys. Rev. Lett.*]{} [**61**]{}, 310 (1988). C. S. Wood [*et al*]{}, , [**275**]{}, 1759 (1997). V. V. Flambaum and I. B. Khriplovich, , 835 (1980). Ya. B. Zel’dovich, , [**6**]{}, 1184 (1957). C. Bouchiat and C. A. Piketty, , 49 (1991). C. Bouchiat, in [*Parity Violation in Atoms and Electron Scattering*]{}, B. Frois and M. A. Bouchiat eds. (World Scientific, 1999) p. 138. V. V. Flambaum [*et al.*]{}, , 367 (1984). M. A. Bouchiat and C. Bouchiat, [*J. Phys. (France)*]{} [**36**]{}, 899 (1974). D. Budker, in [*Proceedings of the Fifth International Symposium of the WEIN Conference (Santa Fe, June 98)*]{}, P. Herczeg, C. M. Hoffman, H. V. Klapdor-Kleingrothaus, eds., (World Scientific, 1999), p. 418-441; and private communication. A. L. Barra, J. B. Robert, and L. Wiesenfeld, [*Phys. Lett.*]{} A [**115**]{}, 443 (1986) and Europhys. Lett. [**5**]{}, 217 (1988). P. G. H. Sandars, Physica Scripta [**36**]{}, 904 (1987). S. A. Blundell, W. R. Johnson and J. Sapirstein, [*Phys. Rev.*]{} D [**45**]{}, 1602 (1992). E. D. Commins, S. B. Ross, D. DeMille, and B. C. 
Regan, [*Phys. Rev.*]{} A [**50**]{}, 2960 (1994). W. R. Johnson, D. S. Guo, M. Idrees, and J. Sapirstein, [*Phys. Rev.*]{} A [**34**]{}, 1043 (1986); A. C. Hartley, E. Lindroth, and A. M. Martensson-Pendrill, [*J. Phys.* ]{} B [**23**]{}, 3417 (1990). S. I. Kanorsky, and A. Weis, in [*Advances in Atomic, Molecular and Optical Physics*]{}, [**vol38**]{}, B. Bederson and H. Walther eds., (Academia Press, San Diego, 1998), p. 87, and references therein. Ch. Daussy, T. Marrel, A. Amy-Klein, C. T. Nguyen, Ch. J. Bordé, and Ch. Chardonnet, [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**83**]{}, 1554 (1999). M. Arndt, S. I. Kanorsky, A. Weis, and T. W. Hänsch, [*Z. Phys.* ]{} [**98**]{}, 377 (1995). S. Kanorsky, S. Lang, T. Eichler, K. Winkler and A. Weis, [*Phys. Rev. Lett.*]{} [**81**]{}, 401 (1998). C. Bouchiat and C. A. Piketty, [*J. Phys. France*]{} [**49**]{}, 1851 (1988). M. A. Bouchiat and J. Guéna, [*J. Phys. France*]{} [**49**]{}, 2037 (1988). S. Balibar, and A. Weis, private communications. C. Landré, C. Cohen-Tannoudji, J. Dupont-Roc and S. Haroche, [*J. Physique France*]{} [**31**]{}, 971 (1970). L. R. Hunter, [*Science*]{} [**252**]{} 73 (1991). E. D. Hinds and B. Sauer, [*Physics World*]{} April 1997, p. 37. M. V. Romalis, W. C. Griffith, and E. N. Fortson, arXiv:hep-ex/0012001 1 Dec 2000. C. Bouchiat, in [*Atomic Physics 7*]{}, D. Kleppner and F. M. Pipkin eds. (Proceedings of the Seventh International Conference on Atomic Physics, Plenum Press, New York, 1981), p. 83-119. V. A. Dzuba, V. V. Flambaum, A. Ya Kraftmakher, and O. P. Sushkov, [*Phys. Lett.*]{} A [**142**]{}, 373 (1989). Morton Hamermesh, [*Group Theory*]{}, (Addison-Wesley series in Physics, Pergamon Press: London-Paris, 1962), p. 393. C. Bouchiat and C. A. Piketty, [*Europhys. Lett.*]{} [**2**]{}, 511 (1986). A. Derevianko, M. S. Safronova and W. R. Johnson, [*Phys. Rev.*]{} A [**60**]{}, R1741 (1999). V. A. Dzuba and V. V. Flambaum, [*Phys. Rev.*]{} A [**62**]{}, 052101 (2000). 
[^1]: Laboratoire de l’Université Pierre et Marie Curie et de l’Ecole Normale Supérieure, associé au CNRS (UMR 8552) [^2]: UMR 8549: Unité Mixte du Centre National de la Recherche Scientifique et de l’École Normale Supérieure [^3]: To check the validity of the relation (\[relVpv2Vpv1\]) we have compared the values for $ \eta$ and $ {\eta}^{\prime} $ obtained in this way with those deduced from a direct computation [@blu92] of $ {\vec{d}}_{pv}(6,7) $. The two results agree to better than $ 10 \, \% $. [^4]: We use here the fact that, as noted by several authors, most of the sum ($\approx 98\%$) comes from the four states $6P_{1/2}, 7P_{1/2},8P_{1/2},9P_{1/2}. $ [^5]: Note that a misprint in Table IV of Ref. [@blu92] has caused an interchange between the contents of columns 1 and 2 of its lower half (entitled “7S perturbed”). [^6]: It has been shown [@kan98] that in the bubble enclosing the cesium atom there is a small overlap between the cesium and the helium orbitals. As a consequence, the axially symmetric crystal potential inside the bubble can be well approximated by a regular solution of the Laplace equation. [^7]: This method can be seen as a generalization of that used in Sec. 1.2 to evaluate the static dipole starting from the empirical knowledge of the transition dipole. [^8]: The phase difference, $ \pi/2$, between the two amplitudes expresses the fact that the magnetic moment $\vec{\cal{M}}$ and the quadrupole operator behave differently under time reversal: the first is odd, while the second is even. [^9]: As shown in Ref. [@bou882], the quadrupole contribution for $^{133}$Cs plays a negligible role in the effects discussed in this appendix. [^10]: An explicit construction of $\delta {\vec{\cal{A}} }^{(1)}(\vec{\rho},{\vec{\rho}^{\;\prime}} )$, together with a resummation of an infinite set of higher-order terms within the many-body field theory formalism, is given in Ref. [@bou81]. See also [@fla89] for a more advanced analysis.
[^11]: The theoretical method used to get $ M_1^{hf}$ is based upon the factorization rule $\langle 6S \vert H_{hf}\vert 7S\rangle = \sqrt {\langle 6S \vert H_{hf}\vert 6S\rangle \langle 7S \vert H_{hf}\vert 7S\rangle } $. This rule was first established with an accuracy of a few parts in $10^3$ in Ref. [@bou881]. It has been confirmed by a direct many-body relativistic computation [@joh99] of $\langle 6S \vert H_{hf}\vert 7S\rangle $, accurate to the $1\% $ level. More recently, the validity of the rule has been pushed to the level of a fraction of $10^{-3}$ [@fla00]. [^12]: Arguments similar to those given below and in Refs. [@bou881; @bou86] are developed in [@fla00]. [^13]: The wave function $\overline{ R_{nlj} }(\rho) $ is known to be an analytic function of the energy. This property is the starting point of quantum defect theory. [^14]: The validity of the procedure leading to this estimate has been checked, at the $10 \% $ level, on a significant subset of the many-body Feynman diagrams contributing to $ \gamma(n^{\prime},n) $.
--- abstract: 'In this work, we show that a giant spin current can be injected into a nodal topological superconductor, using a normal paramagnetic lead, through a large number of zero energy Majorana fermions at the superconductor edge. The giant spin current is caused by the selective equal spin Andreev reflections (SESAR) induced by Majorana fermions. In each SESAR event, a pair of electrons with certain spin polarization are injected into the nodal topological superconductor, even though the pairing in the bulk of the nodal superconductor is spin-singlet s-wave. We further explain the origin of the spin current by showing that the pairing correlation at the edge of a nodal topological superconductor is predominantly equal spin-triplet at zero energy. The experimental consequences of SESAR in nodal topological superconductors are discussed.' author: - 'Noah F. Q. Yuan, Yao Lu, James J. He and K. T. Law' title: Generating Giant Spin Currents Using Nodal Topological Superconductors --- [^1] [**Introduction**]{}— The search for Majorana fermions in condensed matter systems has been an important topic in recent years [@Wilczek; @Kane; @Qi; @Alicea_rev; @Beenakker_rev]. This search is strongly motivated by the fact that Majorana fermions are non-Abelian particles and have potential applications in quantum computation [@Kitaev; @Alicea_braiding; @Ivanov]. Recently, it was further pointed out that Majorana fermions, due to their self-Hermitian properties, could induce spin currents in paramagnetic leads [@SESAR; @Wu; @XinLiu] and correlated spin currents in spatially separated leads [@James_BDI; @Oreg]. These properties make it possible for Majorana fermions to have potential applications in superconducting spintronics [@Eschrig; @Linder]. 
In particular, it was pointed out that a single Majorana end state of a topological superconducting wire can induce the so-called selective equal spin Andreev reflection (SESAR) at the normal lead/topological superconductor (N/TS) interface [@SESAR]. In SESAR processes, only electrons with certain spin polarization $\bm {n}$ in the normal lead can couple to the Majorana fermion and undergo Andreev reflections. Importantly, the reflected holes are due to missing electrons with the same spin polarization $\bm {n}$ below the Fermi energy. As a result, two electrons with equal spin tunnel into the superconducting wire and form a spin-triplet Cooper pair in each Andreev reflection event. On the other hand, electrons with opposite spin polarization $-\bm {n}$ in the normal lead are decoupled from the Majorana fermion and get reflected as electrons with unchanged spin. Therefore, a spin current with spin polarization in the $\bm {n}$ direction can be injected into the superconductor using a *paramagnetic* lead. At the same time, a spin current is generated in the lead. In this work, instead of studying isolated Majorana modes, we study the spin transport properties of the 2D nodal topological superconductor with a large number of spatially overlapping zero energy Majorana modes at the sample edge [@Tanaka_Nodal; @Sato_MFB; @Schnyder_Nodal; @Chris_MFB]. The zero energy edge modes are associated with Majorana flat band (MFB), which connects the nodal points of the superconductor in the projected band structure. These flat bands are analogous to the surface Fermi arcs connecting the Weyl points in Weyl semimetals [@WSM1; @WSM2]. ![(a) The Dresselhaus (110) QW in proximity to s-wave superconductors and subject to a magnetic field $ \bm B $ along $y$ direction. A large number of zero energy Majorana modes associated with MFB are created at the edges parallel to $\bm B$. Electrons can undergo SESAR and inject pairs of electrons with equal spin into the superconductor. 
Cooper pairs in the bulk are spin-singlet. (b) The band structure of the QW with periodic boundary conditions in the $y$-direction and open boundary conditions in the $x$-direction, using the tight binding model in Eq.\[Htb\]. A MFB connects the two nodal points. The parameters in $H_{TB}$ are: $\Delta =1, t=40, \alpha_{D}=30, \mu =-4t, B=1.5$.](Fig1.pdf){width="3.2in"} Specifically, we consider a 2D semiconductor quantum well (QW) grown in the (110) direction with Dresselhaus spin-orbit coupling (SOC) and in proximity to s-wave superconductors [@Alicea2010], as depicted in Fig.1. An in-plane Zeeman field can drive the system into a nodal topological phase which supports a large number of zero energy Majorana modes at the edge [@JiabinYou; @Tanaka110]. This model is considered because many of the nodal topological superconductors studied previously, with the exception of Ref. [@Chris_MFB], preserve time-reversal symmetry and cannot induce spin currents. In the following sections, we first review the SESAR processes. Secondly, using the superconducting (110) Dresselhaus QW as an example, we show that giant spin currents can be injected into nodal topological superconductors using paramagnetic leads. Thirdly, to further explain the origin of the spin current, we show that the pairing correlation at the edge of the Dresselhaus QW is dominantly equal-spin triplet, even though the superconductivity of the QW is induced by an s-wave superconductor. Finally, we discuss the experimental signatures of the equal-spin triplet Cooper pairs in nodal topological superconductors. [**SESAR and Quantized Spin Conductance**]{} — A Majorana fermion $\gamma$ is a self-Hermitian particle with the property $ \gamma = \gamma^\dagger$ [@Wilczek; @Kitaev]. In general, a Majorana fermion can couple to both spin up and spin down electrons.
However, due to the self-Hermitian property of the Majorana fermion, the effective coupling Hamiltonian at the N/TS interface can be written as $ i\omega\gamma ( \Psi + \Psi^{\dagger})$, where $\Psi = a \psi_\uparrow + b \psi_\downarrow$ is a linear superposition of the spin up and spin down electrons $\psi_{\uparrow/\downarrow}$ in the lead, with normalized coefficients $ |a|^2 +|b|^2 =1 $, and $ \omega>0 $ is the coupling constant. In the basis of $( \psi_{\uparrow}, \psi_{\downarrow}, \psi_{\uparrow}^{\dagger}, \psi_{\downarrow}^{\dagger})$ for the scattering matrix, the Andreev reflection matrix $r^{he}$ and the electron reflection matrix $r^{ee}$ at zero energy are easily calculated as: $$\label{rhe} \begin{array}{ccc} r^{he} &=& \left( \begin{array}{ cc} a & b^{*} \\ b & -a^{*} \end{array} \right)^{*} \left( \begin{array}{ cc} 1 & 0 \\ 0 & 0 \end{array} \right) \left( \begin{array}{ cc} a & b^{*} \\ b & -a^{*} \end{array} \right)^{\dagger}, \\ r^{ee} &=& \left( \begin{array}{ cc} a & b^{*} \\ b & -a^{*} \end{array} \right) \left( \begin{array}{ cc} 0 & 0 \\ 0 & 1 \end{array} \right) \left( \begin{array}{ cc} a & b^{*} \\ b & -a^{*} \end{array} \right)^{\dagger}. \end{array}$$ It is clear from Eq.\[rhe\] that $r^{he} (a,b)^{T} = (a^{*}, b^{*})^{T} $ and $r^{he} (b^{*} , -a^{*})^{T} = 0$. This shows that electrons with spinor $ (a, b)$ can undergo resonant Andreev reflections with unity amplitude. Importantly, the reflected hole has spinor $(a^{*}, b^{*})$, corresponding to missing electrons with spinor $(a, b)$ below the Fermi energy. Consequently, a pair of equal-spin electrons is injected into the superconductor in each tunnelling event, which results in equal-spin Andreev reflections. On the other hand, electrons with the orthogonal spinor $ (b^{*}, -a^{*})$ are totally reflected as electrons. This type of Andreev reflection process is referred to as SESAR [@SESAR].
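The algebra behind Eq.\[rhe\] is easy to verify numerically. The sketch below (our illustration; the spinor angles are arbitrary) builds $r^{he}$ and $r^{ee}$ for a given $(a,b)$ and checks that the spinor $(a,b)$ is Andreev-reflected into the conjugate hole spinor with unity amplitude, that the orthogonal spinor is fully reflected as an electron, and that the resulting charge conductance is quantized at $2e^2/h$:

```python
import cmath

# Minimal 2x2 complex matrix helpers (nested lists).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def conj(A):
    return [[A[i][j].conjugate() for j in range(2)] for i in range(2)]

def apply(A, x):
    return [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]

# Normalized spinor coupled to the Majorana mode (arbitrary angles).
theta, phi = 0.7, 1.1
a = cmath.cos(theta / 2)
b = cmath.sin(theta / 2) * cmath.exp(1j * phi)

U = [[a, b.conjugate()], [b, -a.conjugate()]]
P_up = [[1, 0], [0, 0]]
P_dn = [[0, 0], [0, 1]]

# Zero-energy reflection matrices of Eq. [rhe].
r_he = mat_mul(conj(U), mat_mul(P_up, dagger(U)))
r_ee = mat_mul(U, mat_mul(P_dn, dagger(U)))

# SESAR: (a,b) -> conjugate hole spinor; the orthogonal spinor sees no
# Andreev reflection and is fully reflected as an electron.
h = apply(r_he, [a, b])
err_he = abs(h[0] - a.conjugate()) + abs(h[1] - b.conjugate())
z = apply(r_he, [b.conjugate(), -a.conjugate()])
err_zero = abs(z[0]) + abs(z[1])
e = apply(r_ee, [b.conjugate(), -a.conjugate()])
err_ee = abs(e[0] - b.conjugate()) + abs(e[1] + a.conjugate())

# Charge conductance G_c = tr(1 - ree^dag ree + rhe^dag rhe), units e^2/h.
M_ee = mat_mul(dagger(r_ee), r_ee)
M_he = mat_mul(dagger(r_he), r_he)
G_c = (2 - (M_ee[0][0] + M_ee[1][1]) + (M_he[0][0] + M_he[1][1])).real
```

The same construction can be reused for any $(\theta,\phi)$: the quantization of $G_c$ is independent of the spinor the Majorana mode selects.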
With the scattering matrices, the charge conductance and the spin conductance in the $\bm{j}$ direction can be calculated as [@Datta]: $$\begin{aligned} G_c = \frac{e^2}{h} \text{tr} \left\lbrace \sigma_0 - r^{ee\dagger} r^{ee}+r^{he\dagger} r^{he} \right\rbrace ,\\ G_{s, \bm j} = \frac{e^2}{h} \text{tr} \left\lbrace - r^{ee\dagger} \bm{j} \cdot \bm\sigma r^{ee}+r^{he\dagger} \bm{j} \cdot \bm\sigma^{*} r^{he}\right\rbrace. \end{aligned}$$ Using Eq.\[rhe\], we have $G_c = 2\frac{e^2}{h}$ and $G_{s, \bm j}=2\frac{e^2}{h} \bm j \cdot \bm n$, where $\bm n = (a^{*}, b^{*}) \bm \sigma (a, b)^T$ is the spin polarization direction of the electrons undergoing SESAR. In other words, SESAR induces quantized spin conductance in the $\bm n$ direction at zero bias. As long as the coupling $ \omega $ is weakly energy dependent, the current at finite voltage bias is also spin polarized [@SESAR]. [**MFB in Dresselhaus QW**]{}— In this section, we consider a zinc-blende (110) quantum well in proximity to s-wave superconductors as depicted in Fig.1a [@Alicea2010]. It has been shown that, in the presence of an in-plane magnetic field, such a system can be driven to a nodal topological phase which supports a large number of Majorana modes at the sample edges [@JiabinYou; @Tanaka110]. The Hamiltonian of the system, in the Nambu basis of $ (c_{\bm k\uparrow},c_{\bm k\downarrow},c^{\dagger}_{-\bm k\uparrow},c^{\dagger}_{-\bm k\downarrow}) $, can be written as [@Alicea2010]: $$\label{H} \begin{array} {ll} H(k_x, k_y) = [-2t(\cos k_x +\cos k_y)-\mu]\tau_z \\ +\alpha_D\sin k_x \sigma_z + B_x\sigma_x\tau_z +B_y\sigma_y +\Delta\sigma_y\tau_y. \end{array}$$ Here $ t $ denotes the hopping amplitude, $ \mu $ is the chemical potential, $ \alpha_D $ is the Dresselhaus SOC strength, $\bm B=(B_x ,B_y ,0) $ is the in-plane magnetic field, and $ \Delta $ denotes the induced superconducting pairing amplitude. 
$\sigma_i$ and $\tau_i$ are Pauli matrices operating on the spin and particle-hole basis respectively. In this section, the magnetic field is chosen to be in the $y$ direction and $|\bm B|=B$. Unlike 2D QWs with Rashba SOC $k_x \sigma_y - k_y \sigma_x$, where the electron spins are coupled to both $k_x$ and $k_y$, the electron spins in the Dresselhaus QW couple to $k_x$ only. As a result, for fixed $k_y$, $H(k_x, k_y)= H_{k_y}(k_x)$ is equivalent to the model describing a quantum wire with Rashba SOC strength $\alpha_D$, Zeeman energy $B$, s-wave pairing $\Delta$ and effective chemical potential $\mu'= \mu+2t(1+\cos k_y)$. Therefore, $H_{k_y}(k_x)$ supports Majorana end states when the topological condition $ B^2 > { \mu'^2 + \Delta^2}$ is satisfied [@Sato2009; @Sau; @Alicea2010; @ORV]. In one-dimensional wires, it is rather difficult to fine tune the chemical potential to satisfy this topological condition. In the current model, the effective chemical potential is a function of $k_y$ and there is a wide range of $k_y$ such that the topological condition can be satisfied for a given chemical potential. This results in the MFB. The energy spectrum as a function of $ k_y $ for a 2D quantum well with open boundary conditions in the $x$-direction and periodic boundary conditions in the $y$-direction is depicted in Fig.1b. 
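The mapping to a 1D Rashba wire can be checked numerically. Below is a minimal sketch of Eq.(H) in momentum space (field along $y$, Fig.1b parameters) that scans the topological condition over $k_y$ and also verifies the chiral symmetry $C H C^{-1}=-H$ with $C=\sigma_x\tau_y$ noted in the next section; taking $\tau$ as the outer factor of the Kronecker product is our assumption about the basis ordering:

```python
import numpy as np

# Pauli matrices, reused for spin (sigma) and particle-hole (tau) space
s0, sx = np.eye(2), np.array([[0, 1], [1, 0]])
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])

def H_bdg(kx, ky, t=40.0, mu=-160.0, aD=30.0, B=1.5, Delta=1.0):
    """BdG Hamiltonian Eq.(H) in the basis (c_up, c_dn, c+_up, c+_dn),
    in-plane field along y."""
    eps = -2 * t * (np.cos(kx) + np.cos(ky)) - mu
    return (eps * np.kron(sz, s0)                # tau_z
            + aD * np.sin(kx) * np.kron(s0, sz)  # alpha_D sin(kx) sigma_z
            + B * np.kron(s0, sy)                # B_y sigma_y
            + Delta * np.kron(sy, sy))           # Delta sigma_y tau_y

def topological(ky, t=40.0, mu=-160.0, B=1.5, Delta=1.0):
    """1D topological condition B^2 > mu'^2 + Delta^2 at fixed ky,
    with effective chemical potential mu' = mu + 2t(1 + cos ky)."""
    mup = mu + 2 * t * (1 + np.cos(ky))
    return B**2 > mup**2 + Delta**2

# Fig.1b parameters (mu = -4t): Majorana end modes exist for ky near 0
assert topological(0.0) and not topological(np.pi)
kys = np.linspace(-np.pi, np.pi, 20001)
mask = np.array([topological(k) for k in kys])
print(f"MFB window: {kys[mask].min():.3f} < ky < {kys[mask].max():.3f}")

# Chiral symmetry C H C^{-1} = -H with C = sigma_x tau_y (note C^2 = 1)
C = np.kron(sy, sx)
H = H_bdg(0.7, 0.3)
assert np.allclose(C @ H @ C, -H)
```

For the quoted parameters the topological window is a narrow range of $k_y$ around zero, consistent with the flat band between nearby nodal points in Fig.1b.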
In calculating the energy spectrum, the following tight-binding model is used: $$\begin{aligned} \nonumber &&H_{TB}=-t\sum_{\bm R,\bm a,s}c^{\dagger}_{\bm R+\bm a,s}c_{\bm R,s}-\frac{\mu}{2}\sum_{\bm R,s}c^{\dagger}_{\bm R,s}c_{\bm R,s}\\ \nonumber &&+\frac{1}{2}\sum_{\bm R,s,s'}\left[{i\alpha_D}c^{\dagger}_{\bm R+\bm x,s}c_{\bm R,s'}(\sigma_z)_{ss'} +c^{\dagger}_{\bm R,s}c_{\bm R,s'}(\bm B\cdot\bm\sigma)_{ss'}\right]\\ \label{Htb} &&+\Delta\sum_{\bm R}c^{\dagger}_{\bm R\uparrow}c^{\dagger}_{\bm R\downarrow}+h.c.,\end{aligned}$$ where $ \bm R $ denotes the lattice sites, $ \bm a=\bm x,\bm y $ denotes the primitive vectors along $x$ and $y$ directions, $ s=\uparrow ,\downarrow $ denotes spin. The bulk gap closes at $k_y$ values where $ B^2 = { \mu'^2 + \Delta^2}$ is satisfied and the nodal points are connected by the MFB as shown in Fig.1b. It is important to note that the MFB realized in this model is protected against short range disorder by a chiral symmetry $C H(k_x,k_y) C^{-1}=-H(k_x,k_y)$, where $C=\sigma_x\tau_y $. This is similar to the case studied in Ref.[@Chris_MFB] in which the MFB is created in a $p \pm ip$ superconductor by an in-plane magnetic field. ![(a) The polarization direction $\bm n(k_y =0)$ of the spin current due to SESAR as a function of Dresselhaus SOC strength. The magnetic field $ \bm B $ is along the positive $y$-direction. (b) $\bm{n}(k_y =0)$ and the in-plane magnetic field directions $-\bm B$. The vector $- \bm B$ with certain colour determines the vector $\bm n$ with the same colour. (c) The three components of $\bm n (k_y=0)$ as a function of $k_y$ in the MFB regime. (d) The components of the spin polarization vector $\bm N$ of the current. The parameters of the QW are the same as Fig.1b. The lead is modelled by square lattices with hopping $ t_L =2t $. 
The hopping between the lead and the QW is $t_c =t/10 $.](Fig2.pdf){width="3.2in"} [**Giant Spin Currents induced by MFB**]{}— Since each single Majorana fermion can induce spin currents, we expect that the large number of Majorana modes at the edge of a nodal topological superconductor may induce giant spin currents in leads coupled to the Majorana modes due to resonant Andreev reflections [@Vic2009]. However, one has to show that there remains a large spin current after summing up the currents induced by all the Majorana modes. To proceed, we note that at fixed $k_y$, $H(k_x,k_y)$ in Eq.\[H\] is in symmetry class BDI and describes a 1D Rashba wire with s-wave pairing [@Schnyder_class; @ChrisDIII; @Sau_BDI]. Assuming periodic in $y$-direction and open boundary conditions in $x$-direction, for a fixed $k_y$ and in the topological regime where the MFB arises, the zero bias Andreev reflection matrix at the interface can be cast into the form [@Beenakker2012]: $$r^{he}_{k_y}= U_{1}^{*}(k_y)\left( \begin{array}{ cc} 1 & 0 \\ 0 & 0 \end{array} \right) U_{2}^{\dagger}(k_y),$$ where $U_{1}$ and $U_{2}$ are $ k_y $-dependent unitary matrices. When the coupling between the lead and the superconductor is weak, $U_{1} = U_{2}$ and the form obtained by effective Hamiltonian approach in Eq.\[rhe\] is recovered. To obtain $U_1$ and $U_{2}$ for general coupling strengths, the scattering matrix at the N/TS interface at fixed $k_y$ can be calculated as [@Lee]: $$\begin{aligned} \label{S} \{r_{k_y}^{\alpha \beta}\}_{ij} = - \delta_{ij}\delta_{\alpha \beta} + i\sum_{mn} \{\Gamma_\alpha^{1/2}\}_{im} G^{ \alpha \beta}_{mn}(k_y) \{\Gamma_\beta^{1/2}\}_{nj},\end{aligned}$$ where $\alpha, \beta \in \{e,h\}$ label the electron or hole, and $i,j,m,n$ label the spin degrees of freedom. $G_{mn}^{\alpha \beta}$ is the retarded Green’s function obtained from both $H_{TB}$ in Eq.\[Htb\] and the Hamiltonian of the lead by assuming periodic boundary conditions in the $y$ direction at fixed $k_y$. 
$\Gamma_{e/h}$ is the electron/hole part of the broadening function at fixed $k_y$ due to the lead. In the topological regime with a fixed $k_y$, the channel with spinor $ \Psi_{1}(k_y)=U_{2}(k_y) (0, 1)^{T}$ which undergoes electron reflection can be found by the condition $ r^{he}_{k_y} \Psi_{1}(k_y) = 0$. On the other hand, the channel with spinor $\Psi_{2}(k_y)= U_{2}(k_y) (1,0)^{T}$ undergoes resonant Andreev reflections at zero bias. In the weak coupling limit, the spin polarization of the SESAR process is $\bm n(k_y) = \Psi_{2}^{\dagger}(k_y)\bm\sigma\Psi_{2}(k_y)$. ![(a) and (b) The triplet pairing correlation magnitudes $ |\bm d| $, defined in Eq.10, calculated using $H_{TB}$ in Eq.\[Htb\] without and with disorder. In (b), the on-site disorder with normal distribution of variance $W=5\Delta$ is added to $H_{TB}$. (c) The spin-triplet pairing correlation magnitude $ |\bm d| $ at site $ (0,100) $ on the edge as a function of $ B/\Delta $. The $\bm d$-vector is non-zero only when the MFB appears. (d) The Cooper pair spin polarization $\bm s=i(\bm d\times\bm d^{*})/|\bm d|^2$ at site $(0,100)$ on the edge and the spin polarization vector $ \bm N $. Vectors with the same colour indicate the same parameters used in calculating $\bm s$ and $\bm N$.](Fig3.pdf){height="60.00000%"} To understand the parameter dependence of the spin polarization direction, $\bm n (k_y=0)$ as a function of Dresselhaus SOC strength $\alpha_D$ is shown in Fig.2a. $\bm n(k_y =0)$ as a function of in-plane magnetic field direction is shown in Fig.2b. The three components of $\bm n$ as functions of $k_y$ are shown in Fig.2c. 
At finite voltage bias $ V $, the total spin current in an arbitrary $\bm j$ direction can be worked out by summing up all the $k_y$ components: $$I^s_{\bm{j}} = \sum_{k_y \in {(-\pi, \pi]}}\frac{e}{h}\int_{-eV}^{eV}\text{tr}[-r^{ee\dagger}_{k_y}\bm{j}\cdot \bm{\sigma} r^{ee}_{k_y}+r^{he\dagger}_{k_y}\bm{j}\cdot \bm{\sigma}^* r^{he}_{k_y}]dE.$$ The spin polarization vector can be defined as $\bm{N} = ( I_{x}^s, I_{y}^s, I_{z}^s )/I_T $ with $I_T = \sqrt{(I_{x}^s)^2 + (I_{y}^s)^2 + (I_{z}^s)^2 }$. The voltage bias dependence of $\bm N$ is depicted in Fig.2d. It is evident that $\bm N$ is almost independent of voltage bias. As expected, the spin currents induced by the Majorana modes with different $k_y$ do not cancel each other and this results in a spin-polarized current. [**Pairing Correlation**]{}— In the above sections, it is shown using scattering matrices that the normal lead injects pairs of electrons with a certain spin polarization into the superconductor to form Cooper pairs. On the other hand, the parent superconductor which induces superconductivity in the Dresselhaus QW has s-wave pairing, so it is rather surprising that the induced superconductivity on the edge of the Dresselhaus QW is predominantly spin-triplet. To further understand the system, we calculate the real-space retarded Green’s function of the system: $$G(E) = \left( \begin{array}{ cc} G^{ee} & G^{eh} \\ G^{he} & G^{hh} \end{array} \right) =\frac{1}{E + i0^{+}- H_{TB}}.$$ The anomalous part of the retarded Green’s function is the Fourier transform of the retarded response function: $$G^{eh}_{s, s'}(E, {\bm R})= -i \int_{0}^{\infty} e^{i(E +i0^{+})t} \langle \{ c_{\bm R,s}(t), c_{\bm R, s'}(0) \} \rangle dt.$$ It provides information about the pairing symmetry of the superconductor [@Gorkov].
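The singlet-triplet decomposition of $G^{eh}$ described in the next paragraph (the $d$-vector parametrization), together with the Cooper-pair polarization $\bm s=i(\bm d\times\bm d^{*})/|\bm d|^2$, can be verified with a short round-trip sketch; the input $\bm d$ here is illustrative, not computed from the QW:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

def dvector(Geh):
    """Split a 2x2 anomalous Green's function into its singlet
    amplitude psi_s and triplet d-vector via
    G^eh = (psi_s + d . sigma) i sigma_y."""
    M = Geh @ (-1j * sy)            # undo the i*sigma_y factor
    psi_s = np.trace(M) / 2
    d = np.array([np.trace(s @ M) / 2 for s in (sx, sy, sz)])
    return psi_s, d

# Round trip on a pure equal-spin (up,up) triplet: d = (1, i, 0)
d_in = np.array([1.0, 1j, 0.0])
Geh = (d_in[0] * sx + d_in[1] * sy + d_in[2] * sz) @ (1j * sy)
psi_s, d = dvector(Geh)
assert np.isclose(psi_s, 0) and np.allclose(d, d_in)

# Cooper-pair spin polarization s = i (d x d*) / |d|^2 points along +z
s_pol = 1j * np.cross(d, d.conj()) / np.linalg.norm(d) ** 2
assert np.allclose(s_pol.real, [0.0, 0.0, 1.0]) and np.allclose(s_pol.imag, 0)
```

As expected for pure $|\!\uparrow\uparrow\rangle$ pairing, the extracted singlet amplitude vanishes and $\bm s$ points along $+z$.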
The four spin components of $G^{eh}_{s, s'}$ can be parametrized in matrix form as: $$G^{eh}(E, {\bm R}) = (\psi_s + \bm d \cdot \bm \sigma )i\sigma_y.$$ Here, $\psi_s =$tr$[-i G^{eh}\sigma_y]/2$ gives the spin-singlet pairing correlation $\langle \psi^{\dag}_{\uparrow}\psi^{\dag}_{\downarrow}- \psi^{\dag}_{\downarrow}\psi^{\dag}_{\uparrow}\rangle $ and the $\bm d$-vector characterizes the spin-triplet pairing correlation. For example, $d_{x} = $tr$[-i\sigma_x G^{eh}\sigma_y]/2$ measures the equal-spin pairing correlation $ \psi^{\dag}_{\uparrow}\psi^{\dag}_{\uparrow}- \psi^{\dag}_{\downarrow}\psi^{\dag}_{\downarrow}$. The retarded Green’s function is calculated based on the tight-binding model in Eq.\[Htb\] by a recursive approach [@Lee]. The resulting position dependence of the triplet pairing correlation $ |\bm{d}|$ at zero energy is displayed in Fig.3a. The spin-singlet pairing correlation at zero energy is negligible in the whole sample. It is important to note that the triplet pairing correlation is strongest near the edges where the Majorana fermions reside. Moreover, since the Majorana modes are protected from disorder by a chiral symmetry [@Chris_MFB], the spin-triplet pairing correlations survive even if on-site disorder is introduced into the sample as shown in Fig.3b. The spin-triplet pairing correlation $|\bm d|$ of a chosen site as a function of Zeeman energy $B$ is shown in Fig.3c. The spin-triplet pairing correlation is finite only when the superconductor enters the nodal topological phase with $B > \Delta$. ![(a) The tunnelling between a half-metal lead and the QW. The spin polarization $\bm h$ of the lead is approximately parallel or antiparallel to the Cooper pair spin polarization $\bm s$. (b) The corresponding charge conductance $ G_c $ in the parallel ($\bm h \parallel \bm s$) and antiparallel ($\bm h \parallel -\bm s$) configurations.
Here the parameters used are the same as Fig.2d, except that a Zeeman field with magnitude $ |\bm h| = 2\Delta $ is added to the lead to polarize the electron spins in the lead. The hopping between the lead and the QW is $t_c =t/2 $.](Fig4.pdf){width="3.2in"} Interestingly, from the $\bm d$-vector, the spin polarization direction of the Cooper pair is found to be $ \bm s=i(\bm d\times\bm d^{*})/|\bm d|^2 $[@Leggett]. It is shown in Fig.3d that the spin polarization direction of the Cooper pairs matches the spin polarization of the tunnelling current found in Fig.2d. Since the Cooper pairs are spin-polarized in the $\bm s$ direction, a half-metal lead (such as CrO$_{2}$[@Gupta]) with spin polarization $\bm h$ parallel to the Cooper pair spin polarization direction $\bm s$ can freely inject Cooper pairs into the superconductor. The $\bm h $ dependence of the tunnelling current from a half-metal lead to the nodal topological superconductor is shown in Fig.4. In Fig.4b, it is shown that the zero bias conductance peak (ZBCP) is very wide when $\bm h$ is approximately parallel to the spin polarization direction of the Cooper pair. Due to the large number $m$ of Majorana modes at the edge, the ZBCP can be as large as $2m\frac{e^2}{h}$ due to Majorana induced resonant Andreev reflections [@Vic2009]. Practically, the ZBCP is limited by the number of conducting channels in the lead. On the other hand, the width of the ZBCP is greatly suppressed when $\bm h$ is approximately antiparallel to the spin polarization direction of the Cooper pairs. This feature can be used to detect the Majorana fermions in nodal superconductors. [**[Conclusion]{}**]{}— In this work, we show that giant spin current can be injected into nodal topological superconductors using paramagnetic leads due to SESAR. SESAR is related to the spin-triplet correlations at the edge of the topological superconductors which can be detected by tunnelling experiments. 
The authors thank the support of HKRGC and Croucher Foundation through HKUST3/CRF/13G, 602813, 605512, 16303014 and Croucher Innovation Grant. [99]{} F. Wilczek, Nat. Phys. [**5**]{}, 614 (2009). M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. [**82**]{}, 3045 (2010). X. L. Qi and S. C. Zhang, Rev. Mod. Phys. [**83**]{}, 1057 (2011). J. Alicea, Rep. Prog. Phys. [**75**]{} 076501 (2012). C. W. J. Beenakker, Annu. Rev. Con. Mat. Phys. [**4**]{}, 113 (2013). A. Y. Kitaev, Physics-Uspekhi [**44**]{}, 131 (2001). D. A. Ivanov, Phys. Rev. Lett. [**86**]{}, 268 (2001). J. Alicea, Y. Oreg, G. Refael, F. von Oppen and M. P. A. Fisher, Nat. Phys. [**7**]{}, 412 (2011). J. J. He, T. K. Ng, P. A. Lee and K. T. Law, Phys. Rev. Lett. [**112**]{}, 037001 (2014). B. H. Wu, W. Yi, J. C. Cao, and G.-C. Guo, Phys. Rev. B [**90**]{}, 205435 (2014). X. Liu, J. D. Sau and S. Das Sarma, Phys. Rev. B [**92**]{}, 014513 (2015). A. Haim, E. Berg, F. von Oppen and Y. Oreg, Phys. Rev. Lett. [**114**]{}, 166406 (2015). J. J. He, J. Wu, T-P Choy, X-J Liu, Y. Tanaka and K. T. Law, Nat. Commun. [**5**]{}, 3232 (2014). J. Linder and J. W. A. Robinson, Nat. Phys. [**11**]{}, 307 (2015). M. Eschrig, Phys. Today [**64**]{}, No. 1, 43 (2011). Y. Tanaka, Y. Mizuno, T. Yokoyama, K. Yada and M. Sato, Phys. Rev. Lett. [**105**]{}, 097002 (2010). M. Sato, Y. Tanaka, K. Yada and T. Yokoyama, Phys. Rev. B [**83**]{}, 224511 (2011). A. P. Schnyder and S. Ryu, Phys. Rev. B [**84**]{}, 060504(R) (2011). C. L. M. Wong, J. Liu, K. T. Law and P. A. Lee, Phys. Rev. B [**88**]{}, 060504(R) (2013). X. Wan, A. M. Turner, A. Vishwanath and S. Y. Savrasov, Phys. Rev. B [**83**]{}, 205101 (2011). A. A. Burkov and L. Balents, Phys. Rev. Lett. [**107**]{}, 127205 (2011). J. Alicea, Phys. Rev. B [**81**]{}, 125318 (2010). J. You, C. H. Oh, and V. Vedral, Phys. Rev. B [**87**]{}, 054501 (2013). S. Ikegaya, Y. Asano and Y. Tanaka, Phys. Rev. B [**91**]{}, 174511 (2015). M. P. Anantram and S. Datta, Phys. Rev. 
B [**53**]{}, 16390 (1996). M. Sato, Y. Takahashi and S. Fujimoto, Phys. Rev. Lett. [**103**]{}, 020401 (2009). Y. Oreg, G. Refael and F. von Oppen, Phys. Rev. Lett. [**105**]{}, 177002 (2010). J. Sau, R. M. Lutchyn, S. Tewari and S. Das Sarma, Phys. Rev. Lett. [**104**]{}, 040502 (2010). K. T. Law, P. A. Lee and T. K. Ng, Phys. Rev. Lett. [**103**]{}, 237001 (2009). A. P. Schnyder, S. Ryu, A. Furusaki and A. W. W. Ludwig, Phys. Rev. B [**78**]{}, 195125 (2008). C. L. M. Wong and K. T. Law, Phys. Rev. B [**86**]{}, 184516 (2012). S. Tewari and J. D. Sau, Phys. Rev. Lett. [**109**]{}, 150408 (2012) M. Diez, J. P. Dahlhaus, M. Wimmer and C. W. J. Beenakker, Phys. Rev. B [**86**]{}, 094501 (2012). P. A. Lee and D. S. Fisher, Phys. Rev. Lett. [**47**]{}, 882 (1981). L. P. Gor’kov and E. I. Rashba, Phys. Rev. Lett. [**87**]{}, 037004 (2001). A. J. Leggett, Rev. Mod. Phys. [**47**]{}, 331 (1976). R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao and A. Gupta, Nature [**439**]{}, 825-827 (2006). [^1]: phlaw@ust.hk
**Restoration of Many Electron Wave Functions** **from One-Electron Density** A. I. Panin$^1$ and A. N. Petrov$^2$ *$^1$Chemistry Department, St.-Petersburg State University,* *University prospect 26, St.-Petersburg 198504, Russia* *e-mail: andrej@AP2707.spb.edu* *$^2$Petersburg Nuclear Physics Institute, Gatchina,* *St.-Petersburg District 188300, Russia* ------------------------------------------------------------------------ In density functional theory (DFT) approaches it is accepted that the electronic energy of many-electron systems (at least for the ground state) can be represented as a functional of the first-order density, or, in other words, of the diagonal of the first-order density matrix. In this connection the following question seems pertinent: Are there rigorous schemes, involving no approximations or hypotheses of any kind, for constructing the electronic energy as a functional of the first-order density? In the present paper we attempt to answer this question. A second question, closely related to the first, may be formulated as: Is it possible to consider the electronic energy of a many-electron system, in some fixed basis, as a function of the occupation numbers? Recently, an orbital occupancy (OO) approach developing this idea for DFT-type functionals was formulated \[1-4\]. In the present paper we show how energy expressions involving only the diagonal elements of the first-order density matrix may be rigorously constructed. Such energy expressions may be used only in non-gradient optimization schemes, since they are not differentiable in the classical sense. ------------------------------------------------------------------------ [**Basic Definitions**]{} For a fixed basis set of $n$ orthonormal spin-orbitals the corresponding finite-dimensional Fock space ${\cal F}_N$ is spanned by determinants $|L\rangle $ where $L$ runs over all subsets of the spin-orbital index set $N$.
Its $p$-electron sector ${\cal F}_{N,p}$ is spanned by determinants $|L\rangle $ with $|L|=p$. Basis determinants will be labelled by [*subsets*]{} and all sign conventions connected with their representation as the Grassman product of [*ordered*]{} spin-orbitals will be included in the definition of the creation-annihilation operators. The set of $q$-electron density operators is defined as $${\cal E}_{N,q}=\{t_q\in {\cal F}_{N,q} \otimes {\cal F}_{N,q}^*: t_q^{\dagger}=t_q \ \& \ t_q \ge 0 \ \&\ Tr (t_q)=1 \}\eqno (1)$$ The diagonal mapping over ${\cal F}_N \otimes {\cal F}_N^*$ is $$d(t) =\sum\limits_{L\subset N} \langle L|t|L \rangle e_L, \eqno(2)$$ where $t\in {\cal F}_N \otimes {\cal F}_N^*$ and $e_L=|L \rangle \langle L|$. The contraction operator over ${\cal F}_N \otimes {\cal F}_N^*$ is defined in terms of the standard fermion creation-annihilation operators as $$c=\sum\limits_{i=1}^na_i\otimes a_i^{\dagger}.\eqno(3)$$ [**Definition 1.**]{} q-electron density operator $t_q\in {\cal E}_{N,q}$ is called weakly $p$-representable if there exists $p$-electron density operator $t_p\in {\cal E}_{N,p}$ such that $$\frac {q!}{p!}c^{p-q}(d(t_p))=d(t_q).\eqno(4)$$ This definition is correct because the contraction operator possesses the property $$c(d({\cal E}_{N,p}))\subset d({\cal E}_{N,q}).$$ The set $d({\cal E}_{N,p})$ is called the standard (unit) simplex of the operator space $d({\cal F}_{N,p} \otimes {\cal F}_{N,p}^*)$ and its characterization is given by $${\cal T}_{N,p}=d({\cal E}_{N,p}) = \{ \sum\limits_{L\subset N}^{(p)}{\lambda }_Le_L:{\lambda }_L\ge 0 \ \& \ \sum\limits_{L\subset N}^{(p)}{\lambda}_L=1\}. \eqno(5)$$ The combinatorial structure of ${\cal T}_{N,p}$ is very simple: Any part of the set of all $p$-element subsets of the index set N determines a face of ${\cal T}_{n,p}$ and its complementary part generates the opposite face. In particular, there are $n\choose p$ hyperfaces opposite to the corresponding vertices. 
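Restricted to diagonals, the contraction operator of Eq.(3) acts purely combinatorially on subsets; a minimal sketch (standard library only) verifying that the normalization $\frac{q!}{p!}c^{p-q}$ of Definition 1 preserves unit trace for $q=p-1$:

```python
from itertools import combinations

def contract(diag):
    """One application of the contraction operator c of Eq.(3) on a
    diagonal, stored as {frozenset L: weight}: c e_L = sum_{j in L} e_{L\{j}}."""
    out = {}
    for L, w in diag.items():
        for j in L:
            K = L - {j}
            out[K] = out.get(K, 0.0) + w
    return out

# A p-electron diagonal on n = 4 spin-orbitals, p = 2 (uniform ensemble)
n, p = 4, 2
subsets = [frozenset(L) for L in combinations(range(n), p)]
diag_p = {L: 1.0 / len(subsets) for L in subsets}

# (q!/p!) c^{p-q} with q = p-1 reduces to (1/p) c, and maps a
# unit-trace p-diagonal to a unit-trace (p-1)-diagonal
diag_q = {K: w / p for K, w in contract(diag_p).items()}
assert abs(sum(diag_q.values()) - 1.0) < 1e-12
```

The same dictionary representation extends to repeated contractions, since each application only redistributes weight over smaller subsets.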
[**Definition 2.**]{} $$W_{N,p,q}=\frac {q!}{p!} c^{p-q}d({\cal E}_{N,p}).\eqno(6)$$ Weak representability problem may be formulated as the problem of description of the polyhedron $W_{N,p,q}$ with arbitrary admissible $n,p$, and $q$. Since, by definition, density operators are Hermitean, this polyhedron may be conveniently embedded into the [*real*]{} Euclidean space ${\mathbb R}_{N,q}$ of the dimension ${n \choose q}$ with its canonical basis vectors $e_K$ labelled by $q$-element subsets of $N$. With such an embedding the tensor products of the fermion creation-annihilation operators involved in the expression (3) should be replaced by the commuting (Bose) annihilation operators $$b_je_J= \cases {e_{J\backslash \{j\}},&if $j\in J$\cr 0,&if $j\notin J $ \cr},\eqno(7)$$ acting on the vector space $${\mathbb R}_N=\bigoplus\limits_{q=0}^n {\mathbb R}_{N,q}.\eqno(8)$$ $W_{N,p,q}$ is a polyhedron situated in the real affine hyperplane $${\cal H}_{N,q}=\{\lambda \in {\mathbb R}_{N,q}: \sum\limits_{J\subset N}^{(q)}{\lambda }_J=1\}.\eqno(9)$$ Let us put $$w_{p\downarrow q}(L)=\frac {q!}{p!}c^{p-q}e_L=\frac{1}{{p\choose q}} \sum\limits_{K\subset L}^{(q)}e_K.\eqno(10)$$ Directly from definition it follows that the polyhedron $W_{N,p,q}$ is the convex hull of ${n\choose p}$ vectors $w_{p\downarrow q}(L)$: $$W_{N,p,q}=Conv(\{w_{p\downarrow q}(L)\}_{L\subset N}). 
\eqno(11)$$ To the best of our knowledge, in contrast to the parametric description given by Eq.(11), the analytic description (that is the description in terms of the hyperfaces) of this polyhedron is obtained only for the case $q=1$ and is given by the following assertion that is just a consequence of the general theorem by Coleman \[5-10\]: [**Theorem 1.**]{} Polyhedron $W_{N,p,1}$ is the set of solutions of the system $$\cases { 0\le {\lambda}_k\le \frac {1}{p},\ k\in N \cr \sum\limits_{j\in N}{\lambda}_j=1\cr }\eqno(12)$$ This polyhedron has $2n$ hyperfaces with normals $$v_k^0=pe_k,\eqno(13a)$$ and $$v_k^1=-pe_k+\sum\limits_{j\in N}e_j,\eqno(13b)$$ where $k\in N$, and $e_k$ are canonical basis vectors of the Euclidean space ${\mathbb R}^n$. ------------------------------------------------------------------------ [**Density Matrix Diagonal**]{} With arbitrary vector ${\lambda}^{(0)} \in W_{N,p,1}$ it is convenient to associate two index sets: $$Ind({\lambda}^{(0)})=\{i\in N: {\lambda}^{(0)}_i>0\}\eqno(14a)$$ $$Ind_{\frac{1}{p}}({\lambda}^{(0)})=\{i\in N: {\lambda}^{(0)}_i=\frac{1}{p}\}\eqno(14b)$$ Let us present vector ${\lambda}^{(0)} \in W_{N,p,1}$ as the convex combination $${\lambda}^{(0)} =p{\mu}^{L_0}w_{p\downarrow 1}(L_0)+(1-p{\mu}^{L_0}){\lambda}^{(1)} \eqno(15)$$ where (see Eq.(10)) $$w_{p\downarrow 1}(L_0)=\frac{1}{p} \sum\limits_{i\in L_0}e_i,\eqno(16)$$ $${\lambda}^{(1)}=\sum\limits_{i \in L_0}\frac{{\lambda}^{(0)}_i-{\mu}^{L_0}} {1-p{\mu}^{L_0}}e_i+\sum\limits_{i \in N\backslash L_0} \frac{{\lambda}^{(0)}_i}{1-p{\mu}^{L_0}}e_i,\eqno(17)$$ and require the residual vector ${\lambda}^{(1)}$ to be representable. 
This requirement imposes the following restrictions on the admissible values of parameter ${\mu}^{L_0}$: $$\cases { 0\le \frac{{\lambda}^{(0)}_i-{\mu}^{L_0}}{1-p{\mu}^{L_0}}\le \frac{1}{p},\ i\in L_0\cr 0\le \frac{{\lambda}^{(0)}_i}{1-p{\mu}^{L_0}}\le \frac{1}{p},\ i\in N\backslash L_0\cr }\eqno(18)$$ The frontier solution of system (18) is $${\mu}^{L_0}=\min\{\min_{i\in L_0}\{{\lambda}^{(0)}_i\}, \min_{i\in N \backslash L_0}\{\frac{1}{p}-{\lambda}^{(0)}_i\}\}.\eqno(19)$$ If ${\mu}^{L_0}\ne 0$ then we arrive at non-trivial representation of diagonal ${\lambda}^{(0)}$ as a convex combination of vertex $w_{p\downarrow 1}(L_0)$ and a certain representable residual vector ${\lambda}^{(1)}$. From Eq.(19) it is easy to see that the additional condition ${\mu}^{L_0}\ne 0$ holds true if and only if subset $L_0$ satisfies the restriction $$Ind_{\frac{1}{p}}(\lambda^{(0)})\subset L_0 \subset Ind(\lambda^{(0)}) \eqno(20)$$ Iterating of Eq.(15) leads to the following expression $${\lambda}^{(0)}=\sum\limits_{i=0}^{k-1}\left [\prod\limits_{j=0}^{i-1} (1-p{\mu}^{L_j})\right ]p{\mu}^{L_i}w_{p\downarrow 1}(L_i)+ \left[\prod\limits_{i=0}^{k-1}(1-p{\mu}^{L_i})\right ]{\lambda}^{(k)}\eqno(21)$$ where $${\mu}^{L_i}=\min\{\min_{l\in L_i}\{{\lambda}^{(i)}_l\}, \min_{l\in N \backslash L_i}\{\frac{1}{p}-{\lambda}^{(i)}_l\}\}\eqno(22)$$ and $$Ind_{\frac{1}{p}}(\lambda^{(i)})\subset L_i \subset Ind(\lambda^{(i)}) \eqno(23)$$ for $i=0,1,\ldots,k-1$. [**Definition 3.**]{} Sequence $(L_0,L_1,\ldots,L_i,\ldots)$ of $p$-element subsets of $N$ is called $\lambda$-admissible if for each $i=0,1,\ldots$ subset $L_i$ satisfies the condition (23). [**Theorem 2.**]{} For any vector ${\lambda}^{(0)} \in W_{N,p,1}$ the residual vector in iteration formula (21) vanishes after a finite number of steps. [**Proof.**]{} First let us note that the number of nonzero components of representable residual vector ${\lambda}^{(k)}$ can not be less than $p$. 
If this number is equal to $p$ then ${\lambda}^{(k)}$ just coincides with the vertex $w_{p\downarrow 1}(L_k)$ where $L_k=Ind({\lambda}^{(k)})$, and the residual vector ${\lambda}^{(k+1)}$ vanishes. Let us suppose that the number of nonzero components of ${\lambda}^{(k)}$ is greater than $p$. From Eqs.(15), (17), and (19) it readily follows that there exists index $i_*\in Ind({\lambda}^{(k)})$ such that ${\lambda}^{(k+1)}_{i_*}$ is necessarily equal either to zero or to $\frac{1}{p}$. To complete the proof it is sufficient to show that if ${\lambda}^{(k)}_{i}=\frac{1}{p}$ then ${\lambda}^{(k+1)}_{i}=\frac{1}{p}$. Condition (23) implies that all the indices $i\in N$ such that ${\lambda}^{(k)}_i=\frac{1}{p}$ should belong to $L_k$ because in the opposite case the parameter ${\mu}^{L_k}$ would be equal to zero. If ${\mu}^{L_k}={\lambda}^{(k)}_{i_*} > 0$ and ${\lambda}^{(k)}_{i}=\frac{1}{p}$ then ${\lambda}^{(k+1)}_{i}=\frac{\frac{1}{p}-{\lambda}^{(k)}_{i_*}}{1-p{\lambda}^{(k)}_{i_*}} =\frac{1}{p}$. If, on the other hand, ${\mu}^{L_k}=\frac{1}{p}-{\lambda}^{(k)}_{i_*} > 0$ and ${\lambda}^{(k)}_{i}=\frac{1}{p}$ then $1-p{\mu}^{L_k}=p{\lambda}^{(k)}_{i_*}$ and ${\lambda}^{(k+1)}_{i}=\frac{\frac{1}{p}-{\mu}^{L_k}}{p{\lambda}^{(k)}_{i_*}} =\frac{1}{p}$ $\Box $ [**Corollary 1.**]{} The set of solutions of the Coleman’s system (12) is the convex hull of ${n\choose p}$ vertices $w_{p\downarrow 1}(L)$. [**Corollary 2.**]{} The number of vertices in expansion of a given density diagonal obtained on the base of the recurrence formula (21) is not greater than the number of its components different from zero. 
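The recursion of Eqs.(15)-(23) translates directly into an algorithm. A minimal sketch with an illustrative diagonal; choosing $L_k$ as the indices of the $p$ largest components satisfies condition (23), since the maximal value $1/p$ can occur at most $p$ times:

```python
import numpy as np

def decompose_diagonal(lam, p, tol=1e-12):
    """Expand a representable 1-RDM diagonal lam (0 <= lam_i <= 1/p,
    sum lam_i = 1) as a convex combination of vertices
    w_{p|1}(L) = (1/p) sum_{i in L} e_i, following Eqs.(19)-(21).
    Returns a list of (weight, L) pairs."""
    lam = np.asarray(lam, dtype=float).copy()
    n = lam.size
    terms, scale = [], 1.0           # scale = prod_j (1 - p mu_j)
    while scale > tol:
        # L_k = indices of the p largest components, satisfying Eq.(23)
        L = np.sort(np.argsort(lam)[-p:])
        inL = np.zeros(n, dtype=bool)
        inL[L] = True
        mu = lam[inL].min()
        if p < n:
            mu = min(mu, (1.0 / p - lam[~inL]).min())    # Eq.(19)
        terms.append((scale * p * mu, tuple(int(i) for i in L)))
        r = 1.0 - p * mu
        if r <= tol:                 # the residual was itself a vertex
            break
        lam[inL] = (lam[inL] - mu) / r                   # Eq.(17)
        lam[~inL] = lam[~inL] / r
        scale *= r
    return terms

lam0 = np.array([0.5, 0.2, 0.15, 0.1, 0.05, 0.0])        # n = 6, p = 2
terms = decompose_diagonal(lam0, p=2)

# The expansion is a convex combination that reproduces lam0
recon = np.zeros_like(lam0)
for w, L in terms:
    recon[list(L)] += w / 2
assert np.allclose(recon, lam0)
assert np.isclose(sum(w for w, _ in terms), 1.0)
assert len(terms) <= np.count_nonzero(lam0)              # Corollary 2
```

Each iteration either zeroes a component or pins one at $1/p$, so termination after finitely many steps mirrors the proof of Theorem 2.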
[**Corollary 3.**]{} A $\lambda$-admissible sequence $(L_0,L_1,\ldots,L_{k_{\lambda}})$ generated recurrently on the basis of the iteration formula (21) consists of pairwise distinct $p$-element subsets, and $$\bar{\lambda}(L_0,L_1,\ldots,L_{k_{\lambda}}) = \sum\limits_{i=0}^{k_{\lambda}}\left [\prod\limits_{j=0}^{i-1} (1-p{\mu}^{L_j})\right ]p{\mu}^{L_i}e_{L_i}\eqno(24)$$ is a diagonal of a $p$-electron density matrix such that $$\frac{1}{p!}c^{p-1}\bar{\lambda}(L_0,L_1,\ldots,L_{k_{\lambda}})= \lambda.\eqno(25)$$ It is to be noted that Theorem 2 is a constructive specialization of the fundamental theorem of Carathéodory [@Carat]: [**Theorem 3.**]{} Let $X\subset {\mathbb R}^n$. Then any vector $x\in Conv(X)$ may be presented as a convex combination of no more than $n+1$ vectors from $X$. A modern proof of this result may be found in [@Leicht]. From Corollary 3 it follows that any mapping $\lambda \to s_{\lambda}$, where $s_{\lambda}$ is a $\lambda$-admissible sequence compatible with the iteration formula (21), determines a global section (right inverse) ${\pi}_{1\uparrow p}$ of the contraction operator $\frac{1}{p!}c^{p-1}$, that is, a mapping from $W_{N,p,1}$ to ${\cal T}_{N,p}$ such that $$\frac{1}{p!}c^{p-1}{\pi}_{1\uparrow p}(\lambda)=\lambda \eqno(26)$$ for any $\lambda \in W_{N,p,1}$. As is seen from Eq.(22), sections constructed on the basis of the recurrence relation (21) are not linear and not even differentiable in the classical sense. The most ambitious task arising within the framework of the outlined approach is to develop efficient methods for the direct optimization of the energy as a function of the diagonal of the first-order density matrix. A general scheme embracing the whole class of such methods may be described as follows. 1\. Some section(s) of the contraction operator should be chosen. 2\.
Using the available section, it is possible to associate with a trial diagonal $\lambda$ an ensemble of $p$-electron determinant states and to determine the squares of the CI coefficients: $$|C_{L_i}|^2=\left [ \prod\limits_{j=0}^{i-1}(1-p{\mu}^{L_j})\right ]p{\mu}^{L_i}\eqno(27)$$ (see Eq.(24)). 3\. Construct the average energy $$E_{\lambda}({\phi}_0,{\phi}_1,\ldots,{\phi}_{k_{\lambda}})= \sum\limits_{i,j=0}^{k_{\lambda}}\cos({\phi}_i-{\phi}_j)|C_{L_i}||C_{L_j}| \langle L_i|H|L_j\rangle \eqno(28)$$ as a function of the phases ${\phi}_i$. 4\. Minimize the function $$E_{{\pi}_{1\uparrow p}}:\lambda \to \min_{\phi}E_{\lambda}({\phi})\eqno(29)$$ to determine the optimal diagonal and its expansion via the vertices $w_{p\downarrow 1}(L)$. There are no serious problems in implementing steps 2-4 of this scheme; the only complicated step is a reasonable selection of the mapping(s) ${\pi}_{1\uparrow p}$ (note that, in general, several different sections may be employed in the course of the energy optimization). It is rather difficult to estimate [*a priori*]{} the quality of a chosen concrete section ${\pi}_{1\uparrow p}$. Two general algorithms for constructing such sections readily come to mind. Both involve a full search over $p$-element subsets of the spin-orbital index set. 1\. Maximization of the parameter (22) on each iteration: On the $k$-th step the current $L_k$ may be determined from the condition $${\mu}^{L_k}=\max_{L}\left \{\min\{\min_{l\in L}\{{\lambda}^{(k)}_l\}, \min_{l\in N \backslash L}\{\frac{1}{p}-{\lambda}^{(k)}_l\}\}\right \}.\eqno(30)$$ In this case it is not necessary to take Eq.(23) into account explicitly. This section is probably optimal from a formal mathematical viewpoint but has no physical idea behind it. Computer experiments show that in restoration processes of this type, high-order excitations from the HF state contribute the most.
Even if exact FCI occupancies for the ground state are chosen, the restoration produces an ensemble of determinant states that involves the HF determinant together with excited determinants that practically do not interact with the HF one. 2\. Energy minimization: On the $k$-th iteration, among the subsets satisfying condition (23), the subset $L_k$ is chosen such that the lowest eigenvalue of the $p$-electron Hamiltonian in the basis $\{|L_0>,|L_1>,\ldots,|L_k>\}$ is minimal. This is undoubtedly the best possible section of the contraction operator. Unfortunately, using this section for energy minimization makes little sense, because it is equivalent to a certain CI scheme that can be described as follows. 1\. First fix the maximal number $mdet$ of determinants in the wave function expansion and put $k=0$; 2\. Put $k=k+1$; 3\. Sort all determinants different from those already chosen and select the one that corresponds to the lowest eigenvalue of the Hamiltonian in the basis of $k$ determinants $\{|L_1>,|L_2>,\ldots,|L_k>\}$. If $k<mdet$, return to step 2. Finally, a basis that is optimal in a certain sense and involves no more than $mdet$ determinants is obtained. This scheme is based on the well-known bracketing theorem of matrix algebra (see, e.g., [@W]) and has been used in quantum chemistry for years, in different modifications, to select the initial determinant space for multi-reference CI calculations [@Buenker-1; @Buenker-2].
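This greedy selection is easy to state in code. A toy sketch, with a random symmetric matrix standing in for the Hamiltonian (illustrative only):

```python
import numpy as np

def greedy_ci_select(H, mdet):
    """Bracketing-theorem based greedy CI selection: starting from the
    basis state with the lowest diagonal element, repeatedly add the
    determinant that minimizes the lowest eigenvalue of H restricted
    to the selected set."""
    n = H.shape[0]
    sel = [int(np.argmin(np.diag(H)))]
    energies = [float(H[sel[0], sel[0]])]
    while len(sel) < mdet:
        best = min(
            (j for j in range(n) if j not in sel),
            key=lambda j: np.linalg.eigvalsh(H[np.ix_(sel + [j], sel + [j])])[0],
        )
        sel.append(best)
        energies.append(float(np.linalg.eigvalsh(H[np.ix_(sel, sel)])[0]))
    return sel, energies

# Toy "Hamiltonian": a random real symmetric matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
H = (A + A.T) / 2
sel, energies = greedy_ci_select(H, 8)

# Eigenvalue interlacing guarantees a monotonically non-increasing energy
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
# With mdet = n the exact lowest eigenvalue is recovered
assert np.isclose(energies[-1], np.linalg.eigvalsh(H)[0])
```

In practice $mdet \ll n$, and the cost is dominated by the repeated small diagonalizations, which is what makes the scheme attractive for huge CI spaces.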
In our opinion this scheme is interesting in its own right as a self-sufficient one when a relatively small number of leading determinants should be constructed from active orbitals with close orbital energies (a case that occurs extensively in transition metal complexes), because \(1) CI spaces of huge dimensions can be handled efficiently and disk memory usage is minimal; \(2) calculations can easily be restarted; \(3) the algorithms are trivially parallelized and, if, say, PC clusters are used, data transfer over the local net is minimal; \(4) it is easy to handle both a single excited state and a group of successive states. For the restoration purpose, the above scheme can be considered a benchmark, because it gives the best possible occupancies and energy that can be obtained on the basis of the restoration routine described by Theorem 2. ------------------------------------------------------------------------ A general theorem establishing a connection between the diagonal of the first-order density matrix and a certain set of many-electron wave functions has been proved. It is shown that a rigorous energy expression involving only the one-electron density becomes well defined as soon as a certain right inverse of the contraction operator is chosen. For a fixed representable diagonal of the first-order density matrix there exist quite a number of ways to restore $p$-electron determinant ensembles that are contracted to the diagonal under consideration. Each such way is in fact a path in a rather complicated graph with its vertices labelled by admissible (in the sense of Definition 3) $p$-element spin-orbital index sets. The main problem arising in the implementation of optimization schemes based on such energy expressions is the lack of general simple algorithms for selecting admissible paths for the restoration of wave functions from one-electron densities. 
Such algorithms, besides the requirement of being simple, should generate paths close in a certain sense to those obtained by the benchmark calculations based on the bracketing theorem. A search for such algorithms is now in progress. Note in conclusion that the recurrence formula (21) can easily be generalized to treat densities of higher order; the only obstacle here is the lack of a complete set of inequalities for the analytic description of the polyhedron $W_{N,p,q}$ in the case $q>1$. ------------------------------------------------------------------------ [**ACKNOWLEDGMENTS**]{} One of us (ANP) gratefully acknowledges the Ministry of Education of the Russian Federation (Grant PD 02-1.3-236) and the St. Petersburg Committee on Science and Higher Education (Grant PD 03-1.3-60) for financial support of the present work. ------------------------------------------------------------------------ [999]{} Pou, P. Phys Rev B 2000, [**62**]{}, 4309. Pou, P. Int J Quantum Chem 2002, [**91**]{}, 151. Pou, P. J Phys: Condens Matter 2002, [**14**]{}, L421. Pou, P. J Phys: Condens Matter 2003, [**15**]{}, S2665. Coleman, A. J. Rev Mod Phys 1963, [**35**]{}, 668. Coleman, A. J. In Reduced Density Matrices with Applications to Physical and Chemical Systems; Coleman, A. J.; Erdahl, R. M., Eds.; Queen's Univ.: Kingston, Ontario, 1968; No 11, p. 2. Coleman, A. J. J Math Phys 1972, [**13**]{}, 214. Coleman, A. J. Reports on Math Phys 1973, [**4**]{}, 113. Coleman, A. J.; Yukalov, V. I. Reduced Density Matrices; Springer Verlag: New York, 2000. Coleman, A. J. Int J Quantum Chem 2001, [**85**]{}, 196. Carathéodory, C. Rend Circ Mat Palermo 1911, [**32**]{}, 193. Leichtweiß, K. Konvexe Mengen; VEB Deutscher Verlag der Wissenschaften: Berlin, 1980. Wilkinson, J. H. The Algebraic Eigenvalue Problem; Clarendon Press: Oxford, 1965. Buenker, R. J.; Peyerimhoff, S. D. Theor Chim Acta 1974, [**35**]{}, 33. Buenker, R. J.; Peyerimhoff, S. D. Theor Chim Acta 1975, [**39**]{}, 217.
--- abstract: 'We present an experimental method to engineer arbitrary pure states of qudits with $d=3,4$ using linear optics and a single nonlinear crystal.' author: - 'Giacomo Mauro D’Ariano$^{a,c}$, Paolo Mataloni$^{b,c}$, and Massimiliano F. Sacchi$^{a,c}$' title: 'Generating qudits with $d=3,4$ encoded on two-photon states' --- Many issues in quantum information theory and processing deal with qudits, namely $d$-level quantum systems, instead of qubits. The interest in such more complex systems is both theoretical—the general structure of quantum protocols can be simplified for arbitrary dimension—and practical—some relevant applications perform better using qudits. For example, new quantum cryptographic protocols were recently proposed that deal specifically with qutrits [@peres:00; @kasz:03; @lang:04] and the eavesdropping analysis showed that these systems are more robust against specific classes of eavesdropping attacks [@bruss:02; @eav; @durt:03; @lang:04]. A further advantage of using multilevel systems deals with novel fundamental tests of quantum mechanics [@collins:02; @zuk]. Recent experimental realizations of qutrits rely on different physical implementations. In interferometric schemes, qutrits are generated by sending an entangled photon pair through a multi-armed interferometer [@rob:01], and the number of arms defines the dimensionality of the system. Other techniques exploit the properties of orbital angular momentum of single photons [@lang:04; @zei:01; @zei:03], or perform postselection from four-photon states [@antia:01]. All the above techniques, however, provide only partial control over the qutrit state. In the method of Refs. [@lang:04; @zei:01; @zei:03] one needs a specific hologram for a given qutrit state. Also the interferometric scheme [@rob:01] is not very flexible in switching between different states. 
More recently, an experimental realization of arbitrary qutrit states has been reported [@rece], where the polarization state of a two-photon field has been exploited. Such a realization requires the use of *three* nonlinear crystals pumped by a common coherent source. In this paper we show an experimental method to engineer arbitrary pure states of qutrits and ququads, using a single nonlinear crystal and linear optical devices such as phase waveplates. The qudit is encoded on the polarization of a two-photon state, and is obtained from local (e.g. single-photon) unitary transformations on a pure non-maximally entangled state which plays the role of a *seed* state. It can be generated by a parametric source of entangled photon states [@note; @roma1]. In the present paper we refer to a high brilliance source [@roma1], with high flexibility in terms of state generation. It has recently been demonstrated that with this source it is possible to produce two-photon hyper-entangled states, entangled in polarization and momentum [@roma2]. Indeed, the adoption of hyper-entangled states can be crucial whenever one is interested in quantum information applications of qudits, since hyper-entanglement in polarization and momentum allows one to perform non trivial measurements—such as Bell measurements [@padua]—which are needed for quantum key distribution. In fact, as we will show, it is possible to implement a quantum cryptographic scheme with ququads that exploits two mutually unbiased bases made of two-photon Bell states, and here hyper-entanglement allows one to perform Bell measurements. In the following we first show how to obtain an arbitrary qudit with $d=3,4$, from local unitary transformations on a bipartite pure state of two qubits. 
Hence, we want to show how to generate a state of the form $$|\psi \rangle = \alpha |00\rangle +\beta |11\rangle +\gamma |01\rangle +\delta |10\rangle \;$$ from the seed state $$|\chi \rangle =x|00\rangle +\sqrt{1-x^{2}}|11\rangle \text{,}$$ by means of two local unitary transformations. In the state $|\psi \rangle $ we can fix $\alpha $ positive, and take $\beta ,\gamma $ and $\delta $ complex without loss of generality. The state $|\chi \rangle $ is chosen with $x$ positive. Hence, given $\alpha ,\beta ,\gamma $ and $\delta $ we want to find $x$ and two unitaries $U$ and $W$ such that $$\begin{aligned} |\psi \rangle =U\otimes W|\chi \rangle \;. \;\label{tre}\end{aligned}$$ Of course, $x$, $U$, and $W$ will depend on the desired parameters $\alpha ,\beta ,\gamma $ and $\delta $. We can solve this problem by means of the singular value decomposition (SVD), which states that for any matrix $A$ one can find two unitaries $U$ and $W$ such that [@bhatia] $$\begin{aligned} A=U D W^{\tau }\;,\label{udw}\end{aligned}$$ where $\tau $ denotes transposition on the fixed basis, and $D$ is diagonal and positive. Consider now the matrix $\Psi $ corresponding to the state $|\psi \rangle $ $$\begin{aligned} \Psi = \alpha |0 \rangle \langle 0 | + \beta |1 \rangle \langle 1 | + \gamma |0 \rangle \langle 1 |+ \delta |1 \rangle \langle 0 | \;,\end{aligned}$$ through the identity [@pla] $$\begin{aligned} |\psi \rangle = (\Psi \otimes I) (|00 \rangle + |11 \rangle )\;.\end{aligned}$$ From the SVD $\Psi = U D W^\tau $ it follows that $$\begin{aligned} |\psi \rangle &= &(U D W^\tau \otimes I)(|00 \rangle + |11 \rangle ) \nonumber \\ &= & (U D \otimes W)(|00 \rangle + |11 \rangle ) \nonumber \\ &= & (U \otimes W) (D\otimes I)(|00 \rangle + |11 \rangle ) \nonumber \\ &= & (U \otimes W) (d_1 |00 \rangle + d_2 |11 \rangle ) \;,\end{aligned}$$ which is equivalent to Eq. (\[tre\]). The values $d_1 $ and $d_2$ are the elements of the diagonal matrix $D$ (“the singular values of $\Psi $”). 
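The derivation can be checked numerically; the amplitudes below are an arbitrary normalized example, and `numpy.linalg.svd` directly supplies the decomposition of Eq. (\[udw\]):

```python
import numpy as np

# target amplitudes: alpha positive, the rest complex (arbitrary example)
alpha, beta, gamma, delta = 0.5, 0.5j, 0.5, -0.5

# matrix Psi of |psi> = alpha|00> + beta|11> + gamma|01> + delta|10>
Psi = np.array([[alpha, gamma],
                [delta, beta]])
U, d, Wt = np.linalg.svd(Psi)   # numpy returns Psi = U @ diag(d) @ Wt
W = Wt.T                        # Wt plays the role of W^tau (plain transpose)

# seed |chi> = d_1|00> + d_2|11>, then |psi> = (U (x) W)|chi>
chi = d[0] * np.kron([1, 0], [1, 0]) + d[1] * np.kron([0, 1], [0, 1])
psi = np.kron(U, W) @ chi

target = np.array([alpha, gamma, delta, beta])   # order |00>,|01>,|10>,|11>
assert np.allclose(psi, target)
assert np.isclose(d[0]**2 + d[1]**2, 1.0)        # seed normalization
```

Note that the transpose in $\Psi = UDW^{\tau}$ is a plain (not conjugate) transpose, which is why `W = Wt.T` rather than `Wt.conj().T` is required.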
Notice that $$\begin{aligned} 1=\langle \psi |\psi \rangle = \hbox{Tr}[\Psi ^\dag \Psi ]=\hbox{Tr} [D^2]=d_1^2 +d_2^2 \;,\end{aligned}$$ namely one obtains the correct normalization for the state $|\chi \rangle $. Our result generally holds in arbitrary Hilbert spaces, and hence provides a way to encode a qudit on a bipartite quantum system of ${\cal H}\otimes {\cal H}$, by means of local unitary transformations, where $d= (\hbox{dim}({\cal H}))^2$. Notice also that the decomposition in Eq. (\[udw\]) is not unique, and hence the unitaries $U$ and $W$ in Eq. (\[tre\]) are not unique either. For example, one has the invariance property $U'=U Z$ and $W'=W Z^\dag $, where $Z$ is an arbitrary diagonal unitary matrix. Let us apply the above derivation to the case where the qubits are represented by the polarization state of two photons. The seed state is written $$\begin{aligned} |\chi \rangle =x|HH\rangle +\sqrt{1-x^{2}}|VV\rangle \;. \label{chid}\end{aligned}$$ The state in Eq. (\[chid\]) represents a non-maximally entangled polarization state. It is easily obtained from the source sketched in Fig. 1. It is based on a high stability single arm interferometer which accomplishes the generation of the polarization entangled state $|\Phi \rangle = \frac{1}{\sqrt{2}}\left( |H\rangle |H\rangle +e^{i\theta }|V\rangle |V\rangle \right) $ by the superposition of the degenerate parametric emission cones at wavelength $\lambda $ (see Fig.  1) of a type-I  $\beta $-$BaB_{2}O_{4}$ (BBO) crystal, excited in two opposite directions by a $V$-polarized laser beam at wavelength $\lambda _{p}=\lambda /2$. Other basic elements of the source are the following: \[$i$\] A spherical mirror $M$, reflecting both the parametric radiation and the pump beam, whose micrometric displacement allows one to control the state phase $\theta $ ($0\leq \theta \leq \pi $). 
\[$ii$\] A zero-order $\lambda /4$ waveplate (wp), placed within the $M-$BBO path, which performs the $|HH\rangle \rightarrow |VV\rangle $ transformation on the two-photon state belonging to the left-cone. \[$iii$\] A positive lens which transforms the conical parametric emission of the crystal into a cylindrical one, whose transverse circular section identifies the so-called “entanglement-ring” ($e-r$). A zero-order $\lambda _{p}/4$ wp inserted between $M$ and the BBO crystal, intercepting only the laser beam, allows the engineering of tunable non-maximally entangled states in the following way: the polarization of the back-reflected pump beam is rotated by an angle $2\theta _{p}$ with respect to the optical axis of the crystal when the pump wp is rotated by an angle $ \theta _{p}$. As a consequence, the emission efficiency of the $\left | HH \right \rangle $ contribution is lowered by a coefficient proportional to $\cos ^{2}2\theta _{p}$, with $\theta _{p}$ adjusted in the range $0-\pi /4$. Alternatively, we can obtain a lower value of the $\left| VV\right\rangle $ contribution with respect to $\left| HH\right\rangle $ by inserting a $\lambda _{p}/2$ wp on the laser beam path before the crystal. By simultaneous rotation of the two wp’s, the complete tunability of the entanglement degree can be achieved [@roma3]. ![Layout of the universal source of non-maximally polarization entangled and polarization-momentum hyper-entangled two-photon states. In the left part, non-maximally entangled states in polarization are generated. In the central part, after division of the entanglement ring ($e-r$) along a vertical axis by a prism-like two-mirror system, momentum entanglement is realized by a four hole screen which selects the correlated pairs of modes $a_{1},b_{2}$ and $a_{2},b_{1}$. 
In the right part, qudits are encoded by means of the local unitary transformations $ U$ and $W$ on modes $a_{1},b_{1}$ and $a_{2},b_{2}$, respectively.[]{data-label="f:fig1"}](qudits_fig) The local unitary transformations that are needed to generate the desired state of the qudit can be easily realized by linear optics. In fact, a unitary $2 \times 2$ matrix can generally be written as $$\begin{aligned} U=\left( \begin{array}{rr} e^{i \alpha } \cos\theta & e^{i \beta }\sin\theta \\ -e^{i \gamma } \sin\theta & e^{i(\beta + \gamma - \alpha )}\cos\theta \end{array} \right) \; \end{aligned}$$ Such a unitary can be factorized as follows $$\begin{aligned} U&=&\left( \begin{array}{lc} e^{i \beta } & 0\\ 0 & e^{i(\beta + \gamma - \alpha )} \end{array} \right) \left( \begin{array}{rl} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{array} \right) \nonumber \\& \times & \left( \begin{array}{cl} e^{i (\alpha -\beta )} & 0\\ 0 & 1 \end{array} \right) \;.\label{fact}\end{aligned}$$ Hence, any unitary transformation on the polarization state of a photon can be obtained as a sequence of a phase-shift, a rotation of the polarization, and a final phase-shift. The general scheme can be used to engineer mutually unbiased bases [@mub] of qutrits for cryptographic purposes. 
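Before turning to those bases, the factorization of Eq. (\[fact\]) can be verified numerically for arbitrary parameter values; a minimal sketch:

```python
import numpy as np

def u_general(a, b, g, th):
    """General 2x2 unitary in the parametrization of the text."""
    return np.array([
        [np.exp(1j*a)*np.cos(th),        np.exp(1j*b)*np.sin(th)],
        [-np.exp(1j*g)*np.sin(th), np.exp(1j*(b + g - a))*np.cos(th)]])

def factorized(a, b, g, th):
    """Phase-shift * polarization rotation * phase-shift, Eq. (fact)."""
    P1 = np.diag([np.exp(1j*b), np.exp(1j*(b + g - a))])
    R = np.array([[np.cos(th), np.sin(th)],
                  [-np.sin(th), np.cos(th)]])
    P2 = np.diag([np.exp(1j*(a - b)), 1.0])
    return P1 @ R @ P2

a, b, g, th = 0.3, 1.1, -0.7, 0.5    # arbitrary parameter values
U = u_general(a, b, g, th)
assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
assert np.allclose(factorized(a, b, g, th), U)  # the factorization holds
```

Each of the three factors is realizable by standard optical elements: the diagonal matrices by phase delays between $H$ and $V$, and the rotation by a waveplate.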
For example, the basis $$\begin{aligned} &&|u_I \rangle = |HH \rangle \;, \nonumber \\ & & |u_{II} \rangle = |VV \rangle \;, \nonumber \\ & & |u_{III} \rangle = \frac {1}{\sqrt 2}(|HV \rangle + |VH \rangle )\equiv |\psi ^+ \rangle \;;\label{u3}\end{aligned}$$ is mutually unbiased with the basis $$\begin{aligned} &&|v_{I} \rangle = \frac {1}{\sqrt 3} \left (|HH \rangle + |VV \rangle + |\psi ^+ \rangle \right )\;, \label{v3}\\ & & |v_{II} \rangle = \frac {1}{\sqrt 3} \left (|HH \rangle + e^{2\pi i/3}|VV \rangle + e^{-2\pi i /3}|\psi ^+ \rangle \right )\;, \nonumber \\ & & |v_{III} \rangle = \frac {1}{\sqrt 3} \left (|HH \rangle + e^{-2\pi i/3}|VV \rangle + e^{2\pi i /3}|\psi ^+ \rangle \right )\;.\nonumber \end{aligned}$$ It is quite easy to generate the states of the first basis. On the other hand, the states of the second basis can be generated according to the above derivation. Explicitly one has $$\begin{aligned} |v_{i}\rangle =(U_{i}\otimes W_{i})|\chi \rangle \;, \label{uvchi}\end{aligned}$$ where the seed state $|\chi \rangle $—the same for all i=I,II,III—is written $$\begin{aligned} |\chi \rangle &=&\frac{\sqrt{2}+1}{\sqrt{6}}|HH\rangle +\frac{\sqrt{2}-1}{ \sqrt{6}}|VV\rangle \nonumber \\ &\simeq &0.986|HH\rangle +0.169|VV\rangle \;,\end{aligned}$$ and the set of unitaries is given by $$\begin{aligned} && U_{I}=W_{I}=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right) \;; \nonumber \\& & U_{II}=W_{II}=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ e^{-2i\pi /3} & e^{i\pi /3} \end{array} \right) \;; \nonumber \\& & U_{III}=W_{III}=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ e^{2i\pi /3} & e^{-i\pi /3} \end{array} \right) \;. \end{aligned}$$ Notice that for the particular chosen basis, the unitaries $U_{i}$ and $W_{i} $ are identical. 
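Both the mutual unbiasedness of the bases (\[u3\]) and (\[v3\]) and the action of the listed unitaries on the seed state can be verified numerically. In this sketch the two-photon basis ordering $|HH\rangle,|HV\rangle,|VH\rangle,|VV\rangle$ is a convention of the sketch itself:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
w = np.exp(2j * np.pi / 3)

# two-photon basis order |HH>, |HV>, |VH>, |VV>
HH, HV, VH, VV = np.eye(4, dtype=complex)
psi_p = (HV + VH) / s2

u = [HH, VV, psi_p]                                   # basis of Eq. (u3)
v = [(HH + wk * VV + wk.conjugate() * psi_p) / s3     # basis of Eq. (v3)
     for wk in (1 + 0j, w, w.conjugate())]

# mutual unbiasedness: |<u_i|v_j>|^2 = 1/3 for all pairs
for ui in u:
    for vj in v:
        assert np.isclose(abs(np.vdot(ui, vj))**2, 1/3)

# the seed state and the listed local unitaries reproduce the v basis
chi = (s2 + 1) / s6 * HH + (s2 - 1) / s6 * VV
U = [np.array([[1, 1], [1, -1]]) / s2,
     np.array([[1, 1], [np.exp(-2j*np.pi/3), np.exp(1j*np.pi/3)]]) / s2,
     np.array([[1, 1], [np.exp(2j*np.pi/3), np.exp(-1j*np.pi/3)]]) / s2]
for Ui, vi in zip(U, v):
    assert np.allclose(np.kron(Ui, Ui) @ chi, vi)
```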
Moreover, using the factorization formula (\[fact\]), the phase-shift on the right reduces to the identity matrix, and hence the $U_{i}$’s can be implemented by a $\lambda /2$ wp rotated by $\theta =\frac{\pi }{8}$, followed by a further phase delay between $H$ and $V$, corresponding to $ \varphi _{I}=0$, $\varphi _{II}=\frac{2}{3}\pi $ and $\varphi _{III}=-\frac{2 }{3}\pi $, respectively. Other qutrits of the form $$\begin{aligned} |\xi \rangle = \frac{1}{\sqrt 3} (|HH \rangle + e^{i\psi }|VV \rangle + e^{i\phi } |\psi ^+ \rangle ) \; \end{aligned}$$ can be generated by using the general formula $UDW^\tau =\xi$, where $$\begin{aligned} &&\!\!\!\!\!\!\!\! U=\frac{1}{\sqrt 2}\left( \begin{array}{ll} e^{i \arg [\sqrt 2 +e^{i(\phi -\frac \psi 2)}]} & e^{i \arg [\sqrt 2 - e^{i(\phi -\frac \psi 2)}]} \\ e^{i \arg [e^{i \phi } + \sqrt{2}e^{i \frac \psi 2}]} & e^{i \arg [e^{i \phi } - \sqrt{2}e^{i\frac \psi 2}]} \end{array} \right) \;; \nonumber \\& & \!\!\!\!\!\!\!\! W=\frac{1}{\sqrt 2}\left( \begin{array}{cc} 1 & 1 \\ e^{i \frac \psi 2} & - e^{i \frac \psi 2} \end{array} \right) \;;\nonumber \\ &&\!\!\!\!\!\!\!\! D= \left( \begin{array}{cc} \sqrt{\frac 12 +\frac{\sqrt 2}{3}\cos (\frac \psi 2 -\phi)} & 0 \\ 0 & \sqrt{\frac 12 - \frac{\sqrt 2}{3}\cos (\frac \psi 2 -\phi)} \end{array} \right) \;; \nonumber \\& & \!\!\!\!\!\!\!\! \xi=\frac{1}{\sqrt 3} \left( \begin{array}{cc} 1 & \frac{e^{i\phi}}{\sqrt 2} \\ \frac{e^{i\phi}}{\sqrt 2} & e^{i\psi } \end{array} \right) \;.\label{uv}\end{aligned}$$ Hence, one can write a relation as in Eq. (\[uvchi\]) for two other bases which are mutually unbiased with each other and with those of Eqs. (\[u3\]) and (\[v3\]), with the seed state $$\begin{aligned} |\chi \rangle &=&\sqrt{\frac{3+\sqrt{2}}{6}}|HH\rangle +\sqrt{\frac{3-\sqrt{ 2}}{6}}|VV\rangle \nonumber \\ &\simeq &0.858|HH\rangle +0.514|VV\rangle \;,\end{aligned}$$ and with unitaries that can be evaluated by Eq. (\[uv\]). 
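As a numerical consistency check, the diagonal matrix $D$ of Eq. (\[uv\]) can be compared with the singular values of $\xi$ computed directly; the phases $\psi$ and $\phi$ below are arbitrary:

```python
import numpy as np

# arbitrary phases of the qutrit state |xi>
psi_ph, phi = 0.9, -1.3
xi = np.array([[1.0, np.exp(1j*phi)/np.sqrt(2)],
               [np.exp(1j*phi)/np.sqrt(2), np.exp(1j*psi_ph)]]) / np.sqrt(3)

d = np.linalg.svd(xi, compute_uv=False)      # numerical singular values
c = np.sqrt(2)/3 * np.cos(psi_ph/2 - phi)    # analytic D of Eq. (uv)
assert np.isclose(np.trace(xi.conj().T @ xi).real, 1.0)   # state normalized
assert np.allclose(np.sort(d**2), np.sort([0.5 + c, 0.5 - c]))
```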
The above-described procedure can produce highly pure qudits. In fact, both the technique used to generate non-maximally entangled states and the waveplates and phase-shifters used to realize the unitary operations can be very accurate, and in principle do not introduce any amount of mixedness in the state. Indeed, non-maximally entangled states generated by this source were recently used to prove Hardy’s ladder theorem on nonlocality up to the 20th step of the ladder [@roma3]. Once qudits are available, one can characterize these states by quantum tomography, or use them for more advanced tests of nonlocality [@collins:02; @zuk]. As far as more specific quantum information applications are concerned, e.g. quantum key distribution, a major difficulty is the need to perform quantum measurements on mutually unbiased bases. The use of qutrits requires highly nontrivial setups at the measurement stage. However, the use of ququads is easier. In this case one should use five mutually unbiased bases, hence generating $20$ different states. For a system of two qubits [@zeil; @arav], one can consider three product bases and two Bell bases. We write explicitly the bases from Ref. 
[@arav] (in our scheme, we have $|0\rangle \equiv |H\rangle $ and $|1\rangle \equiv |V\rangle $), namely $$\begin{aligned} I)\ \ &&|0\rangle |0\rangle ,|0\rangle |1\rangle ,|1\rangle |0\rangle ,|1\rangle |1\rangle \;; \nonumber \\ II)\ \ &&(|0\rangle +|1\rangle )(|0\rangle \pm |1\rangle ), \nonumber \\ &&(|0\rangle -|1\rangle )(|0\rangle \pm |1\rangle ) \;;\nonumber \\ III)\ \ &&(|0\rangle +i|1\rangle )(|0\rangle \pm i|1\rangle ), \nonumber \\ &&(|0\rangle -i|1\rangle )(|0\rangle \pm i|1\rangle ) \;;\nonumber \\ IV)\ \ &&(|0\rangle +i|1\rangle )|0\rangle \pm (|0\rangle -i|1\rangle )|1\rangle , \nonumber \\ &&(|0\rangle -i|1\rangle )|0\rangle \pm (|0\rangle +i|1\rangle )|1\rangle \;; \nonumber \\ V)\ \ &&|0\rangle (|0\rangle +i|1\rangle )\pm |1\rangle (|0\rangle -i|1\rangle ), \nonumber \\ &&|0\rangle (|0\rangle -i|1\rangle )\pm |1\rangle (|0\rangle +i|1\rangle )\;.\end{aligned}$$ Clearly, the bases $I$,$II$, and $III$ correspond to the measurement of $ \sigma _{z}\otimes \sigma _{z}\;,\sigma _{x}\otimes \sigma _{x}$, and $ \sigma _{y}\otimes \sigma _{y}$, respectively. The bases $IV$ and $V$ are made of Bell projectors. The generation of the $12$ product states is trivial. On the other hand, the above source of entangled photon states very efficiently generates the other eight maximally entangled states. The problem of realizing Bell measurements can be solved by hyper-entangled states [@padua], which have been realized in the two degrees of polarization and momentum by the same source [@roma2]. Besides polarization entanglement, momentum entanglement is realized by a four hole screen which allows one to select the correlated pairs of modes $a_{1},b_{2}$ and $a_{2},b_{1}$ (Fig. 1) occurring with relative phase $\phi =0$. In this way, in either one of the cones the momentum entangled Bell state $|\psi ^{+}\rangle =\frac{1}{\sqrt{2 }}\left( |a_{1},b_{2}\rangle +|b_{1},a_{2}\rangle \right) $ can be generated. 
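The five bases listed above are pairwise mutually unbiased (all cross-basis overlaps satisfy $|\langle x|y\rangle|^2 = 1/4$) once each state is normalized, which is easy to verify numerically:

```python
import numpy as np
from itertools import combinations

def kr(a, b):
    """Two-qubit product state (unnormalized)."""
    return np.kron(np.asarray(a, complex), np.asarray(b, complex))

z0, z1 = [1, 0], [0, 1]
p, m = [1, 1], [1, -1]      # |0> + |1>, |0> - |1>
r, l = [1, 1j], [1, -1j]    # |0> + i|1>, |0> - i|1>

bases = [
    [kr(a, b) for a in (z0, z1) for b in (z0, z1)],     # I
    [kr(a, b) for a in (p, m) for b in (p, m)],         # II
    [kr(a, b) for a in (r, l) for b in (r, l)],         # III
    [kr(r, z0) + s * kr(l, z1) for s in (1, -1)]
    + [kr(l, z0) + s * kr(r, z1) for s in (1, -1)],     # IV (Bell-like)
    [kr(z0, r) + s * kr(z1, l) for s in (1, -1)]
    + [kr(z0, l) + s * kr(z1, r) for s in (1, -1)],     # V (Bell-like)
]
bases = [[x / np.linalg.norm(x) for x in b] for b in bases]

for b in bases:                        # each basis is orthonormal
    G = np.array([[np.vdot(x, y) for y in b] for x in b])
    assert np.allclose(G, np.eye(4))
for b1, b2 in combinations(bases, 2):  # any two bases: |<x|y>|^2 = 1/4
    for x in b1:
        for y in b2:
            assert np.isclose(abs(np.vdot(x, y))**2, 0.25)
```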
Note that the four modes $ a_{1}$, $b_{2}$, $a_{2}$, $b_{1}$ can be easily separated by mirrors and coupled to single mode optical fibers, allowing in this way fiber-based cryptographic schemes. In a complete Bell state analysis the polarization state acts as the control qubit and the momentum state $|\psi ^{+}\rangle $ as the target qubit [@padua]. We notice also that a cryptographic protocol with ququads that uses just two instead of all five mutually unbiased bases is characterized by a maximum acceptable error rate that is only slightly lower, while the corresponding key rate is much larger [@eav]. The nontrivial encoding here is represented by the Bell states of the bases $IV$ and $V$, and a cryptographic scheme based on just these two bases can be implemented by our source. In conclusion, we have shown how to obtain an arbitrary qudit up to $d=4$, from local unitary transformations on a bipartite pure state of two qubits by SVD encoding. The theoretical scheme generally holds in arbitrary Hilbert spaces, encoding the qudit on a bipartite quantum system of ${\cal H}\otimes {\cal H}$ by means of local unitaries, with $d= (\hbox{dim}({\cal H}))^2$. Upon representing qubits by the polarization state of photons, the method allows one to generate qudits experimentally with a single nonlinear crystal and linear optics, using the source of Ref. [@roma1]. This allows one to create tunable nonmaximally entangled states that play the role of seed states, from which arbitrary qudit states are generated via SVD using simple linear optics. The hyper-entanglement of the generated photons allows one to perform nontrivial measurements—such as Bell measurements—that are crucial for quantum cryptographic applications. *Acknowledgments.* This work has been sponsored by INFM through the project PRA-2002-CLON, and by EC and MIUR through the cosponsored ATESIT project IST-2000-29681 and Cofinanziamento 2003. [99]{} H. Bechmann-Pasquinucci and A. Peres, Phys. Rev. Lett. 
**85**, 3313 (2000). D. Kaszlikowski, D. K. L. Oi, M. Christandl, K. Chang, A. Ekert, L. C. Kwek, and C. H. Oh, Phys. Rev. A **67**, 012310 (2003). N. K. Langford, R. B. Dalton, M. D. Harvey, J. L. O’Brien, G. J. Pryde, A. Gilchrist, S. D. Bartlett, and A. G. White, Phys. Rev. Lett. **93**, 053601 (2004). D. Bruss and C. Macchiavello, Phys. Rev. Lett. **88**, 127901 (2002). N. J. Cerf, M. Bourennane, A. Karlsson, and N. Gisin, Phys. Rev. Lett. **88**, 127902 (2002). T. Durt, N. J. Cerf, N. Gisin, and M. Zukowski, Phys. Rev. A **67**, 012311 (2003). D. Collins, N. Gisin, N. Linden, S. Massar, and S. Popescu, Phys. Rev. Lett. **88**, 040404 (2002). D. Kaszlikowski, L. C. Kwek, J.-L. Chen, M. Zukowski, and C. H. Oh, Phys. Rev. A **65**, 032118 (2002). R. T. Thew, A. Acin, H. Zbinden, and N. Gisin, Quantum Information and Computation **4**, 93 (2004). A. Vaziri, J.-W. Pan, T. Jennewein, G. Weihs, and A. Zeilinger, Phys. Rev. Lett. **91**, 227902 (2003). G. Molina-Terriza, A. Vaziri, J. Rehacek, Z. Hradil, and A. Zeilinger, Phys. Rev. Lett. **92**, 167903 (2004). J. C. Howell, A. Lamas-Linares, and D. Bouwmeester, Phys. Rev. Lett. **88**, 030401 (2002). Yu. I. Bogdanov, M. V. Chekhova, S. P. Kulik, G. A. Maslennikov, A. A. Zhukov, C. H. Oh, and M. K. Tey, Phys. Rev. Lett. **93**, 230503 (2004). P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, Phys. Rev. A **60**, R773 (1999); A. Lamas-Linares, J. C. Howell, and D. Bouwmeester, Nature **412**, 887 (2001). C. Cinelli, G. Di Nepi, F. De Martini, M. Barbieri, and P. Mataloni, Phys. Rev. A **70**, 022321 (2004). C. Cinelli, M. Barbieri, F. De Martini, and P. Mataloni, Laser Phys. **15**, 124 (2005); M. Barbieri, C. Cinelli, F. De Martini, and P. Mataloni, quant-ph/0505098. S. P. Walborn, S. Pádua, and C. H. Monken, Phys. Rev. A **68**, 042313 (2003). See, for example, R. Bhatia, *Matrix Analysis*, Springer Graduate Texts in Mathematics Vol. 169 (Springer, New York, 1996). 
The correspondence between $\Psi $ and $|\psi \rangle $ is just an isomorphism between bipartite pure states of the tensor-product Hilbert space ${\cal H}\otimes {\cal H}$, and Hilbert-Schmidt operators that act on ${\cal H}$. See G. M. D’Ariano, P. Lo Presti, and M. F. Sacchi, Phys. Lett. A **272**, 32 (2000). M. Barbieri, F. De Martini, G. Di Nepi, and P. Mataloni, Phys. Lett. A **334**, 23 (2005). I. D. Ivanovic, J. Phys. A **14**, 3241 (1981); W. K. Wootters and B. D. Fields, Ann. Phys. **191**, 363 (1989). J. Lawrence, C. Brukner, and A. Zeilinger, Phys. Rev. A **65**, 032320 (2002). P. K. Aravind, Z. Naturforsch. **58a**, 85 (2003).
--- abstract: 'New observations of CO ($J=1\to 0$) line emission from M33, using the 25 element BEARS focal plane array at the Nobeyama Radio Observatory 45-m telescope, in conjunction with existing maps from the BIMA interferometer and the FCRAO 14-m telescope, give the highest resolution ($13''''$) and most sensitive ($\sigma_{rms}\sim 60$  mK) maps to date of the distribution of molecular gas in the central 5.5 kpc of the galaxy. A new catalog of giant molecular clouds (GMCs) has a completeness limit of $1.3\times10^5$ [$M_{\sun}$]{}. The fraction of molecular gas found in GMCs is a strong function of radius in the galaxy, declining from 60% in the center to 20% at galactocentric radius $R_{gal}\approx4$ kpc. Beyond that radius, GMCs are nearly absent, although molecular gas exists. Most (90%) of the emission from low mass clouds is found within 100 pc projected separation of a GMC. In an annulus $2.1<R_{gal}<4.1$  kpc, GMC masses follow a power law distribution with index $-2.1$. Inside that radius, the mass distribution is truncated, and clouds more massive than $8\times10^5$ [$M_{\sun}$]{} are absent. The cloud mass distribution shows no significant difference in the grand design spiral arms versus the interarm region. The CO surface brightness ratio for the arm to interarm regions is 1.5, typical of other flocculent galaxies.' author: - Erik Rosolowsky - Eric Keto - Satoki Matsushita - 'S. P. Willner' title: High Resolution Molecular Gas Maps of M33 --- Introduction ============ The nearby galaxy M33 is an ideal site to study molecular gas and star formation in the larger context of a disk galaxy. Most of our knowledge of star formation and molecular clouds is informed by nearby star forming regions such as Taurus and Orion, but studies of star formation across the Milky Way as a whole are confused by our perspective from within the Galactic disk. 
In contrast, the M33 disk face is visible [$i\approx 52^{\circ}$, @cs00], so it is easier to study molecular gas in relationship to other components of the galaxy. In addition, M33 is near enough [$D=840$ kpc, @Freedman2001] that individual molecular clouds can be resolved with a large single-dish telescope or millimeter interferometer. Because of the perspective that observations of M33 offer, the galaxy has been the target of many large-scale studies of molecular gas. @ws89 observed the inner region of the galaxy in CO ($J=1\to 0$) with the NRAO 12-m and followed up with high resolution observations using the Owens Valley interferometer [@ws90]. The high resolution observations were used to study the properties of individual molecular clouds and compare to clouds in the Milky Way. A further study [@wwt97] took advantage of the disk visibility to study variations in the properties of molecular gas across a range of galactocentric radii. Following improvements in millimeter-wave instrumentation, large-scale surveys of the galaxy became possible, such as that of @eprb03 [hereafter EPRB] who used the BIMA millimeter interferometer to identify all GMCs across the star forming disk. @rpeb03 studied a limited area of the disk, comprising about 1/3 of the clouds in the BIMA survey, at higher resolution. The study determined individual cloud properties including virial masses, confirmed the work of @ws90, and constrained GMC formation mechanisms by comparing the predicted and observed values for GMC angular momentum. Finally, @m33-fcrao [hereafter HCSY] used the FCRAO 14-m to complete a single-dish survey of the entire galaxy. Comparison with [*IRAS*]{} data determined the local star formation rate as a function of the surface density of molecular gas. The existing observations, while providing a wealth of data, also raise additional questions that merit another look at the molecular gas in M33. 
In particular, comparison of the interferometer-only study of EPRB with the single-dish survey of HCSY shows that the interferometer recovered only 20% of the CO flux in the galaxy! Because the typical size scales of molecular clouds in M33 should roughly match the synthesized beam of the interferometer, such a small fraction of flux recovery is unexpected. What is the nature of the remaining molecular gas? One clue comes from the steep mass distribution of GMCs in M33. Unlike the Milky Way, EPRB found that the mass distribution was “bottom heavy” i.e. most of the molecular mass was found in clouds near their sensitivity limit. Thus, it appears that most of the missing flux is comprised of low-mass molecular clouds that the interferometer could not detect. This prompts several questions about the nature of such clouds: Where are these low mass molecular clouds located relative to the GMCs? Does the fraction of material found in GMCs vary over the galaxy? These questions are not easily answered by the existing observations, because the diffuse, extended emission of the lower-mass clouds is not easily detected by a radio interferometer, and the existing single dish observations do not have the angular resolution to define individual clouds. To study the distribution of low mass clouds in more detail, we mapped the inner 2 kpc of M33 using the BEARS receiver on the 45-m telescope at the Nobeyama Radio Observatory (NRO). The NRO 45-m data give better sensitivity than previous observations and allow detection of a greater fraction of the CO in the central region. We present two new maps: a BIMA+FCRAO map of the entire galaxy and a BIMA+FCRAO+NRO map of the inner 2 kpc. 
The maps are complementary — the first surveys the entire galaxy at $13''$ (50 pc) resolution, sufficient to identify individual GMCs, while the second has 50% better point source sensitivity and a factor of 3.5 better column density sensitivity, though having poorer resolution ($20''= 75$ pc) and limited coverage. Section \[obs\] of this paper describes the new observations at NRO, while §\[redux\] explains the data processing and production of the maps. Section \[gmcs-again\] presents the new GMC catalog. The new catalog demonstrates significant variations in the mass distribution across the face of the galaxy, as described in §\[massspec\]. Section \[summ\] summarizes the results. Nobeyama Radio Observatory Observations {#obs} ======================================= The BEARS array on the 45 m telescope at Nobeyama Radio Observatory consists of 25 SIS mixer receivers arranged in a $5\times 5$ array with $41.1''$ between the elements [@bears-sunada]. With the $14''$ beam size at 115 GHz, the elements are separated by 3 resolution elements on the sky. The basic unit of the M33 observations consisted of nine separate pointings arranged in a $3\times 3$ grid, each offset from the previous by $14''$. This strategy observes a $(3.5')^2$ region at half-Nyquist sampling. To acquire fully sampled data, this observation must be repeated three additional times with a $7''$ offset in each direction. Thus, a fully-sampled map consists of 900 individual pointings separated by $7''$; a half-sampled map contains 225 pointings separated by $14''$. The BEARS receiver requires using the 25-element autocorrelator array as a back end. It has a total bandwidth of 512 MHz split into 1024 channels, resulting in a nominal velocity resolution of 1.3 km s$^{-1}$ in the $^{12}$CO ($J=1\to 0$) line. The observation strategy divided M33 into $(3.5')^2$ regions defined by the footprint of the receiver array on the sky. 
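The pointing bookkeeping can be sketched numerically. Here the element separation is assumed to be $41.1''$ (the quoted BEARS spacing, about three $14''$ beams), and positions are kept in integer units of $0.1''$ so that distinct pointings are counted exactly:

```python
# work in units of 0.1'' so that all offsets are exact integers
ELEM_SEP = 411   # 41.1'' between the 5x5 BEARS elements (assumed spacing)
STEP = 140       # 14'' raster step, one beam width
SHIFT = 70       # 7'' shift used to reach full (Nyquist) sampling

# positions of the 25 receiver elements on the sky
elements = [(i * ELEM_SEP, j * ELEM_SEP) for i in range(5) for j in range(5)]

def pointings(dithers):
    """Distinct sky positions sampled for a given set of array dithers."""
    return {(x + dx, y + dy) for (dx, dy) in dithers for (x, y) in elements}

# half-Nyquist sampling: one 3x3 raster of the whole array at 14'' steps
raster = [(i * STEP, j * STEP) for i in range(3) for j in range(3)]
half = pointings(raster)
# full sampling: repeat the raster three more times, shifted by 7''
full = pointings([(dx + sx, dy + sy) for (dx, dy) in raster
                  for sx in (0, SHIFT) for sy in (0, SHIFT)])

assert len(half) == 225   # 25 elements x 9 raster positions
assert len(full) == 900   # 25 elements x 36 raster positions
```

The counts reproduce the 225 and 900 pointings quoted in the text for the half-sampled and fully sampled maps, respectively.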
Figure \[obspos\] shows the fields observed, and Table \[fieldprops\] gives their positions. For each field observed with half-Nyquist sampling, we observed the nine individual position offsets in 20-second scans, then position switched to an absolute reference position. The reference positions were chosen to be regions at large galactocentric radius ($R_{gal}$). These regions may contain CO emission, but they were always chosen so that the velocity of any emission in the reference beam would differ substantially from the line velocity at the observed position. In good observing conditions, two scans could be obtained between reference positions. In marginal weather, the number of scans had to be decreased to one, and the reference position was an offset of $30'$ in azimuth instead of an absolute position. One pass through the nine positions of a field took five to six minutes, depending on the settle time for the telescope, and typical observations used 12 cycles, obtaining 180 seconds of integration time at each position. This completed a half-Nyquist observation of a field in 90 minutes. Before and between fields, we updated the pointing solution for the telescope using the S40 receiver to observe SiO ($J=1\to 0,v=0,1$) maser emission from IRC+30021. In low wind conditions, the pointing solution drifted by $<5''$ over the hour required to observe a single field. When the wind velocity was steadily above 5 m s$^{-1}$, the pointing drift could exceed $10''$ over an hour.
[ccccc]{}
1 & 01:33:34.4 +30:34:51.2 & 108 & 0.94 & Half-Nyq.\
2 & 01:33:34.4 +30:38:23.6 & 108 & 0.99 & Half-Nyq.\
3 & 01:33:34.4 +30:41:42.2 & 108 & 0.97 & Half-Nyq.\
4 & 01:33:34.4 +30:45:07.7 & 108 & 0.84 & Half-Nyq.\
5 & 01:33:50.9 +30:34:51.2 & 108 & 0.96 & Half-Nyq.\
6 & 01:33:50.9 +30:38:23.6 & 382 & 0.71 & Nyquist\
7 & 01:33:50.9 +30:41:42.2 & 324 & 0.84 & Nyquist\
8 & 01:33:50.9 +30:45:14.5 & 323 & 0.87 & Nyquist\
9 & 01:34:06.8 +30:34:51.2 & 112 & 0.95 & Half-Nyq.\
10 & 01:34:06.8 +30:38:23.6 & 436 & 0.73 & Nyquist\
11 & 01:34:06.8 +30:41:42.2 & 108 & 0.96 & Half-Nyq.\
12 & 01:34:06.8 +30:45:07.7 & 108 & 0.97 & Half-Nyq.\
13 & 01:34:06.8 +30:48:40.0 & 320 & 0.71 & Nyquist\
14 & 01:34:22.8 +30:48:40.0 & 163 & 0.46 & Nyquist\

We observed from 2005 February 9 to February 17 whenever M33 was at suitable elevation, excluding time lost to maintenance and bad weather. In that time, we completed observations of 14 of the $(3.5')^2$ fields; six are Nyquist sampled and the remaining eight are half-Nyquist sampled (see Figure \[obspos\] and Table \[fieldprops\] for details). Since each element of the BEARS receiver is a double-sideband SIS receiver, we calibrated the array against observations with a single-sideband receiver to obtain an accurate intensity scale. At the latitude of the Nobeyama Radio Observatory, M33 transits above the elevation limit of the 45-m telescope ($80^{\circ}$), rendering it unobservable for 90 minutes in the middle of the observations. During that time we calibrated the efficiency of each pixel of the BEARS receiver by comparing observations of NGC 7538 ($\alpha_{2000}=23^\mathrm{h} 11^\mathrm{m} 45\fs 5$, $\delta_{2000}=+61^{\circ} 28' 09''$) made with each element of the array to those made of the same position with the single-sideband S100 receiver. The intensity scaling factor for each BEARS pixel relative to the S100 receiver was the ratio of the integrated intensities in the respective observations.
The BEARS spectra were multiplied by these scaling factors averaged over the course of the observing run. The scaling factors ranged from 1.7 to 3.6 with a median value of 2.4. Daily variation of the scaling factors was $\lesssim 20\%$ for all elements; this variation dominates the uncertainty in the overall flux calibration. This scaling places BEARS spectra onto the S100 intensity scale, and the spectra must be scaled up by a further factor of 2.56 ($=1/\eta_{mb}$) to correct for the main beam efficiency. With all corrections, the noise level of the final map is 85 mK on the $T_A^*$ scale.

Data Reduction {#redux}
==============

Nobeyama Radio Observatory Data
-------------------------------

Inspection revealed several pathologies in the NRO/BEARS data: (1) pointing errors induced by high wind speeds, (2) baseline variations, and (3) interloper signals in the base band. Because of the data volume — 62,300 raw spectra — automated processing was needed to correct these problems. Scans taken when wind velocities exceeded 7.5 m s$^{-1}$ were rejected in their entirety. Spectral baselines were established for every spectrum by a fourth-order b-spline fit with break points established every 100 km s$^{-1}$, ignoring regions within 32 channels of the beginning and end of the spectrum, within 20 channels of a bad correlator section near channel 300, and within 30 km s$^{-1}$ of the line-of-sight velocity at the observed and reference positions in M33. These exclusions prevented the routine from fitting a baseline to actual CO emission in M33[^1]. The baseline fit accounts for low-order baseline ripples of arbitrary shape but cannot account for high-order “standing wave” patterns in the resulting data caused by interloper signals in the base band. Spectra affected by such signals can be readily identified because their rms residual from the b-spline fit is dramatically larger than for the remainder of the observations.
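The baseline scheme just described can be sketched as follows. The knot spacing, edge exclusion, and velocity windows follow the text, but this is an illustrative sketch, not the pipeline code; `fit_baseline` and its arguments are hypothetical names.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_baseline(vel, spec, line_velocities, edge=32, window=30.0):
    """Fit and subtract a cubic (order-4) B-spline baseline with break
    points every 100 km/s, ignoring the band edges and +/- `window` km/s
    around each known line velocity (observed and reference positions)."""
    mask = np.ones(vel.size, dtype=bool)
    mask[:edge] = False          # drop channels near the band edges
    mask[-edge:] = False
    for v0 in line_velocities:   # exclude windows around real emission
        mask &= np.abs(vel - v0) > window
    knots = np.arange(vel[mask].min() + 100.0, vel[mask].max() - 100.0, 100.0)
    spline = LSQUnivariateSpline(vel[mask], spec[mask], knots, k=3)
    resid = spec - spline(vel)
    # rms over baseline channels; large values flag standing-wave spectra
    return resid, float(np.sqrt(np.mean(resid[mask]**2)))
```

The returned rms can then be compared against a multiple of its median over all spectra to flag the standing-wave cases mentioned above.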
The small number of spectra for which the rms residual exceeded 4 times the median were rejected. Table \[fieldprops\] reports the fraction of scans that remain after wind-speed and baseline rejection. The acceptable spectra were Hanning smoothed and down-sampled to a final channel width of $2.6$ km s$^{-1}$ before being gridded into a data cube using a Gaussian smoothing kernel with a FWHM of $20''$.

Merging the Interferometer and Single Dish Maps
-----------------------------------------------

The first step in the analysis was to merge the FCRAO (HCSY) and BIMA (EPRB) data sets. As noted in §1, the interferometer map recovers only $\sim$ 20% of the flux found in the single-dish map, though it detects nearly all the GMCs in the galaxy. The 14-m FCRAO dish recovers all flux at the small spatial frequencies not sampled by the BIMA observations, which have minimum baselines $\lesssim$ 7 m. Therefore, the two maps can be combined to produce a single high-resolution, fully sampled map. The power in the overlapping range of spatial frequencies shows that the flux scales of the single-dish and interferometer maps are consistent within their uncertainties, and no relative flux scaling is required. To assess systematic uncertainties in the resulting map, the data sets were combined using two techniques: (1) image-domain linear combination as described by @lincom and implemented by @song2 and (2) Fourier-domain combination [@immerge] implemented in MIRIAD [@miriad]. The two methods gave results indistinguishable within the uncertainties. Figure \[comap2\] shows the result produced by the linear method, which is used throughout this paper. This map will be called “the merged map.” Its final resolution is $13''\times 2.03\mbox{ km s}^{-1}$ (a projected beam size of 53 pc), and its median noise level is 240 mK.
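The Fourier-domain combination can be sketched as a standard "feathering" operation: weight the single-dish transform by its (Gaussian) beam response and the interferometer transform by the complement. This is a minimal sketch of the idea, assuming both maps are on the same grid and flux scale, not the MIRIAD implementation.

```python
import numpy as np

def feather(int_map, sd_map, sd_beam_fwhm_pix):
    """Fourier-plane combination of an interferometer map and a
    single-dish map: low spatial frequencies come from the single dish,
    high spatial frequencies from the interferometer."""
    ny, nx = int_map.shape
    ky = np.fft.fftfreq(ny)[:, None]      # cycles per pixel
    kx = np.fft.fftfreq(nx)[None, :]
    sigma = sd_beam_fwhm_pix / (2.0*np.sqrt(2.0*np.log(2.0)))
    # FT of a Gaussian beam is a Gaussian in spatial frequency
    w_sd = np.exp(-2.0*(np.pi*sigma)**2*(kx**2 + ky**2))
    comb = np.fft.ifft2(w_sd*np.fft.fft2(sd_map)
                        + (1.0 - w_sd)*np.fft.fft2(int_map))
    return comb.real
```

Because the weights sum to one at every spatial frequency, the zero-spacing (total) flux of the result is set entirely by the single-dish map.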
The boundaries of the FCRAO and BIMA maps do not exactly coincide, but the region of overlap (shown in Figure \[comap2\]) covers the majority of the galaxy to a radius of $R_{gal}\approx 5.5$ kpc.

Combining All Data
------------------

The NRO 45-m map shows no systematic differences from the merged map. Figure \[datacomp\] compares the two maps in the region of overlap after convolving the merged map to match the $20''$ resolution of the NRO map. The noise in the NRO map shows greater variation with position than in the merged map, and therefore many positive and negative outliers are seen in the pixel comparison plot (Figure \[datacomp\], left). Despite that, the concentration of points near the locus of equality shows agreement in the relative calibration between the two maps. The spectral comparison of the emission from the central $7'$ (1.7 kpc) of the galaxy (Figure \[datacomp\], right) highlights the agreement in the flux calibration across the observing band. The plot also shows that the combination with the BIMA data does not significantly affect the amount of flux found in the merged data set. The agreement between all three data sets is quite good except near $V_{LSR}=-105$ km s$^{-1}$, where emission in the reference position of the NRO 45-m data produces a negative, though irrelevant, feature in the data. Because the two data sets agree well over their common area, a final map of the central region of M33 requires simply averaging the two data cubes. The average was weighted by the inverse square of the noise estimate for each pixel in the data cube, where the noise was determined after iteratively applying the Chauvenet criterion [@taylor] to remove high-significance outliers (see EPRB for details). The final data cube has a typical rms noise temperature of 60 mK and a resolution of $20''\times 2.6$ km s$^{-1}$, corresponding to a projected resolution of 81 pc.
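The outlier rejection and weighting used in the averaging step can be sketched as follows; `chauvenet_clean` and `weighted_average` are illustrative helpers, not the survey pipeline.

```python
import numpy as np
from scipy.special import erfc

def chauvenet_clean(x):
    """Iteratively drop values whose two-sided Gaussian tail probability,
    multiplied by the sample size, falls below 0.5 (Chauvenet's criterion)."""
    x = np.asarray(x, float)
    while x.size > 2:
        mu, sig = x.mean(), x.std()
        p = erfc(np.abs(x - mu)/(sig*np.sqrt(2.0)))  # two-sided tail prob.
        keep = x.size*p >= 0.5
        if keep.all():
            break
        x = x[keep]
    return x

def weighted_average(values, noise):
    """Inverse-variance weighted average of co-spatial measurements."""
    w = 1.0/np.asarray(noise, float)**2
    return np.sum(w*np.asarray(values, float), axis=0)/np.sum(w)
```

In practice the cleaned samples feed the per-pixel noise estimate, and the two cubes are then averaged with the inverse-square noise weights described above.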
At this angular resolution, the noise level of the BIMA+FCRAO map is 100 mK, so combining the data yields a 40% improvement in sensitivity. The NRO+FCRAO+BIMA map is shown in Figure \[comap\]; it will be referred to here as “the combined map.” Because of variable integration time and atmospheric conditions, the noise varies spatially across the map. Figure \[noisemap\] shows the 1$\sigma$ rms noise level as a function of position for the final data cube. Table \[cubecomp\] compares the properties of the various maps.

[cccc]{}
Resolution & $45''\times 1.0\mbox{ km s}^{-1}$ & $13'' \times 2.03~\mbox{km s}^{-1}$ & $20'' \times 2.6~\mbox{km s}^{-1}$\
$V_{LSR}$ Coverage (km s$^{-1}$) & $[-340,0]$ & $[-340, 0]$ & $[-300, -49]$\
Maximum $R_{gal}$ (kpc) & 5.5 & 5.5 & 2.0\
Mass Sensitivity ($M_{\odot}$) & $7.6\times 10^4$ & $5\times 10^4$ & $3.5\times 10^4$\
Column Density Sensitivity (cm$^{-2}$) & $8.2\times 10^{19}$ & $7.5\times10^{20}$ & $2.2\times 10^{20}$\
Central Flux (K km s$^{-1}$) & $1.48\pm 0.02$ & $1.47\pm 0.02$ & $1.50\pm 0.01$\

Revisiting the Giant Molecular Clouds of M33 {#gmcs-again}
============================================

EPRB presented the first complete catalog of GMCs in M33 based on the BIMA survey data alone. The $13''$ (50 pc) resolution of the interferometer data is ideal for identifying individual GMCs though insufficient for measuring cloud sizes. The combination of the FCRAO and BIMA maps is even better for identifying GMCs because the data are not subject to many of the pathologies associated with interferometer-only data, such as spatial filtering and non-linear flux recovery. In addition, the sensitivity of the merged map is marginally improved over the BIMA data alone because FCRAO and BIMA sample some of the same spatial frequencies. As shown in Figure \[comap\], the molecular gas is distributed in many complexes with low surface brightness filaments connecting the complexes, which are also connected in velocity space.
The bright CO complex in the north of the map contains the most massive molecular cloud in the galaxy (Cloud 1 of EPRB). We have generated a new catalog of GMCs in M33 from the merged data set. The combined map is less suitable for the identification of GMCs since its coarser spatial and velocity resolution matches GMCs poorly, resulting in a higher completeness limit in the resulting catalog. The merged catalog is restricted to the region of overlap between the BIMA and FCRAO maps, where all spatial information is recovered. The catalog was generated using a contouring method, as used by EPRB. The first step was to identify all regions in the data set with emission above 2.7$\sigma_{rms}$ in two adjacent, independent channels. For each such detection, all pixels with significance larger than 2$\sigma_{rms}$ and connected to the 2.7$\sigma_{rms}$ “core” became part of the cloud candidate. For each candidate, we calculated the probability ($P$) of drawing the most significant spectrum from a random distribution of noise (see Appendix A of EPRB). We then multiplied by the number of independent measurements in the catalog region ($N\approx 2\times 10^{6}$) and considered $-\ln (NP)$ as the statistic of merit. The final catalog contains all candidates with $-\ln (NP) > 8.0$ and with CO velocity centroids within 20 km s$^{-1}$ of the H I velocity centroid [@deul] at that location. The selection criteria differ from those of EPRB, who included high-significance detections across the entire bandpass and found four clouds at large velocity separation from the atomic gas. Subsequent observations of those apparent clouds showed that they are not real CO emitters but artifacts attributable to malfunctions in the BIMA correlator (Rosolowsky, Blitz & Engargiola, unpublished). We have selected a 20 km s$^{-1}$ separation from the H I velocity since this velocity range contains 90% of the CO flux (Figure \[cocdf\]) while containing no spurious sources.
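The two-threshold identification step can be sketched with a connected-components pass. The thresholds follow the text, but the function below is an illustration (velocity assumed on axis 0), not the catalog code.

```python
import numpy as np
from scipy import ndimage

def find_clouds(cube, rms, core_sig=2.7, grow_sig=2.0):
    """Grow cloud candidates from high-significance cores: a candidate is
    any connected >grow_sig region containing emission above core_sig in
    two adjacent velocity channels."""
    snr = cube / rms
    c = snr > core_sig
    pair = c[1:] & c[:-1]            # core present in two adjacent channels
    core = np.zeros_like(c)
    core[1:] |= pair
    core[:-1] |= pair
    grow = snr > grow_sig
    labels, _ = ndimage.label(grow)  # 3D connected components
    keep = np.unique(labels[core])
    keep = keep[keep != 0]           # 0 is the background label
    return np.isin(labels, keep), keep.size
```

Single-channel spikes above the core threshold are rejected because they never satisfy the adjacent-channel requirement.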
Table \[catalog\] gives the new cloud catalog, wherein objects are named “M33GMC.” Clouds are listed in order of increasing galactocentric radius. The cloud masses are based on a CO-to-H$_2$ conversion factor of $2\times 10^{20}\mbox{ cm}^{-2} (\mbox{K km s}^{-1})^{-1}$ and the previously-quoted distance of 840 kpc to M33 [@Freedman2001][^2]. The conversion factor is appropriate for the GMCs in M33 and does not appear to vary significantly over the inner 4 kpc of the galaxy [@rpeb03]. The method for calculating mass differs from that of EPRB; masses here are based on the method of @props. This method attempts to account for the emission not included above the 2$\sigma_{rms}$ cutoff in the data cube by extrapolating the emission profile to the 0 K km s$^{-1}$ intensity contour. Accounting for this emission roughly doubles the derived cloud masses. Cloud major and minor axes and orientations are also given in Table \[catalog\]; however, the sizes are not corrected for beam convolution and are thus upper limits. Deconvolution of the cloud sizes is unstable for large beam sizes (50 pc) and low signal-to-noise values [@props]. It is possible that the catalog method adopted in this study will blend individual GMCs into a larger complex that, at higher resolution, would be identified as separate clouds. In their study of this problem, EPRB found many instances of cloud groups identified as distinct in the high-resolution work of @ws90 that were merged in the EPRB catalog. The same effects must be present in the current catalog and will affect the studies of the mass distributions in §\[massspec\]. In general, appropriately decomposing a blend splits an individual cloud into multiple clouds of lower mass, steepening the mass distribution and reinforcing any truncations in the distribution (provided the mass of the original cloud was near the truncation mass).
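The extrapolation to the 0 K km s$^{-1}$ contour mentioned above can be illustrated for a single cloud: measure the flux above a ladder of contour levels and extrapolate linearly to zero intensity (for a 2-D Gaussian cloud this recovers the total flux exactly). This is a sketch of the idea, not the full moment method of @props.

```python
import numpy as np

def extrapolate_flux(intensity, edge=2.0, rms=1.0):
    """Flux above a ladder of contour levels, linearly extrapolated to
    the T = 0 contour; `edge*rms` is the identification boundary."""
    levels = np.linspace(edge*rms, 0.9*intensity.max(), 10)
    flux = np.array([intensity[intensity >= t].sum() for t in levels])
    slope, intercept = np.polyfit(levels, flux, 1)
    return float(intercept)   # estimated flux at the zero-intensity contour
```

For a Gaussian intensity profile, the flux above a level $t$ falls linearly with $t$, which is why the linear extrapolation recovers the flux lost below the identification contour.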
We proceed with this caveat in mind but make no effort to separate the catalog objects into smaller units, since decomposing blends is inaccurate at this physical resolution [50 pc; @props]. The new catalog contains 149 clouds to an estimated completeness limit of $1.3\times 10^{5}$ [$M_{\sun}$]{}. The limit is based on linearly scaling the completeness limit derived from EPRB’s detailed examination of the BIMA-only survey to the sensitivity limit of the BIMA+FCRAO data. The noise level of the merged map does not vary significantly in the region where molecular clouds are found, although it does increase in the outer regions of the galaxy. The original catalog of EPRB contained 144 clouds (omitting the spurious high-velocity outliers) with a total mass of $1.7\times 10^7$ [$M_{\sun}$]{}. The total mass of clouds in the new catalog would be $2.0\times 10^7$ [$M_{\sun}$]{} using the same mass formula as EPRB. This increase in the mass estimate stems from inclusion of the low surface brightness emission that is not detected in the interferometer-only map. The larger increase in the catalog GMC mass, to $3.9\times 10^7$ [$M_{\sun}$]{}, comes from accounting for the flux below the 2$\sigma_{rms}$ limit. This total mass accounts for about 1/3 of all the mass implied by emission in the merged map, $1.2\times 10^8$ [$M_{\sun}$]{}. (This differs from the value reported by HCSY in part because we include helium and adopt a different distance and CO-to-H$_2$ conversion factor.) The remaining 2/3 of the emission is from sources that are of smaller mass than GMCs. This is quite a contrast to the inner Milky Way, where GMCs comprise 80% of the molecular mass [@wm97]. The amount of CO emission from low mass clouds can be examined by averaging the data in annuli of galactocentric radius. Figure \[sdcompare\] compares the radial surface density profile for GMCs ($M>10^5~M_{\odot}$) and for all CO emission.
Within 1 kpc of M33’s center, GMCs comprise $\lesssim$60% of the total molecular mass, and the fraction decreases rapidly with radius to $<$10% near $R_{gal}\approx 4.0$ kpc. Beyond that radius, GMCs vanish almost entirely: only a few low-mass clouds are found, although there is plenty of molecular gas. The dearth of GMCs beyond 4 kpc is real: the catalog completeness limit is only 10% higher in mass at $R_{gal}=5$ kpc than at $R_{gal}=0$. Thus there are significantly fewer GMCs per unit total molecular mass beyond $R_{gal}=4.1$ kpc than within that radius. The radius 4 kpc corresponds to the outer ends of the major spiral arms in M33 [@hs80]. In addition, the neon abundance shows a sharp drop at this radius [@m33-neon]. However, star formation as traced by H$\alpha$ continues out to $R_{gal}=6.7$ kpc [@k89], and the disk is gravitationally unstable to that radius [@c03]. Whatever the explanation for the changing conditions at $R_{gal}>4$ kpc, star formation in the outer part of M33 must occur in molecular clouds with $M\la1.3\times10^5$ [$M_{\sun}$]{}. It is worth noting that this effect confounded the total mass estimate of EPRB, who extrapolated the GMC mass to the total mass of the galaxy based on the fraction of emission in GMCs at the center of the galaxy. The location of the low mass clouds in the inner region of M33 is shown directly in the combined map (Figure \[deepmap\]) with its 50% lower rms noise. The sizes of objects identified in the merged map are indicated with ellipses, representing the size and orientation of the objects as they would appear in the combined map after accounting for the differing resolutions and sensitivities. Emission beyond the boundaries of the identified GMCs arises from low mass clouds. This emission forms a filamentary structure, generally surrounding and linking the high mass GMCs. The GMCs themselves appear to define the compact peaks of the lower surface brightness structures.
The low mass clouds are clearly correlated with the high mass GMCs, and we quantify this relationship by measuring the amount of flux found within a given separation from [*the edge*]{} of a catalog GMC. The results are shown in Figure \[localized\], which plots the cumulative distribution of CO flux as a function of projected separation from the edge of the catalog GMCs shown in Figure \[deepmap\]. The curve rises from $\sim 30\%$ at zero separation (the amount of flux in GMCs) to over 90% at 100 pc separation. The dotted curve indicates the fraction of the area found within the same separation, which is the curve that the flux distribution would follow if there were no spatial correlation between low mass and high mass clouds. The excess above the dotted curve shows the spatial correlation of the emission. There is no evidence for a galaxy-spanning, diffuse molecular gas component, confirming the conclusions of @rpeb03.

Variations in the GMC Mass Distribution {#massspec}
=======================================

Radial Variation {#radvar}
----------------

The cloud mass distribution varies with galactocentric radius in a surprising way. Figure \[sdcompare\] shows the absence of GMCs beyond 4.1 kpc, but the most massive clouds ($M>8\times10^5$ [$M_{\sun}$]{}) are also absent inside a radius of 2.1 kpc. Indeed, all six clouds with $M>8\times10^5$ [$M_{\sun}$]{} are at $2.1\la R_{gal}\la2.5$ kpc. Evidently something in the inner galaxy eliminates the very largest clouds, and something in the outer galaxy eliminates GMCs nearly altogether. To examine the cloud distribution more quantitatively, we consider two regions divided at $R_{gal}= 2.1$ kpc. This division yields approximately equal GMC mass in each region. Figure \[inout\] shows the mass functions for the GMCs in these two regions. The mass distributions are significantly different.
In the outer GMC region (the annulus $2.1 < R_{gal} < 4.1$ kpc), the GMC mass distribution follows a power law from high mass down to the completeness limit of the catalog. In the inner region, however, despite the existence of many more GMCs (about three times more per unit area), the mass distribution is truncated at the high end and may also fall slightly below a power-law distribution at the low end. A two-sided KS test [@numrec] shows there is only a 0.8% probability that the two samples could be drawn from a single distribution while exhibiting such a large difference. The cumulative mass distribution functions can be characterized quantitatively by truncated power-law functions of the form: $$N(M'>M)= N_0 \left[\left(\frac{M}{M_0}\right)^{\gamma+1}-1\right], \label{cumdist}$$ where $N_0$, $M_0$, and $\gamma$ are parameters, and the fit is restricted to clouds above the completeness limit of $1.3\times 10^5~M_{\odot}$. This functional form is adopted for comparison with the GMC population in the Milky Way. The parameter $\gamma$ is the index of the mass distribution. If $\gamma<-2$, most of the mass is found in low mass clouds; if $\gamma>-2$, the high mass clouds dominate, as is the case for the inner Milky Way. As long as $N_0$ is significantly larger than unity, the mass distribution has a truncation near $M=2^{1/(\gamma+1)} M_0$. In the inner Milky Way, the mass distribution is commonly expressed as a differential mass distribution, which is well characterized by a power law with a truncation at large cloud masses [$M_0\sim 6\times 10^6~M_{\odot}$; @wm97 and references therein]: $$\label{diff} \frac{dN}{dM}\propto \left(\frac{M}{M_0}\right)^{\gamma},~ M<M_0.$$ A similar form of the distribution, albeit with different slopes, has been reported in the outer Milky Way [e.g. @hc01], the LMC [@nanten-mspec], M33 (EPRB), and other systems farther afield.
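A two-sided KS comparison of the kind used above can be run with `scipy.stats.ks_2samp`. The mass samples here are lognormal stand-ins for illustration, not the cataloged masses.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical cloud-mass samples (in Msun) for two regions of a galaxy
inner = np.exp(rng.normal(12.0, 0.5, 120))
outer = np.exp(rng.normal(13.0, 0.5, 120))
stat, p = ks_2samp(inner, outer)   # two-sided by default
```

A small $p$ rejects the hypothesis that the two regions share a single underlying mass distribution, which is how the 0.8% figure above should be read.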
Fitting the cumulative distribution instead of the differential one is a better way to describe the data because the cumulative function is not affected by biases introduced by binning, and it can account for uncertainties in the cloud masses [@mspec]. Table \[mspectable\] gives the best-fit coefficients for the M33 data, and the resulting distributions are plotted in Figure \[inout\]. As noted above, there is no significant truncation of the mass distribution for the outer galaxy clouds, and Table \[mspectable\] reports coefficients for a simple power-law fit. For comparison, Table \[mspectable\] also includes the fit to the cumulative mass distribution of all the clouds in the catalog. Both the fits and Figure \[inout\] indicate that the primary difference between the two distributions is the truncation at the upper end of the mass distribution of the inner galaxy clouds. No such truncation is apparent in the outer portion of the galaxy, and including one in the fit does not appreciably change the derived values or goodness of fit. In the mass range where the inner galaxy distribution fits a power law, $1.3 < M/(10^5~M_{\odot})< 5.0$, the two distributions are indistinguishable. While the inner galaxy has a larger fraction of its molecular gas in GMCs (Figure \[sdcompare\]), the formation or survival of the highest-mass GMCs ($\gtrsim 7\times 10^{5}~M_{\odot}$) is suppressed. One possibility is that high mass clouds are sheared apart by galactic tides in this region; this is unlikely, however, because clouds with masses of $10^7~M_{\odot}$ would be stable[^3] according to the criteria of @tides.
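Fitting Eq. (\ref{cumdist}) can be sketched as a least-squares fit to cumulative counts. The numbers below are illustrative (masses in units of $10^5~M_\odot$), not the Table \[mspectable\] fits, which also propagate mass uncertainties.

```python
import numpy as np
from scipy.optimize import curve_fit

def cum_trunc(M, N0, M0, gamma):
    """Cumulative truncated power law, N(M' > M), as in Eq. (cumdist)."""
    return N0*((M/M0)**(gamma + 1.0) - 1.0)

# Recover parameters from an ideal cumulative distribution
# (illustrative values: N0 = 14, M0 = 10, gamma = -2)
M = np.linspace(1.5, 9.0, 50)
N = cum_trunc(M, 14.0, 10.0, -2.0)
popt, _ = curve_fit(cum_trunc, M, N, p0=(5.0, 15.0, -1.5), maxfev=10000)
```

The three parameters are identifiable because the model is a pure power law plus a constant offset; in practice the fit is restricted to masses above the completeness limit.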
[ccccc]{}
All Clouds & $-2.0 \pm 0.2$ & $11 \pm 2$ & $14 \pm 7$\
Outer galaxy & $-2.1\pm 0.1$ & & $34\pm 14$\
Inner galaxy & $-1.8 \pm 0.2$ & $21 \pm 11$ & $7.4 \pm 0.5$\
Inner galaxy & $-2.4\pm 0.2$ & & $23 \pm 5$\
North & $-2.0\pm 0.2$ & $5.4\pm 4.3$ & $34\pm 12$\
North & $-2.25\pm 0.15$ & & $34\pm 11$\
South & $-1.4 \pm 0.2$ & $41 \pm 11$ & $6.4\pm 0.5$\
South & $-2.4 \pm 0.3$ & & $20\pm 16$\
Arm & $-1.9 \pm 0.2$ & $5 \pm 4$ & $16 \pm 5$\
Interarm & $-2.2 \pm 0.2$ & $4 \pm 4$ & $13 \pm 4$\

The mass truncation and differences in the slope of the distribution may be affected by several aspects of the observations and the cataloging procedure, but none of the likely effects would produce a truncation where none is present. @ws90 have already reported a truncation in the mass distribution of GMCs, but it is difficult to assess their results because their survey is spatially incomplete and selected targets based on prior dust and CO observations. Since the merged data span the entire galaxy, spatial biases will not affect the mass distribution. The @ws90 data had significantly higher resolution, and several of the clouds considered separate in their study are not resolved into individual clouds here (or in EPRB; see their §3.5 and Figures 13 and 14). Blending acts to mask a truncation, since separate small clouds appear as a single cloud of higher mass, so the observed truncations cannot be an artifact of blending. The correction for emission found below the $2\sigma$ boundary of the clouds will also change the mass distribution slightly. Because flux loss primarily affects clouds near the completeness limit (most of the emission in these clouds lies below the identification contour), correcting for the flux loss moves these clouds to higher mass, thereby steepening the spectrum. Since this is a correction for an instrumental effect, making it likely improves estimates of the mass distribution.
Finally, regardless of the precise nature of the systematic effects on the measured mass distributions, this measurement is [*differential*]{} between two regions cataloged with a uniform method. As such, the reported differences must reflect some underlying difference in the character of the molecular gas in the two regions. Similar variations in the GMC mass distribution as a function of position in a galaxy have not been directly quantified in previous work, but there is some evidence for such variations in the Milky Way. @mspec argued for different mass distribution functions between the inner and outer Galaxy. This variation within the Milky Way is uncertain, however, because differences in cloud cataloging methods can produce apparent differences in the mass distribution even when the underlying cloud populations are the same. In addition, large arm-interarm contrasts observed in many spiral galaxies suggest that the mass distribution of GMCs changes azimuthally (see also §\[sparm\]).

North-South Variation {#northsouth}
---------------------

The right-hand panel of Figure \[datacomp\] implies that the approaching and receding sides of the galaxy have a significant flux asymmetry. The northern (approaching) portion of the galaxy has a total flux of $8200\mbox{ Jy km s}^{-1}$ whereas the southern (receding) half of the galaxy has a flux of $7800\mbox{ Jy km s}^{-1}$, a difference of $5\%$. The asymmetry is also seen in the H I (where the approaching half of the galaxy has 13% more flux than the receding half) and the H$\alpha$ (5%) distributions [based on the images of @lgs-m31m33], making M33 slightly lopsided. Motivated by this flux asymmetry, we also consider the variation in the GMC mass distributions between the northern and southern halves of the galaxy.
We define the southern (receding) portion of the galaxy as the portion of the disk with galactocentric azimuth within $90^{\circ}$ of the kinematic major axis and the northern region as the complement of the southern region. We fit truncated power laws to the two distributions and report the derived parameters in Table \[mspectable\]. As with the variation between the inner and outer galaxy, there is a substantial difference between the northern and southern mass distributions, particularly at high masses. The southern portion of the galaxy lacks the high mass clouds that are present in the northern portion. Both mass distributions show some evidence for truncation at high masses, though the evidence is relatively weak for clouds in the northern portion of the galaxy. While it would be interesting to separate the variation in the mass distributions into four separate regions (inner vs. outer and northern vs. southern), we lack sufficient numbers of clouds for reliable fits. Since the differences between the inner and outer regions of the galaxy are more pronounced, we focus our analysis on those differences, though with better data, the north-south asymmetry could be explored.

Arm-Interarm Variation {#sparm}
----------------------

For galaxies with well-defined spiral structure, the spiral arms contain nearly all of the molecular material in the galaxy [@m51-co; @song; @iram-m31-aa]. The Milky Way appears to be such a galaxy [e.g. @dht01; @mw-spiral]. Molecular gas in the interarm regions of spiral galaxies appears to have different properties than that found in the arms [@n6946-walsh; @m51-1213; @mw-spiral], though @gmas-5055 suggest that the properties of individual GMCs may not vary significantly. While our observations lack the resolution to assess whether GMC properties are substantially different in the interarm regions of M33, we can investigate whether there is variation in the mass distributions of GMCs in and out of the spiral arms.
Unfortunately, the spiral arms in M33 are not nearly as well defined as they are in the Milky Way. @hs80 identified ten spiral arms in the flocculent structure of M33, with two arms being dominant, and attributed the two primary arms to spiral density waves. This conjecture was bolstered by observations in the near infrared, which revealed that the spiral structure of the old stellar population, and hence the stellar mass, is dominated by these two “grand design” spiral arms [@m33-nir]. The remaining eight arms are mainly located in the outer galaxy. They were primarily defined by the presence of young stellar associations but also correlate well with narrow filaments of atomic gas [@deul; EPRB]. @hs80 attributed these eight secondary arms to an unknown mechanism other than density waves. For present purposes, we have defined the locations of the M33 spiral arms using a 3.6 $\mu$m image of the galaxy taken with the IRAC instrument of the [*Spitzer Space Telescope*]{} [@m33-irac]. A small correction was made for dust emission in the 3.6 $\mu$m image, leaving emission primarily from the old stellar population. An azimuthally-symmetric exponential disk was fit to the result. The spiral arms become apparent as regions of positive surface brightness after subtracting the model disk. Detailed analysis will be presented elsewhere, but Figure \[comap2\] shows the spiral arm locations with respect to the CO emission. Figure \[inoutarm\] compares the mass distributions of GMCs in and out of the grand design spiral arms of M33. Only slight differences are seen. The distributions in both arm and interarm regions show a truncation at high mass, but massive GMCs are found in both environments. The truncation levels and masses are roughly equal. Although a two-sided KS test suggests that the distributions have different shapes, the difference is barely significant ($P_{KS}=0.04$).
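The disk-subtraction step used to define the arms can be sketched as follows. This is an illustrative helper (deprojection and the dust correction are assumed already applied; `arm_mask` and its defaults are hypothetical), not the analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_disk(r, I0, h):
    """Azimuthally symmetric exponential disk."""
    return I0*np.exp(-r/h)

def arm_mask(image, r, n_bins=30):
    """Fit an exponential disk to the azimuthally averaged profile and
    flag pixels with positive residuals as (candidate) spiral arms."""
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    prof = np.array([image.ravel()[idx == i].mean() for i in range(n_bins)])
    rc = 0.5*(bins[:-1] + bins[1:])
    (I0, h), _ = curve_fit(exp_disk, rc, prof, p0=(prof[0], r.max()/4.0))
    return image - exp_disk(r, I0, h) > 0.0
```

In practice one would also smooth the residual map and impose a surface brightness threshold before calling a region an arm.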
Unlike the sharp cutoff in the inner galaxy mass distribution, the arm and interarm mass distributions lack a specific feature that sets them apart. Table \[mspectable\] gives the parameters of the truncated power law fits. In addition to comparing arm-interarm mass distributions of GMCs, we examined arm-interarm variations in the column density of molecular material. Figure \[arminter\] shows the ratio of the average surface density of CO found in spiral arms compared to that in interarm regions as a function of galactocentric radius. In the inner part of the galaxy, the ratio has a roughly constant value of 1.5, comparable to other flocculent galaxies (NGC 5055: 2–3, [@n5055-arminter]; NGC 6946: $\sim 1.4$, [@n6946-walsh]). The value is significantly smaller than in galaxies with stronger grand-design structure such as M31, where the ratio approaches 20 [@iram-m31-aa], or M51, where contrasts of a factor of 17 have been observed [@gb-m51]. In M33, much of the “interarm” molecular material is associated with the secondary spiral arms identified by @hs80. However, this is a natural consequence of star formation associated with GMCs. Wherever GMCs exist, one would expect newly formed stars to exist as well and create secondary “arms.” The strong correlation between GMCs and high mass stars (as traced by H II regions) was established in EPRB. Regardless of the visual appearance, in the inner regions of M33, the CO distribution is only weakly coupled to the stellar mass distribution, as evidenced by the small arm/interarm variation. The increase in the arm-interarm contrast at large galactocentric radius appears statistically significant. However, at these distances, the spiral density wave becomes poorly defined, and the derived arm locations may be following enhancements in the stellar light distribution associated with star formation in molecular clouds, thereby increasing the arm-interarm contrast.
Carefully determining the location and amplitude of the spiral pattern present in the old stellar population will clarify these results.

Understanding Variations in GMC Mass Distributions {#variation}
--------------------------------------------------

The results of §§\[gmcs-again\] and \[massspec\] demonstrate that the mass distribution of GMCs changes significantly across the face of the galaxy. The highest mass GMCs are found at intermediate galactic radii (2 kpc $\la R_{gal} \la$ 4 kpc) and no clouds with masses larger than $8\times 10^5~M_{\odot}$ occur outside this region. These changes are circumstantially related to variations in the galaxy roughly demarcated at these radii ($\sim$2 and $\sim$4 kpc). To describe the galaxy in general terms, we divide it into three sections at small ($R_{gal}<2$ kpc), intermediate ($2<R_{gal}<4$ kpc) and large ($R_{gal}>4$ kpc) galactic radii. At small galactocentric radii, the spiral structure is not as well-defined as it is at intermediate radii [@hs80; @m33-nir]. At large radii, the disk of the galaxy develops a warp [@cs97; @cs00] and ultimately becomes gravitationally stable [$R_{gal}>6$ kpc; @c03]. The divisions between these gross features of the galaxy correspond roughly to the regions where we see changes in the molecular cloud mass distribution. While this correlation is suggestive, the likely regulator of the molecular cloud masses is the structure of the atomic gas from which the GMCs must have formed [@rpeb03; @psp5]. The GMCs are found at the peaks of the atomic gas distribution (EPRB), a generic feature of the ISM in galaxies dominated by atomic gas [@psp5]. In M33, the atomic gas is distributed in what appears to be a filamentary network [@deul].
The character of this network varies with radius in the galaxy, consisting of small, relatively faint filaments in the inner galaxy; large (kiloparsec-scale) filaments associated with the spiral arms at intermediate radii; and large, fainter filaments at large radius. We quantify this description by decomposing the bright filamentary network into a set of clouds and examining the “cloud” properties. While we adopt a procedure similar to the segmentation that identifies GMCs, such “clouds” do not necessarily represent distinct physical objects. The segmentation is a way to measure the characteristic sizes and masses of objects in the atomic ISM of M33 across the galaxy. To generate a catalog of objects, we apply a three-tiered brightness cut at $T_A=64,48,$ and $32$ K to the map of @deul. Connected regions of pixels are identified at each brightness cut. Regions that contain no pixels from higher brightness cuts are identified as new clouds. Regions containing pixels from higher levels have their new pixels assigned to the closest predefined region. Here “closest” means the region that was identified at a higher level to which a pixel is connected by the shortest path contained entirely within the new region. The algorithm is similar to CLUMPFIND [@clumpfind], but the coarse contouring levels prevent every local maximum from being identified as a cloud. The shortest-path criterion is necessary for decomposing the emission since the “clouds” blend into a continuous network at the low brightness thresholds. Figure \[deepmap\] suggests that a similar criterion would be necessary for identifying GMCs in a CO map with higher sensitivity. We reiterate that the decomposition is only a means to characterize the sizes of structures in the atomic gas across the galaxy in a uniform manner.
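The tiered decomposition can be sketched with `scipy.ndimage`. This is a simplified stand-in, not the catalog code: the shortest-path criterion described above is approximated here by assigning each new pixel to the nearest pixel of an existing cloud via a Euclidean distance transform.

```python
import numpy as np
from scipy import ndimage

def tiered_decompose(image, levels=(64, 48, 32)):
    """Toy multi-threshold decomposition in the spirit of the three-tiered
    brightness cuts described in the text.  The shortest-path assignment is
    approximated by nearest-existing-cloud assignment."""
    labels = np.zeros(image.shape, dtype=int)
    next_id = 1
    for level in levels:
        regions, nreg = ndimage.label(image >= level)
        for r in range(1, nreg + 1):
            in_region = regions == r
            new_pix = in_region & (labels == 0)
            seeds = np.unique(labels[in_region])
            seeds = seeds[seeds > 0]
            if seeds.size == 0:
                labels[new_pix] = next_id        # region first appears: new cloud
                next_id += 1
            elif seeds.size == 1:
                labels[new_pix] = seeds[0]       # region grows one existing cloud
            else:
                # Region bridges several clouds: assign each new pixel to the
                # nearest pixel already belonging to one of those clouds.
                idx = ndimage.distance_transform_edt(
                    ~np.isin(labels, seeds),
                    return_distances=False, return_indices=True)
                labels[new_pix] = labels[tuple(i[new_pix] for i in idx)]
    return labels
```

Run on a toy map with two bright peaks joined by a faint bridge, the two peaks seed separate clouds at the highest cut, and the bridge pixels are divided between them at the lowest cut rather than spawning a spurious third object, mimicking how the coarse levels suppress minor local maxima.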
In Figure \[himap\], we show the integrated intensity map of @deul, restricted to the GMC survey area, with the ellipses indicating measurements of the major ($\ell_{maj}$) and minor ($\ell_{min}$) axes of the clouds in blue. For each cloud of mass $M$, we calculate the characteristic mass ($M_{char}$) of objects that would form by gravitational instability along the filament using the prescription of @eg94. A major result of Elmegreen’s analysis including both spiral potential and magnetic fields is that the fragmentation of neutral gas into “superclouds” (which would be substructures in the “clouds” we catalog) can be evaluated in terms of the local quantities that characterize the gas, rather than using disk-averaged values. The characteristic mass of these superclouds is thought to establish the upper cutoff observed in the molecular cloud mass distribution in the Milky Way [@eg94; @kos03]. In this picture, disk instabilities form atomic superclouds which are (marginally) self-gravitating and GMC complexes form by turbulent fragmentation and collapse inside the superclouds. The total mass of the parent supercloud would then establish the maximum mass of the distribution. We apply the local fragmentation formalism to all the clouds emphasizing the principal result of the Elmegreen analysis: the use of local conditions to evaluate the characteristic fragmentation masses. This application will illustrate how variations in the cataloged atomic cloud properties would translate into variations in the characteristic mass of superclouds and thus into the cutoffs in the GMC mass distribution. In terms of the properties of the identified atomic gas clouds, the characteristic mass of superclouds is given by @eg94 as: $$M_{char}=\pi \ell_{min} c_{g}\sqrt{\frac{\mu}{2G}}$$ where $\mu$ is the mass per unit length of the clouds ($M/\ell_{maj}$) and $c_g$ is the three-dimensional velocity dispersion of the gas, taken to be 18 km s$^{-1}$ for M33 [@cs97] assuming isotropy. 
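As a numerical check of the formula above, the characteristic mass for a hypothetical filament can be evaluated directly (the cloud mass and axis lengths below are illustrative placeholders, not cataloged values; only $c_g = 18$ km s$^{-1}$ comes from the text):

```python
import math

G     = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30         # solar mass [kg]
PC    = 3.086e16         # parsec [m]

M     = 1e7 * M_SUN      # hypothetical cloud mass
l_maj = 1000 * PC        # major axis
l_min = 100 * PC         # minor axis
c_g   = 18e3             # 3-D gas velocity dispersion [m/s]

mu = M / l_maj           # mass per unit length
M_char = math.pi * l_min * c_g * math.sqrt(mu / (2 * G))
print(M_char / M_SUN)    # ~6e6 Msun for these inputs
```

For these inputs the characteristic mass comes out at a few $\times 10^6~M_{\odot}$, the same order as the high-mass cutoffs discussed in the text.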
We plot the characteristic mass of supercloud formation as a function of galactocentric radius in Figure \[charmass\]. The figure shows that the atomic structures that could support fragmentation into large mass superclouds are most abundant at intermediate values of $R_{gal}$. Indeed, the factor-of-three variation in the maximum mass is consistent with the idea that the characteristic fragmentation mass controls the variations in the maximum mass of molecular clouds seen between inner and intermediate radii. However, the results do not adequately explain the abrupt fall-off in the number of GMCs seen at large $R_{gal}$, since there are apparently massive clouds at this radius which fulfill the instability criteria. Other factors may contribute to the dearth of GMCs at this radius. The surface density of the gas disk becomes comparable to that of the stellar disk at $R_{gal}\approx 4.5$ kpc, which will result in a significant increase in the scale height [@pressure2], not accounted for in our simplification of @eg94. The disk also develops a warp [@cs97; @cs00] at this radius, though the disk remains gravitationally unstable to structure formation out to larger $R_{gal}$ [@c03]. The locations and mass distributions of GMCs are consistent with being governed by the structure in the atomic gas. For $R_{gal} \la 2$ kpc it appears that massive stars establish the structure of the ISM based on the preponderance of wind-driven shells in the inner galaxy [@ddh90]. The GMCs in the inner galaxy appear on the edges of these holes (EPRB), suggesting that the clouds are formed in the shells swept up by winds from high mass stars. At intermediate and large (i.e. $R_{gal}\ga 4$ kpc) galactic radii, there are substantially fewer wind-driven shells [@ddh90 EPRB], and the molecular ISM becomes more concentrated along the spiral arms (Figure \[arminter\]). At large radii, GMCs disappear entirely, as does significant spiral structure, and the rotation curve flattens appreciably [@c03].
The molecular gas that is present at large radii may be the result of small scale compressions of the neutral ISM producing less massive molecular clouds. Given that applying simple instability analysis to real galactic disks only leads to suggestive correlations, resolving the variation of cloud mass distributions will likely require sophisticated simulations of molecular cloud formation across the galactic disk with sufficient detail to match to these observations. Further progress on the effects of spiral structure and molecular cloud formation will be possible in conjunction with the high resolution VLA+GBT map of H I emission by Thilker & Braun (forthcoming). Progress on simulations and observations will facilitate the critical evaluation of the above conjectures and better establish the association between the molecular and atomic phases of the ISM. The variations in the mass distributions of molecular clouds reported in this paper will represent an important observational feature to reproduce in theoretical analyses.

Summary and Conclusions {#summ}
=======================

New maps of CO ($J=1\to 0$) emission in M33, produced by combining existing interferometric and single-dish maps, including new single-dish observations from the NRO 45-m, give an improved catalog of giant molecular clouds (GMCs) and a more sensitive measurement of the non-GMC CO emission. The GMC properties in the new catalog are unaffected by the spatial filtering and non-linear flux recovery of the existing interferometer catalog. The relative calibrations of all three data sets agree, and the total flux in the merged and combined maps matches the FCRAO map in the regions of overlap. The GMC properties in the new catalog are corrected for instrumental effects using the method of @props, resulting in superior estimates of the cloud properties. Improvements in the estimates of cloud properties as well as a high sensitivity map of the inner galaxy yield several new results.
The fraction of molecular gas found in GMCs ($M>1.3\times10^5~M_{\odot}$) declines with galactocentric radius, ranging from 60% at $R_{gal}=0$ to 20% at $R_{gal}=4.0$ kpc. Based on the higher sensitivity BIMA+FCRAO+NRO map, molecular gas that is not associated with GMCs is located in diffuse and filamentary structures around the GMCs. Nearly 90% of the diffuse emission is found within 100 pc projected separation of a catalog GMC boundary. Beyond 4.0 kpc, the GMC mass fraction cuts off sharply, though molecular gas is detectable to the edge of the surveyed region at $R_{gal}=5.5$ kpc. Even within the area $R_{gal}<4$ kpc where GMCs are found, the mass distribution in the inner galaxy ($R_{gal}<2.1$ kpc) is significantly different from the mass distribution in the outer galaxy. In the annulus $2.1 < R_{gal} < 4.1$ kpc, the GMCs have a power-law mass distribution of $dN/dM\propto M^{-2.1}$. Inside 2.1 kpc, the GMC mass distribution is truncated at high mass ($\gtrsim 4\times 10^5~M_{\odot}$), but lower mass clouds appear to follow the same distribution in both regions. We argue that variations in the gas structure are responsible for the observed changes in the GMC mass distribution. The southern (receding) half of the galaxy shows a truncated mass distribution when compared to the northern half of the galaxy. There is no obvious difference between the mass distribution of GMCs associated with the grand design spiral arms of the galaxy and that of GMCs in the interarm region. In molecular surface brightness, M33 shows an arm-interarm contrast of 1.5, typical of other flocculent galaxies. We are grateful for the opportunity to acquire data on the 45-m telescope at the Nobeyama Radio Observatory (NRO). NRO is a division of the National Astronomical Observatory of Japan under the National Institutes of Natural Sciences. We thank Mark Heyer for providing the FCRAO data on M33 used in this work.
ER’s work is supported by a National Science Foundation Astronomy and Astrophysics Postdoctoral Fellowship (NSF AST-0502605). [46]{} natexlab\#1[\#1]{} , E. & [van Albada]{}, G. D. 1979, , 75, 251 , L., [Fukui]{}, Y., [Kawamura]{}, A., [Leroy]{}, A., [Mizuno]{}, N., & [Rosolowsky]{}, E. 2007, in Protostars and Planets V, ed. B. [Reipurth]{}, D. [Jewitt]{}, & K. [Keil]{}, 81–96 , L. & [Rosolowsky]{}, E. 2006, ArXiv Astrophysics e-prints , E. 2003, , 342, 199 , E. & [Salucci]{}, P. 2000, , 311, 441 , E. & [Schneider]{}, S. E. 1997, , 479, 244 , T. M., [Hartmann]{}, D., & [Thaddeus]{}, P. 2001, , 547, 792 , E. R. & [den Hartog]{}, R. H. 1990, , 229, 362 , E. R. & [van der Hulst]{}, J. M. 1987, , 67, 509 , B. G. 1994, , 433, 39 , G., [Plambeck]{}, R. L., [Rosolowsky]{}, E., & [Blitz]{}, L. 2003, , 149, 343 , W. L., [Madore]{}, B. F., [Gibson]{}, B. K., [Ferrarese]{}, L., [Kelson]{}, D. D., [Sakai]{}, S., [Mould]{}, J. R., [Kennicutt]{}, Jr., R. C., [Ford]{}, H. C., [Graham]{}, J. A., [Huchra]{}, J. P., [Hughes]{}, S. M. G., [Illingworth]{}, G. D., [Macri]{}, L. M., & [Stetson]{}, P. B. 2001, , 553, 47 , Y., [Mizuno]{}, N., [Yamaguchi]{}, R., [Mizuno]{}, A., & [Onishi]{}, T. 2001, , 53, L41 , S., [Guelin]{}, M., & [Cernicharo]{}, J. 1993, , 274, 123 , R. D., [Polomski]{}, E., [Woodward]{}, C. E., [McQuinn]{}, K., [Boyer]{}, M., [Humphreys]{}, R. M., [Brandl]{}, B., [van Loon]{}, J. T., [Fazio]{}, G., [Willner]{}, S. P., [Barmby]{}, P., [Ashby]{}, M., [Pahre]{}, M., [Rieke]{}, G., [Gordon]{}, K., [Hinz]{}, J., [Engelbracht]{}, C., [Alonso-Herrero]{}, A., [Misselt]{}, K., [P[é]{}rez-Gonz[á]{}lez]{}, P. G., & [Roellig]{}, T. 2005, Bulletin of the American Astronomical Society, 37, 451 , T. T., [Thornley]{}, M. D., [Regan]{}, M. W., [Wong]{}, T., [Sheth]{}, K., [Vogel]{}, S. N., [Blitz]{}, L., & [Bock]{}, D. C.-J. 2003, , 145, 259 , M. H., [Carpenter]{}, J. M., & [Snell]{}, R. L. 2001, , 551, 852 , M. H., [Corbelli]{}, E., [Schneider]{}, S. E., & [Young]{}, J. S. 
2004, , 602, 723 , R. M. & [Sandage]{}, A. 1980, , 44, 319 , R. C. 1989, , 344, 685 , W., [Ostriker]{}, E. C., & [Stone]{}, J. M. 2003, , 599, 1157 , N., [Tosaki]{}, T., [Nakai]{}, N., & [Nishiyama]{}, K. 1997, , 49, 275 , P., [Olsen]{}, K. A. G., [Hodge]{}, P. W., [Strong]{}, S. B., [Jacoby]{}, G. H., [Schlingman]{}, W., & [Smith]{}, R. C. 2006, , 131, 2478 , C., [Neininger]{}, N., [Guélin]{}, M., [Berkhuijsen]{}, E., & [Beck]{}, R. 2006, , 000, in press , W. H., [Teukolsky]{}, S. A., [Vetterling]{}, W. T., & [Flannery]{}, B. P. 1992, [Numerical recipes in C. The art of scientific computing]{} (Cambridge: University Press, |c1992, 2nd ed.) , M. W., [Thornley]{}, M. D., [Helfer]{}, T. T., [Sheth]{}, K., [Wong]{}, T., [Vogel]{}, S. N., [Blitz]{}, L., & [Bock]{}, D. C.-J. 2001, , 561, 218 , M. W. & [Vogel]{}, S. N. 1994, , 434, 536 , E. 2005, , 117, 1403 , E. & [Leroy]{}, A. 2006, , 118, 590 , E. W., [Plambeck]{}, R., [Engargiola]{}, G., & [Blitz]{}, L. 2003, , 599, 258 , R. J., [Teuben]{}, P. J., & [Wright]{}, M. C. H. 1995, in ASP Conf. Ser. 77: Astronomical Data Analysis Software and Systems IV, 433–+ , S., [Staveley-Smith]{}, L., [Dickey]{}, J. M., [Sault]{}, R. J., & [Snowden]{}, S. L. 1999, , 302, 417 , A. A. & [Blitz]{}, L. 1978, , 225, L15 , A. A. & [Lee]{}, Y. 2006, , 641, L113 , K., [Yamaguchi]{}, C., [Nakai]{}, N., [Sorai]{}, K., [Okumura]{}, S. K., & [Ukita]{}, N. 2000, in Proc. SPIE Vol. 4015, p. 237-246, Radio Telescopes, Harvey R. Butcher; Ed., ed. H. R. [Butcher]{}, 237–246 , J. 1997, [Introduction to Error Analysis, the Study of Uncertainties in Physical Measurements, 2nd Edition]{} (Published by University Science Books, 648 Broadway, Suite 902, New York, NY 10012, 1997.) , T., [Hasegawa]{}, T., [Shioya]{}, Y., [Kuno]{}, N., & [Matsushita]{}, S. 2002, , 54, 209 , T., [Shioya]{}, Y., [Kuno]{}, N., [Nakanishi]{}, K., & [Hasegawa]{}, T. 2003, , 55, 605 , S. N., [Helfer]{}, T. T., [Sheth]{}, K., [Harris]{}, A. I., [Thornley]{}, M. D., [Regan]{}, M. 
W., [Blitz]{}, L., [Wong]{}, T., & [Bock]{}, D. C.-J. 1999, in Science with the Atacama Large Millimeter Array (ALMA) , W., [Beck]{}, R., [Thuma]{}, G., [Weiss]{}, A., [Wielebinski]{}, R., & [Dumke]{}, M. 2002, , 388, 7 , J. P., [de Geus]{}, E. J., & [Blitz]{}, L. 1994, , 428, 693 , J. P. & [McKee]{}, C. F. 1997, , 476, 166 , S. P. & [Nelson-Patel]{}, K. 2002, , 568, 679 , C. D. & [Scoville]{}, N. 1989, , 347, 743 —. 1990, , 363, 435 , C. D., [Walker]{}, C. E., & [Thornley]{}, M. D. 1997, , 483, 210

[^1]: The velocities were determined as a function of position based on the orientation parameters and rotation curve of @cs00: inclination $52\degr$, major axis position angle $22\degr$.

[^2]: For other distances, scale cloud mass in proportion to $D^2$.

[^3]: We have assumed that the surface densities and virial parameters would be comparable to other GMCs in M33 and used average values of these quantities from @psp5.
---
abstract: 'We present ellipsoidal light-curve fits to the quiescent $B$, $V$, $R$ and $I\/$ light curves of GRO J1655–40 (Nova Scorpii 1994). The fits are based on a simple model consisting of a Roche-lobe filling secondary and an accretion disc around the black-hole primary. Unlike previous studies, no assumptions are made about the interstellar extinction or the distance to the source; instead these are determined self-consistently from the observed light curves. In order to obtain tighter limits on the model parameters, we used the distance determination from the kinematics of the radio jet as an additional constraint. We obtain a value for the extinction that is lower than was assumed previously; this leads to lower masses for both the black hole and the secondary star of 5.4 $\pm$ 0.3 M$_{\sun}$ and 1.45 $\pm$ 0.35 M$_{\sun}$, respectively. The errors in the determination of the model parameters are dominated by systematic errors, in particular due to uncertainties in the modelling of the disc structure and uncertainties in the atmosphere model for the chemically anomalous secondary in the system. A lower mass of the secondary naturally explains the transient nature of the system if it is either in a late case A or early case B mass-transfer phase.'
author:
- |
    Martin E. Beer[^1] and Philipp Podsiadlowski\
    University of Oxford, Nuclear and Astrophysics Laboratory, Oxford, OX1 3RH, England
title: 'The quiescent light curve and evolutionary state of GRO J1655–40'
---

\[firstpage\]

accretion, accretion discs - stars: individual: GRO J1655–40 - binaries: close - X-rays: stars.

Introduction
============

GRO J1655–40 is a well studied soft X-ray transient (SXT) with an orbital period of $2.62168 \,\pm\, 0.00014\,$d (van der Hooft et al. 1997, hereafter vdH). It contains an F3–5 giant or sub-giant with an effective temperature of approximately 6500 K (Orosz & Bailyn 1997, hereafter OB). Shahbaz et al.
1999 (hereafter S99) obtained the radial-velocity curve of the system during quiescence using high-resolution spectroscopy and found a mass function of $2.73 \,\pm\, 0.09\,$M$_{\sun}$. This value is significantly lower than the previous estimates by Bailyn et al. (1995b, $3.16 \,\pm\, 0.15\,$M$_{\sun}$) and OB ($3.24 \,\pm\, 0.09\,$M$_{\sun}$), most likely because their radial-velocity determination relied on some data that were obtained during outburst (see Phillips, Shahbaz & Podsiadlowski 1999). From the rotational broadening of the spectral lines, S99 also obtained a constraint on the mass ratio of 2.29–3.06. In their original study, OB modelled the ellipsoidal light curve during quiescence assuming a polar temperature of 6500 K for the secondary and found an inclination of $69\fdg 50\,\pm\, 0\fdg 08$ and a mass ratio of $2.99 \,\pm\, 0.08$. These values imply masses of $7.02 \,\pm\, 0.22\,$M$_{\sun}$ and $2.34 \,\pm\, 0.12\,$M$_{\sun}$ for the black hole and secondary star, respectively. VdH also modelled the ellipsoidal light curve during quiescence and obtained an inclination of $63\fdg 7 - 70\fdg 7$ and a secondary mass in the range of 1.60–3.10 M$_{\sun}$, which implies a mass ratio of 2.43–3.99. VdH’s estimates have larger uncertainties since, unlike OB, they considered models with three different luminosities (31, 41 and 54 L$_{\sun}$), taking into account uncertainties in the distance ($3.2\,\pm\, 0.2\,$kpc, obtained from the kinematics of the observed radio jet; Hjellming & Rupen 1995) and in the colour excess ($E(B-V)= 1.3\,\pm\, 0.1$, based on various previous estimates; see vdH). Combining a value for $E(B-V)$ of 1.3 with the observed $B-V$ colour of approximately 1.55 implies an intrinsic $(B-V)_0$ of less than 0.25 (this is an upper limit, since any disc contribution tends to be redder than this).
A $(B-V)_0$ of less than 0.25 corresponds to a sub-giant of spectral type A8 or earlier (Fitzgerald 1970), whilst the $(B-V)_0$ of an F3–5 giant or sub-giant is 0.39–0.44. This immediately demonstrates that a value for $E(B-V)$ of 1.3 is not consistent with the observed spectral type. An overestimate of $E(B-V)$ leads to an overestimate of the bolometric luminosity of the secondary, which in turn requires a larger and more massive secondary. More recent estimates of the ultraviolet extinction (Hynes et al. 1998) yielded an $E(B-V)$ of $1.2 \,\pm\, 0.1$. This implies a secondary of lower luminosity than considered by vdH. Both OB and vdH allowed an arbitrary magnitude offset for each passband ($B$, $V$, $R$, $I$) when they modelled the ellipsoidal light curves. This has the same effect as allowing the distance and the colour excess to vary independently for different passbands. It also means that they were not using all the available information. In particular, this did not allow them to check whether their best-fitting models were actually consistent with the observed spectral type. In the present study we avoid these problems by fitting the ellipsoidal light curves for all passbands simultaneously (without arbitrary offsets), but allowing the distance and the colour excess to vary freely. As a consequence, the distance and the colour excess are determined self-consistently from the best-fitting models instead of being taken as input parameters. In Section \[modeldescript\] we outline the basic model used in this study. In Section \[lcmodel\] we apply it to fit the quiescent light-curve data of OB for different disc structures, obtain a new model for all parameters of GRO J1655–40 and examine their uncertainties. In Section \[sectemp\] we show how the variation in temperature across the surface of the secondary limits the accuracy to which the spectral type of the secondary can be determined.
Finally, in Section \[discuss\] we compare our results to previous studies and discuss the implications of our new model for the evolutionary state of the system.

Description of the model {#modeldescript}
========================

Our method to model the ellipsoidal light curve of Nova Sco is similar in many respects to the methods used previously by OB and vdH. It consists of a Roche-lobe filling secondary and a simple model for the accretion disc. For a given value of the polar temperature, the temperature across the secondary is determined using a standard gravity-darkening law; limb darkening is treated with a linear law, where the coefficients for each surface element are calculated by interpolating the tables of Wade & Rucinski (1985). Similar to OB and vdH, we model the accretion disc as a flat cylindrical disc with an opening angle of $2\degr$. For the inner disc radius we follow OB and take, at least initially, a small value of 0.005 of the effective Roche-lobe radius ($r_{\rm{L}}$). We varied the outer disc radius, which is limited by the tidal disruption radius, considering outer disc radii of 0.7, 0.8 and 0.9 $r_{\rm{L}}$, respectively. For the temperature profile we adopted a simple power-law profile, $T_{\rm disc}\propto r^{\alpha}$, where we considered both a flat temperature profile with $\alpha = -0.1$ and a standard steady-disc profile with $\alpha=-0.75$ (Shakura & Sunyaev 1973). The constant of proportionality in the temperature power-law profile depends on both the outer disc temperature and the outer disc radius. In order to make the temperature structure independent of $r_{\rm{out}}$, we referenced the disc rim temperature to $r = 0.9\,r_{\rm{L}}$, so that discs with different outer radii share the same temperature profile (e.g. they all have the same temperature at $r = 0.7\,r_{\rm{L}}$). To model the emission from the secondary and the disc, we divide the secondary and the surface of the accretion disc into discrete elements and determine the local gravity and temperature for each element.
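The two temperature profiles and the common-radius normalization can be illustrated as follows (a sketch with placeholder temperatures, not fitted values):

```python
import numpy as np

def disc_temperature(r, alpha, T_ref, r_ref=0.7):
    """Power-law disc profile T ~ r**alpha, normalized so that every disc
    has the same temperature T_ref at the common radius r_ref (radii in
    units of the Roche-lobe radius; numbers here are illustrative)."""
    return T_ref * (r / r_ref) ** alpha

r = np.linspace(0.05, 0.9, 50)
T_steady = disc_temperature(r, -0.75, 2000.0)  # steady-state disc
T_flat   = disc_temperature(r, -0.10, 2000.0)  # nearly flat quiescent profile
# Both profiles agree at r = 0.7 by construction, but the steady-state
# disc is several times hotter (bluer) in its inner parts.
```

The comparison makes the physical point of the next section concrete: with the same rim temperature, a $-3/4$ profile concentrates flux in hot inner annuli, whereas a flat profile radiates mostly from the cool outer disc.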
Unlike previous studies, we do not assume that the emission from the secondary is that of a blackbody; instead we calculate it from the model atmospheres of Kurucz (1992). For this purpose, we have constructed a table which gives the emission in different passbands ($B$, $V$, $R$, $I$) as a function of temperature, surface gravity and $E(B-V)$. To calculate the emission, we first corrected the Kurucz model atmospheres for interstellar extinction for a specified value of $E(B-V)$ using a mean Galactic extinction curve (Fitzpatrick 1999) and calculated the extinction at each wavelength. It is important to correct for the extinction first since there is a large variation in extinction across the broadband passbands: for example in the $V$ band, $E(\lambda-V)/E(B-V)$ varies from 3.8 to 2.1. We then folded the extinction-corrected atmospheres through standard filter response curves (Bessell 1990) to obtain the emission in each passband. Standard Vega fluxes (Tüg, White & Lockwood 1977) were used to calculate the zero point corrections to the magnitudes. The emission from the disc was calculated similarly, except that extinction-corrected blackbody spectra were folded through the passbands rather than model atmospheres. Recently, Greene, Bailyn & Orosz (2001) have also modelled the ellipsoidal light curves using model atmosphere fluxes; their analysis is discussed further in Section \[greenecomp\]. The model atmospheres we used in this analysis were of solar abundance. The metal abundances of the secondary in GRO J1655–40 have been measured by Israelian et al. (1999), who found \[Fe/H\]$=0.1 \,\pm\, 0.2$, but with an overabundance of $\alpha$-elements compared to solar values. Their $\alpha$-element abundances are \[O/H\]$=1.0 \,\pm\, 0.3$, \[S/H\]$=0.75 \,\pm\, 0.2$, \[Mg/H\]$=0.9 \,\pm\, 0.4$, \[Si/H\]$=0.9 \,\pm\, 0.3$, \[Ti/H\]$=0.9 \,\pm\, 0.4$ and \[N/H\]$=0.45 \,\pm\, 0.5$.
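The folding procedure can be sketched in a few lines. In this stand-in, a Gaussian "V-like" curve replaces the Bessell response, a blackbody replaces the Kurucz atmosphere, and a crude $1/\lambda$ law replaces the Fitzpatrick extinction curve; none of these are the actual inputs used in the paper.

```python
import numpy as np

def blackbody_flambda(wl, T):
    """Planck function B_lambda in SI units (wl in metres)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k * T))

wl = np.linspace(400e-9, 700e-9, 500)                # wavelength grid [m]
resp = np.exp(-0.5 * ((wl - 550e-9) / 40e-9) ** 2)   # toy V-like response

# Toy 1/lambda extinction law anchored to R_V * E(B-V) at 550 nm, E(B-V) = 1.2
A_lambda = 3.1 * 1.2 * (550e-9 / wl)
reddened = blackbody_flambda(wl, 6500.0) * 10 ** (-0.4 * A_lambda)

band_flux = np.sum(reddened * resp) / np.sum(resp)   # response-weighted mean flux
mag = -2.5 * np.log10(band_flux)                     # magnitude up to a zero point
```

The key ordering from the text is preserved: extinction is applied to the spectrum wavelength by wavelength *before* the fold, since the extinction varies substantially across a single broadband filter.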
A solar Fe abundance is supported by Buxton & Vennes (2001), who find \[Fe/H\]$=-0.25$–0.00. As detailed $\alpha$-element-enhanced model atmospheres are not freely available, we are unable to use them in our calculations. The majority of the spectral lines, however, are due to Fe, so using the correct Fe abundance is the most important factor in choosing the metallicity; this reassures us that the solar-abundance atmospheres are an appropriate choice. In Fig. \[spectra\] we demonstrate how important it is to use model atmospheres rather than blackbody spectra. It shows the difference between a 6500 K blackbody and a model atmosphere of the same temperature with a gravity of 3.0 dex. This temperature corresponds to a star with a spectral type similar to that of GRO J1655–40. The spectra are significantly different, especially in the $B$ band region. For the $I$ band region, the model atmosphere is similar to the Rayleigh-Jeans tail of a blackbody; hence the blackbody assumption would be valid in this passband. Orosz & Hauschildt (2000) have investigated the difference between blackbodies and model atmosphere calculations and found that, with the inclusion of model atmospheres, the minima of the light curve tend to be deeper for the visible and infrared passbands, since at the minima the coolest parts of the secondary are visible (e.g. the L1 point at phase 0.5) and for cool temperatures the differences between model atmospheres and blackbodies are largest. For a particular disc model, our ellipsoidal light-curve model has five free parameters: the mass ratio, system inclination, colour excess, distance to Nova Sco and polar temperature ($T_{\rm{pole}}$) of the secondary star. The individual component masses ($M_1$ and $M_2$) are calculated from the mass function, where we use the value of $2.73 \,\pm\,0.09\,$M$_{\sun}$ (S99).
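The step from mass function to component masses is simple arithmetic. Writing the mass function of the secondary's orbit as $f(M) = M_1^3 \sin^3 i/(M_1+M_2)^2 = M_1 \sin^3 i/(1+1/q)^2$ with $q = M_1/M_2$, it inverts directly; the mass ratio and inclination below are illustrative inputs, not our best-fitting values:

```python
import math

def component_masses(f_M, q, incl_deg):
    """Invert the mass function f(M) = M1 sin^3(i) / (1 + 1/q)^2 for the
    primary and secondary masses, with q = M1/M2 (masses in Msun)."""
    sin3i = math.sin(math.radians(incl_deg)) ** 3
    M1 = f_M * (1 + 1 / q) ** 2 / sin3i
    return M1, M1 / q

# f(M) = 2.73 Msun from S99; q = 3.0 and i = 70 deg are illustrative only.
M1, M2 = component_masses(2.73, 3.0, 70.0)
```

This inversion shows why the black-hole mass is so sensitive to the fitted mass ratio and inclination: $M_1$ scales as $(1+1/q)^2/\sin^3 i$, so modest changes in either fitted parameter shift the mass by several tenths of a solar mass.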
We determine these five free parameters by comparing our model ellipsoidal light curves to the quiescent $B$, $V$, $R$ and $I\/$ band data (OB), kindly provided by J. A. Orosz, using a standard chi-squared test. The quiescent data are much better sampled in the $V$ and $I$ bands than in the $B$ and $R\/$ bands (there are 6–7 times as many data points in $V$ and $I$ as in $B$ and $R$). This has the consequence of giving more weight to the $V$ and $I$ data points and hence produces a better fit in these passbands than in $B$ and $R$. OB tried to compensate for this by giving each point in the $B$ and $R$ passbands seven times the weight of points in the other passbands in their fitting procedure. However, this gives too much weight to any outlying points in the data, as the $\sim$25 points in each of these two passbands are not enough to eliminate small-number statistical effects (e.g. the different $R$ magnitudes at phases 0.25 and 0.75 of the data cannot be modelled properly). We therefore chose to give each point equal weight. A grid search was used to find the best-fitting model as this is the most reliable way of finding the global minimum. Since the measurement errors are not normally distributed, it is not valid to establish quantitative relationships between $\Delta\chi^2$ and the confidence limits. The $\chi^2$ test therefore provides a merit function, not a maximum-likelihood estimator. Because of this, the Hessian matrix (the inverse of the covariance matrix) was not used for error estimation; instead, a constant $\Delta\chi^2$ was chosen as the boundary to define confidence regions. The parameters of the model are highly correlated with each other.
For example, an increase in distance (which makes the object fainter) can be compensated, at least to some degree, in several different ways: (1) by a decrease in mass ratio, which increases the secondary’s Roche-lobe radius and hence increases its surface area; (2) by a decrease in $E(B-V)$, which reduces the extinction; or (3) by an increase in temperature, which makes the object more luminous. The distance to Nova Sco is relatively well determined from the kinematics of the radio jet (Hjellming & Rupen 1995), and we use this as an additional constraint in the modelling, in order to obtain tighter limits on the various model parameters. The colour excess and temperature of the secondary can be used to estimate the spectral type. This allows us to check whether the best-fitting parameters are consistent with the observed spectral type of Nova Sco. This is preferable to the previous analyses, which allowed an arbitrary shift in the light curves, thereby losing information on the distance and extinction. Shifting the light curves also has the effect of making the light-curve amplitudes strongly dependent on the proportion of disc flux. The present method avoids this problem, as altering the proportion of disc flux shifts the light curves while also altering their amplitudes. Therefore, an overestimate of the disc flux will be seen as a shift in the light curves toward brighter magnitudes. If the light curves were arbitrarily offset, this would only appear as a change in amplitude, which could be compensated by a change in inclination, mass ratio or temperature. The treatment of the distance and the colour excess as free parameters, and the inclusion of model atmosphere fluxes, are the only significant differences between our model and those of OB and vdH. Indeed, we have checked that, using the same assumptions as OB and vdH, we obtain light-curve fits that are essentially identical to those in these respective studies.
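A grid search with a constant $\Delta\chi^2$ boundary can be illustrated with a toy two-parameter fit (synthetic straight-line data rather than light curves; $\Delta\chi^2 = 2.3$ would be the 68 per cent joint level for two parameters *if* the errors were Gaussian, which, as noted above, does not hold for the real data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # synthetic data, sigma = 0.1

# Evaluate chi-squared on a regular grid over the two model parameters.
a_grid = np.linspace(1.5, 2.5, 101)
b_grid = np.linspace(0.5, 1.5, 101)
chi2 = np.array([[np.sum(((y - (a * x + b)) / 0.1) ** 2) for b in b_grid]
                 for a in a_grid])

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
best_a, best_b = a_grid[i], b_grid[j]
# A constant delta-chi^2 boundary around the minimum marks the joint
# confidence region; correlated parameters show up as a tilted region.
region = chi2 <= chi2.min() + 2.3
```

The exhaustive grid evaluation is what makes this approach robust against local minima, at the cost of scaling poorly with the number of parameters.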
Light-curve modelling {#lcmodel}
=====================

The disc structure
------------------

In our initial modelling we assumed a steady-state disc in which the temperature profile follows a $-3/4$ power law (i.e. $T_{\rm disc}\propto r^{-3/4}$). This corresponds to an optically thick disc where each element emits like a blackbody (Shakura & Sunyaev 1973). In such a disc, most of the contribution to the observed disc flux comes from the inner parts of the disc. We systematically varied the temperature at the disc rim, but did not find any model that produced a good fit to the observed light curves, since all models produced too much blue light in the inner parts of the disc. The best fit was obtained for a cool accretion disc with a rim temperature of 1000 K, which had a $\chi^2_{\nu\rm{,min}}$ of 2.1. However, in this model the fitted distance was much smaller than the distance of $3.2 \,\pm\, 0.2$ kpc found by Hjellming & Rupen (1995); hence this model also had to be discarded. We then considered two alternative types of disc model: (1) an accretion disc with a central hole and (2) a disc with a flat temperature profile. A central hole in the accretion disc of GRO J1655–40 has previously been proposed by Hameury et al. (1997) to explain the time delay between the optical and the X-ray outbursts. Models for discs with central holes have also been suggested by various authors for a variety of systems (e.g. Meyer & Meyer-Hofmeister 1994; Meyer-Hofmeister & Meyer 2000; Narayan, McClintock & Yi 1996). In these models the hole is caused by evaporation of the inner parts of the disc. Here, we assumed that the temperature at the inner edge of the disc did not exceed 7000 K (a higher temperature would produce too much blue light). The 7000 K cutoff corresponds to $r = 0.067\,r_{\rm{L}}$ and $r = 0.17\,r_{\rm{L}}$ for rim temperatures of 1000 and 2000 K, respectively. 
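The quoted hole radii follow directly from the $T_{\rm disc}\propto r^{-3/4}$ profile once the rim temperature is fixed at the outer radius. A minimal sketch, assuming an outer radius of $0.9\,r_{\rm{L}}$ (the value that reproduces the numbers quoted above):

```python
def hole_radius(t_rim, r_out, t_max=7000.0):
    """Radius inside which a T ~ r^(-3/4) disc would exceed t_max,
    given the rim temperature t_rim at the outer radius r_out.
    Solves t_rim * (r / r_out)**(-0.75) = t_max for r."""
    return r_out * (t_max / t_rim) ** (-4.0 / 3.0)

r_in_1000 = hole_radius(1000.0, 0.9)  # ~0.067 (in units of r_L)
r_in_2000 = hole_radius(2000.0, 0.9)  # ~0.17
```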
In cataclysmic variables (CVs), maximum entropy eclipse mapping (MEM) has been used to determine the radial temperature profile of the discs (Horne 1993). In outburst, the disc temperature profile is generally found to be consistent with a $-3/4$ power law, whilst in quiescence the profile is significantly less steep, occasionally even flat (Wood et al. 1986, 1989). It has been suggested that the temperature profile in SXTs will also be flat during quiescence (Janet Wood, private communication). In their study, OB let the power-law index of the disc temperature profile be a free parameter and found a value of $-0.12$ for their best-fitting model. Based on this finding, we adopted a power-law index of $-0.1$ for our model with a flat temperature profile. Since, for such a profile, most of the flux comes from the outer parts of the disc, this model is not sensitive to the chosen inner disc radius. In models with a disc rim temperature of 1000 K, the disc does not contribute much flux to the system. Its principal effect is to eclipse the secondary and thereby deepen the minimum at phase 0.5. The addition of a central hole to such a disc therefore did not improve the fit. The best-fitting model for the disc with the central hole was obtained for a rim temperature of 2000 K. This model has a reduced $\chi^2_{\nu\rm{,min}}$ of 1.7 for 370 degrees of freedom. The 2000 K disc contributes a greater proportion of the system’s flux than the cooler 1000 K disc. The disc with the flat temperature profile was modelled so that it produced the same amount of flux as the 2000 K disc with the central hole ($\sim 5$ per cent in $V$). This corresponds to a rim temperature of 3500 K at 0.9 $r_{\rm{L}}$. This model produced the overall best fit, with a $\chi^2_{\nu\rm{,min}}$ of 1.65. 
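The insensitivity of the flat-profile model to the inner radius can be made quantitative by integrating the bolometric surface flux $\sigma T^4$ over annuli for the two power laws. The inner radius and split radius below are illustrative choices, not fitted values:

```python
import numpy as np

def outer_flux_fraction(p, r_in=0.05, r_split=0.5, n=200000):
    """Fraction of the bolometric disc flux emitted outside r_split for
    T ~ r^(-p); all radii in units of the outer disc radius."""
    r = np.linspace(r_in, 1.0, n)
    luminosity_density = r ** (1.0 - 4.0 * p)  # sigma*T^4 * 2*pi*r, constants dropped
    return luminosity_density[r >= r_split].sum() / luminosity_density.sum()

steep = outer_flux_fraction(0.75)  # -3/4 law: flux dominated by the inner disc
flat = outer_flux_fraction(0.1)    # -0.1 law: flux dominated by the outer disc
```

With a $-3/4$ law only a few per cent of the light comes from the outer half of the disc, so the inner boundary matters; with a $-0.1$ law roughly two-thirds does, so it barely matters.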
To improve the fits further, we then used a finer grid, with spacings of 0.1 in the mass ratio, 0.1 in inclination, 0.001 in $E(B-V)$, 0.002 kpc in distance and 25 K in polar temperature, for the two disc models. The results of the best fits for the two types of disc model are shown in Table \[bfvalues\]. The light-curve fits for the best-fitting model, with a flat disc profile and an outer disc radius of $0.8\,r_{\rm{L}}$, are shown in Fig. \[bestfits\] (this model has a reduced $\chi^2_{\nu\rm{,min}}$ of 1.614).

------------------------- ------------ ------------ ------------ ------------ ------------ ------------
                          Flat ($-0.1$) profile                  Central hole
Parameter                 0.7          0.8          0.9          0.7          0.8          0.9
Mass ratio                3.4          3.9          4.0          3.9          4.4          5.3
Inclination               $69\fdg 5$   $68\fdg 4$   $67\fdg 3$   $70\fdg 0$   $68\fdg 7$   $67\fdg 4$
$T_{\rm{pole}}$ (K)       6625         6525         6350         6850         6750         6725
$E(B-V)$                  1.037        1.001        0.950        1.089        1.067        1.055
Distance (kpc)            3.340        3.208        3.248        3.138        2.998        2.820
Minimum $\chi^2_{\nu}$    1.644        1.614        1.764        1.697        1.675        1.673
$M_1$ ($\rm{M}_{\sun}$)   5.65         5.35         5.45         5.20         5.10         4.90
$M_2$ ($\rm{M}_{\sun}$)   1.65         1.40         1.35         1.35         1.15         0.90
$L_2$ ($\rm{L}_{\sun}$)   25.0         20.5         18.5         24.5         21.0         17.5
$T_{\rm{eff}}$ (K)        6200         6100         5925         6450         6325         6325
------------------------- ------------ ------------ ------------ ------------ ------------ ------------

: Best-fitting model parameters for the two disc models, for outer disc radii of 0.7, 0.8 and 0.9 $r_{\rm{L}}$[]{data-label="bfvalues"}

------------------------- --------------------------- --------------------------- ---------------------------
Parameter                 0.7                         0.8                         0.9
Mass ratio                3.65 – 4.15                 4.20 – 4.75                 5.00 – 5.90
Inclination               $69\fdg 75$ – $70\fdg 15$   $68\fdg 50$ – $68\fdg 90$   $67\fdg 20$ – $67\fdg 65$
$T_{\rm{pole}}$ (K)       6720 – 6935                 6670 – 6875                 6610 – 6785
$E(B-V)$                  1.057 – 1.117               1.039 – 1.103               1.030 – 1.074
Distance (kpc)            2.995 – 3.296               2.890 – 3.096               2.685 – 2.912
$M_1$ ($\rm{M}_{\sun}$)   5.05 – 5.35                 4.95 – 5.20                 4.70 – 5.00
$M_2$ ($\rm{M}_{\sun}$)   1.20 – 1.45                 1.05 – 1.25                 0.80 – 1.00
$L_2$ ($\rm{L}_{\sun}$)   22.5 – 26.5                 19.5 – 22.5                 15.5 – 19.0
$T_{\rm{eff}}$ (K)        6250 – 6500                 6225 – 6425                 6150 – 6350
------------------------- --------------------------- --------------------------- ---------------------------

: 90 per cent confidence limits for the disc model with a central hole, as a function of outer disc radius ($r_{\rm{L}}$)[]{data-label="bfhole"}
------------------------- --------------------------- --------------------------- ---------------------------
Parameter                 0.7                         0.8                         0.9
Mass ratio                3.30 – 3.60                 3.55 – 4.00                 3.65 – 4.50
Inclination               $69\fdg 35$ – $69\fdg 75$   $68\fdg 25$ – $68\fdg 60$   $67\fdg 10$ – $67\fdg 60$
$T_{\rm{pole}}$ (K)       6550 – 6745                 6380 – 6570                 6210 – 6410
$E(B-V)$                  1.017 – 1.067               0.970 – 1.019               0.917 – 0.962
Distance (kpc)            3.167 – 3.496               3.116 – 3.398               3.106 – 3.352
$M_1$ ($\rm{M}_{\sun}$)   5.35 – 5.70                 5.30 – 5.60                 5.15 – 5.70
$M_2$ ($\rm{M}_{\sun}$)   1.45 – 1.80                 1.30 – 1.60                 1.10 – 1.60
$L_2$ ($\rm{L}_{\sun}$)   22.5 – 27.0                 19.0 – 23.0                 15.0 – 20.5
$T_{\rm{eff}}$ (K)        6100 – 6300                 5950 – 6150                 5750 – 6025
------------------------- --------------------------- --------------------------- ---------------------------

: 90 per cent confidence limits for the disc model with a flat ($-0.1$) temperature profile, as a function of outer disc radius ($r_{\rm{L}}$)[]{data-label="bfflat"}

Confidence limits for the system parameters
-------------------------------------------

The fitting procedure has two sources of error: one is the general statistical error and the other is caused by the finite size of the grid. Ideally, an iterative process would be used to find the global minimum at each value of a parameter. However, we found that there were a large number of minima and that an iterative process was not guaranteed to find the global one. To estimate the effect of the finite grid size, we used the following procedure. At the minimum, we estimated the effect of the finite grid size on each parameter in turn by fixing that parameter to its value at the minimum and varying the other parameters systematically. For each parameter we calculated the increase in $\chi^2$ corresponding to a value half a grid spacing away. The largest increase in $\chi^2$ was then taken as the uncertainty, $\Delta\chi^2$, in $\chi^2$ due to the grid resolution for the particular parameter kept fixed. The 90 per cent confidence limit for a single parameter corresponds to an increase in $\chi^2$ of 2.71 (Avni 1976). We added this to $\Delta\chi^2$ and used the resulting value to define the 90 per cent confidence region and to obtain the confidence limits for each individual parameter. 
The masses of the two binary components are not parameters that are fitted directly, but are determined from the mass function (kept fixed) and the values of the inclination and mass ratio at a particular grid point in the five-dimensional model parameter space. To determine the uncertainty in $\chi^2$ due to the finite grid size, we used a method similar to that above, except that this time both the mass ratio and the inclination of the system were held fixed at their values at the minimum, and the remaining three parameters were varied to find $\Delta\chi^2$. The 90 per cent confidence limit for a quantity depending on two parameters corresponds to an increase in $\chi^2$ of 4.61 (Avni 1976). This was added to $\Delta\chi^2$ and the result was used to define the confidence regions for the component masses. Similarly, to calculate the confidence limits for the luminosity ($L_2$) and effective temperature ($T_{\rm{eff}}$) of the secondary, the same method was used, except that these quantities depend on three parameters: the mass ratio, the inclination and the polar temperature. The luminosity was calculated by summing $\sigma T^4$ over the surface elements, and the effective temperature was then obtained from the luminosity and the surface area via $T_{\rm{eff}} = (L_2/\sigma A)^{1/4}$. The effect of the finite size of the grid in the distance and colour excess was then found and added to the increase in $\chi^2$ corresponding to the 90 per cent confidence limits for a quantity depending on three parameters (6.25; Avni 1976). The 90 per cent confidence limits for the steady-state disc models with a central hole and an inner-edge disc temperature of 7000 K are shown in Table \[bfhole\]. This inner-edge disc temperature corresponds to $r_{\rm{in}} = 0.17\,r_{\rm{L}}$. Table \[bfhole\] shows that, as the outer disc radius is increased in this model, the mass ratio increases and the distance decreases while the secondary temperature and luminosity remain the same. 
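The component masses quoted in the tables are derived quantities: at each grid point they follow from the (fixed) mass function together with the mass ratio and inclination. A sketch, adopting $f(M)\simeq 2.73\,\rm{M}_{\sun}$ for the mass function of the secondary (the S99 value, treated here as an assumed input):

```python
import math

def component_masses(mass_function, q, inclination_deg):
    """Masses (M_sun) from the secondary's mass function
    f(M) = M1^3 sin^3(i) / (M1 + M2)^2, with q = M1 / M2."""
    sin3_i = math.sin(math.radians(inclination_deg)) ** 3
    m1 = mass_function * (1.0 + 1.0 / q) ** 2 / sin3_i
    return m1, m1 / q

# Best-fit grid point: q = 3.9, i = 68.65 deg -> M1 ~ 5.3, M2 ~ 1.4 M_sun.
m1, m2 = component_masses(2.73, 3.9, 68.65)
```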
The secondary temperature and luminosity are consistent with the observed $B-V$, whilst only the outer disc radius of $0.7\,r_{\rm{L}}$ has a distance which is consistent with that of Hjellming & Rupen (1995). (The model with an outer disc radius of $0.8\,r_{\rm{L}}$ is marginally consistent.) The 90 per cent confidence limits for the parameters and the derived quantities are shown in Table \[bfflat\] for the disc model with a $-0.1$ temperature profile. Table \[bfflat\] shows that, as the outer disc radius is increased in this model, the temperature of the secondary decreases, as does the colour excess. This is consistent as a lower temperature implies a later spectral type which has a larger $(B-V)_0$. The distances are similar to the measurement of Hjellming & Rupen (1995) for all values of the outer disc radius. Comparison of the various models in Tables \[bfhole\] and \[bfflat\] shows that the variation in the parameters from model to model is larger than the statistical errors for each individual model (as given in the tables) and that the systematic errors due to the uncertainties in the modelling are the dominant sources of error. We can use this variation of parameters as an [*indication*]{} of the systematic errors, although we need to emphasize that this is really only a lower limit, since we only examined a limited number of models. Using the distance measurement by Hjellming & Rupen (1995) as an additional discriminant, we obtain the best-guess estimates for the parameters given in Table \[bestfit\] with uncertainties that include both the statistical errors and an estimate of the systematic errors. 
----------------- ---------------------------------
Mass ratio        $3.9 \pm 0.6$
Inclination       $68\fdg 65 \pm 1\fdg 5$
$T_{\rm{pole}}$   $6575 \pm 375\,$K
$M_1$             $5.40 \pm 0.30\, \rm{M}_{\sun}$
$M_2$             $1.45 \pm 0.35\, \rm{M}_{\sun}$
$L_2$             $21.0 \pm 6.0\, \rm{L}_{\sun}$
$T_{\rm{eff}}$    $6150 \pm 350\,$K
$E(B-V)$          $1.0 \pm 0.1$
----------------- ---------------------------------

: Best-fitting model parameters for Nova Sco[]{data-label="bestfit"}

Checking for self-consistency
-----------------------------

### The colour and spectral type of the secondary

To determine the intrinsic $(B-V)_0$ of the secondary, the contribution of the disc to the overall $B-V$ has to be subtracted. This was done by calculating the $B-V$ of the best-fitting models with the disc flux excluded. This results in a decrease in $B-V$ of 0.01–0.02. The $E(B-V)$ of the best-fitting models then leads to a $(B-V)_0$ of 0.42–0.61 for the secondary, which corresponds to a spectral-type range of F4–G0 (Fitzgerald 1970). This is a somewhat later spectral type than found in previous estimates, but is consistent with the analysis using quiescent data (S99). The spectral type can also be determined from the effective temperature ($6150\,\pm\, 350\,$K). This temperature range corresponds to the spectral-type range F5–G2 (Straižys & Kuriliene 1981). The two measurements are consistent with each other, implying a spectral type for the secondary of F5–G0. We note, however, that due to the highly anomalous chemical composition of the secondary (Israelian et al. 1999), it is not clear how well these standard relations apply. In our model we have assumed that the temperature distribution of the secondary star is described by a gravity-darkening law appropriate for a radiative atmosphere (von Zeipel 1924), for which the local temperature is proportional to the local value of gravity to the power 0.25. 
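The von Zeipel law fixes each surface element's temperature through $T = T_{\rm{pole}}\,(g/g_{\rm{pole}})^{\beta}$, with $\beta = 0.25$ for a radiative envelope (and 0.08 for a convective one; Lucy 1967). With the best-fitting polar temperature and a drop in $\log g$ from 3.5 at the pole to 2.5 near L1, the law gives a strong cooling towards L1:

```python
def local_temperature(t_pole, log_g_local, log_g_pole, beta=0.25):
    """Von Zeipel gravity darkening: T = T_pole * (g / g_pole)**beta."""
    return t_pole * 10.0 ** (beta * (log_g_local - log_g_pole))

# Radiative law: the L1 region is ~2900 K cooler than the pole.
t_l1_radiative = local_temperature(6575.0, 2.5, 3.5)          # ~3700 K
# Convective law: a much gentler temperature contrast.
t_l1_convective = local_temperature(6575.0, 2.5, 3.5, 0.08)   # ~5470 K
```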
If the atmosphere were convective, we would expect a gravity-darkening coefficient of 0.08 (Lucy 1967). The deduced spectral type of the secondary (F5–G0) is on the boundary between hot stars with radiative atmospheres and cool stars with convective atmospheres. If the atmosphere were convective (with a coefficient of 0.08), the temperature variation over the surface would be smaller than in the radiative case, and the system would require a higher inclination to give the same light-curve amplitude. However, a larger inclination would produce deeper, sharper eclipses, which are not observed. This suggests that the steeper gravity-darkening coefficient for radiative atmospheres is indeed the most appropriate to use. Near the L1 point, the star’s temperature ($\sim\! 4000\,$K) is cooler than the polar temperature because of the lower surface gravity (2.5 vs. 3.5 dex). This implies that the region around L1 may be convective rather than radiative. However, this would only affect the light curve near phase 0.5, when the accretion disc is in front of the secondary. Thus, any difference could be represented by a variation in the amount of eclipsing, i.e. in the size of the disc. As we have considered three different disc sizes, we can be confident that this effect does not significantly add to the uncertainties. Unlike the previous studies, we considered a large range of temperatures and colour excesses (and hence luminosities), both of which are consistent with the spectral type. We have, however, assumed a mean Galactic extinction curve in calculating the extinction. The model is very dependent on the extinction curve, as this affects the relative offsets of the light curves in the different passbands. If the extinction curve were incorrect, the fitting procedure would compensate for this by choosing different values for the distance, temperature and $E(B-V)$. 
Our temperature and $E(B-V)$ are in good agreement with the ‘observed’ spectral type, which implies that the actual extinction cannot deviate significantly from the mean Galactic curve. The greatest variations in Galactic extinction occur in the UV (Fitzpatrick 1999). Since our fits do not rely on UV data, we suspect that deviations from the Galactic mean are unlikely to be important. The alternative to using the mean Galactic extinction curve would have been to shift the light curves in the modelling. However, as discussed previously, shifting the light curves arbitrarily is not desirable (since this leads to the loss of information), and so using the mean Galactic extinction curve is preferable, especially as the modelling proves to be fully self-consistent.

### Modelling the disc {#discmodel}

Our disc model is rather simple. For example, we assumed a flat cylindrical disc; this is unrealistic, as the actual disc will almost certainly be more complicated, with well-defined structure. Possible evidence of structure can be seen in the $I$ band data near phase 0.85, where there is a systematic offset between the data points and the light curve. The data points are fainter, with flux deviations of up to 2.5 per cent. Even though the disc in our model contributes only a small fraction of the total flux ($\sim\! 5\,$per cent in $V$), the model is sensitive to its contribution (as can be seen from the comparison of the best-fitting models for different disc structures in Tables \[bfhole\] and \[bfflat\]). In our modelling, we varied the size, temperature and power-law index of the disc. The size of the disc determines the depth of the grazing eclipse and hence the inclination. Since we varied these key disc parameters over a wide range of plausible values, we expect our estimates of the model parameters (e.g. the inclination and the component masses) and their uncertainties to be reasonably realistic. 
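Because the disc in these models is cool, its fractional contribution to the total light grows towards redder passbands; this is simply the behaviour of a ratio of Planck functions. A sketch with illustrative temperatures (a ~3000 K disc element against a ~6300 K secondary; assumed values, not fitted ones):

```python
import math

def planck(wavelength_m, temperature_k):
    """Blackbody spectral radiance B_lambda in SI units."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength_m * k * temperature_k)
    return 2.0 * h * c ** 2 / wavelength_m ** 5 / math.expm1(x)

def surface_brightness_ratio(wavelength_nm, t_disc=3000.0, t_star=6300.0):
    """Disc-to-secondary surface-brightness ratio at one wavelength."""
    lam = wavelength_nm * 1e-9
    return planck(lam, t_disc) / planck(lam, t_star)

ratio_b = surface_brightness_ratio(440.0)  # B band: minimal contamination
ratio_i = surface_brightness_ratio(900.0)  # I band: over ten times larger
```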
The proportion of the total flux contributed by the cool accretion discs in our models increases with wavelength, i.e. the contamination is smallest in $B$. This is different from models for the SXT A0620–00 (V616 Mon), where the disc contribution appears to decrease from $43 \pm 6$ per cent in $V$ (Oke 1977; McClintock & Remillard 1986) to less than 27 per cent in the infrared (Shahbaz, Bandyopadhyay & Charles 1999). This shows that even in A0620–00, which has a much shorter orbital period (7.75 hr; McClintock & Remillard 1986) and has been in quiescence for much longer, the disc contributes appreciably to the total flux of the system. On the other hand, Greene et al. (2001) have justified the use of a model without a disc by claiming that an accretion-disc model does not fit the data well and that the flux contribution from it would be negligible. The disc model they consider, however, contributes a large proportion of the $K$ flux, as evidenced by the eclipses in their accretion-disc model at phase 0.0, as opposed to phase 0.5 for eclipses of the secondary. A model with a smaller proportion of $K$ flux would have a larger amplitude than their accretion-disc model. This would fit the data better, weakening the case for a model without an accretion disc. In the next section we will show that models without an accretion disc affect other model parameters, in particular the inferred distance, and that no fully self-consistent model can be obtained without a disc.

### J and K magnitudes {#greenecomp}

Our model can also be used to make predictions for other passbands. Using our best-fitting models, we calculated the expected $J$ and $K$ band light curves. For the $J$ and $K$ bands we find mean magnitudes of 13.8 and 13.0, respectively, with the disc contributing between 10 and 20 per cent of the total flux, depending on the model. Greene et al. (2001) have recently presented $BVIJK$ photometry of GRO J1655–40 in quiescence. 
They find mean $J$ and $K$ magnitudes of 13.85 and 13.25, respectively. Our $J$ band prediction is in good agreement with their photometry, apart from the depth of the minima, which is somewhat too shallow (possibly because the model inclination is slightly too low). However, the general agreement in the $J$ band provides some direct confirmation that the luminosity and colour excess found in our modelling are accurate. The amplitude of our $K$ band light curve is also in good agreement with their photometry. We find a slightly larger amplitude than their best-fitting model, but one entirely consistent with their data. The reason for the $K\/$ band mean magnitude discrepancy is not clear, although we have only calculated the best-fitting models and so do not know what range of $K$ magnitudes is represented by our range of parameters. Johnson (1966) provides intrinsic infrared colours for luminosity classes III and V. We may compare these to the colour implied by the photometry of Greene et al. (2001). Their $J-K$ colour is 0.6. Assuming the standard $J$ and $K$ extinction coefficients of Fitzpatrick (1999), i.e. $(J-K)_0 = (J-K) - 0.5 E(B-V)$, and our $E(B-V)$ of $1.0\,\pm\,0.1$, we obtain a $(J-K)_0$ of $0.1\, \pm\,0.05$. For luminosity class III, Johnson (1966) only lists the infrared colours for spectral type G5 and later. For luminosity class V, Johnson (1966) gives $(J-K)_0$ colours of 0.28 and 0.32 for spectral types F5 and G0, respectively; a $(J-K)_0$ colour of 0.1 would correspond to an A star. The $(J-K)_0$ colour appropriate for an F5 or G0 star would require a $J-K$ of 0.8, similar to the value in our model. Hence there appears to be a deficit of $K$ flux in the system, the cause of which is not clear. We can easily check whether a similar discrepancy exists in the modelling of Greene et al. (2001). 
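The dereddening arithmetic above can be sketched as follows, using the Fitzpatrick (1999) coefficient already adopted in the text, $(J-K)_0 = (J-K) - 0.5\,E(B-V)$:

```python
def dereddened_j_minus_k(j_minus_k, e_b_v):
    """Intrinsic (J-K)_0 from the observed colour and the colour excess,
    using (J-K)_0 = (J-K) - 0.5 * E(B-V) (Fitzpatrick 1999)."""
    return j_minus_k - 0.5 * e_b_v

# Observed J-K of 0.6 with E(B-V) = 1.0 gives (J-K)_0 = 0.1.
jk0 = dereddened_j_minus_k(0.6, 1.0)
# Conversely, an F5-G0 dwarf with (J-K)_0 ~ 0.3 would need J-K ~ 0.8.
required_jk = 0.3 + 0.5 * 1.0
```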
Using the parameters found in their modelling, we calculated how much their light curves have been shifted in each passband and determined the [*implied*]{} $E(B-V)$ and distance in their model. Using the shifts in all five bands, we find an $E(B-V)$ of $1.041\,\pm\,0.022$ and a distance of $4.008\,\pm\,0.045\,$kpc. For this fit, however, the mean magnitudes in the different passbands are not in reasonable agreement with the data, mainly because of the inclusion of the $K$ band. If we repeat the analysis without using the $K$ band, we find an $E(B-V)$ of $1.076\,\pm\,0.003$ and a distance of $3.805\,\pm\,0.013\,$kpc. This provides good agreement with the mean $B$, $V$, $I$ and $J$ magnitudes but produces a mean $K$ magnitude of 13.06. Hence the discrepancy in the $K$ band is present in their modelling as well. We also note that the implied distance in their model is substantially larger than that found from the kinematics of the radio jet. The reason is that their secondary is too massive and hence too luminous, requiring the system to be more distant in order to have the correct visual magnitude. We ran our model without an accretion disc to see whether a self-consistent model could be obtained in this way when fitting the $B$, $V$, $R$ and $I$ data. We found a $\chi^2_{\nu,\rm{min}}$ of 3.120, substantially larger than that of the models with accretion discs. Table \[nodisc\] shows the best-fitting parameters for this case together with the best-fitting values of Greene et al. (2001), who assumed a fixed effective temperature and who did not consider the distance or colour excess in calculating their light curves (the values shown are those implied by their model, as described above). It is clear that the secondary in our model is similar to the secondary in their model, with a similar mass, temperature and luminosity. We find, however, a higher inclination. This is due to the higher temperature of the secondary, which is correlated with the inclination. 
We note, however, that the secondaries in the two models are similar, and that in eclipsing systems the depth of the eclipse places a strong constraint on the inclination. Hence, we can be satisfied that both the inclination and the other system parameters in the previous models are accurate. The lower mass ratio is clearly a consequence of neglecting the accretion disc. Given the large $\chi^2_{\nu}$ of the model without an accretion disc, we therefore conclude that an accretion disc is required for any self-consistent model of Nova Sco.

------------------------- ----------------- ------------------------
Parameter                 Self-consistent   Greene et al. (2001)
Mass ratio                2.4               $2.6 \pm 0.3$
Inclination               $77\fdg 7$        $70\fdg 2 \pm 1\fdg 9$
$E(B-V)$                  1.127             1.076
Distance (kpc)            3.712             3.805
$T_{\rm{pole}}$ (K)       6950              6768
Minimum $\chi^2_{\nu}$    3.118             1.612
$M_1$ ($\rm{M}_{\sun}$)   5.9               $6.3 \pm 0.5$
$M_2$ ($\rm{M}_{\sun}$)   2.4               $2.4 \pm 0.4$
$L_2$ ($\rm{L}_{\sun}$)   40.1              31.9–40.6
$T_{\rm{eff}}$ (K)        6500              6336 (Fixed)
------------------------- ----------------- ------------------------

: Best-fitting parameters for models without an accretion disc[]{data-label="nodisc"}

The variation in surface temperature and implications for the determination of the spectral type {#sectemp}
================================================================================================

As a result of gravity darkening, there is a large variation in temperature across the surface of the secondary, and hence an observer will see a variation of the secondary’s temperature as a function of orbital phase. To determine the magnitude of this effect, a flux-weighted average temperature and its standard deviation were calculated at each orbital phase. For the flux-weighting we used the $R$ band, which has a wavelength range similar to that of the spectra previously used for the determination of the spectral type (6350–6750Å). Fig. \[tmean\] shows the variation in average temperature and the standard deviation of the flux-weighted temperature distribution with phase for the best-fitting model parameters of Section \[lcmodel\]. 
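The flux-weighted statistics plotted in Fig. \[tmean\] are computed as below; the element temperatures and $R$-band fluxes here are illustrative placeholders rather than actual model output:

```python
import numpy as np

def flux_weighted_temperature(temperatures, fluxes):
    """Flux-weighted mean temperature and standard deviation over the
    visible surface elements at one orbital phase."""
    weights = fluxes / fluxes.sum()
    mean = np.sum(weights * temperatures)
    std = np.sqrt(np.sum(weights * (temperatures - mean) ** 2))
    return mean, std

# Hypothetical elements running from the hot pole down to the cool L1 region.
temperatures = np.array([6575.0, 6400.0, 6000.0, 5600.0, 4000.0])
r_band_fluxes = np.array([1.0, 0.9, 0.7, 0.5, 0.1])
mean_t, std_t = flux_weighted_temperature(temperatures, r_band_fluxes)
```

Because the faint, cool elements carry little weight, the mean sits well below the polar temperature while the spread remains several hundred kelvin.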
It demonstrates that the average ‘observed’ temperature of the secondary is substantially lower ($\sim\!450\,$K) than the temperature at the poles, where the gravity is largest. There is also a clear variation in average temperature with phase ($\sim\! 200\,$K). This variation would be large enough to change the observed spectral sub-type with phase, were it not for the large range of temperatures across the observable part of the secondary at each phase. The total range within one standard deviation is 5600–6500 K. This range corresponds to stars of spectral sub-type F5–G5 (Straižys & Kuriliene 1981). The effect this variation in temperature has on the observed line profiles is unknown and is worth further investigation. We conclude, however, that it is difficult to determine the spectral type to better than a few sub-types because of this large variation in surface temperature.

Discussion {#discuss}
==========

Comparison to previous studies
------------------------------

The accretion-disc model with a flat temperature profile provides the best-fitting model. This result is consistent with disc structures in CVs during quiescence (based on MEM), which appear to have temperature profiles that are much flatter than those expected for steady-state discs. VdH found that, for a steady-state disc, there were no reasonable solutions with rim temperatures greater than 1000 K. This is consistent with our results, as the models with a 2000 K disc do not provide good fits to the data unless the inner disc is removed. OB allowed the power-law exponent of the temperature profile, the rim temperature and the size of the disc to be free parameters. They obtained a best-fitting power-law exponent of $-0.12\,\pm\,0.01$, a rim temperature of $4317\,\pm\,75\,$K and an outer disc radius of $0.747\,r_{\rm{L}}$. 
With an exponent of $-0.1$, we fit the data with a disc rim temperature of 3500 K at $0.9\,r_{\rm{L}}$, which at a radius of $0.747\,r_{\rm{L}}$ corresponds to a temperature of 3580 K. Hence the disc in this model is about 750 K cooler than that found by OB. The accretion-disc model with a central hole contributes predominantly red light, as it has a low average temperature, similar to the disc with the flat temperature profile. Hence we can conclude that the disc in GRO J1655–40, when it is in quiescence, is predominantly red and cool. Although the flatter temperature profile is preferred in the modelling, the actual disc will almost certainly be more complicated. OB only considered one accretion-disc structure in their model, although their accretion-disc parameters were free to vary. Our analysis shows that the system parameters vary with the accretion-disc model. Hence, a range of accretion-disc models needs to be considered in order to avoid underestimating the systematic errors. Relying on the minimization of $\chi^2$ alone is insufficient, as the modelling is not accurate enough to enable a reliable determination of the precise accretion-disc structure. This is because, as mentioned earlier, the actual accretion disc will be more complicated than the simple model assumed in these analyses. Indeed, from our analysis it is clear that a large range of mass ratios, and hence masses, will fit the light-curve amplitudes. These amplitudes are strongly dependent on the accretion-disc model and on the proportion of flux it contributes in each band. GRO J1655–40 has a secondary of earlier spectral type, and hence higher luminosity, than most other SXTs. This means that accretion-disc contamination is less important than in other systems, although it still strongly affects the inferred system parameters. 
We therefore used the independent distance determination from the kinematics of the radio jet (Hjellming & Rupen 1995), obtained with the method previously used to model the jets in SS433 (Hjellming & Johnston 1988), to further constrain the allowed disc models. There are, however, a number of possible systematic errors in this measurement. The inclination of the jet axis of $85\degr \,\pm\, 2\degr$ is significantly different from the system inclination of $68\fdg 65 \,\pm\, 1\fdg 5$, implying a more complicated geometry than the simple model they use. This model also does not describe all the structure observed in the jet, and there are deviations from the simple linear expansion which they assume. In addition, there is motion over the length of an observation, which would result in smearing. On the other hand, they chose a beam-width larger than the proper motion during each six-hour VLBA observation to minimize this effect. These observations did not have absolute positional information, so the central source had to be used as the reference. This is undesirable as it makes the alignment of the different images in their analysis a free parameter. If this distance measurement were incorrect, this could somewhat alter our best-fitting parameters. Distance estimates by other authors are, however, consistent with their determination: ${\sim \! 3}\,$kpc (Bailyn et al. 1995a); ${\sim \! 3}\,$kpc (Greiner, Predehl & Pohl 1995); $3.5\,$kpc (McKay & Kesteven 1994) and $3\,$–$\,5\,$kpc (Tingay et al. 1995). Hence, using their distance determination of $3.2\,\pm\,0.2\,$kpc as an additional constraint is reasonable. A lower value for the colour excess than used previously is needed to provide good fits to the data in all passbands simultaneously. We found an $E(B-V)$ of $1.0\,\pm\,0.1$ (90 per cent confidence). This value is somewhat, but not dramatically, lower than various previous estimates (1.15, Bailyn et al. 1995a; 1.3, Horne et al. 1996; $1.2\,\pm\,0.1$, Hynes et al. 1998, 1$\sigma$ limits). 
This $E(B-V)$, along with the effective temperature, implies a spectral-type range for the secondary of F5–G0, consistent with previous estimates (S99). VdH used a value for the colour excess of 1.3 in their study and hence had to assume a significantly larger luminosity for the secondary (31–54L$_{\sun}$, as compared to the $21.0\,\pm\,6.0\,$L$_{\sun}$ found here). This resulted in significantly larger masses for both the black hole and the secondary. A lower luminosity, which alters GRO J1655–40’s position in the HR diagram, along with the lower masses implied by our model, has implications for the interpretation of the system’s evolutionary state (see Section \[evolsec\]). Mass estimates for the components have also been obtained by Buxton & Vennes (2001) and Wagoner, Silbergleit & Ortega-Rodríguez (2001). Buxton & Vennes (2001) fitted model spectra to the observed quiescent spectrum and measured the rotational broadening in order to calculate a mass ratio of 2.56–4.35 using the radial velocity amplitude of S99 (see Section \[vrot\]). Combined with the inclination of vdH, the masses are $7.91 \,\pm\, 3.79\,$M$_{\sun}$ and $2.76 \,\pm\, 1.79\,$M$_{\sun}$ for the primary and secondary, respectively. Wagoner et al. (2001) have modelled the quasi-periodic X-ray oscillations and find a primary mass of $5.9 \,\pm\, 1.0\,$M$_{\sun}$. Both of these measurements are consistent with our values.

Rotational broadening and the mass ratio {#vrot}
----------------------------------------

Our mass ratio of $3.9\,\pm\,0.6$ is larger than that found in the study of OB (2.60–3.45, 3$\sigma$ limits) but is in agreement with the upper range obtained by vdH (2.43–3.99, 3$\sigma$). A larger mass ratio implies a smaller and hence less luminous secondary, as required by our lower value for the colour excess. S99 found a mass ratio of 2.29–3.06 (95 per cent confidence) based on their estimate of the rotational broadening of the spectral lines. 
They assumed that the secondary was spherical and that all the observed broadening was a consequence of rotation. Fitting a broadening profile to template star spectra and minimizing the residuals, they found a rotational velocity, $v\sin i$, of 82.9–94.9kms$^{-1}$. Our mass ratio range corresponds to a $v\sin i$ of 68.3–79.1kms$^{-1}$, using their broadening to mass ratio relation. In their determination of the average spectrum, S99 did not correct for the orbital smearing of the spectra due to the length of the exposure time of the observations. Most of their spectra were taken near the quadrature phases when orbital smearing would be at its greatest. This results in the apparent broadening of the lines by 7kms$^{-1}$ (at the orbital phase when the spectra were taken). Since this broadening should be added linearly to the rotational broadening (rather than in quadrature), it introduces an additional, but spurious rotational broadening of 3.5kms$^{-1}$ (about half of the change of the radial velocity during the observations). S99 took the radius of a spherical star in their model to be the effective Roche-lobe radius. In reality, a Roche-lobe filling object can be described by an ellipsoid to first approximation. This implies that the light contributing to a spectral line will come from a larger range of radii (and hence velocities) than for a spherical star of the same volume and that the assumption of sphericity will lead to an overestimate of the rotational velocity. The dependence of $v\sin i$ on the mass ratio for a realistic model containing a Roche-lobe filling star is consequently non-trivial and requires detailed modelling. Orosz & Hauschildt (2000) have investigated the difference in rotational broadening kernels between Roche-lobe filling models and analytical models. They found that, for a Roche-lobe filling star, the broadening kernel changes with phase and is significantly different near the quadrature phases where it is wider and asymmetric. 
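The scale of this geometric effect can be seen from the standard spherical approximation. The sketch below is our own illustration, not the relation actually used by S99: it combines Eggleton's approximation for the effective Roche-lobe radius with $v\sin i = K_2(1+q)(R_{\rm L}/a)$ for a synchronously rotating, Roche-lobe-sized spherical secondary (here $q=M_2/M_1$); the value of $K_2$ is an assumed input, not quoted in this section.

```python
from math import log

def eggleton_roche_radius(q):
    """Eggleton's approximation to the volume-equivalent Roche-lobe
    radius R_L/a, for mass ratio q = M_donor / M_accretor."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + log(1.0 + q ** (1.0 / 3.0)))

def vsini_spherical(k2, q_primary_over_secondary):
    """v sin i of a synchronously rotating, spherical star with the
    Roche-lobe radius: v sin i = K_2 (1 + q) (R_L / a), q = M_2 / M_1.
    The argument uses the paper's convention, q = M_1 / M_2."""
    q = 1.0 / q_primary_over_secondary
    return k2 * (1.0 + q) * eggleton_roche_radius(q)

# Illustrative only: K_2 ~ 215 km/s is an assumed value, not taken from this text.
v = vsini_spherical(215.0, 3.9)
```

With a mass ratio of 3.9 and an assumed $K_2\approx 215\,$kms$^{-1}$, this gives $v\sin i \approx 73\,$kms$^{-1}$, inside the quoted 68.3–79.1kms$^{-1}$ range.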
The calculation using the analytical kernel then provides an upper limit to $v\sin i$. Since an upper limit on the rotational broadening provides a lower limit to the mass ratio, we conclude that our estimates are likely to be consistent with the findings of S99, once all of these effects are properly taken into account. Greene et al. (2001) calculated a numerical broadening kernel for photometric phase zero, where the Israelian et al. (1999) measurements were taken. They found almost no systematic biases in comparison to the analytical kernel, justifying the use of $v\sin i$ of $93\,\pm\,3\,$kms$^{-1}$ in their model fitting. Buxton & Vennes (2001) have calculated $v\sin i$ by fitting model spectra to the observed spectrum taken during quiescence and found a $v\sin i$ of $80 \,\pm\, 10$kms$^{-1}$ as well as $T_{\rm{eff}} = 6500\,\pm\,50\,$K and \[Fe/H\]=$-$0.25–0.00. This is consistent with our calculation as well as with previous measurements. Their Fe abundance agrees with that of Israelian et al. (1999) who found \[Fe/H\]=0.1$\pm$0.2 and larger overabundances in $\alpha-$process elements. Recently, Bleach et al. (2000) have demonstrated the unreliability of using the rotational broadening $v\sin i$ for the calculation of system parameters in the CV EG UMa. They fitted broadened spectra to individual lines and found a large range of values of $v\sin i$ of 19–59kms$^{-1}$, with typical error on each value of $\sim 10\,$kms$^{-1}$. As the velocity semi-amplitude of both components in this system is measurable, they predict a $v\sin i$ of 28.6kms$^{-1}$. The reason for this large variation is unknown, but is not due to irradiation and may be a result of anomalous composition (for further discussion see Wood et al. 2001). 
The evolutionary state of GRO J1655–40 {#evolsec}
--------------------------------------

Modelling the binary evolution of GRO J1655–40 with a present secondary mass of 2.35M$_{\sun}$ (OB) leads to an average (secular) mass-transfer rate that tends to be much larger than is consistent with its X-ray transient behaviour (van Paradijs 1996; King, Kolb & Szuskiewicz 1997). Kolb et al. (1997) and Kolb (1998) showed that this problem could be solved if the secondary were in a special position in the Hertzsprung gap; this, however, required a somewhat cooler secondary than is consistent with its spectral type. Regős, Tout & Wickramasinghe (1998) suggested as an alternative solution that the secondary is still on the main sequence. This also requires some fine-tuning and significant convective overshooting (but is consistent with recent findings; Pols et al. 1997). The lower mass found in this re-analysis largely removes this problem, irrespective of whether the secondary is still on the main sequence or in the Hertzsprung gap. To illustrate this we present a case A binary calculation (similar to the model of Regős et al. 1998) in Fig. \[evol\] (for a detailed description of the binary evolution code see Podsiadlowski, Rappaport & Pfahl 2001a). The dashed curve in the panel showing the mass-transfer rate gives the critical mass-transfer rate below which transient behaviour is expected (from King et al. 1997). The mass-transfer rate is well below the critical rate until the end of the main sequence. We have also performed some early case B binary calculations (i.e. where the secondary has just evolved off the main sequence) and also find that, with the lower mass of the secondary, the present mass-transfer rate is below the critical rate for transient behaviour, although case B models generally do not fit as well as late case A models. In our case A model, the initial masses, i.e.
immediately after the supernova in which the black hole was formed, were 4.1 and 2.5M$_{\sun}$ for the black hole and secondary, respectively. These masses are very similar to the post-supernova masses required to explain the pollution of the secondary with $\alpha$-process material (Israelian et al. 1999) that was ejected in the supernova (Podsiadlowski et al. 2001b)[^2]. From all of this, a complete picture starts to emerge for the evolutionary history of Nova Sco. The pollution of the secondary with supernova material proves that the black hole formed in a supernova (or a hypernova). Some material that was produced in the supernova had to be captured by the secondary; this requires both fallback of material and mixing in the supernova ejecta (Podsiadlowski et al. 2001b). In a typical model, the orbital period after the binary has re-circularized after the supernova is about 20hr (see the mixing models in table 3 of Podsiadlowski et al. 2001b), very close to the period needed for a late case A scenario as shown in Fig. \[evol\]. Since then the secondary has transferred some 1M$_{\sun}$, increasing the orbital period to its present value. Self-consistent models also require that the black hole received a significant kick at birth in order to produce the high observed system space velocity, consistent with the fallback suggestion by Brandt, Podsiadlowski & Sigurdsson (1995).

Conclusions
===========

We have obtained self-consistent fits to the ellipsoidal light curves of GRO J1655–40 for the $B$, $V$, $R$ and $I$ passbands simultaneously without making [*a priori*]{} assumptions about the distance and the colour excess. Using the distance estimate of $3.2\,\pm\,0.2\,$kpc, based on the kinematics of the radio jet, as an additional constraint, we find that our $E(B-V)$ of $1.0\,\pm\,0.1$ along with our effective temperature of the secondary of $6150\,\pm\,350\,$K corresponds to a spectral type range for the secondary of F5–G0.
This is consistent with the spectral type found by S99. Our mass ratio of $3.9\,\pm\,0.6$, larger than found previously, along with our inclination of $68\fdg 65 \,\pm\, 1\fdg 5$ implies lower masses of $5.4\,\pm\,0.3\,$M$_{\sun}$ and $1.45\,\pm\,0.35\,$M$_{\sun}$ for the black hole and the companion, respectively. The mass ratio along with the $E(B-V)$ also implies a lower luminosity of the secondary of $21.0\,\pm\,6.0\,$L$_{\sun}$. These results are, however, rather sensitive to the assumptions about the disc structure (even though in $V$ the disc only contributes some 5 per cent of the light). Our best-fitting models have a disc temperature profile that is much flatter than that of a steady-state disc. The revised, lower masses also help to explain the transient nature of the system.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Jerry Orosz for kindly providing the light-curve data which has made this analysis possible. MEB thanks Tariq Shahbaz and Janet Wood for useful discussions.

Avni Y., 1976, ApJ, 210, 642
Bailyn C. D. et al., 1995a, Nat, 374, 701
Bailyn C. D., Orosz J. A., McClintock J. E., Remillard R. A., 1995b, Nat, 378, 157
Bessell M. S., 1990, PASP, 102, 1181
Bleach J. N., Wood J. H., Catalán M. S., Welsh W. F., Robinson E. L., Skidmore W., 2000, MNRAS, 312, 70
Brandt W. N., Podsiadlowski Ph., Sigurdsson S., 1995, MNRAS, 277, L35
Buxton M., Vennes S., 2001, PASA, 18 (1), 91
Fitzgerald M. P., 1970, A&A, 4, 234
Fitzpatrick E. L., 1999, PASP, 111, 63
Greene J., Bailyn C. D., Orosz J. A., 2001, ApJ, 554, 1290
Greiner J., Predehl P., Pohl M., 1995, A&A, 297, L67
Hameury J. M., Lasota J. P., McClintock J. E., Narayan R., 1997, ApJ, 489, 234
Hjellming R. M., Johnston K. J., 1988, ApJ, 328, 600
Hjellming R. M., Rupen M. P., 1995, Nat, 375, 464
Horne K., 1993, in Wheeler J. C., ed., Accretion disks in compact stellar systems. World Scientific Publishing Company, Singapore, p. 117
Horne K. et al., 1996, IAU Circ. 6406
Hynes R. I.
et al., 1998, MNRAS, 300, 64
Israelian G. et al., 1999, Nat, 401, 142
Johnson H. L., 1966, ARA&A, 4, 193
King A. R., Kolb U., Szuskiewicz E., 1997, ApJ, 488, 89
Kolb U., 1998, MNRAS, 297, 419
Kolb U., King A. R., Ritter H., Frank J., 1997, ApJ, 485, L33
Kurucz R. L., 1992, Rev. Mex. Astron. Astrofis., 23, 181
Lucy L. B., 1967, Z. Astrophysik, 65, 89
McClintock J. E., Remillard R. A., 1986, ApJ, 308, 110
McKay D., Kesteven M., 1994, IAU Circ. 6062
Meyer F., Meyer-Hofmeister E., 1994, A&A, 288, 175
Meyer-Hofmeister E., Meyer F., 2000, A&A, 355, 1073
Narayan R., McClintock J. E., Yi I., 1996, ApJ, 451, 821
Oke J. B., 1977, ApJ, 217, 181
Orosz J. A., Bailyn C. D., 1997, ApJ, 477, 876 (OB)
Orosz J. A., Hauschildt P. H., 2000, A&A, 364, 265
Phillips S. N., Shahbaz T., Podsiadlowski Ph., 1999, MNRAS, 304, 839
Podsiadlowski Ph., Rappaport S., Pfahl E., 2001a, ApJ, submitted
Podsiadlowski Ph., Nomoto K., Maeda K., Nakamura T., Mazzali P., Schmidt B., 2001b, ApJ, submitted
Pols O. R., Tout C. A., Schröder K., Eggleton P. P., Manners J., 1997, MNRAS, 289, 869
Regős E., Tout C. A., Wickramasinghe D., 1998, ApJ, 509, 362
Shahbaz T., van der Hooft F., Casares J., Charles P. A., van Paradijs J., 1999, MNRAS, 306, 89 (S99)
Shahbaz T., Bandyopadhyay R. M., Charles P. A., 1999, A&A, 346, 82
Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337
Straižys V., Kuriliene G., 1981, Ap&SS, 80, 353
Tingay S. J. et al., 1995, Nat, 374, 101
Tüg H., White N. M., Lockwood G. W., 1977, A&A, 61, 679
van der Hooft F., Heemskerk M. H. M., Alberts F., van Paradijs J., 1998, A&A, 329, 538 (vdH)
van Paradijs J., 1996, ApJ, 464, L139
von Zeipel H., 1924, MNRAS, 84, 665
Wade R. A., Rucinski S. M., 1985, A&AS, 60, 471
Wagoner R. V., Silbergleit A. S., Ortega-Rodríguez M., 2001, ApJL, submitted (astro-ph/0107168)
Wood J. H., Horne K., Berriman G., Wade R., O’Donoghue D., Warner B., 1986, MNRAS, 219, 629
Wood J. H., Horne K., Berriman G., Wade R., 1989, ApJ, 341, 974
Wood J.
H., Bleach J. N., Catalán M. S., Welsh W. F., Robinson E. L., 2001, in Podsiadlowski Ph., Rappaport S., King A. R., D’Antona F., Burderi L., eds, ASP Conf. Ser. Vol. 229, Evolution of Binary and Multiple Star Systems. Astron. Soc. Pac., San Francisco, p. 267

\[lastpage\]

[^1]: E-mail: beer@astro.ox.ac.uk

[^2]: We note that, in order to explain its anomalous composition, the secondary had to capture some $0.25\,$M$_{\sun}$ of material mainly composed of heavy elements, which was then mixed completely with the secondary (Podsiadlowski et al. 2001b). This must have dramatically changed the composition (in particular the metallicity) of the secondary, an effect that was not included in our binary calculations.
---
abstract: 'In the theory of belief functions, many measures of uncertainty have been introduced. However, it is not always easy to understand what these measures really try to represent. In this paper, we re-interpret some measures of uncertainty in the theory of belief functions. We present some advantages and drawbacks of the existing measures. Based on these observations, we introduce a measure of contradiction and present some degrees of non-specificity and Bayesianity of a mass. We propose a degree of specificity based on the distance between a mass and its most specific associated mass. We also show how to use the degree of specificity to measure the specificity of a fusion rule. Illustrations on simple examples are given.'
author:
-
-
-
title: Contradiction measures and specificity degrees of basic belief assignments
---

[**Keywords: Belief function, uncertainty measures, specificity, conflict.**]{}

Introduction {#sec:Introduction}
============

The theory of belief functions was first introduced by [@Dempster67] in order to represent some imprecise probabilities with [*upper*]{} and [*lower probabilities*]{}. Then [@Shafer76] proposed a mathematical theory of evidence.\
Let $\Theta$ be a frame of discernment. A *basic belief assignment* (bba) $m$ is a mapping from elements of the powerset $2^\Theta$ onto $[0,1]$ such that: $$\sum_{X\in 2^\Theta} m(X)=1. \label{normDST}$$ The axiom $m(\emptyset)=0$ is often used, but not mandatory. A [*focal element*]{} $X$ is an element of $2^\Theta$ such that $m(X)\neq 0$. A bba differs from a probability distribution in its domain of definition: a bba is defined on the powerset $2^\Theta$ and not only on $\Theta$. In the powerset, the elements are not all equivalent in terms of precision. Indeed, $\theta_1 \in \Theta$ is more precise than $\theta_1 \cup \theta_2 \in 2^\Theta$.
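As a concrete, purely illustrative encoding (not taken from the paper; the names `Theta`, `m`, `is_bba` are ours), a bba can be represented as a dictionary mapping focal elements, stored as frozensets, to their masses:

```python
# Minimal sketch: a bba as a dict from subsets of the frame Theta
# (frozensets) to masses. Names are illustrative, not from the paper.

Theta = frozenset({"t1", "t2", "t3"})

# Example bba on 2^Theta; its focal elements are the keys with mass > 0.
m = {
    frozenset({"t1"}): 0.4,
    frozenset({"t1", "t2"}): 0.3,
    Theta: 0.3,
}

def is_bba(mass, tol=1e-9):
    """Check the normalization condition sum_X m(X) = 1, with masses >= 0."""
    return abs(sum(mass.values()) - 1.0) < tol and all(v >= 0.0 for v in mass.values())

def focal_elements(mass):
    """The focal elements: subsets X with m(X) != 0."""
    return [X for X, v in mass.items() if v > 0.0]
```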
In the case of the DSmT introduced in [@Smarandache04], the bba's are defined on an extension of the powerset: the hyper-powerset, noted $D^\Theta$, formed by the closure of $\Theta$ under union and intersection. The problem of the signification of each focal element is the same as in $2^\Theta$. For instance, $\theta_1 \in \Theta$ is less precise than $\theta_1 \cap \theta_2 \in D^\Theta$. In the rest of the paper, we will write $G^\Theta$ for either $2^\Theta$ or $D^\Theta$. In order to quantify uncertainty, as done in set theory [@Hartley28] or in probability theory [@Shannon48], some measures have been proposed and discussed in the theory of belief functions [@Yager83a; @Dubois85; @Klir94; @George96]. However, the domain of definition of the bba does not allow an ideal definition of a measure of uncertainty. Moreover, different notions are hidden behind the term uncertainty. In Section \[measureUncertainty\], we present different kinds of uncertainty measures from the state of the art, discuss them, and give our definitions of some terms concerning uncertainty. In Section \[contradiction\], we introduce a measure of contradiction and discuss it. We introduce simple degrees of uncertainty in Section \[degreesUncer\], and propose a degree of specificity in Section \[degreeSpecificity\]. We show how this degree of specificity can be used to measure the specificity of a combination rule.

Measures of uncertainty on belief functions {#measureUncertainty}
===========================================

In the framework of belief functions, several functions (we call them [*belief functions*]{}) are in one-to-one correspondence with the bba: ${\mathrm{bel}}$, ${\mathrm{pl}}$ and ${\mathrm{q}}$. From these belief functions, we can define several measures of uncertainty. Klir in [@Klir94] distinguishes two kinds of uncertainty: the non-specificity and the discord.
Hence, we recall hereafter the main belief functions, and some non-specificity and discord measures.

Belief functions
----------------

The credibility and plausibility functions represent a minimal and a maximal belief, respectively. The [*credibility*]{} function is given from a bba for all $X \in G^\Theta$ by: $$\begin{aligned} {\mathrm{bel}}(X)=\sum_{Y \subseteq X, Y \not\equiv \emptyset} m(Y).\end{aligned}$$ The [*plausibility*]{} is given from a bba for all $X \in G^\Theta$ by: $$\begin{aligned} {\mathrm{pl}}(X)=\displaystyle \sum_{Y \in G^\Theta, Y\cap X \not\equiv \emptyset} m(Y).\end{aligned}$$ The [*commonality*]{} function is another belief function, given by: $$\begin{aligned} {\mathrm{q}}(X)=\displaystyle \sum_{Y \in G^\Theta, Y\supseteq X} m(Y).\end{aligned}$$ These functions allow an implicit modelling of imprecise and uncertain data. However, these functions are monotonic under inclusion: ${\mathrm{bel}}$ and ${\mathrm{pl}}$ are increasing, and ${\mathrm{q}}$ is decreasing. This is why, most of the time, a probability is used to make a decision. The most used projection onto the probability space is the pignistic probability transformation, introduced by [@Smets90] and given by: $$\begin{aligned} \label{betp} {\mathrm{betP}}(X)=\sum_{Y \in G^\Theta, Y \not\equiv \emptyset} \frac{|X \cap Y|}{|Y|} m(Y),\end{aligned}$$ where $|X|$ is the cardinality of $X$; in the case of the DSmT, that is the number of corresponding disjoint elements in the Venn diagram. From this probability, we can use the uncertainty measures given in probability theory, such as the Shannon entropy [@Shannon48], but we then lose the benefit of belief functions and the information given on the subsets of the discernment space $\Theta$.

Non-specificity
---------------

The non-specificity in the classical set theory is the imprecision of the sets.
As in [@Ristic06], we define the non-specificity in the theory of belief functions as the counterpart of vagueness and imprecision.

**Definition** *The [*non-specificity*]{} in the theory of belief functions quantifies how imprecise a bba $m$ is.*

The non-specificity of a subset $X$ is defined by Hartley [@Hartley28] as $\log_2(|X|)$. This measure was generalized by [@Dubois85] to the theory of belief functions by: $$\begin{aligned} \label{nonspeDubois} {\mathrm{NS}}(m)=\sum_{X \in G^\Theta,~X\not\equiv\emptyset} m(X)\log_2(|X|).\end{aligned}$$ That is a weighted sum of the non-specificities, where the weights are given by the basic belief in $X$. Ramer in [@Ramer87] has shown that it is the unique possible measure of non-specificity in the theory of belief functions under some assumptions such as symmetry, additivity, sub-additivity, continuity, branching and normalization. If the measure of the non-specificity of a bba is low, we can consider the bba to be specific. Yager in [@Yager83a] defined a specificity measure as: $$\label{speYager} S(m)=\sum_{X\in G^\Theta,~X\not\equiv\emptyset} \frac{m(X)}{|X|}.$$ Both definitions correspond to an accumulation of a function of the basic belief assignment over the focal elements. Unlike in the classical set theory, we must take into account the bba in order to quantify (to weight) the belief of the imprecise focal elements. The imprecision of a focal element can of course be given by the cardinality of the element. First of all, we must be able to compare the non-specificity (or specificity) between several bba's, even if these bba's are not defined on the same discernment space. That is not the case with equations (\[nonspeDubois\]) and (\[speYager\]). The non-specificity of equation (\[nonspeDubois\]) takes its values in $[0,\log_2(|\Theta|)]$. The specificity of equation (\[speYager\]) can take values in $[\frac{1}{|\Theta|},1]$. We will show how we can easily define a degree of non-specificity in $[0,1]$.
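As a small numerical illustration (our own code, not from the paper; frame and bba names are ours), both measures are weighted sums over the focal elements:

```python
from math import log2

def non_specificity(m):
    """NS(m) = sum_X m(X) log2|X|  (Dubois-Prade generalization of Hartley)."""
    return sum(v * log2(len(X)) for X, v in m.items() if X)

def yager_specificity(m):
    """S(m) = sum_X m(X) / |X|  (Yager's specificity measure)."""
    return sum(v / len(X) for X, v in m.items() if X)

theta = frozenset({"t1", "t2", "t3"})
m_ignorance = {theta: 1.0}             # total ignorance: most non-specific bba
m_bayesian = {frozenset({"t1"}): 1.0}  # categorical singleton: NS = 0, S = 1
```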
We could also define a degree of specificity from equation (\[speYager\]), but that is more complicated, and we will later show how a specificity degree can be defined. The most non-specific bba for both equations (\[nonspeDubois\]) and (\[speYager\]) is the total ignorance bba, given by the categorical bba $m_\Theta:m(\Theta)=1$. We have ${\mathrm{NS}}(m)=\log_2(|\Theta|)$ and $S(m)=\frac{1}{|\Theta|}$. This categorical bba is clearly the most non-specific for us. However, the most specific bba's are the Bayesian bba's. The only focal elements of a Bayesian bba are the singletons of $\Theta$. On these kinds of bba $m$ we have ${\mathrm{NS}}(m)=0$ and $S(m)=1$. For example, we take the three Bayesian bba's defined on $\Theta=\{\theta_1,\theta_2,\theta_3\}$ by: $$\begin{aligned} m_1(\theta_1)= m_1(\theta_2)= m_1(\theta_3)= 1/3,\\ m_2(\theta_1)= m_2(\theta_2)= 1/2, \, m_2(\theta_3)= 0,\\ m_3(\theta_1)=1, \, m_3(\theta_2)=m_3(\theta_3)= 0.\end{aligned}$$ We obtain the same non-specificity and specificity for these three bba's. That hurts our intuition; indeed, we intuitively expect that the bba $m_3$ is the most specific and $m_1$ the least specific. We will define a degree of specificity according to a most specific bba that we will introduce.

Discord
-------

Different kinds of discord have been defined as extensions to belief functions of the Shannon entropy given for probabilities. Some discord measures have been proposed from the plausibility, credibility or pignistic probability: $$\begin{aligned} E(m)=-\sum_{X \in G^\Theta} m(X)\log_2({\mathrm{pl}}(X)),\end{aligned}$$ $$\begin{aligned} C(m)=-\sum_{X \in G^\Theta} m(X)\log_2({\mathrm{bel}}(X)),\end{aligned}$$ $$\begin{aligned} D(m)=-\sum_{X \in G^\Theta} m(X)\log_2({\mathrm{betP}}(X)),\end{aligned}$$ with $E(m)\leq D(m) \leq C(m)$.
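These definitions translate directly into code. The sketch below is our own illustration (toy bba, our naming): it implements ${\mathrm{bel}}$, ${\mathrm{pl}}$ and ${\mathrm{betP}}$ and checks the ordering $E(m)\leq D(m)\leq C(m)$, which follows from ${\mathrm{bel}}(X)\leq{\mathrm{betP}}(X)\leq{\mathrm{pl}}(X)$ and the monotonicity of $-\log_2$:

```python
from math import log2

# Toy bba on Theta = {t1, t2}; its focal elements have bel > 0, so C(m) is defined.
m = {frozenset({"t1"}): 0.5, frozenset({"t1", "t2"}): 0.5}

def bel(m, X):
    # credibility: masses of the non-empty subsets of X
    return sum(v for Y, v in m.items() if Y and Y <= X)

def pl(m, X):
    # plausibility: masses of the focal elements intersecting X
    return sum(v for Y, v in m.items() if Y & X)

def betP(m, X):
    # pignistic probability transform
    return sum(v * len(X & Y) / len(Y) for Y, v in m.items() if Y)

def discord(m, f):
    # generic discord measure: -sum_X m(X) log2 f(X), with f in {pl, betP, bel}
    return -sum(v * log2(f(m, X)) for X, v in m.items())

E = discord(m, pl)    # E(m), from the plausibility
D = discord(m, betP)  # D(m), from the pignistic probability
C = discord(m, bel)   # C(m), from the credibility
```

For this bba, $E=0$, $D\simeq 0.21$ and $C=0.5$.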
We can also give the Shannon entropy of the pignistic probability: $$\begin{aligned} -\sum_{X \in G^\Theta} {\mathrm{betP}}(X)\log_2({\mathrm{betP}}(X)).\end{aligned}$$ Other measures have been proposed; [@Klir94] has shown that these measures can be summarized as: $$\begin{aligned} -\sum_{X \in G^\Theta} m(X)\log_2(1-{\mathrm{Con}}_m(X)),\end{aligned}$$ where ${\mathrm{Con}}$ is called a conflict measure of the bba $m$ on $X$. However, from our point of view, this is not a conflict as presented in [@Wierman01], but a contradiction. We give the two following definitions:

**Definition** *A [*contradiction*]{} in the theory of belief functions quantifies how much a bba $m$ contradicts itself.*

**Definition** (C1) *The [*conflict*]{} in the theory of belief functions can be defined by the contradiction between 2 or more bba's.*

In order to measure the conflict in the theory of belief functions, it has been usual to use the mass $k$ assigned to the empty set by the conjunctive combination rule. This rule is given, for two basic belief assignments $m_1$ and $m_2$ and for all $X \in G^\Theta$, by: $$\begin{aligned} \label{conjunctive} m_{\mathrm{c}}(X)=\displaystyle \sum_{A\cap B =X} m_1(A)m_2(B):=(m_1\oplus m_2 )(X).\end{aligned}$$ $k=m_{\mathrm{c}}(\emptyset)$ can also be interpreted as a non-expected solution. In [@Yager83a], Yager proposed another conflict measure from the value of $k$, given by $-\log_2(1-k)$. However, as observed in [@Liu06], the weight of conflict given by $k$ (and all functions of $k$) is not a conflict measure between the basic belief assignments. Indeed, this value is completely dependent on the conjunctive rule, and this rule is non-idempotent: the combination of identical basic belief assignments generally leads to a positive value of $k$. To highlight this behavior, we defined in [@Osswald06] the [*auto-conflict*]{}, which quantifies the intrinsic conflict of a bba.
The auto-conflict of order $n$ for one expert is given by: $$\begin{aligned} \label{autoconf} a_n=\displaystyle \left(\mathop{\oplus}_{i=1}^n m\right)(\emptyset).\end{aligned}$$ The auto-conflict is a kind of measure of contradiction, but it depends on the order $n$. We studied its behavior in [@Martin08]. Therefore, we need to define a measure of contradiction independent of the order. This measure is presented in the next section, Section \[contradiction\].

A contradiction measure {#contradiction}
=======================

The definition of the conflict (C1) implies, first, that it should be measured on the space of bba's and, second, that if the opinions of two experts are far from each other, we consider them to be in conflict. That suggests a notion of distance. That is the reason why, in [@Martin08], we gave a definition of the measure of conflict between experts' assertions through a distance between their respective bba's. The conflict measure between $2$ experts is defined by: $$\begin{aligned} \label{2conflict_measure} {\mathrm{Conf}}(1,2)=d(m_1,m_2).\end{aligned}$$ We defined the conflict measure between one expert $i$ and the other $M-1$ experts by: $$\begin{aligned} \label{conflict_measure1} {\mathrm{Conf}}(i,\mathcal{E})=\frac{1}{M-1}\sum_{j=1, i\neq j}^M {\mathrm{Conf}}(i,j),\end{aligned}$$ where $\mathcal{E}=\{1,\ldots, M\}$ is the set of experts in conflict with $i$. Another definition is given by: $$\begin{aligned} \label{conflict_measure2} {\mathrm{Conf}}(i,M)=d(m_i,\overline{m_M}),\end{aligned}$$ where $\overline{m_M}$ is the bba of the artificial expert representing the combined opinions of all the experts in $\mathcal{E}$ except $i$. We use the distance defined in [@Jousselme01], which is for us the most appropriate, but other distances are possible. See [@Florea09] for a comparison of distances in the theory of belief functions.
This distance is defined for two basic belief assignments $\textbf{m}_1$ and $\textbf{m}_2$ on $G^\Theta$ by: $$\begin{aligned} \label{distance} d(m_1,m_2)=\sqrt{\frac{1}{2} (\textbf{m}_1-\textbf{m}_2)^T\underline{\underline{D}}(\textbf{m}_1-\textbf{m}_2)},\end{aligned}$$ where $\underline{\underline{D}}$ is a $|G^\Theta|\times |G^\Theta|$ matrix, based on the Jaccard index, whose elements are: $$\begin{aligned} \label{DMatrix} D(A,B)=\left\{ \begin{array}{l} 1, \, \mbox{if} \, A= B=\emptyset,\\ \\ \displaystyle \frac{|A\cap B|}{|A\cup B|}, \, \forall A, B \in G^\Theta.\\ \end{array} \right.\end{aligned}$$ However, this measure is a *total conflict* measure. In order to define a contradiction measure, we keep the same spirit. First, the contradiction of an element $X$ with respect to a bba $m$ is defined as the distance between the bba $m$ and the categorical bba $m_X$, where $m_X(X)=1$, $X \in G^\Theta$: $$\label{Eq_cont_element} {\mathrm{Contr}}_m(X)=d(m,m_X),$$ where the distance can again be the Jousselme distance on the bba's. The contradiction of a bba is then defined as a weighted contradiction of all the elements $X$ of the considered space $G^\Theta$: $$\label{Eq_contra_bba} {\mathrm{Contr}}_m=c \sum_{X\in G^\Theta}m(X) d(m,m_X),$$ where $c$ is a normalization constant, which depends on the type of distance used and on the cardinality of the frame of discernment, chosen so as to obtain values in $[0,1]$, as shown in the following illustration.

Illustration
------------

Here the value of $c$ in equation (\[Eq_contra_bba\]) is equal to 2. First we note that, for all categorical bba's $m_Y$, the contradiction given by equation (\[Eq_cont_element\]) gives ${\mathrm{Contr}}_{m_Y}(Y)=0$, and the contradiction given by equation (\[Eq_contra_bba\]) also gives ${\mathrm{Contr}}_{m_Y}=0$. Considering the bba $m_1(\theta_1)=0.5$ and $m_1(\theta_2)=0.5$, we have ${\mathrm{Contr}}_{m_1}=1$. That is the maximum of the contradiction; hence the contradiction of a bba takes its values in $[0,1]$.
[Figure: the bba's $m_1$ ($m_1(\theta_1)=0.5$, $m_1(\theta_2)=0.5$, $m_1(\theta_3)=0$) and $m_2$ ($m_2(\theta_1)=0.6$, $m_2(\theta_2)=0.3$, $m_2(\theta_3)=0.1$), defined on the singletons of $\Theta$.]

Taking the Bayesian bba given by $m_2(\theta_1)=0.6$, $m_2(\theta_2)=0.3$ and $m_2(\theta_3)=0.1$, we obtain: $$\begin{aligned} {\mathrm{Contr}}_{m_2}(\theta_1)& \simeq& 0.36, \\ {\mathrm{Contr}}_{m_2}(\theta_2) & \simeq &0.66, \\ {\mathrm{Contr}}_{m_2}(\theta_3) &\simeq& 0.79.\end{aligned}$$ The contradiction for $m_2$ is ${\mathrm{Contr}}_{m_2}=0.9849$.

[Figure: the bba $m_3$, with $m_3(\theta_2)=0.3$, $m_3(\theta_3)=0.1$ and the mass 0.6 on the whole frame $\Theta$.]

Take $m_3(\{\theta_1,\theta_2,\theta_3\})=0.6$, $m_3(\theta_2)=0.3$, and $m_3(\theta_3)=0.1$; the masses are the same as for $m_2$, but the highest one is on $\Theta=\{\theta_1,\theta_2,\theta_3\}$ instead of $\theta_1$. We obtain: $$\begin{aligned} {\mathrm{Contr}}_{m_3}(\{\theta_1,\theta_2,\theta_3\}) & \simeq & 0.28, \\ {\mathrm{Contr}}_{m_3}(\theta_2)& \simeq & 0.56, \\ {\mathrm{Contr}}_{m_3}(\theta_3)&\simeq& 0.71.\end{aligned}$$ The contradiction for $m_3$ is ${\mathrm{Contr}}_{m_3}=0.8092$. We can see that the contradiction is lower, thanks to the distance taking into account the imprecision of $\Theta$.
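These values can be reproduced numerically. The sketch below is our own code (naming is ours), restricting $G^\Theta$ to the powerset $2^\Theta$ and taking $c=2$; it recovers ${\mathrm{Contr}}_{m_1}=1$ and ${\mathrm{Contr}}_{m_2}\simeq 0.985$:

```python
from math import sqrt
from itertools import combinations

def powerset(frame):
    s = sorted(frame)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def jaccard(A, B):
    # elements of the matrix D: 1 for the empty/empty entry, |A&B|/|A|B| otherwise
    if not A and not B:
        return 1.0
    if not A or not B:
        return 0.0
    return len(A & B) / len(A | B)

def jousselme(m1, m2, frame):
    """Jousselme distance d(m1, m2) = sqrt(0.5 (m1-m2)^T D (m1-m2)) over 2^frame."""
    subs = powerset(frame)
    diff = [m1.get(X, 0.0) - m2.get(X, 0.0) for X in subs]
    quad = sum(diff[i] * diff[j] * jaccard(subs[i], subs[j])
               for i in range(len(subs)) for j in range(len(subs)))
    return sqrt(0.5 * quad)

def contradiction(m, frame, c=2.0):
    """Contr_m = c * sum_X m(X) d(m, m_X), with m_X the categorical bba on X."""
    return c * sum(v * jousselme(m, {X: 1.0}, frame) for X, v in m.items())

frame = {"t1", "t2", "t3"}
m1 = {frozenset({"t1"}): 0.5, frozenset({"t2"}): 0.5}
m2 = {frozenset({"t1"}): 0.6, frozenset({"t2"}): 0.3, frozenset({"t3"}): 0.1}
```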
[Figure: the bba $m_4$, with masses 0.6, 0.3 and 0.1 on the pairs $\{\theta_1,\theta_2\}$, $\{\theta_1,\theta_3\}$ and $\{\theta_2,\theta_3\}$, respectively.]

If we now consider the same mass values but on focal elements of cardinality 2, $m_4(\{\theta_1,\theta_2\})=0.6$, $m_4(\{\theta_1,\theta_3\})=0.3$, and $m_4(\{\theta_2,\theta_3\})=0.1$, we obtain:\
$$\begin{aligned} {\mathrm{Contr}}_{m_4}(\{\theta_1,\theta_2\}) & \simeq & 0.29, \\ {\mathrm{Contr}}_{m_4}(\{\theta_1,\theta_3\}) & \simeq & 0.53, \\ {\mathrm{Contr}}_{m_4}(\{\theta_2,\theta_3\}) & \simeq & 0.65.\end{aligned}$$ The contradiction for $m_4$ is ${\mathrm{Contr}}_{m_4}=0.80$.\
The fewer focal elements there are, the smaller the contradiction of the bba will be; and the more precise the focal elements are, the higher the contradiction of the bba will be.

Degrees of uncertainty {#degreesUncer}
======================

We have seen in Section \[measureUncertainty\] that the non-specificity measure given by equation (\[nonspeDubois\]) takes its values in a space that depends on the size of the discernment space $\Theta$. Indeed, the measure of non-specificity takes its values in $[0,\log_2(|\Theta|)]$.
In order to compare non-specificity measures on an absolute scale, we define a degree of non-specificity from equation (\[nonspeDubois\]) by: $$\begin{aligned} \label{degrenonspeDubois} \begin{array}{rcl} \delta_{{\mathrm{NS}}}(m)&=&\displaystyle \sum_{X \in G^\Theta,~X\not\equiv\emptyset} m(X)\frac{\log_2(|X|)}{\log_2(|\Theta|)}\\ &=& \displaystyle \sum_{X \in G^\Theta,~X\not\equiv\emptyset} m(X)\log_{|\Theta|}(|X|). \end{array}\end{aligned}$$ Therefore, this degree takes its values in $[0,1]$ for all bba's $m$, independently of the size of the frame of discernment. We still have $\delta_{{\mathrm{NS}}}(m_\Theta)=1$, where $m_\Theta$ is the categorical bba giving the total ignorance. Moreover, we obtain $\delta_{{\mathrm{NS}}}(m)=0$ for all Bayesian bba's. From the definition of the degree of non-specificity, we can propose a degree of specificity such as: $$\begin{aligned} \label{degrebayesianity} \begin{array}{rcl} \delta_{B}(m)&=&\displaystyle 1-\sum_{X \in G^\Theta,~X\not\equiv\emptyset} m(X)\frac{\log_2(|X|)}{\log_2(|\Theta|)}\\ &=& \displaystyle 1-\sum_{X \in G^\Theta,~X\not\equiv\emptyset} m(X)\log_{|\Theta|}(|X|). \end{array}\end{aligned}$$ As we can observe, the degree given by equation (\[degrebayesianity\]) does not really measure the specificity, but rather the Bayesianity of the considered bba. This degree is equal to 1 for Bayesian bba's and less than 1 for other bba's.

**Definition** *The [*Bayesianity*]{} in the theory of belief functions quantifies how far a bba $m$ is from a probability.*

We illustrate these degrees in the next subsection.

Illustration
------------

In order to illustrate and discuss the previously introduced degrees, we take some examples given in Table \[table\_bayesianity\]. The bba's are defined on $2^\Theta$ where $\Theta=\{\theta_1,\theta_2,\theta_3\}$. We only consider non-Bayesian bba's here, since the values for Bayesian bba's were given above.
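Both degrees are straightforward to compute. The following sketch (our own code, with our naming) recovers $\delta_B(m_1)=0.75$ for the bba $m_1$ of Table \[table\_bayesianity\] on a frame of size 3, and $0.80$ when a fourth, unused element is added to the frame:

```python
from math import log

def delta_ns(m, frame_size):
    """Degree of non-specificity: sum_X m(X) log_|Theta| |X|, in [0, 1]."""
    return sum(v * log(len(X), frame_size) for X, v in m.items() if X)

def delta_b(m, frame_size):
    """Bayesianity degree: 1 - delta_ns."""
    return 1.0 - delta_ns(m, frame_size)

# The bba m_1 of Table [table_bayesianity]
m1 = {
    frozenset({"t1"}): 0.4,
    frozenset({"t2"}): 0.1,
    frozenset({"t3"}): 0.1,
    frozenset({"t1", "t2"}): 0.3,
    frozenset({"t1", "t3"}): 0.1,
}
```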
We can observe that, for a given sum of basic belief on the singletons of $\Theta$, the Bayesianity degree can change according to the basic belief on the other focal elements. For example, the degrees are the same for $m_2$ and $m_3$, but different for $m_4$. For the bba $m_4$, note that the sum of the basic beliefs on the singletons is equal to the basic belief on the ignorance. In this case the Bayesianity degree is exactly 0.5, which conforms to the intuitive meaning of Bayesianity. If we look at $m_5$ and $m_6$, we first note that there is no basic belief on the singletons. As a consequence, the Bayesianity is weaker. Moreover, the bba $m_5$ is naturally more Bayesian than $m_6$ because of the basic belief on $\Theta$ in $m_6$.

                             $m_1$   $m_2$   $m_3$   $m_4$   $m_5$   $m_6$   $m_\Theta$
  -------------------------- ------- ------- ------- ------- ------- ------- ------------
  $\theta_1$                 0.4     0.3     0.1     0.3     0       0       0
  $\theta_2$                 0.1     0.1     0.3     0.1     0       0       0
  $\theta_3$                 0.1     0.1     0.1     0.1     0       0       0
  $\theta_1 \cup \theta_2$   0.3     0.3     0.5     0       0.6     0.6     0
  $\theta_1 \cup \theta_3$   0.1     0.2     0       0       0.4     0       0
  $\theta_2 \cup \theta_3$   0       0       0       0       0       0       0
  $\Theta$                   0       0       0       0.5     0       0.4     1
  $\delta_B$                 0.75    0.68    0.68    0.5     0.37    0.23    0
  $\delta_{{\mathrm{NS}}}$   0.25    0.32    0.32    0.5     0.63    0.77    1

  : Evaluation of Bayesianity on examples[]{data-label="table_bayesianity"}

We must add that these degrees depend on the cardinality of the frame of discernment for non-Bayesian bba’s. Indeed, if we consider the given bba $m_1$ with $\Theta=\{\theta_1,\theta_2,\theta_3\}$, the degree is $\delta_B=0.75$. Now if we consider the same bba with $\Theta=\{\theta_1,\theta_2,\theta_3, \theta_4\}$ (no belief being given on $\theta_4$), the Bayesianity degree is $\delta_B=0.80$. The larger the frame, the larger the Bayesianity degree becomes.

Degree of specificity {#degreeSpecificity}
=====================

In the previous section, we showed the importance of considering a degree instead of a measure.
Moreover, the measures of specificity and non-specificity given by the equations (\[speYager\]) and (\[nonspeDubois\]) give the same values for all Bayesian bba’s, as seen on the examples of section \[measureUncertainty\]. We introduce here a degree of specificity based on a comparison with the most specific bba. This degree of specificity is given by: $$\label{degreeSpe} \delta_S(m)=1-d(m,m_s),$$ where $m_s$ is the most specific bba associated with $m$ and $d$ is a distance with values in $[0,1]$. Here we again choose the Jousselme distance, the most appropriate on the space of bba’s, with values in $[0,1]$. As this distance depends on the size of the space $G^\Theta$, we must compare the degrees of specificity for bba’s defined on the same space. Accordingly, the main problem is to define the most specific bba associated with $m$.

The most specific bba
---------------------

In the theory of belief functions, several partial orders have been proposed in order to compare bba’s [@Dubois86]. These partial orderings are generally based on the comparison of plausibilities or commonalities, especially in order to find the least-committed bba. However, comparing bba’s through plausibilities or commonalities can be complex and without a unique solution. The problem of finding the most specific bba associated with a bba $m$ does not require a partial ordering. We restrict the specific bba’s to the categorical bba’s, $m_X(X)=1$ where $X\in G^\Theta$, and we will use the following axiom and proposition:

**Axiom** *For two categorical bba’s $m_X$ and $m_Y$, $m_X$ is more specific than $m_Y$ if and only if $|X| < |Y|$.*

In case of equality, both bba’s are *isospecific*.
**Proposition** *If we consider two isospecific bba’s $m_X$ and $m_Y$, the Jousselme distance between any bba $m$ and $m_X$ is equal to the Jousselme distance between $m$ and $m_Y$: $$\forall m,~d(m,m_X)=d(m,m_Y)$$ if and only if $m(X)=m(Y)$.*

**Proof** *The proof is straightforward from the equations above. As both bba’s $m_X$ and $m_Y$ are categorical, there is only one non-null term in each of the difference vectors $m-m_X$ and $m-m_Y$. These two terms $a_X$ and $a_Y$ are equal because $m_X$ and $m_Y$ are isospecific, and so $D(Z,X)=D(Z,Y)$ $\forall Z \in G^\Theta$. Therefore $m(X)=m(Y)$, which proves the proposition.* $\square$

We define *the most specific bba* $m_s$ associated with a bba $m$ as a categorical bba: $m_s(X_{\max})=1$ where $X_{\max}\in G^\Theta$. The question is now how to find $X_{\max}$. We propose two approaches:

[**First approach, Bayesian case**]{} :  \ If $m$ is a Bayesian bba we define $X_{\max}$ such that: $$\label{bayesianspecific} X_{\max}={\operatornamewithlimits{arg\,max}}(m(X), \, X\in \Theta).$$ If several $X_{\max}$ exist ([*i.e.*]{} elements with the same maximal mass, giving several isospecific bba’s), we can take any of them. Indeed, according to the previous proposition, the degree of specificity of $m$ will be the same with $m_s$ given by any $X_{\max}$ satisfying (\[bayesianspecific\]).

[**First approach, non-Bayesian case**]{} :  \ If $m$ is a non-Bayesian bba, we can define $X_{\max}$ in a similar way: $$X_{\max}={\operatornamewithlimits{arg\,max}}\left(\frac{m(X)}{|X|},\,X\!\!\in G^\Theta,\,X\!\not\equiv\!\emptyset\right)\!\!.$$ In fact, this equation generalizes equation (\[bayesianspecific\]). However, in this case we can also obtain several $X_{\max}$ not giving isospecific bba’s. Therefore, we choose the most specific one, [*i.e.*]{} the one believing in the element with the smallest cardinality. Note also that we keep the terms of Yager from equation (\[speYager\]).
[**Second approach**]{} :  \ Another way, in the case of a non-Bayesian bba $m$, is to transform $m$ into a Bayesian bba thanks to one of the probability transformations, such as the pignistic probability. Afterwards, we can apply the previous Bayesian case. With this approach, the most specific bba associated with a bba $m$ is always a categorical bba such that $m_s(X_{\max})=1$ where $X_{\max}\in \Theta$ and not in $G^\Theta$.

Illustration
------------

In order to illustrate this degree of specificity we give some examples. Table \[table\_bayesianSpeDegree\] gives the degree of specificity for some Bayesian bba’s. The smallest degree of specificity of a Bayesian bba is obtained for the uniform distribution ($m_1$), and the largest degree of specificity is of course obtained for a categorical bba ($m_8$). The degree of specificity increases when the differences between the mass of the largest singleton and the masses of the other singletons get bigger: $\delta_S(m_3)<\delta_S(m_4)<\delta_S(m_5)<\delta_S(m_6)$. In the case where one has three disjoint singletons and the largest mass of one of them is 0.45 (on $\theta_1$), the maximum degree of specificity is reached when the masses of $\theta_2$ and $\theta_3$ both move away from the mass of $\theta_1$ ($m_6$). If two singletons have the same maximal mass, the bigger this mass is, the bigger the degree of specificity: $\delta_S(m_2)<\delta_S(m_3)$.
          $\theta_1$   $\theta_2$   $\theta_3$   $\delta_S$
  ------- ------------ ------------ ------------ ------------
  $m_1$   1/3          1/3          1/3          0.423
  $m_2$   0.4          0.4          0.2          0.471
  $m_3$   0.45         0.45         0.10         0.493
  $m_4$   0.45         0.40         0.15         0.508
  $m_5$   0.45         0.3          0.25         0.523
  $m_6$   0.45         0.275        0.275        0.524
  $m_7$   0.6          0.3          0.1          0.639
  $m_8$   1            0            0            1

  : Illustration of the degree of specificity on Bayesian bba’s.[]{data-label="table_bayesianSpeDegree"}

In the case of non-Bayesian bba’s, we first take a simple example: $$\begin{aligned} m_1(\theta_1)=0.6,&\!\!\!\!& m_1(\theta_1 \cup \theta_2)=0.4 \\ m_2(\theta_1)=0.5,&\!\!\!\!& m_2(\theta_1 \cup \theta_2)=0.5. \end{aligned}$$ For these two bba’s $m_1$ and $m_2$, both proposed approaches give the same most specific bba: $m_{\theta_1}$. We obtain $\delta_S(m_1)=0.7172$ and $\delta_S(m_2)=0.6465$. Therefore, $m_1$ is more specific than $m_2$. Remark that these degrees are the same if we consider the bba’s defined on $2^{\Theta_2}$ and $2^{\Theta_3}$, with $\Theta_2=\{\theta_1,\theta_2\}$ and $\Theta_3=\{\theta_1,\theta_2,\theta_3\}$. If we now consider the Bayesian bba $m_3(\theta_1)=m_3(\theta_2)=0.5$, the associated degree of specificity is $\delta_S(m_3)=0.5$. As expected by intuition, $m_2$ is more specific than $m_3$. We now consider the following bba: $$\begin{aligned} m_4(\theta_1)=0.6, \quad m_4(\theta_1 \cup \theta_2 \cup \theta_3)=0.4.\end{aligned}$$ The most specific bba is also $m_{\theta_1}$, and we have $\delta_S(m_4)=0.6734$. This degree of specificity is naturally smaller than $\delta_S(m_1)$ because of the mass 0.4 on a more imprecise focal element. Let us now consider the following example: $$m_5(\theta_1 \cup \theta_2)= 0.7, \quad m_5(\theta_1 \cup \theta_3)= 0.3.$$ We do not obtain the same most specific bba with the two proposed approaches: the first one gives the categorical bba $m_{\theta_1 \cup \theta_2}$ and the second one $m_{\theta_1}$. Hence, the first degree of specificity is $\delta_S(m_5)=0.755$ and the second one is $\delta_S(m_5)=0.111$.
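The degree of specificity is easy to reproduce numerically. A self-contained sketch using the first approach for $X_{\max}$ and the Jousselme distance with $D(A,B)=|A\cap B|/|A\cup B|$; it recovers, e.g., the values 0.423 and 0.639 of table \[table\_bayesianSpeDegree\] and the value 0.7172 above:

```python
from itertools import combinations
from math import sqrt

THETA = ('t1', 't2', 't3')
FOCALS = [frozenset(c) for r in range(1, len(THETA) + 1)
          for c in combinations(THETA, r)]

def jousselme(m1, m2):
    """Jousselme distance between two bba's on 2^Theta."""
    v = [m1.get(X, 0.0) - m2.get(X, 0.0) for X in FOCALS]
    return sqrt(0.5 * sum(v[i] * v[j] * len(A & B) / len(A | B)
                          for i, A in enumerate(FOCALS)
                          for j, B in enumerate(FOCALS)))

def degree_specificity(m):
    """delta_S(m) = 1 - d(m, m_s), with m_s the categorical bba on
    X_max = argmax m(X)/|X|; ties broken toward the smallest cardinality."""
    x_max = max(m, key=lambda X: (m[X] / len(X), -len(X)))
    return 1.0 - jousselme(m, {x_max: 1.0})

uniform = {frozenset({t}): 1 / 3 for t in THETA}
m1 = {frozenset({'t1'}): 0.6, frozenset({'t1', 't2'}): 0.4}
# degree_specificity(uniform) is close to 0.423, degree_specificity(m1) to 0.7172
```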
We note that the second approach naturally produces smaller degrees of specificity.

Application to measure the specificity of a combination rule
-------------------------------------------------------------

We propose in this section to use the proposed degree of specificity in order to measure the quality of the result of a combination rule in the theory of belief functions. Indeed, many combination rules have been developed to merge bba’s [@Martin07; @Smets07]. The choice of one of them is not always obvious. For a given application, we can compare the results produced by several rules, or try to choose according to the particular properties wanted for that application. We have also proposed to study the behavior of the rules on generated bba’s [@Osswald06]. However, no real measures have been used to evaluate the combination rules. Hereafter, we only show how the degree of specificity can be used to evaluate and compare the combination rules in the theory of belief functions. A complete study could then be carried out, for example, on generated bba’s. We recall here the combination rules used below; see [@Martin07] for their description. The [*Dempster rule*]{} is the normalized conjunctive combination rule, given for two basic belief assignments $m_1$ and $m_2$ and for all $X \in G^\Theta$, $X\not\equiv\emptyset$, by: $$\begin{aligned} m_{\mathrm{DS}}(X)=\displaystyle \frac{1}{1-k}\sum_{A\cap B =X} m_1(A)m_2(B),\end{aligned}$$ where $k$ is either $m_{\mathrm{c}}(\emptyset)$ or the sum of the masses of the elements of the $\emptyset$ equivalence class for $D^\Theta$.
The [*Yager rule*]{} transfers the global conflict onto the total ignorance $\Theta$: $$m_{\mathrm{Y}}(X) = \left\{\begin{array}{ll} m_{\mathrm{c}}(X) & \mbox{if~} X \in 2^\Theta \setminus \{\emptyset, \Theta\} \\ m_{\mathrm{c}}(\Theta)+m_{\mathrm{c}}(\emptyset) & \mbox{if~} X = \Theta \\ 0 & \mbox{if~} X = \emptyset \end{array}\right.$$ The disjunctive combination rule is given for two basic belief assignments $m_1$ and $m_2$ and for all $X \in G^\Theta$ by: $$\begin{aligned} m_{\mathrm{Dis}}(X)=\displaystyle \sum_{A\cup B =X} m_1(A)m_2(B).\end{aligned}$$ The Dubois and Prade rule is given for two basic belief assignments $m_1$ and $m_2$ and for all $X \in G^\Theta$, $X\not\equiv\emptyset$ by: $$\label{DP} m_{\mathrm{DP}}(X)= \!\!\!\!\sum_{A \cap B = X}\!\!\! m_1(A) m_2(B) + \!\!\!\!\!\sum_{ \begin{array}{c} \scriptstyle A \cup B=X\\ \scriptstyle A \cap B \equiv \emptyset \end{array}} \!\!\!\!\! m_1(A)m_2(B).$$ The ${\mathrm{PCR}}$ rule is given for two basic belief assignments $m_1$ and $m_2$ and for all $X \in G^\Theta$, $X\not\equiv \emptyset$ by: $$\begin{aligned} \label{DSmTcombination} \begin{array}{l} m_{\mathrm{PCR}}(X)=m_{\mathrm{c}}(X)~+\\ \displaystyle \sum_{\begin{array}{l} \scriptstyle Y\in G^\Theta, \\ \scriptstyle X\cap Y \equiv \emptyset \end{array}} \!\!\!\!\!\left(\frac{m_1(X)^2 m_2(Y)}{m_1(X) \!+\! m_2(Y)}+\frac{m_2(X)^2 m_1(Y)}{m_2(X) \!+\! m_1(Y)}\!\right)\!\!. \end{array}\end{aligned}$$ The principle is very simple: compute the degree of specificity of the bba’s you want to combine, then calculate the degree of specificity of the bba obtained after the chosen combination rule. This degree of specificity can then be compared to the degrees of specificity of the combined bba’s. In the following example, given in table \[table\_fusionBayes\], we combine two Bayesian bba’s on the discernment frame $\Theta=\{\theta_1,\theta_2,\theta_3\}$. Both bba’s are very contradictory. The values are rounded.
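To make the procedure concrete, here is a minimal sketch of the conjunctive, Dempster and ${\mathrm{PCR}}$ rules for bba's stored as dictionaries keyed by frozensets; on the two Bayesian bba's of table \[table\_fusionBayes\] it reproduces, up to rounding, the $m_{\mathrm{c}}$, $m_{\mathrm{DS}}$ and $m_{\mathrm{PCR}}$ columns:

```python
def conjunctive(m1, m2):
    """Unnormalized conjunctive rule; mass may accumulate on the empty set."""
    out = {}
    for A, wa in m1.items():
        for B, wb in m2.items():
            X = A & B
            out[X] = out.get(X, 0.0) + wa * wb
    return out

def dempster(m1, m2):
    """Dempster rule: conjunctive rule normalized by 1 - k, k = m_c(empty)."""
    mc = conjunctive(m1, m2)
    k = mc.pop(frozenset(), 0.0)
    return {X: w / (1.0 - k) for X, w in mc.items()}

def pcr(m1, m2):
    """PCR rule: conjunctive mass plus proportional redistribution of each
    partial conflict between disjoint focal elements."""
    out = {X: w for X, w in conjunctive(m1, m2).items() if X}
    focals = set(m1) | set(m2)
    for X in focals:
        for Y in focals:
            if X & Y:
                continue  # only conflicting (disjoint) pairs are redistributed
            a1, b2 = m1.get(X, 0.0), m2.get(Y, 0.0)
            a2, b1 = m2.get(X, 0.0), m1.get(Y, 0.0)
            if a1 + b2 > 0:
                out[X] = out.get(X, 0.0) + a1 ** 2 * b2 / (a1 + b2)
            if a2 + b1 > 0:
                out[X] = out.get(X, 0.0) + a2 ** 2 * b1 / (a2 + b1)
    return out

t1, t2, t3 = frozenset({'t1'}), frozenset({'t2'}), frozenset({'t3'})
m1 = {t1: 0.6, t2: 0.1, t3: 0.3}
m2 = {t1: 0.2, t2: 0.6, t3: 0.2}
```

The disjunctive and Dubois-Prade rules follow the same pattern, with $A\cup B$ taking over the role of $A\cap B$ where the equations above prescribe it.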
The first approach gives the same degree of specificity as the second one, except for the rules $m_{\mathrm{Dis}}$, $m_{\mathrm{DP}}$ and $m_{\mathrm{Y}}$. We observe that the smallest degree of specificity is obtained for the conjunctive rule, because the mass accumulated on the empty set is not considered in the calculation of the degree. The highest degree of specificity is reached for the Yager rule, for the same reason. It is the only rule giving a degree of specificity greater than $\delta_S(m_1)$ and $\delta_S(m_2)$. The second approach shows well the loss of specificity with the rules $m_{\mathrm{Dis}}$, $m_{\mathrm{Y}}$ and $m_{\mathrm{DP}}$, which have a cautious behavior. In this example, the degrees of specificity obtained by the combination rules are almost all smaller than $\delta_S(m_1)$ and $\delta_S(m_2)$, because the bba’s are very conflicting. If the degrees of specificity of rules such as $m_{\mathrm{DS}}$ and $m_{\mathrm{PCR}}$ are greater than $\delta_S(m_1)$ and $\delta_S(m_2)$, that means that the bba’s are not in conflict.
                             $m_1$            $m_2$            $m_{\mathrm{c}}$   $m_{\mathrm{DS}}$   $m_{\mathrm{Y}}$   $m_{\mathrm{Dis}}$              $m_{\mathrm{DP}}$               $m_{\mathrm{PCR}}$
  -------------------------- ---------------- ---------------- ------------------ ------------------- ------------------ ------------------------------- ------------------------------- --------------------
  $\emptyset$                0                0                0.76               0                   0                  0                               0                               0
  $\theta_1$                 0.6              0.2              0.12               0.50                0.12               0.12                            0.12                            0.43
  $\theta_2$                 0.1              0.6              0.06               0.25                0.06               0.06                            0.06                            0.37
  $\theta_3$                 0.3              0.2              0.06               0.25                0.06               0.06                            0.06                            0.20
  $\theta_1 \cup \theta_2$   0                0                0                  0                   0                  0.38                            0.38                            0
  $\theta_1 \cup \theta_3$   0                0                0                  0                   0                  0.18                            0.18                            0
  $\theta_2 \cup \theta_3$   0                0                0                  0                   0                  0.20                            0.20                            0
  $\Theta$                   0                0                0                  0                   0.76               0                               0                               0
  $m_s$ 1-                   $m_{\theta_1}$   $m_{\theta_2}$   $m_{\theta_1}$     $m_{\theta_1}$      $m_{\Theta}$       $m_{\theta_1 \cup \theta_2}$    $m_{\theta_1 \cup \theta_2}$    $m_{\theta_1}$
  $m_s$ 2-                   $m_{\theta_1}$   $m_{\theta_2}$   $m_{\theta_1}$     $m_{\theta_1}$      $m_{\theta_1}$     $m_{\theta_1}$                  $m_{\theta_1}$                  $m_{\theta_1}$
  $\delta_S$ 1-              0.639            0.655            0.176              0.567               0.857              0.619                           0.619                           0.497
  $\delta_S$ 2-              0.639            0.655            0.176              0.567               0.457              0.478                           0.478                           0.497

  : Degrees of specificity for combination rules on Bayesian bba’s.[]{data-label="table_fusionBayes"}

Let’s consider now a simple non-Bayesian example in table \[table\_fusionNonBayes\].
                             $m_1$            $m_2$            $m_{\mathrm{c}}$   $m_{\mathrm{DS}}$   $m_{\mathrm{Y}}$   $m_{\mathrm{Dis}}$              $m_{\mathrm{DP}}$   $m_{\mathrm{PCR}}$
  -------------------------- ---------------- ---------------- ------------------ ------------------- ------------------ ------------------------------- ------------------- --------------------
  $\emptyset$                0                0                0.47               0                   0                  0                               0                   0
  $\theta_1$                 0.4              0.2              0.2                0.377               0.2                0.08                            0.2                 0.39
  $\theta_2$                 0.1              0.3              0.17               0.321               0.17               0.03                            0.17                0.28
  $\theta_3$                 0.3              0.1              0.12               0.226               0.12               0.03                            0.12                0.24
  $\theta_1 \cup \theta_2$   0.2              0.1              0.04               0.076               0.04               0.31                            0.18                0.06
  $\theta_1 \cup \theta_3$   0                0                0                  0                   0                  0.1                             0.1                 0
  $\theta_2 \cup \theta_3$   0                0.2              0                  0                   0                  0.18                            0.1                 0.03
  $\Theta$                   0                0.1              0                  0                   0.47               0.27                            0.13                0
  $m_s$ 1-                   $m_{\theta_1}$   $m_{\theta_2}$   $m_{\theta_1}$     $m_{\theta_1}$      $m_{\theta_1}$     $m_{\theta_1 \cup \theta_2}$    $m_{\theta_1}$      $m_{\theta_1}$
  $m_s$ 2-                   $m_{\theta_1}$   $m_{\theta_2}$   $m_{\theta_1}$     $m_{\theta_1}$      $m_{\theta_1}$     $m_{\theta_1}$                  $m_{\theta_1}$      $m_{\theta_1}$
  $\delta_S$ 1-              0.553            0.522            0.336              0.488               0.389              0.609                           0.428               0.497
  $\delta_S$ 2-              0.553            0.522            0.336              0.488               0.389              0.456                           0.428               0.497

  : Degrees of specificity for combination rules on
non-Bayesian bba’s.[]{data-label="table_fusionNonBayes"}

Conclusion
==========

In this article we first propose a reflection on the measures of uncertainty in the theory of belief functions. Many measures have been proposed to quantify different kinds of uncertainty, such as the specificity - closely linked to the imprecision - and the discord. The discord, which must not be confused with the conflict, is for us the contradiction of a source (giving information through a bba in the theory of belief functions) with itself. We distinguish the contradiction from the conflict, which is the contradiction between two or more bba’s. We introduce a measure of contradiction for a bba based on the weighted average of the conflict between the bba and the categorical bba’s of its focal elements. The previously proposed specificity and non-specificity measures are not defined on the same space, which makes them difficult to compare. That is the reason why we propose the use of degrees of uncertainty. Moreover, these measures give some counter-intuitive results on Bayesian bba’s. We propose a degree of specificity based on the distance between a mass and its most specific associated mass, which we introduce. This most specific associated mass can be obtained in two ways and gives the nearest categorical bba for a given bba. We also propose to use the degree of specificity in order to measure the specificity of a fusion rule. That is a tool to compare and evaluate the several combination rules given in the theory of belief functions.

Acknowledgments {#acknowledgments .unnumbered}
---------------

The authors wish to thank [*Brest Metropole Océane*]{} and [*ENSTA Bretagne*]{} for funding this collaboration and providing them with an excellent research environment during spring 2010.

A.P. Dempster, “Upper and Lower probabilities induced by a multivalued mapping”, *Annals of Mathematical Statistics*, vol. 38, pp. 325-339, 1967. D. Dubois and H.
Prade, “A note on measures of specificity for fuzzy sets”, *International Journal of General Systems*, vol. 10, pp. 279-283, 1985. D. Dubois and H. Prade, “A set-theoretic view on belief functions: logical operations and approximations by fuzzy sets”, *International Journal of General Systems*, vol. 12, pp. 193-226, 1986. M.C. Florea and E. Bossé, “Crisis Management using [D]{}empster [S]{}hafer Theory: Using dissimilarity measures to characterize sources’ reliability”, in *C3I for Crisis, Emergency and Consequence Management*, Bucharest, Romania, 2009. R.V.L. Hartley, “Transmission of information”, *The Bell System Technical Journal*, vol. 7(3), pp. 535-563, 1928. A.-L. Jousselme, D. Grenier and E. Bossé, “A new distance between two bodies of evidence”, *Information Fusion*, vol. 2, pp. 91-101, 2001. T. George and N.R. Pal, “Quantification of conflict in [D]{}empster-[S]{}hafer framework: a new approach”, *International Journal of General Systems*, vol. 24(4), pp. 407-423, 1996. G.J. Klir, “Measures of uncertainty in the Dempster-Shafer theory of evidence”, in *Advances in the Dempster-Shafer Theory of Evidence*, R.R. Yager, M. Fedrizzi and J. Kacprzyk (eds.), John Wiley and Sons, New York, pp. 35-49, 1994. W. Liu, “Analyzing the degree of conflict among belief functions”, *Artificial Intelligence*, vol. 170, pp. 909-924, 2006. A. Martin and C. Osswald, “Toward a combination rule to deal with partial conflict and specificity in belief functions theory”, *International Conference on Information Fusion*, Québec, Canada, 9-12 July 2007. A. Martin, A.-L. Jousselme and C. Osswald, “Conflict measure for the discounting operation on belief functions”, *International Conference on Information Fusion*, Cologne, Germany, 30 June-3 July 2008. C. Osswald and A. Martin, “Understanding the large family of Dempster-Shafer theory’s fusion operators - a decision-based measure”, *International Conference on Information Fusion*, Florence, Italy, 10-13 July 2006. A.
Ramer, “Uniqueness of information measure in the theory of evidence”, *Fuzzy Sets and Systems*, vol. 24, pp. 183-196, 1987. B. Ristic and Ph. Smets, “The [TBM]{} global distance measure for association of uncertain combat ID declarations”, *Information Fusion*, vol. 7(3), pp. 276-284, 2006. G. Shafer, *A Mathematical Theory of Evidence*, Princeton University Press, 1976. C.E. Shannon, “A mathematical theory of communication”, *Bell System Technical Journal*, vol. 27, pp. 379-423, 1948. F. Smarandache and J. Dezert, *Applications and Advances of DSmT for Information Fusion*, American Research Press, Rehoboth, 2004. Ph. Smets, “Constructing the pignistic probability function in a context of uncertainty”, in *Uncertainty in Artificial Intelligence*, vol. 5, pp. 29-39, 1990. Ph. Smets, “Analyzing the combination of conflicting belief functions”, *Information Fusion*, vol. 8, no. 4, pp. 387-412, 2007. M.J. Wierman, “Measuring Conflict in Evidence Theory”, *IFSA World Congress and 20th NAFIPS International Conference*, vol. 3(21), pp. 1741-1745, 2001. R.R. Yager, “Entropy and Specificity in a Mathematical Theory of Evidence”, *International Journal of General Systems*, vol. 9, pp. 249-260, 1983.
--- abstract: 'We study a scheme of quantum simulator for the two-dimensional xy-model Hamiltonian. Previously the quantum simulator for a coupled cavity array spin model has been explored, but the coupling strength is fixed by the system parameters. In the present scheme several cavity resonators can be coupled with each other simultaneously via an ancilla qubit. In the two-dimensional Kagome lattice of the resonators the hopping of resonator photonic modes gives rise to the tight-binding Hamiltonian which in turn can be transformed to the quantum xy-model Hamiltonian. We employ the transmon as an ancilla qubit to achieve [*in situ*]{} controllable xy-coupling strength.' author: - Mun Dae Kim date: 'Received: date / Accepted: date' title: 'Quantum simulation scheme of two-dimensional xy-model Hamiltonian with controllable coupling' ---

Introduction
============

In spite of the remarkable advancements of coherent quantum operation, the realization of fully controlled quantum computing remains severely challenging in quantum information processing technology. On the other hand, significant attention has been paid to quantum spin models as a promising candidate for quantum simulation of many-body effects [@Georg; @Cirac; @Buluta]. Quantum many-body simulation may provide a variety of possibilities to study the properties of many-body systems, realize new phases of quantum matter, and eventually lead to scalable quantum computing, which is hard for classical approaches. Large-scale quantum simulators integrating many qubits have been experimentally demonstrated to study quantum phenomena such as many-body dynamics and quantum phase transitions.
Quantum simulators have been studied in the so-called coupled cavity array (CCA) model, where a two-level atom interacts with its own cavity and the hopping of a photon between cavities gives rise to the cavity-cavity coupling. The CCA model has been applied to study the Jaynes-Cummings Hubbard model (JCHM) [@HartmanNP; @Xue; @Greentree; @Schmidt; @Koch; @Fan] and the Bose-Hubbard model [@Greentree; @Koch] to exhibit the phase transition between Mott insulator and superfluid. However, in the CCA model the cavity-cavity hopping amplitude is set by the system parameters and thus not tunable. In recent studies of one-dimensional quantum simulators using trapped cold atoms [@Nature51] and trapped ion systems [@Nature53] the coupling strength was tunable. Previously, superconducting resonators in a two-dimensional lattice have been coupled through an interface capacitance, where the resonator-resonator coupling strength is not controllable as the capacitance is fixed [@NPreview; @AP]. For superconducting resonator cavities in circuit-quantum electrodynamics (QED) systems, the qubit is located outside the cavity [@MDK; @QIP]. Hence a qubit can interact with many resonator cavities surrounding it. By using a qubit as a mediator of coupling between many resonators one can obtain a tunable resonator-resonator coupling, which is quite different from the coupling by direct photon hopping in the CCA model. In this study we consider a lattice model of superconducting resonator cavities coupled by ancilla qubits for simulating the quantum xy-model Hamiltonian. The simulation of the quantum xy-model has been studied in one-dimensional [@Hartmann; @Angelakis] and two-dimensional [@Koch] JCHM in the CCA model architecture. In the present model the intervening ancilla qubit which couples the cavities has a controllable qubit frequency.
After discarding the ancilla qubit degrees of freedom by performing a coordinate transformation we show that the photon states in the resonators are described by the tight-binding Hamiltonian which, in turn, can be rewritten as the quantum xy-type interaction Hamiltonian. Consequently, the xy-coupling constant depends on the hopping amplitude of the tight-binding Hamiltonian and thus on the ancilla qubit frequency. We consider two-dimensional Kagome lattice model as well as one-dimensional chain model for the quantum simulation of xy-model Hamiltonian and show that the xy-coupling strength is [*in situ*]{} controllable. Hamiltonian of coupled $n$-resonators ===================================== In circuit-QED architectures qubits can be coupled with the transmission resonator at the boundaries of the resonator [@Blais; @Steffen; @Inomata] so that we may couple several resonators to a qubit as depicted in Fig. \[fig1\] (a). In principle, any kind of qubits are available, but in this study we employ the transmon as the ancilla qubit coupling the resonators with the advantage of controllability. The Hamiltonian of the system with $n$ resonators and an ancilla qubit in Fig. \[fig1\](a) is given by $$\begin{aligned} \label{H} H_{nR}=\frac12\omega_a\sigma^z_{a}+\sum^n_{p=1}[\omega_{rp}a^\dagger_p a_p-f_p(a^\dagger_p\sigma^-_{a}+\sigma^+_{a}a_p)],\end{aligned}$$ where $a^\dagger_p$ and $a_p$ with the frequency $\omega_{rp}$ are the creation and annihilation operators for microwave photon in $p$-th resonator, respectively, and the Pauli matrix $\sigma^z_{a}$ with the frequency $\omega_a$ represents the ancilla qubit state, and $f_p$ is the coupling amplitude between the photon mode in the $p$-th resonator and the ancilla qubit. 
This Hamiltonian conserves the excitation number $$\begin{aligned} \label{Ne} {\cal N}_e=\sum^n_{p=1}N_{rp}+(s_{az}+1/2),\end{aligned}$$ where $s_{az}\in \{-1/2,1/2\}$ is the eigenvalue of the operator $S_{az}=\frac12\sigma^z_{a}$ for the ancilla qubit and $N_{rp}$ is the excitation number of the oscillating mode in the $p$-th resonator. Here, we consider the subspace where ${\cal N}_e=1$ and thus $N_{rp}\in \{0,1\}$, that is, the state of a resonator is a superposition of the zero- and one-photon states, which has been generated in previous experiments [@Houck; @Hofheinz08; @Hofheinz09]. ![(a) $n$ cavities of circuit-QED resonators are coupled via an intervening ancilla qubit. (b) Effective cavity-cavity coupling, $J_3$, for $n=3$ as a function of ancilla qubit frequency $\omega_a$ and resonator-ancilla coupling $f$ with the frequency $\omega_r$ of the resonator photon mode. []{data-label="fig1"}](ncavities.pdf){width="100.00000%"} In order to obtain the Hamiltonian describing the interaction between photon modes we introduce the transformation $$\begin{aligned} \label{tildeH} {\tilde H}_{nR}=U^\dagger H_{nR} U,\end{aligned}$$ where $$\begin{aligned} U=e^{-\sum^n_{p=1}\theta_p(a^\dagger_p\sigma^-_{a}-\sigma^+_{a}a_p)}.\end{aligned}$$ Here, for simplicity, we assume identical resonators and thus set $\omega_{rp}=\omega_r$, $f_p=f$ and $\theta_p=\theta$.
We then expand $U=e^M$ with $M=-\sum^n_{p=1}\theta_p(a^\dagger_p\sigma^-_{a}-\sigma^+_{a}a_p)$ by using the relation $e^M=1+M+\frac{1}{2!}M^2+\frac{1}{3!}M^3+\cdots$ to obtain $$\begin{aligned} U_{pp}\!\!&=&\!\!1-\frac{1}{2!}\theta^2\!\!+\!\!\frac{1}{4!}n\theta^4\!\!-\!\!\frac{1}{6!}n^2\theta^6 \!\!+\!\!\cdots= \frac{1}{n}(n\!\!-\!\!1\!\!+\!\!\cos\sqrt{n}\theta)\\ U_{n+1,n+1}\!\!&=&\!\!1-\frac{1}{2!}n\theta^2+\frac{1}{4!}n^2\theta^4-\frac{1}{6!}n^3\theta^6+\cdots= \cos\sqrt{n}\theta\\ U_{p,n+1}\!\!&=&\!\!-\theta\!\!+\!\!\frac{1}{3!}n\theta^3\!\!-\!\!\frac{1}{5!}n^2\theta^5\!\!+\!\!\cdots = -\frac1{\sqrt{n}}\sin\sqrt{n}\theta=-U_{n+1,p}\\ U_{pq,p\neq q}\!\!&=&\!\!-\frac{1}{2!}\theta^2\!\!+\!\!\frac{1}{4!}n\theta^4\!\!-\!\!\frac{1}{6!}n^2\theta^6\!\!+\!\!\cdots= \frac{1}{n}(\cos\sqrt{n}\theta\!\!-\!\!1).\end{aligned}$$ Here $U$ is an $(n+1)\times (n+1)$ matrix in the basis of $|N_{r1},N_{r2},N_{r3}, \cdots, N_{rn},s_{az}\rangle$ and $p,q \in \{1,2,3, \cdots, n\}$. The degrees of freedom of the ancilla qubit and the resonator photon modes in the Hamiltonian of Eq. (\[H\]) can be decoupled by imposing the condition $$\begin{aligned} \tan2\sqrt{n}\theta=2\sqrt{n}\frac{f}{\Delta},\end{aligned}$$ which can be achieved by adjusting the detuning $\Delta\equiv \omega_a-\omega_r$ [@Blais]. The resulting transformed Hamiltonian of Eq.
(\[tildeH\]) becomes $$\begin{aligned} \label{tildeHM} {\tilde H}_{nR}=\left( \begin{array}{cccccc} \epsilon^r_1 & J_n & J_n & \cdots & J_n &0 \\ J_n & \epsilon^r_2 & J_n & \cdots & J_n &0 \\ J_n & J_n & \epsilon^r_3 & \cdots & J_n &0 \\ \vdots & \vdots & \vdots & \ddots &\vdots &\vdots \\ J_n & J_n & J_n &\cdots & \epsilon^r_n &0 \\ 0 & 0 & 0& \cdots & 0 & \epsilon^a \end{array} \right),\end{aligned}$$ where $\epsilon^a$ is the energy for the state in which $s_{az}=1/2$ and $N_{rp}=0$ for all $p \in \{1,2,3,\cdots,n\}$, and $\epsilon^r_p$ is the energy for the state in which $s_{az}=-1/2$ and only the $p$-th resonator has one photon, $N_{rp}=1$ and $N_{rq}=0 ~(q\neq p)$. For identical resonators, $\epsilon^r_1=\epsilon^r_2=\epsilon^r_3= \cdots=\epsilon^r$ and $\epsilon^a$ are explicitly evaluated as $$\begin{aligned} \epsilon^r&=&-\frac{1}{2n}\left(\Delta+ sgn(\Delta)\sqrt{\Delta^2+4nf^2}\right)+\frac12\omega_r,\\ \epsilon^a&=&\frac12 sgn(\Delta)\sqrt{\Delta^2+4nf^2}+\frac12\omega_r,\end{aligned}$$ and the resonator-resonator coupling is given by $$\begin{aligned} \label{J} J_n=\frac{1}{2n}\left(\Delta-sgn(\Delta)\sqrt{\Delta^2+4nf^2}\right),\end{aligned}$$ where $sgn(\Delta)$ is $+1(-1)$ for $\Delta>0~(\Delta<0)$. In the subspace satisfying ${\cal N}_e=1$ the Hamiltonian ${\tilde H}_{nR}$ in Eq. (\[tildeHM\]) can be represented as $$\begin{aligned} \label{TB} {\tilde H}_{nR}&=&\frac12\sum^n_{p=1}\omega'_r(2a^\dagger_pa_p-1) +\sum^n_{p,q=1,p\neq q}J_n(a^\dagger_pa_q+a_pa^\dagger_q)\nonumber\\ &&+\frac12\omega'_a\sigma^z_{a}.\end{aligned}$$ Consequently, $\epsilon^r_p$ and $\epsilon^a$ can be rewritten as $\epsilon^r_p=\epsilon^r=-\frac{n-2}{2}\omega'_r-\frac12\omega'_a$ and $\epsilon^a=-\frac{n}{2}\omega'_r+\frac12\omega'_a$, so that we have the relations $\omega'_a=-(n\epsilon^r-(n-2)\epsilon^a)/(n-1)$ and $\omega'_r=-(\epsilon^r+\epsilon^a)/(n-1)$.
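The closed forms for the matrix elements of $U$ can be cross-checked numerically by summing the series for $e^{M}$ directly. A small sketch in the one-excitation basis (the values $n=3$ and $\theta=0.37$ are illustrative; NumPy only):

```python
import numpy as np

n, theta = 3, 0.37                      # illustrative values
M = np.zeros((n + 1, n + 1))
M[:n, n] = -theta                       # -theta (a_p^dag sigma^-) part
M[n, :n] = theta                        # +theta (sigma^+ a_p) part

# truncated series e^M = sum_k M^k / k!
U, term = np.eye(n + 1), np.eye(n + 1)
for k in range(1, 40):
    term = term @ M / k
    U = U + term

c = np.cos(np.sqrt(n) * theta)
s = np.sin(np.sqrt(n) * theta)
assert np.allclose(U[0, 0], (n - 1 + c) / n)    # U_pp
assert np.allclose(U[0, 1], (c - 1) / n)        # U_pq, p != q
assert np.allclose(U[0, n], -s / np.sqrt(n))    # U_{p,n+1}
assert np.allclose(U[n, n], c)                  # U_{n+1,n+1}
```

Since $M$ is antisymmetric, $U$ is orthogonal, i.e. a rotation in the two-dimensional subspace spanned by the symmetric photon mode and the ancilla excitation.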
In this tight-binding Hamiltonian the ancilla qubit operator $\sigma^z_{a}$ is decoupled from the resonator photon modes, so hereafter we drop the ancilla term. The tight-binding Hamiltonian ${\tilde H}_{nR}$ can be easily transformed to the xy-spin model by introducing a pseudo-spin operator $\sigma_p$ such that $2a^\dagger_pa_p-1=|1_p\rangle\langle 1_p|-|0_p\rangle\langle 0_p|=\sigma^z_{p}$ and $a^\dagger_pa_q+a_pa^\dagger_q=|1_p0_q\rangle\langle 0_p1_q|+|0_p1_q\rangle\langle 1_p0_q| =\sigma^+_{p}\sigma^-_{q}+\sigma^-_{p}\sigma^+_{q}=(1/2)(\sigma^x_{p}\sigma^x_{q}+\sigma^y_{p}\sigma^y_{q})$, as follows: $$\begin{aligned} \label{xy} H_{xy}=\frac12\sum^n_{p=1}\omega'_r\sigma^z_{p} +\frac12\sum^n_{p,q=1,p\neq q}J_n(\sigma^x_{p}\sigma^x_{q}+\sigma^y_{p}\sigma^y_{q}).\end{aligned}$$ Here, the hopping parameter $J_n$ acts as an xy coupling constant between the pseudo spins.

xy-model with tunable coupling
==============================

Figure \[fig2\](a) shows a one-dimensional lattice model obtained by extending the structure of Fig. \[fig1\](a) for two resonators and an ancilla qubit ($n=2$). The transformation of the Hamiltonian ${\tilde H}_{2R}=U^\dagger H_{2R} U$ in Eq.
(\[tildeH\]) can be evaluated by using the transformation matrix $U=e^M=e^{-\sum^2_{j=1}\theta_j(a^\dagger_j\sigma^--\sigma^+a_j)}$ with $$\begin{aligned} H_{2R}=\left[ {\begin{array}{ccc} \omega_{r1}-\frac12\omega_a & 0 & -f_1\\ 0 & \omega_{r2}-\frac12\omega_a & -f_2 \\ -f_1 & -f_2 & \frac12\omega_a \\ \end{array} } \right], M= \left[ {\begin{array}{ccc} 0 & 0 & -\theta_1\\ 0 & 0 & -\theta_2 \\ \theta_1 & \theta_2 & 0 \\ \end{array} } \right].\end{aligned}$$ For identical resonators such that $\omega_{r1}=\omega_{r2}=\omega_r$, $f_1=f_2=f$, and thus $\theta_1=\theta_2=\theta$, the transformation matrix can be calculated as $$\begin{aligned} U= \left[ {\begin{array}{ccc} \frac12(\cos\sqrt{2}\theta+1)& \frac12(\cos\sqrt{2}\theta-1) & -\frac1{\sqrt{2}}\sin\sqrt{2}\theta \\ \frac12(\cos\sqrt{2}\theta-1) & \frac12(\cos\sqrt{2}\theta+1) & -\frac1{\sqrt{2}}\sin\sqrt{2}\theta \\ \frac1{\sqrt{2}}\sin\sqrt{2}\theta & \frac1{\sqrt{2}}\sin\sqrt{2}\theta & \cos\sqrt{2}\theta \\ \end{array} } \right]\end{aligned}$$ in the basis $|N_{r1},N_{r2},s_{az}\rangle \in \{|1,0,-1/2\rangle,|0,1,-1/2\rangle,|0,0,1/2\rangle\}$, where $N_{r1}$ ($N_{r2}$) is the photon number in the first (second) resonator and $s_{az}$ is the ancilla qubit spin. ![(a) One-dimensional chain of cavity resonators coupled via ancilla qubits with the effective cavity-cavity coupling $J_2$. (b) Two-dimensional Kagome lattice of cavity resonators consisting of three triangular sublattices, $a_{i,j}$ (red), $b_{i,j}$ (purple) and $c_{i,j}$ (black), with effective coupling strength $J_3$.[]{data-label="fig2"}](lattice.pdf){width="70.00000%"} The transformed Hamiltonian ${\tilde H}_{2R}$ can be represented as the tight-binding Hamiltonian of Eq. (\[TB\]), $$\begin{aligned} {\tilde H}_{2R}=\frac12\sum^N_{i=1}\omega'_r(2a^\dagger_ia_i-1) +\sum^N_{i=1}J_2(a^\dagger_ia_{i+1}+a^\dagger_{i+1}a_i),\end{aligned}$$ with the hopping parameter $J_2=\frac14(\Delta-\sqrt{\Delta^2+8f^2})$, discarding the decoupled ancilla term.
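The pseudo-spin substitutions used above to reach Eq. (\[xy\]) can be checked directly with $2\times2$ matrices on the truncated $\{|0\rangle,|1\rangle\}$ photon space (a quick verification sketch, not part of the original derivation):

```python
import numpy as np

# Pauli and ladder operators on the {|0>, |1>} photon subspace (basis order |0>, |1>)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[-1, 0], [0, 1]], dtype=complex)   # sigma^z = |1><1| - |0><0|
sp = np.array([[0, 0], [1, 0]], dtype=complex)    # sigma^+ = |1><0|
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma^- = |0><1|

a = sm                      # annihilation operator truncated to 0/1 photons
n_op = a.conj().T @ a       # photon number operator
I = np.eye(2)

# 2 a^dag a - 1 = sigma^z on the truncated space
assert np.allclose(2 * n_op - I, sz)

# a_p^dag a_q + a_p a_q^dag = sigma^+_p sigma^-_q + sigma^-_p sigma^+_q
#                           = (1/2)(sigma^x_p sigma^x_q + sigma^y_p sigma^y_q)
hop = np.kron(a.conj().T, a) + np.kron(a, a.conj().T)
flip = np.kron(sp, sm) + np.kron(sm, sp)
xy = 0.5 * (np.kron(sx, sx) + np.kron(sy, sy))
assert np.allclose(hop, flip) and np.allclose(flip, xy)
```

The only nonzero matrix elements of the hopping term sit in the single-excitation block $\{|0_p1_q\rangle,|1_p0_q\rangle\}$, which is exactly the ${\cal N}_e=1$ subspace used in the text.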
The tight-binding Hamiltonian ${\tilde H}_{2R}$ describes photon hopping in the chain model of Fig. \[fig2\](a), and it can subsequently be transformed to a one-dimensional xy-model Hamiltonian similar to Eq. (\[xy\]): $$\begin{aligned} H^{1D}_{xy}=\frac12\sum^N_{i=1}\omega'_r\sigma^z_{i} +\frac12\sum^N_{i=1}J_2(\sigma^x_{i}\sigma^x_{i+1}+\sigma^y_{i}\sigma^y_{i+1}).\end{aligned}$$ Further, for $n=3$ we can construct a two-dimensional lattice model as shown in Fig. \[fig2\](b). Here the ancilla qubits form a hexagonal lattice, while the resonators form its dual, i.e., the Kagome lattice. The Kagome lattice has been widely studied in relation to, for example, frustrated spin models [@Mielke] and interacting boson models [@You; @Petrescu]. The Kagome lattice in Fig. \[fig2\](b) consists of three triangular sublattices denoted $a_{i,j}$, $b_{i,j}$ and $c_{i,j}$. Two triangles, consisting of, for example, $a_{i,j}, b_{i,j}, c_{i,j}, a_{i+1,j-1}$ and $c_{i+1,j}$ in Fig. \[fig2\](b), make up the unit cell, and thus the xy-model Hamiltonian on the Kagome lattice can be written as $$\begin{aligned} \label{Kagome} H^{Kagome}_{xy}&=&\frac12\sum^N_{i,j=1}\omega'_r(\sigma^z_{a,i,j}+\sigma^z_{b,i,j}+\sigma^z_{c,i,j})\nonumber\\ &&+\frac12\sum^N_{i,j=1}J_3(\sigma^x_{a,i,j}\sigma^x_{b,i,j}+\sigma^x_{b,i,j}\sigma^x_{c,i,j}+\sigma^x_{c,i,j}\sigma^x_{a,i,j}\nonumber\\ &&+\sigma^y_{a,i,j}\sigma^y_{b,i,j}+\sigma^y_{b,i,j}\sigma^y_{c,i,j}+\sigma^y_{c,i,j}\sigma^y_{a,i,j}\nonumber\\ &&+\sigma^x_{a,i+1,j-1}\sigma^x_{c,i+1,j}+\sigma^x_{b,i,j}\sigma^x_{a,i+1,j-1}+\sigma^x_{c,i+1,j}\sigma^x_{b,i,j}\nonumber\\ &&+\sigma^y_{a,i+1,j-1}\sigma^y_{c,i+1,j}+\sigma^y_{b,i,j}\sigma^y_{a,i+1,j-1}+\sigma^y_{c,i+1,j}\sigma^y_{b,i,j}).\nonumber\\\end{aligned}$$ Photons hop between resonators with amplitude $J_n$, which depends on the sign of the detuning $\Delta$ in Eq. (\[J\]).
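The bond structure summed over in Eq. (\[Kagome\]) can also be generated programmatically. The sketch below uses our own site indexing and assumes periodic boundaries; it reproduces the six xy bonds per unit cell and the coordination number 4 of the Kagome lattice.

```python
from collections import Counter

def kagome_bonds(Nx, Ny):
    """Enumerate the xy bonds of Eq. (Kagome): each unit cell (i, j) holds sites
    a, b, c; the up-triangle couples a, b, c of cell (i, j) and the down-triangle
    couples b(i,j), a(i+1,j-1), c(i+1,j). Periodic boundaries assumed."""
    def site(s, i, j):                          # s: 0 = a, 1 = b, 2 = c
        return 3 * ((i % Nx) * Ny + (j % Ny)) + s
    bonds = []
    for i in range(Nx):
        for j in range(Ny):
            a, b, c = site(0, i, j), site(1, i, j), site(2, i, j)
            a2, c2 = site(0, i + 1, j - 1), site(2, i + 1, j)
            bonds += [(a, b), (b, c), (c, a),       # up triangle
                      (a2, c2), (b, a2), (c2, b)]   # down triangle
    return bonds

bonds = kagome_bonds(3, 3)
degree = Counter()
for u, v in bonds:
    degree[u] += 1
    degree[v] += 1
assert set(degree.values()) == {4}   # Kagome coordination number is 4
```

Each bond $(u,v)$ in the list contributes one $\frac12 J_3(\sigma^x_u\sigma^x_v+\sigma^y_u\sigma^y_v)$ term, so the same enumeration can drive a numerical build of $H^{Kagome}_{xy}$.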
If $\Delta>0$, the hopping amplitude is negative, $J_n<0$: the hopping process lowers the total system energy, and photons hop between cavities. For $\Delta<0$, $J_n>0$: hopping costs energy, and in the ground state the photon remains localized in a resonator. Since typically the transmon qubit frequency is $\omega_a/2\pi \sim$ 10GHz [@KochPRA; @Wallraff] and the resonator microwave photon frequency in the circuit-QED scheme is $\omega_r/2\pi\sim$ 5-10GHz [@Blais], we will consider the parameter range $\Delta=\omega_a-\omega_r>0$. For three resonators coupled to an ancilla qubit ($n=3$) in Fig. \[fig1\](a) the hopping amplitude becomes $J_3=\frac{1}{6}(\Delta-\sqrt{\Delta^2+12f^2})$. Figure \[fig1\](b) shows $J_3$ as a function of the ancilla qubit frequency $\omega_a$ and the ancilla-resonator coupling strength $f$. For the resonant case, $\Delta=\omega_a-\omega_r=0$, the hopping amplitude has its maximum value, $|J_3|=f/\sqrt{3}$, and it diminishes as the detuning $\Delta$ grows, which means that $J_3$ is controllable in the range $-f/\sqrt{3}<J_3 <0$. Here the typical value of the coupling between a transmon ancilla and a resonator is $f/2\pi\sim$ 100MHz [@Zeytin; @Keller; @Bosman]. If we can adjust the parameters $\Delta=\omega_a-\omega_r$ and $f$, the coupling constant $J_3$ becomes tunable. The resonator frequency $\omega_r$ and the ancilla-resonator coupling $f$ are usually fixed in the experiment, but for some qubit schemes we can tune the ancilla qubit frequency $\omega_a$ during the experiment. For the transmon qubit the qubit frequency is represented as $\omega_a\sim \sqrt{8E_JE_C}$ with the Josephson coupling energy $E_J$ and the charging energy $E_C$ [@KochPRA]. Since the Josephson coupling energy $E_J=E_{J,max}|\cos(\pi\Phi/\Phi_0)|$ is controllable by varying the magnetic flux $\Phi$ threading a dc-SQUID loop [@KochPRA], we can adjust the frequency of the transmon qubit, $\omega_a$.
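A minimal numerical sketch of this tunability (toy GHz-scale numbers; `hopping_J3` is our own helper, not code from the text):

```python
import numpy as np

def hopping_J3(delta, f):
    """J_3 = (Delta - sgn(Delta) * sqrt(Delta^2 + 12 f^2)) / 6 for n = 3 resonators."""
    return (delta - np.sign(delta) * np.sqrt(delta**2 + 12 * f**2)) / 6.0

f = 0.1  # ancilla-resonator coupling f/2pi ~ 100 MHz, in GHz units
for delta in (1e-6, 0.1, 0.5, 2.0, 5.0):   # detuning Delta = omega_a - omega_r > 0
    print(f"Delta = {delta:7.2g} GHz  ->  J_3 = {hopping_J3(delta, f):+.5f} GHz")

# near resonance |J_3| approaches its maximum f/sqrt(3); it vanishes as Delta grows
assert np.isclose(hopping_J3(1e-9, f), -f / np.sqrt(3))
assert abs(hopping_J3(5.0, f)) < abs(hopping_J3(0.5, f)) < f / np.sqrt(3)
```

In the far-detuned limit the expression reduces to the familiar dispersive form $J_3\approx -f^2/\Delta$.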
In the Hamiltonian for the two-dimensional xy-model on the Kagome lattice in Eq. (\[Kagome\]), $J_3$, which corresponds to the coupling constant between the pseudo spins $\sigma$, therefore becomes tunable. Hence, in this way we can achieve a quantum simulator for the two-dimensional xy-model on the Kagome lattice with [*in situ*]{} tunable coupling. We can measure the resonator states by attaching measurement ports to the resonators, at the cost of a more complex lattice design. Alternatively, as in a recent study [@Kollar], measurement ports can be attached at the boundary of the lattice, although the analysis of the simulation results then becomes more complicated. In this study we assume identical resonators with equal ancilla qubit-resonator coupling $f$, and we further consider the restricted subspace with ${\cal N}_e=1$ of the Hilbert space, as shown in Eq. (\[Ne\]). If the couplings $f_p$ fluctuate around the uniform value $f$, the transformed Hamiltonian will deviate from the exact xy-model Hamiltonian. Furthermore, multiple photons or higher harmonic modes may be generated in the resonators, giving rise to errors in the processes. The effect of these non-idealities should be considered in a future study.

conclusion
==========

We proposed a scheme for simulating the quantum xy-model Hamiltonian on a two-dimensional Kagome lattice of resonator cavities with tunable coupling. Several cavities are coupled to each other via an intervening ancilla qubit. We found that the cavity lattice formed by extending this structure can be transformed to a tight-binding lattice of photons after discarding the ancilla qubit degree of freedom. In the subspace of zero and one photon per cavity this Hamiltonian can be described as the quantum xy-model Hamiltonian. We introduced an ancilla transmon qubit whose energy levels can be controlled by varying a threading magnetic flux. The coupling strength can be [*in situ*]{} tuned by adjusting the frequency of the ancilla qubits intervening between the cavities.
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0023467).

I. M. Georgescu, S. Ashhab, and F. Nori: Quantum simulation. Rev. Mod. Phys. [**86**]{}, 153 (2014)
J. I. Cirac and P. Zoller: Goals and opportunities in quantum simulation. Nature Phys. [**8**]{}, 264 (2012)
I. Buluta and F. Nori: Quantum simulators. Science [**326**]{}, 108 (2009)
M. J. Hartmann, F. G. S. L. Brandão, and M. B. Plenio: Strongly interacting polaritons in coupled arrays of cavities. Nature Phys. [**2**]{}, 849 (2006)
P. Xue, Z. Ficek, and B. C. Sanders: Probing multipartite entanglement in a coupled Jaynes-Cummings system. Phys. Rev. A [**86**]{}, 043826 (2012)
A. D. Greentree, C. Tahan, J. H. Cole, and L. C. L. Hollenberg: Quantum phase transitions of light. Nature Phys. [**2**]{}, 856 (2006)
S. Schmidt and G. Blatter: Strong Coupling Theory for the Jaynes-Cummings-Hubbard Model. Phys. Rev. Lett. [**103**]{}, 086403 (2009)
J. Koch and K. Le Hur: Superfluid–Mott-insulator transition of light in the Jaynes-Cummings lattice. Phys. Rev. A [**80**]{}, 023811 (2009)
J. Fan, Y. Zhang, L. Wang, F. Mei, G. Chen, and S. Jia: Superfluid–Mott-insulator quantum phase transition of light in a two-mode cavity array with ultrastrong coupling. Phys. Rev. A [**95**]{}, 033842 (2017)
H. Bernien [*et al.*]{}: Probing many-body dynamics on a 51-atom quantum simulator. Nature [**551**]{}, 579 (2017)
J. Zhang [*et al.*]{}: Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator. Nature [**551**]{}, 601 (2017)
A. A. Houck, H. E. Türeci, and J. Koch: On-chip quantum simulation with superconducting circuits. Nature Phys. [**8**]{}, 292 (2012)
S. Schmidt and J. Koch: Circuit QED lattices: Towards quantum simulation with superconducting circuits. Ann. Phys. (Berlin) [**525**]{}, 395 (2013)
M. D. Kim and J. Kim: Coupling qubits in circuit-QED cavities connected by a bridge qubit. Phys. Rev. A [**93**]{}, 012321 (2016)
M. D. Kim and J. Kim: Scalable quantum computing model in the circuit-QED lattice with circulator function. Quantum Inf. Process. [**16**]{}, 192 (2017)
M. J. Hartmann, F. G. S. L. Brandão, and M. B. Plenio: Effective Spin Systems in Coupled Microcavities. Phys. Rev. Lett. [**99**]{}, 160501 (2007)
D. G. Angelakis, M. F. Santos, and S. Bose: Photon-blockade-induced Mott transitions and XY spin models in coupled cavity arrays. Phys. Rev. A [**76**]{}, 031805(R) (2007)
A. Blais, J. Gambetta, A. Wallraff, D. I. Schuster, S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf: Quantum-information processing with circuit quantum electrodynamics. Phys. Rev. A [**75**]{}, 032329 (2007)
M. Steffen, S. Kumar, D. P. DiVincenzo, J. R. Rozen, G. A. Keefe, M. B. Rothwell, and M. B. Ketchen: High-Coherence Hybrid Superconducting Qubit. Phys. Rev. Lett. [**105**]{}, 100502 (2010)
K. Inomata, T. Yamamoto, P.-M. Billangeon, Y. Nakamura, and J. S. Tsai: Large dispersive shift of cavity resonance induced by a superconducting flux qubit in the straddling regime. Phys. Rev. B [**86**]{}, 140508(R) (2012)
A. A. Houck, D. I. Schuster, J. M. Gambetta, J. A. Schreier, B. R. Johnson, J. M. Chow, L. Frunzio, J. Majer, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf: Generating single microwave photons in a circuit. Nature [**449**]{}, 328 (2007)
M. Hofheinz, E. M. Weig, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, H. Wang, J. M. Martinis, and A. N. Cleland: Generation of Fock states in a superconducting quantum circuit. Nature [**454**]{}, 310 (2008)
M. Hofheinz, H. Wang, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, J. Wenner, J. M. Martinis, and A. N. Cleland: Synthesizing arbitrary quantum states in a superconducting resonator. Nature [**459**]{}, 546 (2009)
See, for example, A. Mielke: Exact ground states for the Hubbard model on the Kagome lattice. J. Phys. A [**25**]{}, 4335 (1992)
Y.-Z. You, Z. Chen, X.-Q. Sun, and H. Zhai: Superfluidity of Bosons in Kagome Lattices with Frustration. Phys. Rev. Lett. [**109**]{}, 265302 (2012)
A. Petrescu, A. A. Houck, and K. Le Hur: Anomalous Hall effects of light and chiral edge modes on the Kagome lattice. Phys. Rev. A [**86**]{}, 053804 (2012)
J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf: Charge-insensitive qubit design derived from the Cooper pair box. Phys. Rev. A [**76**]{}, 042319 (2007)
A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf: Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature [**431**]{}, 162 (2004)
S. Zeytinoğlu, M. Pechal, S. Berger, A. A. Abdumalikov Jr., A. Wallraff, and S. Filipp: Microwave-induced amplitude- and phase-tunable qubit-resonator coupling in circuit quantum electrodynamics. Phys. Rev. A [**91**]{}, 043846 (2015)
A. J. Keller, P. B. Dieterle, M. Fang, B. Berger, J. M. Fink, and O. Painter: Al transmon qubits on silicon-on-insulator for quantum device integration. Appl. Phys. Lett. [**111**]{}, 042603 (2017)
S. J. Bosman, M. F. Gely, V. Singh, D. Bothner, A. Castellanos-Gomez, and G. A. Steele: Approaching ultrastrong coupling in transmon circuit QED using a high-impedance resonator. Phys. Rev. B [**95**]{}, 224515 (2017)
A. J. Kollár, M. Fitzpatrick, and A. A. Houck: Hyperbolic Lattices in Circuit Quantum Electrodynamics. arXiv:1802.09549
--- abstract: 'This paper presents two methods for the first Micro-Expression Spotting Challenge 2019 by evaluating local temporal patterns (LTP) and local binary patterns (LBP) on the two most recent databases, i.e. SAMM and CAS(ME)$^2$. First we propose the LTP-ML method as the baseline for the challenge, and then we compare its results with the LBP-$\chi ^{2}$-distance method. The LTP patterns are extracted by applying PCA in a temporal window on several facial local regions. The micro-expression sequences are then spotted by a local classification of LTPs and a global fusion. The LBP-$\chi ^{2}$-distance method compares feature differences by calculating the $\chi^2$ distance of LBP histograms in a time window; facial movements are then detected with a threshold. The performance is evaluated by Leave-One-Subject-Out cross validation. The overlapping frames are used to determine the *True Positives*, and the metric *F1-score* is used to compare the spotting performance on the databases. The *F1-scores* of the LTP-ML results for SAMM and CAS(ME)$^2$ are 0.0316 and 0.0179, respectively. The results show that our proposed LTP-ML method outperforms the LBP-$\chi ^{2}$-distance method in terms of *F1-score* on both databases.' author: - '[^1]' - | Anonymous FG 2019 submission\ Paper ID\ bibliography: - 'IEEEabrv.bib' - 'reference.bib' title: '**Spotting Micro-Expressions on Long Videos Sequences** ' ---

INTRODUCTION
============

A facial micro-expression (ME) is a brief, local facial movement which can be triggered under high emotional pressure. Its duration is less than 500ms [@Ekman_Friesen_1969]. It is a very important non-verbal communication clue; its involuntary nature makes it possible to analyze a person's genuine emotional state. ME analysis has many potential applications in national security [@Ekman_2009], medical care [@Endres_Laidlaw_2009], educational psychology [@Chiu_Liaw_Yu_Chou], and political psychology [@Stewart_Waller_Schubert_2009].
Due to the growth and importance of MEs, researchers [@yap2018facial] have worked collaboratively to solicit work in this area by conducting challenges on datasets and methods for MEs. This year, the theme of the Second Facial Micro-Expression Grand Challenge has been extended to spotting challenges. The main idea of most methods for ME spotting is to compare the feature differences between the first frame and the other frames in a time window. Meanwhile, the feature descriptors used in the state of the art are diverse, to name a few: LBP [@Moilanen_Zhao_Pietikainen_2014; @Li_Hong_Moilanen_Huang_Pfister_Zhao_Pietikainen_2017], HOG [@davison2018objective], optical flow [@yan2014casme; @Li_Yu_Zhan_2016; @Wang_Wu_Fu_2016; @ma2017region], integral projection [@lu2017micro], Riesz pyramid [@duque2018micro], and the frequency domain [@li2018can]. Feature differences allow consistent comparisons between frames over a time window of the size of an ME. However, the movements spotted between frames might not be ME movements; they could be noise, macro-movements or illumination changes. This is why the ability to distinguish MEs from other movements (such as blinking or subtle head movements) remains an open challenge. Nowadays, methods utilizing machine learning are emerging [@Xia_Feng_Peng_Peng_Zhao_2016; @tran2017sliding; @borza2017high; @husak2017spotting]. Furthermore, [@zhang2018smeconvnet] employed deep learning for the first time to perform ME spotting. The machine learning process enhances the ability to distinguish micro-expressions from other movements. However, spatial patterns are still the primary feature for the classifiers; the temporal variation pattern of facial movement within an ME duration has yet to attract sufficient attention. Meanwhile, few articles spot micro-expressions directly from local regions, even though the fact that a micro-expression is a local facial movement could help to reduce the false positives.
In this paper, we spot micro-expression clips in two recently published databases, and we establish the baseline method for the ME spotting challenge by directly using a temporal pattern extracted from local regions [@li2018ltp]. The frames within an ME duration are taken into account to obtain a truly temporal and local pattern (LTP), and the LTPs are then recognized by a classifier. Even though the spatial pattern is not studied, the spotted facial motions are differentiated by a fusion process from local to global. This method helps to improve the ability to distinguish MEs from other movements. Furthermore, it allows finding the spatial local region of the ME and its temporal onset index. We compare the results of our proposed LTP-ML method with an LBP approach, the LBP-$\chi ^{2}$-distance method of Moilanen et al. [@Moilanen_Zhao_Pietikainen_2014]. The rest of the paper is organized as follows: Section \[sec:method\] presents the methodology and performance metrics, Section \[sec:result\] presents the detailed experimental results, and Section \[sec:conclusion\] concludes the paper.

Methodology {#sec:method}
===========

This section describes the benchmark databases, the proposed LTP-ML method, the state-of-the-art LBP method and the performance metrics.

Databases
---------

The two most recent spontaneous micro-expression databases containing long videos, SAMM [@davison2018samm] and CAS(ME)$^2$ [@qu2017cas], are used for the ME spotting challenge. Both databases contain long videos, which were recorded in a strictly controlled laboratory environment. Table \[tab:bdd\_info\] compares these two databases. The notable differences are the resolutions and frame rates used in the experimental settings. These are indeed a great challenge for the computer vision and machine learning community: producing a robust method that works for both databases. The detailed information of these two databases is presented in the following two subsections.
  Database      Participants   Samples   Resolution         FPS
  ------------- -------------- --------- ------------------ -----
  SAMM          32             79        2040$\times$1088   200
  CAS(ME)$^2$   22             97        640$\times$480     30

  : A Comparison between SAMM and CAS(ME)$^2$.

\[tab:bdd\_info\]

### SAMM Long Videos Database

The SAMM database consists of a total of 32 subjects, each with 7 videos [@davison2018samm]. The average length of the videos is 35.3s. The original release of SAMM consists of micro-movement clips labelled with Action Units. Recently, the authors [@Davison2018] introduced objective classes and emotion classes for the database. The recognition challenge will use the emotion classes from the database as ground truth. The spotting challenge focuses on 79 videos, each containing one or multiple micro-movements, with a total of 159 micro-movements. The indices of the onset, apex and offset frames of the micro-movements are provided as the ground truth. The micro-movement interval runs from the onset frame to the offset frame. In this database, all micro-movements are labeled; thus, the spotted frames can indicate not only MEs but also other facial movements, such as eye blinks.

### CAS(ME)$^2$ Database

In part A of the CAS(ME)$^2$ database [@qu2017cas], there are 22 subjects and 97 long videos. The average duration is 148s. The facial movements are classified as macro- and micro-expressions. The video samples may contain multiple macro- or micro-expressions. The onset, apex and offset indices for these expressions are given in an excel file. In addition, the eye blinks are labeled with onset and offset times.

LTP-ML: Our Proposed Baseline Method
------------------------------------

The baseline method is developed based on the LTP-ML (local temporal pattern-machine learning) method proposed in [@li2018ltp]. The method is extended to long videos by employing a sliding temporal window. The main idea and the modifications of the LTP-ML method are presented in the following paragraphs.
### Pre-processing

As the ME is a local facial movement, we analyze MEs only on a selection of regions of interest (ROIs). First of all, as shown in Figure \[fig:landmarks\_track\], 84 facial landmarks are tracked in the video sequence by utilizing the Genfacetracker (©Dynamixyz). Then the size $a$ of each ROI square is determined by the distance $L$ between the left and right inner eye corners: $a = (1/5) \times L$. 12 ROI squares are chosen based on the regions where MEs happen most frequently, i.e. the corners of the eyebrows and of the mouth. Two ROIs in the nose region are chosen as references, because the nose is the most rigid facial region. Since the average duration of an ME is around 300ms, and the subjects barely move within one second, the long videos in these two databases are processed with a temporal sliding window $W_{video}$ whose length is 1s. The overlap is set to 300ms to avoid missing any possible ME movement. Thus, the video is separated into an ensemble of small sequences $[I_1, I_2,...,I_M]$ by the sliding temporal window, as shown in Figure \[fig:PCA\]. The positions of the 12 chosen ROIs for all frames in one sequence are determined by the detected landmarks of the first frame in the window.

![PCA process analysis. The long video is divided into small sequences by a sliding window. Then the PCA process is performed respectively on the time axis for the 12 ROI sequences in one small divided clip.[]{data-label="fig:PCA"}](pca_on_long_video.png){width=".5\textwidth"}

### Feature Extraction

In this part, local temporal patterns (LTPs) [@li2018ltp] are analyzed in local regions to distinguish MEs from other movements. They are extracted from the 12 ROIs respectively in each small sequence. Considering a sequence $I_m$ ($m \leq M$), as illustrated in the lower part of Figure \[fig:PCA\], PCA is performed on the temporal axis of each ROI sequence to conserve the principal variation in this region.
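A sketch of this pre-processing and of the temporal PCA described above (the variable names and the SVD-based PCA implementation are our own assumptions, not the authors' code):

```python
import numpy as np

def sliding_windows(n_frames, win_len, overlap):
    """Divide a long video into windows of win_len frames overlapping by
    `overlap` frames (e.g. 200/60 frames for SAMM at 200 fps: 1 s / 300 ms)."""
    step = win_len - overlap
    starts = list(range(0, max(n_frames - win_len, 0) + 1, step))
    if starts[-1] + win_len < n_frames:            # keep the tail of the video
        starts.append(max(n_frames - win_len, 0))
    return [(s, min(s + win_len, n_frames)) for s in starts]

def temporal_pca_points(roi_seq):
    """Project each frame of one ROI sequence (N, a, a) onto the first two
    principal components of its temporal variation: one 2-D point per frame."""
    N = roi_seq.shape[0]
    X = roi_seq.reshape(N, -1).astype(float)
    X = X - X.mean(axis=0)                         # center over time
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                            # (N, 2) points P_n

rng = np.random.default_rng(0)
roi = rng.random((200, 15, 15))                    # one SAMM-sized ROI sequence
P = temporal_pca_points(roi)
assert P.shape == (200, 2)
assert sliding_windows(1000, 200, 60)[0] == (0, 200)
```

The distances between these 2-D points inside a window $W_{ROI}$, normalized per sequence, form the LTP features described in the next paragraphs.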
The first two components of each ROI frame are used to analyze the variation pattern of the local movement. The PCA process for the ROI sequence $ROI^m_j$ ($j \leq 12$) in $I_m$ can be presented as in equation \[eq:one\]. $$\label{eq:one} \resizebox{0.7\textwidth}{!}{$ \left[ \begin{matrix} P_{1}^{m,j}(x) & \cdots & P_{N}^{m,j}(x)\\ P_{1}^{m,j}(y) & \cdots & P_{N}^{m,j}(y) \end{matrix} \right]= \Phi \times ( \left[ \begin{matrix} F_{1}^{m,j}(1) & \cdots & F_{N}^{m,j}(1)\\ & \ddots & \\ F_{1}^{m,j}(a^{2}) & \cdots & F_{N}^{m,j}(a^{2}) \end{matrix} \right]-\bar I ) $}$$ where $F_{n}^{m,j}$ represents the pixels of one ROI frame, $P_{n}^{m,j} = [P_{n}^{m,j}(x),P_{n}^{m,j}(y)]$ are the first two components of PCA, and $n$ is the frame index in this ROI sequence ($n \leq N$). Hence, each frame in $ROI^m_j$ can be represented by a point $P_{n}^{m,j}$. Then, a sliding window $W_{ROI}$ is set depending on the average duration of an ME (300ms). The distances between the first frame and the other frames in this window are calculated. The window goes through each frame of the sequence $ROI^m_j$, and the distance set is obtained as $[\Delta^m_j(n,n+1),\Delta^m_j(n,n+w),...,\Delta^m_j(n,n+W_{ROI}-1) ]$, as shown in Figure \[fig:dist\_window\]. The distance values are then normalized over the entire $ROI^m_j$ to avoid the influence of different movement magnitudes in different videos. Hence, the feature of frame $n$ for $ROI^m_j$ can be represented as $[CN^m_j$, $ d^m_j(n,n+1),\cdots,d^m_j(n,n+W_{ROI}-1) ]$, where $d^m_j(n,n+1)$ is the normalized distance value and $CN^m_j$ is the normalization coefficient. A more detailed derivation can be found in [@li2018ltp]. The feature for one ROI sequence of the entire long video is the concatenation of the features of all the separated sequences.

### Local Classification

As presented in the above paragraph, one video contains 12 feature ensembles from the 12 ROIs. Li et al.
[@li2018ltp] showed that the LTP patterns are similar across all chosen ROIs for all kinds of ME. The patterns which represent the local ME movements can thus be recognized by a local classification. A supervised SVM classifier is employed with Leave-One-Subject-Out cross validation. The feature selection and label annotation are presented in [@li2018ltp].

### Global Fusion

After the LTPs which fit the local ME movement pattern are recognized, a global fusion is performed to eliminate the false positives caused by other movements and the false negatives caused by our recognition process. As introduced in [@li2018ltp], there are three steps: a local qualification, a spatial fusion and a merge process.

LBP-$\chi ^{2}$-distance Method
-------------------------------

This method was first proposed in [@Moilanen_Zhao_Pietikainen_2014]. It is the most commonly used method for result comparison in ME spotting. Based on [@Moilanen_Zhao_Pietikainen_2014] and [@tran2017sliding], the configuration of LBP-$\chi^2$ is set as follows: the entire face region is divided into 36 blocks. The overlap rates between blocks along the X and Y axes are 0.2 and 0.3, respectively. LBP features are extracted from the blocks with uniform mapping. The radius is set to $r = 3$, and the number of neighboring points to $p = 8$. The $\chi^{2}$ distances of each frame are computed in a $2 \times L_{interval}+1$ interval. First of all, the LBP-$\chi^2$-distance values were compared over the whole long video. However, in this setting the method can barely spot any micro-expression interval, while producing many false positives. This is because the method spots the maximal movements in the video, and both databases contain movements larger than MEs. Hence, the entire video is separated into a set of sub-videos by a sliding window, with the same setting as in the LTP-ML method. For each sub-video, the feature differences are calculated and sorted to find the maximal movement in this short interval.
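A per-block sketch of the LBP-$\chi^2$ feature-difference computation described above, assuming scikit-image's uniform LBP; block extraction, sorting and peak thresholding are omitted:

```python
import numpy as np
from skimage.feature import local_binary_pattern

P_NBRS, RADIUS = 8, 3   # p = 8 neighboring points, radius r = 3, as in the text

def lbp_hist(block):
    """Uniform-mapping LBP histogram of one face block."""
    codes = local_binary_pattern(block, P_NBRS, RADIUS, method="uniform")
    hist, _ = np.histogram(codes, bins=P_NBRS + 2, range=(0, P_NBRS + 2),
                           density=True)
    return hist

def chi2(h1, h2, eps=1e-10):
    """Chi-squared distance between two LBP histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def feature_difference(frames, l_interval):
    """Difference of frame i against the average feature of the frames
    l_interval before and after it, inside a 2 * l_interval + 1 window."""
    hists = [lbp_hist(f) for f in frames]
    diff = np.zeros(len(frames))
    for i in range(l_interval, len(frames) - l_interval):
        avg = 0.5 * (hists[i - l_interval] + hists[i + l_interval])
        diff[i] = chi2(hists[i], avg)
    return diff
```

In the full method this difference is computed per block, the block values of each frame are sorted, and the largest ones are averaged before thresholding.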
Splitting the video into sub-videos gives the chance to spot more MEs which would be missed in a whole-video comparison.

  Method      LTP-ML            LTP-ML            LTP-ML             LTP-ML          LBP-$\chi^2$      LBP-$\chi^2$       LBP-$\chi^2$
  ----------- ----------------- ----------------- ------------------ --------------- ----------------- ------------------ ---------------
  database    SAMM$_{ME}^{c}$   SAMM$_{ME}^{f}$   CAS(ME)$^2_{ME}$   CAS(ME)$^{2}$   SAMM$_{ME}^{c}$   CAS(ME)$^2_{ME}$   CAS(ME)$^{2}$
  nb\_vid     79                79                32                 97              79                32                 97
  TP          34                47                16                 16              12                10                 10
  FP          1958              3891              1711               5742            4172              1729               5435
  FN          125               112               41                 41              147               47                 47
  Precision   0.0171            0.0043            0.0093             0.0028          0.0028            0.0057             0.0018
  Recall      0.2138            0.2956            0.2807             0.2807          0.0755            0.1754             0.1754
  F1-score    **0.0316**        0.0229            **0.0179**         **0.0055**      0.0055            0.0111             0.0035

\[tab:result\]

Performance Metrics
-------------------

Three evaluation steps are used to compare the performance of the spotting tasks:

**1. Definition of a true positive in one video** Supposing there are $m$ micro-expressions in the video and $n$ intervals are spotted, a spotted interval $W_{spotted}$ is considered a *true positive (TP)* if it satisfies the following condition: $$\frac{W_{spotted}\cap W_{groundTruth}}{W_{spotted}\cup W_{groundTruth}} \geq k$$ where $k$ is set to 0.5 and $W_{groundTruth}$ represents the micro-expression interval (onset-offset). Otherwise, the spotted interval is regarded as a *false positive (FP)*.

**2. Result evaluation in one video** Supposing the number of *TP*s in one video is $a$ ($a\leq m$ and $a\leq n$), then *FP* $= n-a$ and *false negative (FN)* $= m-a$, and the *Recall*, *Precision* and *F1-score* are defined as: $$Recall = \frac{a}{m},\ Precision = \frac{a}{n}$$ $$F1-score = \frac{2TP}{2TP+FP+FN} = \frac{2a}{m+n}$$ In practice, these metrics might not be suitable for some videos, as the following situations can occur on a single video:

- The test video does not contain micro-expression sequences; thus $m=0$, and the denominator of recall will be zero.
- The spotting method does not spot any interval; the denominator of precision will be zero since $n=0$.
- If two spotting methods, Method$_1$ and Method$_2$, spot $p$ and $q$ intervals respectively, with $p\leq q$, and the number of true positives is 0 for both, then the metric values (*recall*, *precision* or *F1-score*) equal zero for both methods. However, Method$_1$ in fact performs better than Method$_2$.

Considering these situations, we propose that for a single video the result is recorded in terms of *TP*, *FP* and *FN*. For performance comparison, the other metrics are then computed once for the entire database.

**3. Evaluation for the entire database** Supposing there are $V$ videos and $M$ micro-expression sequences in the entire database, and the method spots $N$ intervals in total, the database can be considered as one long video; thus, the metrics for the entire database are calculated by: $$Recall_{D} = \frac{{{\sum^V_{i=1}}} a_i}{{{\sum^V_{i=1}}} m_i} = \frac{A}{M}$$ $$Precision_{D} = \frac{{{\sum^V_{i=1}}} a_i}{{{\sum^V_{i=1}}} n_i} = \frac{A}{N}$$ $$F1-score_{D} = \frac{2\times (Recall_{D} \times Precision_{D})} {Recall_{D}+Precision_{D}}$$ The final results of the different methods are evaluated by the *F1-score*, since it takes both *recall* and *precision* into account.

Results and Discussion {#sec:result}
======================

As introduced in Section \[sec:method\], SAMM and CAS(ME)$^2$ have different frame rates and resolutions. Hence, the length of the sliding window $W_{video}$, the overlap size, the interval length of $W_{ROI}$ and the ROI size differ for these two databases. Table \[tab:exp\_conf\] lists the experimental parameters.

  Database      $L_{window}$   $L_{overlap}$   $L_{interval}$   $size_{ROI}$
  ------------- -------------- --------------- ---------------- --------------
  SAMM          200            60              60               15
  CAS(ME)$^2$   30             9               9                10

  : Parameter configuration for SAMM and CAS(ME)$^2$. $L_{window}$ is the length of the sliding window $W_{video}$, $L_{overlap}$ is the overlap size between sliding windows, and $L_{interval}$ is the interval length of $W_{ROI}$.
\[tab:exp\_conf\]

For the CAS(ME)$^2$ database, there are 97 videos, but only 32 of them contain micro-expressions. Thus, results are given under two conditions: one considering only the 32 videos that contain MEs (CAS(ME)$^2_{ME}$), the other including the entire database (all 97 videos). Since the raw videos of the SAMM database are too large to download (700GB), only 79 videos (full frame: 270GB; cropped face: 11GB) were provided for the challenge. In this work, we report results on two versions of the SAMM database: the cropped videos (SAMM$_{ME}^{c}$) provided by the authors using the method in [@Davison_Yap_Lansley_2015], and the full-frame videos (SAMM$_{ME}^{f}$). The spotting process is performed only on the downloaded databases.

Experimental Results of the LTP-ML Method
-----------------------------------------

After performing the LTP-ML method on these two databases, the spotting results for the whole database are listed in Table \[tab:result\]. The *F1-scores* for SAMM$_{ME}^{c}$ and CAS(ME)$^2_{ME}$ are 0.0316 and 0.0179, respectively. LTP-ML performs better on SAMM$_{ME}^{c}$ than on SAMM$_{ME}^{f}$, since the face-cropping process has already aligned the face region in the video and reduced the influence of irrelevant movements. Concerning the spotting results on CAS(ME)$^2$, there are more *FPs* because videos in this database that contain no ME may still contain macro-expressions.

Experimental Results of the LBP-$\chi^2$-distance (LBP-$\chi^2$) Method
-----------------------------------------------------------------------

The results are compared with those of the LBP-$\chi^2$-distance (LBP-$\chi^2$) method, also listed in Table \[tab:result\]. For CAS(ME)$^2_{ME}$, the best result of the LBP-$\chi^2$ method is obtained when the threshold for peak selection is set to 0.15, giving an *F1-score* of 0.0111. Meanwhile, the highest *F1-score* for SAMM$_{ME}^{c}$ is 0.0055, obtained with a threshold of 0.05.
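The evaluation protocol of the previous section can be sketched in a few lines (a minimal illustration, not the authors' code; frame intervals are assumed to be inclusive onset-offset pairs):

```python
def interval_iou(spotted, ground_truth):
    """Overlap ratio |W_s ∩ W_g| / |W_s ∪ W_g| of two inclusive frame intervals."""
    s1, e1 = spotted
    s2, e2 = ground_truth
    inter = max(0, min(e1, e2) - max(s1, s2) + 1)
    union = (e1 - s1 + 1) + (e2 - s2 + 1) - inter
    return inter / union

def is_true_positive(spotted, ground_truth, k=0.5):
    """A spotted interval is a TP when its overlap with the ground truth reaches k."""
    return interval_iou(spotted, ground_truth) >= k

def database_metrics(per_video):
    """per_video: one (tp, fp, fn) triple per video. Pooling the whole database
    as one long video avoids the degenerate per-video cases (m = 0 or n = 0)."""
    A = sum(tp for tp, _, _ in per_video)        # total true positives
    N = A + sum(fp for _, fp, _ in per_video)    # total spotted intervals
    M = A + sum(fn for _, _, fn in per_video)    # total ground-truth MEs
    recall, precision = A / M, A / N
    f1 = 2 * recall * precision / (recall + precision) if A else 0.0
    return recall, precision, f1
```

Feeding the pooled SAMM$_{ME}^{c}$ counts of the LTP-ML column (TP = 34, FP = 1958, FN = 125) reproduces the *F1-score* of 0.0316 of Table \[tab:result\].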
Compared with the LTP-ML method, the LBP-$\chi^2$ method is less accurate. The LTP-ML method is capable of spotting subtle movements from patterns that represent the temporal variation of MEs. Yet the *F1-score* remains low because of the large number of *FPs*. Both databases contain noise and irrelevant facial movements; for CAS(ME)$^2$ in particular, it is not easy to separate macro-expressions from micro-expressions in 30fps videos. The ability to distinguish MEs from other movements still needs to be enhanced.

CONCLUSIONS {#sec:conclusion}
===========

This paper addresses the challenge of spotting MEs in long video sequences using the two most recent databases, SAMM and CAS(ME)$^2$. We proposed LTP-ML for spotting MEs and provided a set of performance metrics as a guideline for evaluating ME spotting results. The baseline results for these two databases are provided in this paper. We demonstrated that our proposed method outperforms the LBP approach in spotting MEs. While the method was able to produce a reasonable number of *TPs*, a huge challenge still lies ahead due to the large number of *FPs*. Further research will focus on enhancing the ability to distinguish MEs from other facial movements in order to reduce *FPs*, including the implementation of deep learning approaches once sufficient data are available.

ACKNOWLEDGMENTS
===============

The authors gratefully acknowledge the contribution of the Organisers and Program Committee Members.

[^1]: This work is supported by the Chinese Scholarship Council and ANR Reflet. This paper is also supported in part by grants from the National Natural Science Foundation of China (61772511) and The Royal Society (IF160006).
--- abstract: | The aim of the paper is to answer the following question: does $\kappa$-deformation fit into the framework of noncommutative geometry in the sense of spectral triples?\ Using a compactification of time, we get a discrete version of the $\kappa$-Minkowski deformation via $C^*$-algebras of groups. The dynamical system of the underlying groups (including some Baumslag–Solitar groups) is used in order to construct *finitely summable* spectral triples. This allows one to bypass an obstruction to finite-summability that appears when the common regular representation is used. address: | Centre de Physique Théorique [^1],\ CNRS–Luminy, Case 907 13288 Marseille Cedex 9 FRANCE author: - 'B. Iochum [^2], T. Masson, T. Schücker & A. Sitarz [^3]' title: '$\kappa$-Deformation and Spectral Triples' --- PACS numbers: 11.10.Nx, 02.30.Sa, 11.15.Kc Introduction ============ Lukierski, Ruegg, Nowicki & Tolstoy discovered a Hopf algebraic deformation of the Poincaré Lie algebra and called $\kappa$ the deformation parameter [@kappafirst1; @kappafirst2]. Since this pioneering work, the subject has become very active: the Hopf algebra was represented on the $\kappa$-deformation of Minkowski space [@kappaMink1; @kappaMink2]. This has been used to generalize the notion of a quantum particle [@part] and in quantum field theory [@fields]. Algebraic properties like differential calculi on the $\kappa$-Minkowski space were investigated [@calculi], as well as the Noether theorem. The $\kappa$-Minkowski space has also been popularized as ‘doubly special relativity’ [@double] and appears in spin foam models [@foam]. It is natural to ask whether $\kappa$-geometry is a noncommutative one in the sense of Connes [@triple; @triple1] (see [@answer] for a first attempt). While the algebraic setting is quite clear, the main difficulty is to overcome the analysis, which is an essential part of the definition of spectral triples.
The $\kappa$-deformation of $n$-dimensional Minkowski space is based on the Lie-algebraic relations $$\begin{aligned} \label{commutationkappa} [x^0,\,x^j]:= \tfrac{i}{\kappa} \,x^j, \quad [x^j,\,x^k]=0, \quad j,k=1,\dots,n-1.\end{aligned}$$ Here we assume $\kappa>0$. As in [@KMLS eq. (2.6)], one gets $$\begin{aligned} e^{i c_\mu x^\mu}=e^{i c_0 x^0}\, e^{i c'_j \,x^j} \text{ where } c'_j:= \tfrac{\kappa}{c_0}\,(1-e^{-c_0/\kappa}) c_j.\end{aligned}$$ Assuming that the $x^\mu$’s are selfadjoint operators on some Hilbert space, we define unitaries $$U_\omega :=e^{i\omega x^0} \text{ and }V_{\vec{k}}:=e^{-i\sum_{j=1}^{n-1} k_jx^j}$$ with $\omega,k_j \in {\mathbb{R}}$, which generate the $\kappa$-Minkowski group considered in [@Agostini]. If $W(\vec{k},\omega):=V_{\vec{k}}\,U_\omega$, one gets as in [@Agostini eq. (13)] $$\begin{aligned} \label{grouplaw} W(\vec{k},\omega) \, W( \vec{k'},\omega')=W(e^{-\omega/\kappa} \vec{k'}+\vec{k},\omega + \omega').\end{aligned}$$ The group law is, for $n=2$, nothing else but that of the crossed product $$\begin{aligned} \label{groupkappa} G_\kappa:={\mathbb{R}}\rtimes_{\alpha}{\mathbb{R}}\text{ with the action }\,{\alpha}(\omega)k:=e^{-\omega/\kappa}k, \,\,\, k\in {\mathbb{R}}.\end{aligned}$$ Note that $G_\kappa \simeq {\mathbb{R}}\rtimes {\mathbb{R}}^*_+$ is the affine group of the real line, which is solvable and nonunimodular. Its irreducible unitary representations are either one-dimensional or fall into two nonequivalent classes [@Agostini]. When $\kappa \rightarrow \infty$, the usual plane ${\mathbb{R}}^2$ is recovered, but with an unpleasant pathology at the origin [@DP]. For a given $\omega$, a particular case occurs when $m :=e^{-\omega /\kappa} \in {\mathbb{N}}^*$, since $$\begin{aligned} \label{commutationgen} U_\omega V_{\vec{k}}=(V_{\vec{k}})^m U_\omega, \end{aligned}$$ $m$ being independent of $k_j$. This means that for chosen $\omega$ and $k_j$, the presentation of the group is given by two generators and one relation.
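The group law for the $W(\vec{k},\omega)$ can be checked numerically in the standard $2\times 2$ matrix realization of the affine group, where $(k,\omega)$ acts on the line as $x \mapsto e^{-\omega/\kappa}x + k$ (an illustrative sketch, not taken from the paper):

```python
import numpy as np

kappa = 1.0  # deformation parameter (any positive value)

def W(k, omega):
    """(k, omega) realized as the affine map x -> exp(-omega/kappa) * x + k."""
    return np.array([[np.exp(-omega / kappa), k],
                     [0.0, 1.0]])

# check W(k, w) W(k', w') = W(exp(-w/kappa) k' + k, w + w')
k1, w1, k2, w2 = 0.7, 1.3, -0.4, 0.5
lhs = W(k1, w1) @ W(k2, w2)
rhs = W(np.exp(-w1 / kappa) * k2 + k1, w1 + w2)
assert np.allclose(lhs, rhs)
```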
Here, we investigate different spectral triples (${\mathcal{A}},{\mathcal{H}},{\mathcal{D}}$) associated to the group $C^*$-algebra of $G_\kappa$. To avoid technicalities due to a continuous spectrum of ${\mathcal{D}}$, we want a unital algebra, so we consider a periodic time $x^0$ which induces a discrete version $G_a$ of $G_\kappa$, where $a$ is a real parameter depending on $\kappa$. This is done in section \[motivation\]. In section \[aquelconque\], we give the main properties of the algebra ${\mathcal{U}}_a = C^*(G_a)$ and its representations. For $C^*$-algebras of groups, the left regular representation is the natural one to consider. But due to the structure of $G_a$ (solvable with exponential growth, see Theorem \[Thealgebra\]), there is a known obstruction to constructing finite-summable spectral triples on ${\mathcal{U}}_a$ based on this representation. In order to bypass this obstruction, we need to refine our understanding of the structure of $G_a$ in terms of an underlying dynamical system. This structure permits us to define a particular representation of ${\mathcal{U}}_a$ using only the periodic points of the dynamical system. Following Brenken and Jørgensen [@BJ], the topological entropy of this dynamics is considered. It is worthwhile to notice that the elementary building blocks $G_a$ given by $a=m \in {\mathbb{N}}^*$ as in \[commutationgen\] are some of the amenable Baumslag–Solitar groups, already encountered in wavelet theory [@J]. The power of harmonic analysis on groups also justifies a reminder of their main properties in section \[abstractapproach\]. The question of the finite summability of these triples is carefully considered with results by Connes [@Con89] and Voiculescu [@Voiculescu79; @Voiculescu]: using the regular representation of $C^*(G_a)$, an obstruction to finite-summability appears and allows only $\theta$-summability.
However, faithful representations of $C^*(G_a)$, not quasi-equivalent to the left regular one and based on the existence of periodic points for dynamical systems, can give rise to arbitrary finite-summable spectral triples. These results are summarized in Theorems \[nonexistence\] and \[existence\].\ All proofs will appear elsewhere. Motivations and models {#motivation} ====================== Let us consider the example given by \[commutationkappa\] for two hermitian generators $x^0$ and $x^1$. For any $(k,\omega) \in {\mathbb{R}}^2$, one defines $W(k,\omega) := V_{k} \,U_\omega = e^{-ik x^1} e^{i \omega x^0}$. Then one obtains the group law \[grouplaw\], so this is a representation of the group $G_\kappa$ defined in \[groupkappa\]. The bounded operators $W(f) := \int_{G_\kappa} f(k, \omega) \,W(k, \omega) \, e^{\omega/\kappa} \,dk \,d\omega$ for any $f \in L^1(G_\kappa, e^{\omega/\kappa} \, dk\,d\omega)$ (here $e^{\omega/\kappa} dk\,d\omega$ is the left Haar measure on $G_\kappa$) generate a representation of $C^*_{red}(G_\kappa)$. The product of $f, g \in L^1(G_\kappa, e^{\omega/\kappa} \, dk\,d\omega)$ takes the form $(f \ast_\kappa g)(k, \omega) = \int_{G_\kappa} f(k', \omega') g(e^{\omega'/\kappa}(k - k'), \omega - \omega') \,e^{\omega'/\kappa} \,dk'\,d\omega'$. The advantage of considering the theory of group $C^*$-algebras is twofold. Many structural properties of groups will turn out to be useful in studying properties of the corresponding $C^*$-algebras. Moreover, this allows us to construct in a natural way compact versions of noncommutative spaces, as we now explain. For an abelian topological group $G$, $C^*_{red}(G)$ is isomorphic to $C_0(\widehat{G})$ where $\widehat{G}$ is the Pontryagin dual of $G$. Both algebras are defined as spaces of functions. By duality, a discrete subgroup $\Gamma \subset G$ produces the $C^*$-algebra $C^*_{red}(\Gamma) \simeq C(\widehat{\Gamma})$ where $C(\widehat{\Gamma})$ is the $C^*$-algebra of continuous functions on the compact space $\widehat{\Gamma}$.
Notice that there is a natural dual map $\widehat{G} \rightarrow \widehat{\Gamma}$. For the example of the plane, consider the discrete subgroup $\Gamma = {\mathbb{Z}}^2 \subset {\mathbb{R}}^2$. Then the resulting $C^*$-algebra is $C^*({\mathbb{Z}}^2) \simeq C({\mathbb{T}}^2)$ because $\widehat{{\mathbb{Z}}} = {\mathbb{T}}^1$. The dual map ${\mathbb{R}}\simeq \widehat{{\mathbb{R}}} \rightarrow \widehat{{\mathbb{Z}}} = {\mathbb{T}}^1$ is explicitly given by $x \mapsto e^{2\pi ix}$. The choice of the subgroup $\Gamma = {\mathbb{Z}}^2 \subset {\mathbb{R}}^2$ corresponds then to the choice of the compact version ${\mathbb{T}}^2$ of the (dual) space $\widehat{{\mathbb{R}}}^2 \simeq {\mathbb{R}}^2$. The compactification takes place in the space of the variables $(x,y)$. In order to get a compact version of this $\kappa$-deformed Minkowski space, one has to choose a discrete subgroup $H_\kappa \subset G_\kappa$. Since $H_\kappa$ is discrete and non-abelian, the associated algebra $C^*(H_\kappa)$ is unital and noncommutative, so it can be interpreted as a compact noncommutative space. This point is motivated in section \[spectraltriples\] and is done in section \[themodel-subgroup\]. As a final preliminary remark, let us mention that the groups we will encounter will be decomposed as crossed products with ${\mathbb{Z}}$, so both the (related) theories of discrete dynamical systems and crossed products of $C^*$-algebras will be intensively used in many parts of this work. Spectral triples {#spectraltriples} ---------------- The goal is to study the existence of spectral triples for the $\kappa$-deformed space. 
A spectral triple (or unbounded Fredholm module) $({\mathcal{A}},{\mathcal{H}},{\mathcal{D}})$ [@Con95; @triple; @ConnesMarcolli] is given by a unital $C^*$-algebra $A$ with a faithful representation $\pi$ on a Hilbert space ${\mathcal{H}}$ and an unbounded self-adjoint operator ${\mathcal{D}}$ on ${\mathcal{H}}$ such that - the set ${\mathcal{A}}={\{\,a \in A\,: \, [{\mathcal{D}}, \pi(a)] \text{ is bounded }\,\}}$ is norm dense in $A$, - $(1+{\mathcal{D}}^2)^{-1}$ is compact, i.e. ${\mathcal{D}}$ has compact resolvent. (${\mathcal{A}}$ is always a $^*$-subalgebra of $A$.)\ Of course, the natural choice of the algebra $A$ is the $C^*$-algebra of the group $G_\kappa$, but since $A=C^*(G_\kappa)$ has no unit, we need to replace the second axiom by: - $\pi(a) (1+{\mathcal{D}}^2)^{-1}$ is compact for any $a\in {\mathcal{A}}$.\ This new technical axiom generates a lot of analytical complexities but is necessary to capture the metric dimension associated to ${\mathcal{D}}$. For instance, if a Riemannian spin manifold $M$ is non-compact, the usual Dirac operator ${\mathcal{D}}$ has a continuous spectrum on ${\mathcal{H}}=L^2(S)$ where $S$ is the spinor bundle on $M$. Nevertheless, the spectral triple $\big(C^\infty(M), L^2(S),{\mathcal{D}}\big)$ has a metric dimension equal to the dimension of $M$. A noncommutative example of that kind (the Moyal plane) has been studied in [@GIGSV].\ We try to avoid these difficulties here by using a unital algebra $A$. The compact version model as choice of a discrete subgroup {#themodel-subgroup} ---------------------------------------------------------- We consider only dimension $n=2$, but the results can be extended to higher dimensions thanks to the relations \[commutationkappa\].
To get a unit, we choose a discrete subgroup $H_\kappa$ of $G_\kappa$ such that $1\in C^*(H_\kappa)$.\ Since we also want to keep separate the roles of the variables $x^0$ and $x^1$, we consider a subgroup of the form $H_\kappa =H \rtimes_{\alpha}{\mathbb{Z}}$: we first replace the second ${\mathbb{R}}$ of $G_\kappa$ in \[groupkappa\] by the lattice ${\mathbb{Z}}$, which corresponds to unitary periodic functions of a chosen frequency $\omega_0$ (the time $x^0$ is now periodic). So, given $\kappa>0$ and $\omega_0 \in {\mathbb{R}}$, with $$\begin{aligned} \label{defa} a:=e^{-\omega_0 /\kappa} \in {\mathbb{R}}^+,\end{aligned}$$ the group ${\mathbb{R}}\rtimes_{{\alpha}_a} {\mathbb{Z}}$ is a subgroup of $G_\kappa$, where ${\alpha}_a(n)$ is the multiplication by $a^n$. This subgroup is a non-discrete “$ax+b$” group. Then, we want a group $H$ that is a discrete (now, not necessarily topological) subgroup of the first ${\mathbb{R}}$ in ${\mathbb{R}}\rtimes_{{\alpha}_a} {\mathbb{Z}}$ and is invariant under the action ${\alpha}_a$. Given $k_0 \in {\mathbb{R}}$, a natural building block candidate for a discrete $H$ is given by $H = B_a \cdot k_0 \simeq B_a$ where $$B_a:={\{\,\sum_{i, \,{\rm finite}} m_i\, a^{n_i} \, : \, m_i,n_i \in {\mathbb{Z}}\,\}},$$ and more generally, one can take $H\simeq \oplus_{k_0} \,B_a$. The search for a discrete subgroup $H_\kappa$ of $G_\kappa$ such that $1 \in C^*(H_\kappa)$ leads to $H_{\kappa, a} :=B_a \rtimes_{{\alpha}_a} {\mathbb{Z}}$, which is isomorphic to a subgroup of $G_\kappa$ once $k_0$ is fixed. This procedure leads us to the analysis of the algebraic nature of $a$. For instance, when $a=m \in {\mathbb{N}}^*$ is an integer, this group $H_{\kappa, m}$ is well known since it is the solvable Baumslag–Solitar group $BS(1,m)={\mathbb{Z}}[\tfrac{1}{m}] \rtimes_{{\alpha}_m} {\mathbb{Z}}$, as shown in section \[abstractapproach\] where we get $B_m=B_{1/m}={\mathbb{Z}}[1/m]$.
A broad family of noncommutative spaces appears: \[caseinteger\] In two dimensions, there exists a unital subalgebra $C^*(H_{\kappa, m})$ of the $\kappa$-deformation algebra, associated to the subgroup $H_{\kappa, m}={\mathbb{Z}}[\tfrac{1}{m}] \rtimes_{{\alpha}_m} {\mathbb{Z}}$ of $G_{\kappa, m}$ and which can be seen as generated by two unitaries $U,V$ such that $U=U_{{\omega}_0}$, $V=V_{k_0}$ and $UV=V^mU$. Here, $\kappa :=-\omega_0\,\log^{-1}(m)>0$ for some given integer $m> 1$ and some $\omega_0 \in {\mathbb{R}}^-$, $k_0\in {\mathbb{R}}$. The algebra Ua and its representations {#aquelconque} ====================================== We now compute the $C^*$-algebra ${\mathcal{U}}_a$ which is our model for a compact version of the $2$-dimensional $\kappa$-Minkowski space. The structure of this algebra is described through a semi-direct product of two abelian groups, one of which depends explicitly on the real parameter $a >0$. This semi-direct structure gives rise to a dynamical system which is heavily used in the following. The classification of the algebras ${\mathcal{U}}_a$ is performed: the $K$-groups are not complete invariants, and we use the entropy defined on the underlying dynamical system to complete this classification. Then some representations of ${\mathcal{U}}_a$ are considered. They strongly depend on the algebraic or transcendental character of $a$. In the algebraic case, some particular finite dimensional representations are introduced based on periodic points of the dynamical system. This construction will be used in section \[exist\]. Let $a=e^{-\omega_0/\kappa} \in {\mathbb{R}}^*_+$ with $a \neq 1$, and let us recall general facts from [@BJ]:\ Define $$B_a:={\{\, \sum_i m_i \,a^{n_i} \text{ for finitely many } m_i,n_i \in {\mathbb{Z}}\,\}}.$$ This discrete group is torsion-free so its Pontryagin dual $\widehat B_a$ is connected and compact. 
Let ${\alpha}_a$ be the action of ${\mathbb{Z}}$ on $b \in B_a$ defined by ${\alpha}_a(n)b:=a^n \,b$, let $\widehat{{\alpha}_a}$ be the associated automorphism on $\widehat B_a$ and $$\begin{aligned} G_a & :=B_a \rtimes_{{\alpha}_a} {\mathbb{Z}}, & {\mathcal{U}}_a & := C^*(G_a) = C^*(B_a) \rtimes_{{\alpha}_a} {\mathbb{Z}}= C(\widehat B_a) \rtimes_{\widehat{{\alpha}_a}} {\mathbb{Z}}.\end{aligned}$$ This kind of $C^*$-algebras also appeared in [@CPPR] for totally different purposes. The group $G_a$ is generated by $u:=(0,1) \text{ and } v:=(1,0).$ \[symmetry\] Let $a\in {\mathbb{R}}_+^*$, then $B_a=B_{1/a}$ and $G_a \simeq G_{1/a}$. Thus the $C^*$-algebras ${\mathcal{U}}_a$ and ${\mathcal{U}}_{1/a}$ are isomorphic. The symmetry point $a=1/a$ corresponds to the commutative case with $a=1$, or the undeformed relation with $\kappa=\infty$. In this spirit ${\mathcal{U}}_a$ can be viewed as a deformation of the two-torus. The dynamical system ${\mathcal{U}}_a\simeq C(\widehat B_a) \rtimes_{\alpha}{\mathbb{Z}}$ has an ergodic action (if $a\neq1$) [@BJ; @Brenken] and if the set of $q$-periodic points is $$\mathrm{Per}_q(\widehat B_a) := {\{\,\chi \in \widehat B_a \, : \, \widehat{{\alpha}}^{k}(\chi) \neq \chi, \ \forall k<q, \ \widehat{{\alpha}}^{q}( \chi) = \chi\,\}}$$ then the growth rate $\lim_{q\rightarrow \infty} q^{-1}\log \big(\# \mathrm{Per}_q(\widehat B_a)\big)$ of these sets is an invariant of ${\mathcal{U}}_a$ which coincides with the topological entropy $h(\widehat{{\alpha}_a})$: $$\begin{aligned} \label{entrop} h(\widehat{{\alpha}_a})=\lim_{q\rightarrow \infty} q^{-1}\log \big(\# \mathrm{Per}_q(\widehat B_a)\big).\end{aligned}$$ This entropy can be finite or infinite, dividing the algebraic properties of $a$ into two cases: $a$ can be an algebraic or a transcendental number. Transcendental case {#transcendent} ------------------- If $a$ is a transcendental number, then $B_a\simeq {\mathbb{Z}}[a,a^{-1}]$.
Thus, $B_a\simeq \oplus_{\mathbb{Z}}{\mathbb{Z}}$, $\widehat B_a\simeq S_a:= {\{\,z=(z_k)_{k=-\infty}^\infty \in {\mathbb{T}}^{\mathbb{Z}}\,\}}$ and $\big(\widehat{{\alpha}} (z)\big)_k=z_{k+1}$ for $z\in S_a, \,k\in {\mathbb{Z}}$ so that $\widehat{{\alpha}}$ is just the shift $\sigma$ on ${\mathbb{T}}^{\mathbb{Z}}$. Thus ${\mathcal{U}}_a \simeq C({\mathbb{T}}^{\mathbb{Z}})\rtimes_\sigma {\mathbb{Z}}\, \text{ and } \,h(\widehat{{\alpha}})=\infty.$ Note that the wreath product $\wr$ appears with its known presentation: $$G_a=B_a \rtimes_{\alpha}{\mathbb{Z}}\simeq {\mathbb{Z}}\wr {\mathbb{Z}}\simeq \langle u,v \, : \, [u^i v u^{-i},v]=1 \text{ for all } i\geq1 \rangle.$$ This group is amenable (solvable), torsion-free, finitely generated (but not finitely presented), residually finite with exponential growth.\ $S_a$ contains a lot of $q$-periodic points: they are obtained by repeating any sequence $(z_k)_{k=0}^{q-1}$ of arbitrary elements in ${\mathbb{T}}$. Aperiodic points are easily constructed also. Moreover, $S_a$ is the Bohr compactification $b_{B_a}{\mathbb{R}}$ of ${\mathbb{R}}$. Algebraic case {#algebrique} -------------- Assume now that $a$ is algebraic. Let $P\in {\mathbb{Q}}[x]$ be the monic irreducible polynomial such that $P(a)=0$ and let $P=c\,Q_a$ where $Q_a\in {\mathbb{Z}}[x]$. If $d$ is the degree of $Q_a$, we get the ring isomorphism $B_a \simeq {\mathbb{Z}}[x,x^{-1}]/(Q_a)$ [@BJ]. Moreover, $B_a$ has a torsion-free rank $d$. If $Q_a(x)=\sum_{j=0}^d q_j\, x^j$ (so $Q_a$ has leading coefficient $q_d \in {\mathbb{N}}^*$), let $A_a \in M_{d\times d}({\mathbb{Z}})$ be the $d\times d$-matrix defined by $({A_a})_{i,j}:=q_d \,\delta_{i,j-1}$ for $1\leq j\leq d $ and $(A_a)_{d,j}=-q_{j-1}$. Then $q_d\,a^j=\sum_{k=1}^d (A_a)_{j,k}\, a^{k-1}$.\ For instance, if $a=1/m$ for $m\in {\mathbb{N}}^*$ then $P(x)=x-1/m$, $Q_a(x)=mx-1$, $d=1$, so $B_a={\mathbb{Z}}[\frac{1}{m}]$ and $A_a$ is just the number $1$. 
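Both the matrix $A_a$ and the entropy \[entrop\] can be illustrated numerically; below, the quadratic example $a = 1+\sqrt{2}$ (with $Q_a(x)=x^2-2x-1$) and the product formula for the number of $q$-periodic points from [@BJ] are our choices for a quick sanity check, not part of the text:

```python
import numpy as np

def companion_matrix(q_coeffs):
    """A_a from the coefficients [q_0, ..., q_d] of Q_a:
    (A_a)_{i,j} = q_d * delta_{i,j-1} above the last row,
    last row (-q_0, ..., -q_{d-1})."""
    d = len(q_coeffs) - 1
    A = np.zeros((d, d))
    for i in range(d - 1):
        A[i, i + 1] = q_coeffs[d]
    A[d - 1, :] = [-c for c in q_coeffs[:d]]
    return A

# Example: a = 1 + sqrt(2), Q_a(x) = x^2 - 2x - 1, so d = 2 and q_d = 1
q_coeffs = [-1, -2, 1]
A = companion_matrix(q_coeffs)
a = 1 + np.sqrt(2)
powers = np.array([1.0, a])              # (a^0, ..., a^{d-1})
for j in range(1, 3):                    # q_d a^j = sum_k (A_a)_{j,k} a^{k-1}
    assert np.isclose(q_coeffs[-1] * a**j, (A @ powers)[j - 1])

def n_periodic(coeffs, q):
    """# Per_q(S_a) = prod_{k=1}^q |Q_a(e^{2 i pi k / q})|  (formula from [@BJ])."""
    Q = np.polynomial.Polynomial(coeffs)
    roots_of_unity = np.exp(2j * np.pi * np.arange(1, q + 1) / q)
    return float(np.prod(np.abs(Q(roots_of_unity))))

# For a = 1/2 (Q_a(x) = 2x - 1), the growth rate of periodic points tends to log 2
rate = np.log(n_periodic([-1, 2], 60)) / 60
assert abs(rate - np.log(2)) < 1e-3
```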
Let $\sigma$ be the shift on the group ${({\mathbb{T}}^d)}^{\mathbb{Z}}$ and consider its $\sigma$-invariant subgroup $K_a:={\{\,z=(z_k)_{k=-\infty}^\infty \in {({\mathbb{T}}^d)}^{\mathbb{Z}}\, : \, q_d\,z_{k+1}=A_a \,z_k\,\}}$ (use ${\mathbb{T}}\simeq {\mathbb{R}}/{\mathbb{Z}}$).\ If $S_a$ is the connected component of the identity of $K_a$, then there exists a topological group isomorphism $\psi \, : \, \widehat B_a \rightarrow S_a$ such that $\sigma_{\vert {S_a}} \circ \psi = \psi \circ \widehat{{\alpha}}$ [@Lawton Theorem 19]. For any $\chi \in \widehat B_a$, the associated $z=(z_k)_{k=-\infty}^\infty$ is given by $z_k^{(i)} = \chi(a^{k+i-1})$ where $z_k = (z_k^{(i)})_{i=1}^{d} \in {\mathbb{T}}^d$. In particular $z_0$ is given by $\big(\chi(1), \chi(a), \dots, \chi(a^{d-1})\big)$. This map is only surjective onto the connected component of the identity.\ When $a=1/m$ with $m\in {\mathbb{N}}^*$, we will recover $S_{1/m}=S_m$. There is a morphism of groups $\hat{\iota} : {\mathbb{R}}^d \rightarrow S_a$ defined as follows: to any $\phi = (\phi^{(i)})_{i=1,\dots,d} \in {\mathbb{R}}^d$, one associates $\hat{\iota}(\phi) = z = (z_k)_{k=-\infty}^\infty \in {({\mathbb{T}}^d)}^{\mathbb{Z}}$ with $$\label{eq-thetamap} z_k^{(i)} = \exp\big( 2 i \pi q_d^{-k} \sum_{j=1}^{d} (A_a^k)_{i,j} \phi^{(j)} \big).$$ This shows that $S_a$ is a Bohr compactification of ${\mathbb{R}}^d$ [@Brenken Proposition 2.4].
Then $\widetilde{{\alpha}}(\phi) = q_d^{-1} A_a \phi$ defines an action of ${\mathbb{Z}}$ on ${\mathbb{R}}^d$ which satisfies $\hat{\iota} \circ \widetilde{{\alpha}} = \widehat{{\alpha}} \circ \hat{\iota}$.\ If $r_i$, $i=1,\dots,d$ are the roots of $P$, then by [@BJ Proposition 3, Corollary 1], $$\begin{aligned} \label{cq} c_q(a):=\#\text{ Per}_q(S_a)=\Pi_{k=1}^q \vert Q_a(e^{i2\pi k/q})\vert = \vert q_d \vert^q \, \Pi_{k=1}^d \vert 1-{r_k}^q \vert.\end{aligned}$$ Thus by \[entrop\], the topological entropy is $$\begin{aligned} \label{entropy} h(\widehat{{\alpha}_a})=\log\vert q_d \vert +\sum_{i, \,\vert r_i\vert >1} \log \vert r_i\vert.\end{aligned}$$ In the cases $a=m$ or $a=1/m$, \[entropy\] gives $h(\widehat{{\alpha}_m})=h(\widehat{{\alpha}_{1/m}})=\log(m)$.\ Aperiodic points in $S_a$ can easily be constructed using the map $\hat{\iota}$ defined by \[eq-thetamap\]: any $\phi \in ({\mathbb{R}}\backslash {\mathbb{Q}})^d$ defines an aperiodic point $\hat{\iota}(\phi) \in S_a$. The structure of algebras Ua ---------------------------- We can now give the main properties of the algebras ${\mathcal{U}}_a$: \[Thealgebra\] Let $a \in {\mathbb{R}}^*_+$ and $a\neq 1$. Then \(i) The group $G_a = B_a \rtimes _{\alpha}{\mathbb{Z}}$ is a torsion-free discrete solvable group with exponential growth and $\widehat{B_a}$ is a compact set isomorphic to a solenoid $S_a$. \(ii) ${\mathcal{U}}_a=C^*_{red}(B_a \rtimes_{\alpha}{\mathbb{Z}})\simeq C(S_a) \rtimes_{\widehat{{\alpha}}} {\mathbb{Z}}$ is a NGCR [^4], AF-embeddable, non-simple, residually finite dimensional $C^*$-algebra and its generated von Neumann algebra for the left regular representation is a type $\mathrm{II}_1$-factor. A main point of this theorem, crucial for the sequel, is that the algebra ${\mathcal{U}}_a$ is residually finite, and its proof is based on properties of the underlying dynamical system: \[prop-dynsyst\] Let $a \in {\mathbb{R}}^*_+$ and $a\neq 1$.
The subgroup of periodic points and the set of aperiodic points of $S_a$ under $\widehat{{\alpha}}$ are dense. The space of orbits of $S_a$ is not a $T_0$-space. The classification of the algebras ${\mathcal{U}}_a$ is also based on the dynamical system: \[classification\] Let $\omega_0 \in {\mathbb{R}}$ and $\kappa \in {\mathbb{R}}^*_+$ defining $a\neq 1$ in \[defa\]. \(i) ${\mathcal{U}}_a \simeq {\mathcal{U}}_{a'}$ yields $c_q(a)=c_q(a'),\, \forall q\in {\mathbb{N}}^*$. \(ii) The entropy $h(\widehat{{\alpha}})$ is also an isomorphism-invariant of ${\mathcal{U}}_a$. This result has important physical consequences, since a dense set of parameters $a$ of full Lebesgue measure (namely the transcendental ones) generates the same algebra or $\kappa$-deformed space, while in the rational case these spaces are different: \[classification1\] As already seen, ${\mathcal{U}}_a\simeq {\mathcal{U}}_{1/a}$. Moreover, \(i) All transcendental numbers $a$ generate isomorphic algebras $ {\mathcal{U}}_a$. \(ii) If ${\mathcal{U}}_a\simeq {\mathcal{U}}_{a'}$, then $a$ and $a'$ are both simultaneously algebraic or transcendental numbers. \(iii) If ${\mathcal{U}}_a\simeq {\mathcal{U}}_{a'}$, then $a'=a$ or $a'=a^{-1}$ in the following cases: $a$, $a'$ or their inverses are in ${\mathbb{Q}}^*$ or are quadratic algebraic numbers. \(iv) If $a=m/l \in {\mathbb{Q}}^*_+$, $K_0({\mathcal{U}}_a) \simeq {\mathbb{Z}}$ and $K_1({\mathcal{U}}_a) \simeq {\mathbb{Z}}\oplus {\mathbb{Z}}_{l-m}$. Using [@Brenken2], we can show that the $K$-groups do not give a complete classification, even in this algebraic case. On some representations of Ua for algebraic a {#representationsinduced} --------------------------------------------- We will concentrate on algebraic $a\neq1$ and follow the construction of finite dimensional representations [@ST; @Yamashita]: Let $z_q\in \mathrm{Per}_q(S_a)$ be a $q$-periodic point of $\widehat{{\alpha}}$.
Let $\rho_{z_q}: C(S_a) \rightarrow M_{q}({\mathbb{C}})$ be the representation of $C(S_a)$ defined by $$\rho_{z_q}(f):=\text{Diag} \big( \, f(z_q),\cdots,f\big(\widehat{{\alpha}}^{q-1}(z_q)\big) \,\big) \in M_{q}({\mathbb{C}})$$ and for $x\in {\mathbb{T}}$, let $u_{x,z_q}:= \genfrac{(}{)}{0pt}{1}{\,0 \quad x}{1_{q-1}\,\,\,\, 0 } \in M_q({\mathbb{C}})$. It is a unitary which satisfies the covariance relation $u_{x,z_q}^* \, \rho_{z_q}(f) \,u_{x,z_q}=\rho_{z_q}(f \circ \widehat{{\alpha}})$, thus $\pi_{x,z_q}:=\rho_{z_q} \rtimes u_{x,z_q}$ is a representation of ${\mathcal{U}}_a$ on $M_q({\mathbb{C}})$. Again, $\pi_x:=\oplus_{q=1}^\infty \oplus_{z_q \in \mathrm{Per}_q (S_a)} \pi_{x,z_q}$ is a representation of ${\mathcal{U}}_a$. So, for a dense family ${\{\,x_l\,\}}_{l=1}^\infty$ in ${\mathbb{T}}$, using the canonical faithful conditional expectation $C(S_a) \rtimes_{\widehat{{\alpha}}} {\mathbb{Z}}\rightarrow C(S_a)$ and the density of periodic points, one shows that $\pi:=\oplus_{l=1}^\infty \pi_{x_l}$ is a faithful representation of ${\mathcal{U}}_a$ such that $\pi({\mathcal{U}}_a) \subset \oplus_{l=1}^\infty \oplus _{q=1}^\infty \oplus _{z \in \mathrm{Per}_{q}(S_a)} M_{q}({\mathbb{C}})$. The representation $\pi_{x,\chi}$, where $\chi=z_q \in \widehat{B_a}=S_a$, can be extended to a representation $\pi_\chi$ on the Hilbert space ${\mathcal{H}}_\chi = L^2({\mathbb{T}}) \otimes {\mathbb{C}}^q$ by $$\label{eq-reppichi} \pi_\chi = \int_{{\mathbb{T}}} \pi_{x,\chi}\, dx.$$ Denote by $\{e^{(q)}_s \}_{s=1, \dots q}$ the canonical basis of ${\mathbb{C}}^q$ and by $e_n : \theta \mapsto e^{in\theta}$, for $n \in {\mathbb{Z}}$, the basis of $L^2({\mathbb{T}}) \simeq \ell^2({\mathbb{Z}})$.
Then let, for $f \in C^\ast(B_a) = C(\widehat B_a)$: $$\begin{aligned} \pi_\chi(f) (e_n \otimes e^{(q)}_s) &:= f \circ \widehat{{\alpha}}^{s-1}(\chi) \, e_n \otimes e^{(q)}_s, \\ U (e_n \otimes e^{(q)}_s) &:= \begin{cases} e_n \otimes e^{(q)}_{s+1} & \text{for $1\leq s < q$},\\ e_{n+1} \otimes e^{(q)}_1 & \text{for $s = q$}, \end{cases}\end{aligned}$$ where $U$ is the generator of ${\mathbb{Z}}$. This representation is constructed from the representation of $G_a=B_a \rtimes_{\alpha}{\mathbb{Z}}$ given by $\pi_\chi(b) (e_n \otimes e^{(q)}_s) = \chi \circ {\alpha}^{s-1}(b) \, e_n \otimes e^{(q)}_s$ for any $b \in B_a$ (the generator $U$ of the action of ${\mathbb{Z}}$ is the same). Another natural representation to consider is the representation of ${\mathcal{U}}_a$ obtained from the left regular representation of $G_a$. In [@LPT], and more systematically in [@DJ], the induced representations *à la* Mackey of $G_a$ for algebraic $a$ have been investigated. The main results are the following. For any $\chi \in \widehat B_a$, let the space of functions $\varphi : B_a \rtimes_{\alpha}{\mathbb{Z}}\rightarrow {\mathbb{C}}$ such that $\varphi(b,k) = \chi(b) \varphi(0,k)$ for any $b \in B_a$ and $k \in {\mathbb{Z}}$ be endowed with the norm $\| \varphi \|_\chi^2 = \sum_{k \in {\mathbb{Z}}} |\varphi(0,k)|^2$. This defines a Hilbert space denoted by ${\mathcal{H}}_\chi^{\mathrm{Ind}}$. The induced representation of $G_a$ on ${\mathcal{H}}_\chi^{\mathrm{Ind}}$ is given by $(\pi_\chi^{\mathrm{Ind}}(g) \varphi)(h) = \varphi(hg)$ for any $g,h \in G_a$. This representation is unitarily equivalent to the following representation $\pi'_\chi$ [@DJ Theorem 4.2]: the Hilbert space is $\ell^2({\mathbb{Z}})$ and for any $\xi = (\xi_k)_{k \in {\mathbb{Z}}}$, one takes $(\pi'_\chi(b)\xi)_k := \chi\circ {\alpha}^{k}(b) \xi_k$ and the generator of ${\mathbb{Z}}$ is $(U \xi)_k = \xi_{k+1}$.
As a representation of ${\mathcal{U}}_a$, one has $(\pi'_\chi(f)\xi)_k = f \circ \widehat{{\alpha}}^{k}(\chi) \xi_k$ for any $f \in C^\ast(B_a)$. \[thm-inducedrepresentations\] Assume $a\neq 1$ is algebraic. \(i) There is a natural bijection between the set of orbits of $\widehat{{\alpha}}$ in $S_a$ and the set of all equivalence classes of induced representations of ${\mathcal{U}}_a=C^\ast(G_a)$. This bijection is realized by $\chi \mapsto \pi_\chi^{\mathrm{Ind}}$. \(ii) The representation $\pi_\chi^{\mathrm{Ind}}$ is irreducible if and only if $\chi$ is aperiodic. \(iii) The commutant of $\pi_\chi^{\mathrm{Ind}}$ for a $q$-periodic point $\chi$ is the commutative algebra $C({\mathbb{T}})$. \(iv) The right regular representation $R$ of ${\mathcal{U}}_a=C^\ast(G_a)$ is unitarily equivalent to the representation $\int_{\widehat B_a}^{\oplus} \pi_\chi^{\mathrm{Ind}} \, d\mu(\chi)$. For a $q$-periodic $\chi$, the representation $\pi_\chi^{\mathrm{Ind}}$ is reducible. Explicitly one has: If $\chi$ is $q$-periodic, $\pi_\chi^{\mathrm{Ind}}$ is unitarily equivalent to the representation $\pi_\chi$ on ${\mathcal{H}}_\chi$, so that its continuous decomposition into irreducible finite dimensional representations on ${\mathbb{C}}^q$ is realized by \[eq-reppichi\] along ${\mathbb{T}}$. This proposition states that, while the finite dimensional representations of ${\mathcal{U}}_a$ are not obtained as induced representations, they are nevertheless reductions of induced representations. The right regular representation $R$ contains the infinite dimensional irreducible induced representations, which are only accessible using aperiodic points. The representations $R$ and $\pi_\chi$ are not quasi-equivalent: this difference will play a crucial role in the construction of different spectral triples, see Remark \[Contradict\]. While the $\pi_\chi^{\mathrm{Ind}}$’s yield a von Neumann factor of type $\mathrm{I}$, $R$ gives a type $\mathrm{II}_1$ factor because of the integral.
So the group $G_a$ is non-type $\mathrm{I}$. The particular case $a=m\in{\mathbb{N}}^*$ {#abstractapproach} ============================== According to Lemma \[symmetry\], the case $a=m \in {\mathbb{N}}^*$ also covers the case $a=1/m$. We do insist on this $\kappa$-deformed space since the algebra is then generated by two unitaries related by one relation (see below) in the spirit of the noncommutative two-torus: $G_m=BS(1,m)$ is the Baumslag–Solitar group, which is generated by two elements and a single relator, while, when $a$ and $a^{-1}$ are not integers, $G_a$ is not a finitely presented group (still with two generators). This simplifies the computations of section \[algebrique\]. Moreover, the results described now rely more on some properties of the Baumslag–Solitar group than on the dynamical system. Thus, these results (which for the most part are already valid for generic values of $a$) are presented independently. These structures also appear naturally in wavelet theory, which could benefit from our analysis. The algebra {#thealgebra} ----------- \[def-thealgebra\] Let ${\mathcal{U}}_m$ be the universal $C^*$-algebra labelled by $m \in {\mathbb{Z}}^*$ (restricted to ${\mathbb{N}}^*$ later) and generated by two unitaries $U$ and $V$ such that $$\begin{aligned} \label{def} UVU^{-1}=V^m.\end{aligned}$$ This universal $C^*$-algebra ${\mathcal{U}}_m$ is denoted by ${\mathcal{O}}(E_{1,m})$ in [@Kat], and also ${\mathcal{O}}_{m,1}({\mathbb{T}})$ in [@Yamashita] (where only $m\in{\mathbb{N}}^*$ is considered). These algebras are topological graph $C^*$-algebras which can be seen as transformation group $C^*$-algebras on solenoid groups, as already noticed in [@BJ; @J; @Brenken]. They have been used in wavelet and coding theory [@DHPQ; @DJ; @DJP].
Relation \[def\] also appeared in the Baumslag–Solitar group $BS(1,m)$ introduced in [@BS] as the group generated by $u,\,v$ with a single relator: $$BS(1,m):=\langle u,\,v \,\vert \, uvu^{-1}=v^m \rangle.$$ This group plays a role in combinatorial and geometric group theory. It is a finitely generated, meta-abelian, residually finite, Hopfian, torsion-free, amenable (solvable non-nilpotent) group. It has infinite conjugacy classes, a uniformly exponential growth (for $m\neq 1$) but is not Gromov hyperbolic [@Harpe]. Note that $BS(1,1)$ is the free abelian group on two generators and $BS(1,-1)$ is the Klein bottle group. As for the $BS(1,m)$ groups, within the algebras ${\mathcal{U}}_m$, we remark that ${\mathcal{U}}_1$ and ${\mathcal{U}}_{-1}$ play a particular role: ${\mathcal{U}}_1 = C({\mathbb{T}}^2)$ and ${\mathcal{U}}_{-1}\supset C({\mathbb{T}}^2)$ will not be considered here since we need $a=m>0$. For $m\geq 2$, a solenoid appears as in section \[algebrique\], as well as a crossed product structure, a fact that we recall now in this particular context. Assume $2\leq m \in {\mathbb{N}}$ and let $$B_m=B_{1/m}:={\mathbb{Z}}[\tfrac{1}{m}]:=\bigcup_{l\in {\mathbb{N}}}m^{-l} {\mathbb{Z}}\subset {\mathbb{Q}}$$ be the subring of ${\mathbb{Q}}$ generated over ${\mathbb{Z}}$ by $\tfrac{1}{m}$. It is the additive subgroup of ${\mathbb{Q}}$ which is an inductive limit of the rank-one groups $m^{-l}{\mathbb{Z}}$, for $l=0,1,2,\dots$ and $B_m$ has a natural automorphism ${\alpha}$ defined by ${\alpha}(b):=mb$. Note that the abelian group $B_m$ is not finitely generated. When $m\rightarrow \infty$, $BS(1,m) \rightarrow {\mathbb{Z}}\wr {\mathbb{Z}}$ (in the space of marked groups on two generators) [@Stalder]. This group also appears when $m=e^{-\omega_0/\kappa}$ is replaced by a transcendental number $a\in {\mathbb{R}}^*_+$ as seen in section \[aquelconque\].
$BS(1,m)$ can be identified with the subgroup of the affine group Aff$_1({\mathbb{Q}})$ generated by the dilation $u:x \rightarrow mx$ and the translation $v: x\rightarrow x+1$. $B_m$ is then the subgroup normally generated in $BS(1,m)$ by $\langle v, u^{-1} vu, u^{-2} v u^2 , \dots\rangle$. The Baumslag–Solitar group $BS(1,m)$ is isomorphic to the crossed product $BS(1,m) \simeq B_m \rtimes_{\alpha}{\mathbb{Z}}$ so that one has the group extension $$1\rightarrow {\mathbb{Z}}[ \tfrac{1}{m}] \rightarrow BS(1, m) \rightarrow {\mathbb{Z}}\rightarrow 1.$$ Using this crossed product decomposition, the group $BS(1,m)$ has the following explicit law: $(b,l)(b',l')=(b+{\alpha}^l(b'),l+l')$ for $l,l'\in {\mathbb{Z}}$ and $b,b'\in B_m$. It is of course generated by the elements $u:=(0,1)$ and $f_b:=(b,0)$ with $b\in B_m$. Thus for $j,l\in {\mathbb{Z}}, n\in{\mathbb{N}}$, $uf_bu^{-1}=f_{{\alpha}(b)}$, and $$\begin{aligned} \label{gen} \text{if }f_{{\alpha}^{-n}j}:=({\alpha}^{-n}j,0)=u^{-n}f_j u^n, \text{ then } ((\tfrac{1}{m})^{n}j,l)=f_{{(\tfrac{1}{m})}^nj}\,u^l.\end{aligned}$$ $BS(1,m)$ is a subgroup of the $``ax+b"$ group (endowed with the law $(b,a)(b',a'):=(b+ab',aa')$) and can be viewed as the following subgroup of two-by-two matrices ${\{\, \genfrac{(}{)}{0pt}{1}{\,m^l \,\,\, b\,}{\,\,0 \,\,\,\,\,\,1\, } \; : \; l\in {\mathbb{Z}},\,b\in B_m\,\}}$. Moreover, $BS(1,m)\simeq BS(1,m')$ is equivalent to $m=m'$ [@Mol]. Let $\widehat {B_m}$ be the Pontryagin dual of $B_m$ endowed with the discrete topology. It is isomorphic to the solenoid $$\begin{aligned} \label{solenoid} S_m =S_{1/m}\simeq {\{\,(z_k)_{k=0}^{\infty} \in \prod_{i=0}^\infty {\mathbb{T}}\, : \, z_{k+1}^m=z_k, \,\forall k\in {\mathbb{N}}_0\,\}}\end{aligned}$$ using $z_k:=\chi\big((\tfrac{1}{m})^k\big)$ for any $\chi \in \widehat B_m$. The group $S_m$ is compact, connected and abelian.
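The crossed-product law and the defining relation can be checked mechanically in the matrix picture above. The following sketch (plain Python with exact rationals; the helper names are ours, chosen for illustration, and $m=3$ is an arbitrary example value) verifies $uvu^{-1}=v^m$ and the law $(b,l)(b',l')=(b+m^l b',l+l')$:

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as tuples of tuples of Fractions."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def g(l, b, m):
    """Element (b, l) of BS(1, m) as the matrix [[m^l, b], [0, 1]]."""
    return ((Fraction(m) ** l, Fraction(b)), (Fraction(0), Fraction(1)))

m = 3
u = g(1, 0, m)          # u = (0, 1): the dilation
v = g(0, 1, m)          # v = (1, 0): the translation
u_inv = g(-1, 0, m)

# Defining relation: u v u^{-1} = v^m
lhs = mat_mul(mat_mul(u, v), u_inv)
rhs = v
for _ in range(m - 1):
    rhs = mat_mul(rhs, v)
print(lhs == rhs)  # True

# Crossed-product law: (b, l)(b', l') = (b + m^l b', l + l')
b, l, bp, lp = Fraction(1, 9), 2, Fraction(5, 3), -1
prod = mat_mul(g(l, b, m), g(lp, bp, m))
print(prod == g(l + lp, b + Fraction(m) ** l * bp, m))  # True
```

Exact arithmetic with `Fraction` matters here: the entries live in ${\mathbb{Z}}[\tfrac{1}{m}]$, so floating point would blur the identity.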
Notice that (see section \[algebrique\])\ $S_m \simeq {\{\,(z_k)_{k=-\infty}^{\infty} \in {\mathbb{T}}^{\mathbb{Z}}\, : \, z_{k+1}^m=z_k, \, k\in {\mathbb{Z}}\,\}}$ defining $z_{-k}:=z_0^{m^k}$ for $k>0$. The embedding $\hat{\iota} : \theta \in{\mathbb{R}}\mapsto \chi_\theta \in S_m \text{ where }\chi_\theta(b):=e^{i2\pi \theta b} \in {\mathbb{T}}$ for $b\in B_m$ identifies $S_m$ as the Bohr compactification $b_{B_m}{\mathbb{R}}$ of ${\mathbb{R}}$. $S_m$ is endowed with a natural group automorphism $\widehat{\alpha}$ given by $$\begin{aligned} \widehat{{\alpha}} (z_0,z_1,z_2,\dots)=(z_0^m,z_0,z_1,\dots) \qquad \widehat{{\alpha}} ^{-1}(z_0,z_1,z_2,\dots)=(z_1,z_2,\dots).\end{aligned}$$ All $q$-periodic points in $S_m$ are of the following form: if $z_0$ is a solution of $z^{m^{q}-1} =1$, then $(z_0,z_0^{m^{q-1}},\dots,z_0^m,z_0,\dots) \in S_m$; so there are only finitely many $q$-periodic points, namely $c_q(m) = m^{q}-1$ of them. The $C^*$-algebra $C(S_m) \simeq C^*(B_m)$ is precisely the algebra of almost periodic functions on ${\mathbb{R}}$ with frequencies in $B_m$, and the isomorphism is the map $f \mapsto f \circ \hat\iota$. Thus $${\mathcal{U}}_m = C^*(BS(1,m)) \simeq C^*(B_m)\rtimes_{{\alpha}} {\mathbb{Z}}\simeq C(S_m) \rtimes_{\widehat{\alpha}} {\mathbb{Z}}.$$ The unitary element $U$ of Definition \[def-thealgebra\] is precisely the generator of the action ${\alpha}$ of ${\mathbb{Z}}$ on $C^*(B_m)$ while $V$ is one of the generators ${\{\, U^{-\ell}V U^\ell : \ell \in {\mathbb{Z}}\,\}}$ of the abelian algebra $C^*(B_m)$. As a continuous function on $S_m$, $U^{-\ell}V U^\ell$ is the function $(z_k)_{k=0}^\infty \mapsto z_\ell$ and in particular, $V:(z_k)_{k=0}^\infty \mapsto z_0$. The subgroup ${\{\,z:=(z_k)_{k=0}^\infty \in S_m \, : \, \widehat{\alpha}^q (z)=z \text{ for some } q\in {\mathbb{N}}^*\,\}}$ of periodic points is dense in $S_m$ and $\widehat{\alpha}$ is ergodic on $S_m$ for $m\geq 2$, as previously seen [@BJ Proposition 1].
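A $q$-periodic point of the solenoid and the associated $q$-dimensional representation can be checked numerically. The sketch below is our own illustration (for the arbitrary values $m=2$, $q=3$): it builds $V$ as the diagonal of the orbit coordinates $z_0^{m^k}$ and $U$ as a cyclic shift, then verifies the relation $UVU^{-1}=V^m$; one shift convention is assumed here, and the opposite convention would verify $U^{-1}VU=V^m$ instead.

```python
import numpy as np

m, q = 2, 3
N = m ** q - 1                     # = 7: q-periodic points come from z^N = 1
z0 = np.exp(2j * np.pi / N)        # one of the c_q(m) = m^q - 1 admissible roots

# The point has period q: applying z -> z^m q times returns to z0
print(abs(z0 ** (m ** q) - z0) < 1e-9)  # True

# Finite-dimensional representation on C^q:
# V is diagonal with orbit coordinates z0^(m^k), U a cyclic shift e_j -> e_{j-1 mod q}
d = np.array([z0 ** (m ** k) for k in range(q)])
V = np.diag(d)
U = np.roll(np.eye(q), -1, axis=0)

lhs = U @ V @ U.T                  # U V U^{-1}  (U is a real permutation matrix)
rhs = np.linalg.matrix_power(V, m)
print(np.allclose(lhs, rhs))       # True: U V U^{-1} = V^m
```

Conjugation by the shift cyclically permutes the diagonal of $V$, which is exactly raising each entry to the power $m$ thanks to $z_0^{m^q}=z_0$.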
The representations ------------------- The knowledge of $^\ast$-representations of ${\mathcal{U}}_m$ is essential in the context of spectral triples (see Definition \[defi\]). In view of the crossed product decomposition, any unitary representation of $BS(1,m)$ is given by a unitary operator $U$ and a family of unitaries $T_k$, $k\in {\mathbb{Z}}$, with the constraint $UT_kU^{-1}=T_{mk}$, so there is a bijection between the $^\ast$-representations of ${\mathcal{U}}_m$ on some Hilbert space ${\mathcal{H}}$ and the corresponding unitary representations of $BS(1,m)$. This is usefully rephrased in the following lemma [@J]: \[useful\] The algebra ${\mathcal{U}}_m$ is the $C^*$-algebra generated by $L^\infty({\mathbb{T}})$ and a unitary symbol $\tilde U$ with the commutation relation (where $e_n(z):=z^n$) $$\begin{aligned} \tilde U \,f\, \tilde U^{-1}=f \circ e_m, \,\,\forall f \in L^{\infty} ({\mathbb{T}}).\end{aligned}$$ ${\mathcal{U}}_m$ contains a family of abelian subalgebras ${\mathcal{A}}_n:=\tilde U^{-n}\, L^\infty({\mathbb{T}})\,\tilde U^n$ for $n\in {\mathbb{N}}$, which is increasing since $\tilde U^{-n} \,f \,\tilde U^n=\tilde U^{-(n+1)} \,f\circ e_m \, \tilde U^{(n+1)}$. If we choose the Hilbert space ${\mathcal{H}}:=L^2({\mathbb{R}})$, the scaling and shift operators give rise to a representation $\pi$ of $ {\mathcal{U}}_m$ on ${\mathcal{H}}$ by $\pi(U): \psi(x) \mapsto \tfrac{1}{\sqrt{m}} \, \psi(\tfrac{x}{m})$ and $\pi(V): \psi(x) \mapsto \psi(x-1)$. The Haar measure $\nu$ on $\widehat B_m$ gives rise to a faithful trace on ${\mathcal{U}}_m$ and, since there are many finite dimensional representations of ${\mathcal{U}}_m$ (see proof of Theorem \[Thealgebra\]), there are many traces on it.
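The covariance of the scaling/shift pair on $L^2({\mathbb{R}})$ can be checked pointwise: one computes $\pi(U)\pi(V)\pi(U)^{-1}\psi(x)=\psi(x-m)=\pi(V)^m\psi(x)$. A minimal sketch (the sample Gaussian and the sample points are our own choices, not from any reference):

```python
import math

m = 2
psi = lambda x: math.exp(-x * x)          # a sample wavefunction

U  = lambda f: (lambda x: f(x / m) / math.sqrt(m))   # scaling pi(U)
Ui = lambda f: (lambda x: math.sqrt(m) * f(m * x))   # its inverse
V  = lambda f: (lambda x: f(x - 1))                  # shift pi(V)

lhs = U(V(Ui(psi)))       # pi(U) pi(V) pi(U)^{-1} psi, i.e. x -> psi(x - m)
rhs = psi
for _ in range(m):
    rhs = V(rhs)          # pi(V)^m psi

pts = [-2.0, -0.5, 0.0, 0.7, 3.1]
print(all(abs(lhs(x) - rhs(x)) < 1e-12 for x in pts))  # True
```

The factors $\tfrac{1}{\sqrt m}$ and $\sqrt m$ cancel exactly, so both sides agree to machine precision; only the argument shift by $m$ survives.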
With ${\mathcal{H}}:=L^2(S_m,\nu)\simeq \ell^2(B_m)$, $U: \psi \in {\mathcal{H}}\mapsto \psi \circ \widehat{\alpha}\in {\mathcal{H}}$, and $C(S_m)$ acting on ${\mathcal{H}}$ by left multiplication, we get a covariant representation of the dynamical system $(C(S_m),\widehat{{\alpha}},{\mathbb{Z}})$, hence a representation of ${\mathcal{U}}_m$ on ${\mathcal{H}}$. Since $\widehat{\alpha}$ is ergodic, this representation is irreducible and faithful [@BJ Theorem 1]. If we choose ${\mathcal{H}}:=\ell^2({\mathbb{Z}})$, for each $\theta \in {\mathbb{R}}$ we get an induced representation of ${\mathcal{U}}_m$ by $\pi_\theta(U)\psi(k):=\psi(k-1)$ and $\pi_\theta(V)\psi(k):=\chi_\theta( m^{-k})\,\psi(k), k\in {\mathbb{Z}}$. On the existence of spectral triples {#exist} ==================================== Since we want to construct spectral triples on ${\mathcal{U}}_a$, it is worthwhile to know the heat kernel decay on $G_a=B_a\rtimes_{\alpha}{\mathbb{Z}}$, via a random walk on the Cayley graph of $G_a$ with generators $S={\{\,x,x^{-1},y,y^{-1}\,\}}$ where $x=(0,1)$ and $y=(1,0)$, and with a constant weight and standard Laplacian. The decay of the heat kernel $p_t$, with $t\in {\mathbb{N}}$, has been computed on the diagonal in [@Pittet Theorem 1.1], [@CGP Theorem 5.2]: when $t \rightarrow \infty$, we get $p_{2t} \sim e^{-t^{1/3}\,(\log t)^{2/3}}$ if $a$ is transcendental, while $p_{2t} \sim e^{-t^{1/3}}$ if $a$ is algebraic. This is related to the fact that $G_a$ has exponential volume growth. By contrast, for a finite dimensional connected non-compact Lie group, the heat kernel $p_t$ depends on $t\in{\mathbb{R}}^*_+$ and its short-time behaviour, when $t\rightarrow 0$, can diverge.
Let us explain how this point is related to the dimension: In noncommutative geometry, a (regular simple) spectral triple $({\mathcal{A}},{\mathcal{H}},{\mathcal{D}})$ has a (spectral) dimension which is given by $\max {\{\,n\in {\mathbb{N}}\, : \, n \text{ is a pole of } \zeta_{\mathcal{D}}:s \in {\mathbb{C}}\mapsto \Tr(\vert {\mathcal{D}}\vert ^{-s})\,\}}$ (here ${\mathcal{D}}$ is assumed invertible). In particular, when $M$ is an $n$-dimensional compact Riemannian spin manifold, and ${\mathcal{A}}=C^\infty(M)$, ${\mathcal{H}}=L^2(S)$ where $S$ is the spinor bundle and ${\mathcal{D}}$ is the canonical Dirac operator, the spectral dimension coincides with $n$. Via the Wodzicki residue, a noncommutative integral $\fint X:=\operatorname{Res}_{s=0} \Tr(X \vert {\mathcal{D}}\vert^{-s})$ is defined on (classical) pseudodifferential operators $X$ acting on smooth sections of $S$. For instance, $\fint \vert {\mathcal{D}}\vert^{-n}$ coincides (up to a universal constant) with the Dixmier trace $\Tr_{\text{Dix}}(\vert {\mathcal{D}}\vert ^{-n})=\lim_{N\rightarrow \infty} \log(N)^{-1}\sum_{k=1}^N \vert \lambda_k \vert ^{-n}$ where the $\lambda_k$ are the singular values of ${\mathcal{D}}$. The dimension of $M$ appears in $\Tr(e^{-t{\mathcal{D}}^2}) \sim \sum_{N \geq 0}\tfrac{1}{t^{(n-N)/2}} \,a_N({\mathcal{D}})$ when $t \rightarrow 0$. In particular, when $M={\mathbb{R}}^n$ with Lebesgue measure and ${\mathcal{D}}^2=-\triangle$ is the standard Laplacian (non-compactness is not a problem), the heat kernel is $p_t(x,x)=\tfrac{1}{(4 \pi t)^{n/2}}$ for all $x \in M$ (see [@Con94; @ConnesMarcolli; @Polaris]). As a consequence, $\Tr(\vert {\mathcal{D}}\vert^{-(n+\epsilon)}) < \infty$ for all $\epsilon >0$.
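The link between the short-time heat trace and the dimension can be illustrated on the circle, where ${\mathcal{D}}$ has spectrum ${\mathbb{Z}}$ and $\Tr(e^{-t{\mathcal{D}}^2})\sim\sqrt{\pi/t}$ as $t\rightarrow 0$: the log–log slope of the trace recovers $-n/2$ with $n=1$. A numeric sketch (the cutoff and the sample values of $t$ are our own choices):

```python
import math

def heat_trace(t, K=20000):
    """Tr(e^{-t D^2}) for D with spectrum Z (the Dirac operator on the circle)."""
    return 1.0 + 2.0 * sum(math.exp(-t * k * k) for k in range(1, K))

# Fit the power law Tr(e^{-t D^2}) ~ C t^{-n/2} between two small values of t
t1, t2 = 0.01, 0.001
slope = (math.log(heat_trace(t2)) - math.log(heat_trace(t1))) \
        / (math.log(t2) - math.log(t1))
print(round(-2 * slope))  # 1: the spectral dimension of the circle
```

By the Jacobi theta identity the subleading corrections are of order $e^{-\pi^2/t}$, so even two sample points give the slope $-1/2$ to machine precision.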
We will see that, depending on the chosen representation of ${\mathcal{U}}_a$, such an $n$ does not always exist, meaning that the “dimension is infinite”. \[defi\] A spectral triple (also called unbounded Fredholm module) $({\mathcal{A}},{\mathcal{H}},{\mathcal{D}})$ is given by a unital $C^*$-algebra $A$ with a faithful representation $\pi$ on a Hilbert space ${\mathcal{H}}$ and an unbounded self-adjoint operator ${\mathcal{D}}$ such that - the set ${\mathcal{A}}={\{\,a \in A\,: \, [{\mathcal{D}}, \pi(a)] \text{ is bounded }\,\}}$ is norm dense in $A$, - $(1+{\mathcal{D}}^2)^{-1} \in J$ where $J$ is a symmetrically-normed ideal of the compact operators ${\mathcal{K}}({\mathcal{H}})$ on ${\mathcal{H}}$. The triple is $p$-summable if $J={\mathcal{L}}^p({\mathcal{H}})$ for $1 \leq p<\infty$, which means $\Tr\big((1+{\mathcal{D}}^2)^{-p/2} \big) <\infty$. It is $p^+$-summable if $J={\mathcal{L}}^{p+}({\mathcal{H}})$. It is finitely summable if it is $p$-summable for some $p$. It is $\theta$-summable if there exists $t_0\geq 0$ such that $\Tr\big(e^{-t{\mathcal{D}}^2} \big) <\infty$ for all $t>t_0$ (thus $J={\mathcal{K}}({\mathcal{H}})$). Note that ${\mathcal{A}}$ is a $^*$-subalgebra of $A$ and $p$-summability implies $\theta$-summability. Connes proved in [@Con89] that, for an infinite, discrete, non-amenable group $G$, there exist no finitely summable spectral triples on $A=C^*_{red}(G)$. However, in this case, there always exist $\theta$-summable spectral triples on $A$ (even with ${\mathcal{D}}>0$). Using a computable obstruction to the existence of quasicentral approximate units relative to $J$ for $A$, Voiculescu was able to derive, for solvable groups with exponential growth, the non-existence result for unbounded (generalized) Fredholm modules using the Macaev ideal $J={\mathcal{L}}^{\infty,1}({\mathcal{H}})$ [@Voiculescu].
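The summability thresholds can be made concrete for the same circle operator with spectrum ${\mathbb{Z}}$: $\Tr\big((1+{\mathcal{D}}^2)^{-p/2}\big)$ is finite exactly for $p>1$. A numeric sketch, using the classical closed form $\sum_{k\in{\mathbb{Z}}}(1+k^2)^{-1}=\pi\coth\pi$ for the case $p=2$ (the cutoffs are our own choices):

```python
import math

def trace_p(p, K=200000):
    """Partial sum of Tr((1 + D^2)^{-p/2}) for D with spectrum Z."""
    return 1.0 + 2.0 * sum((1.0 + k * k) ** (-p / 2.0) for k in range(1, K))

# p = 2 converges: the exact value is pi * coth(pi)
print(abs(trace_p(2.0) - math.pi / math.tanh(math.pi)) < 1e-4)  # True

# p = 1 diverges logarithmically: doubling the cutoff keeps adding ~ 2 log 2
gap = trace_p(1.0, 2 * 10 ** 5) - trace_p(1.0, 10 ** 5)
print(abs(gap - 2.0 * math.log(2.0)) < 1e-3)  # True
```

The marginal case $p=1$ grows like $2\log K$, which is precisely the behaviour captured by the Dixmier trace and the ideal ${\mathcal{L}}^{1+}$.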
We use these results in the following: \[nonexistence\] Non-existence of finitely summable spectral triples.\ Let $ A={\mathcal{U}}_a$, $G_a=B_a \rtimes_{\alpha}{\mathbb{Z}}$ and ${\mathcal{A}}={\mathbb{C}}[G_a]$. \(i) There is no finitely summable spectral triple $\big(\pi({\mathcal{A}}), {\mathcal{H}}_\pi,{\mathcal{D}}\big)$ when the representation $\pi$ is quasiequivalent to the left regular one. \(ii) There exist $\theta$-summable spectral triples $\big(\pi({\mathcal{A}}), {\mathcal{H}}_\pi,{\mathcal{D}}\big)$ with $t_0=0$ where the representation $\pi$ is quasiequivalent to the left regular one. Despite the previous result, we add a few explicit examples of spectral triples, using the fact that the algebra ${\mathcal{U}}_a$ is residually finite. Clearly, these triples deal with a restrictive part of the geometry of the $\kappa$-deformation based on ${\mathcal{U}}_a$, namely the underlying dynamical system. The residual finiteness is seen via the periodic points of this dynamical system. \[existence\] Existence of finitely summable spectral triples.\ Let $ A={\mathcal{U}}_a$ and ${\mathcal{A}}={\mathbb{C}}[G_a]$. \(i) There exist spectral triples $\big(\pi({\mathcal{A}}), {\mathcal{H}}_\pi,{\mathcal{D}}\big)$ which are compact, i.e. $[{\mathcal{D}},\pi(x)]$ is compact for all $x \in {\mathcal{A}}$. \(ii) There exist spectral triples $\big(\pi({\mathcal{A}}), {\mathcal{H}}_\pi,{\mathcal{D}}\big)$ such that $[{\mathcal{D}},\pi(x)]=0$, $\forall x \in {\mathcal{U}}_a$, and with arbitrary summability. \(iii) When $a$ is algebraic, there exist spectral triples $\big(\pi({\mathcal{A}}), {\mathcal{H}}_\pi,{\mathcal{D}}\big)$ such that $[{\mathcal{D}},\pi(v)]=0$, $[{\mathcal{D}},\pi(u)] \neq 0$ and with an arbitrary summability $p\geq 2$. In case $(i)$, $[{\mathcal{D}},\pi(x)]$ is not necessarily zero but the summability is not controlled, while in case $(ii)$ the condition $[{\mathcal{D}},\pi(x)]=0$ enables us to control the summability.
In a sense, case $(iii)$ is a mixed situation requiring that $a$ be algebraic. In that situation, we have an explicit representation $\pi$ so that formulae for Dirac operators can be proposed. \[Contradict\] There is no contradiction between Theorems \[nonexistence\] and \[existence\] since the faithful quasidiagonal (or residually finite) representation $\pi$ of ${\mathcal{U}}_a$ used above to construct ${\mathcal{D}}$ is not quasiequivalent to the left regular one: actually, as already mentioned, the von Neumann algebra generated by $\pi({\mathcal{U}}_a)$ is a $\mathrm{II}_1$ factor when $\pi$ is the left regular representation, while it is of type $\mathrm{I}$ when $\pi$ is the quasidiagonal or residually finite one [@Dixmier 5.4.3.]. A more direct way to confirm that the representation $\pi$ used in the proof of point $(iii)$ of Theorem \[existence\] is not quasiequivalent to the left regular representation (or to the right regular representation, which is unitarily equivalent to the left one) is to notice that $\pi$ is the direct integral $\pi = \int_{\mathrm{Per}}^{\oplus} \pi_\chi \, d\mu(\chi)$ of the *finite dimensional* representations $\pi_\chi$ defined above. As such, this representation is strictly contained in the right regular representation $R$, as can be checked using $(iv)$ of Theorem \[thm-inducedrepresentations\]. The part of $R$ which is not in $\pi$ is given by the induced *infinite dimensional* irreducible representations constructed on aperiodic $\chi$’s. As noticed in [@SZ], if $({\mathcal{A}},{\mathcal{H}}_\pi,{\mathcal{D}})$ is a spectral triple with $[{\mathcal{D}},\pi(x)]=0$, $\forall x \in A$, then $A$ is a residually finite $C^*$-algebra. Theorem \[nonexistence\] says that the 2-dimensional $\kappa$-deformed space reflected by the algebra ${\mathcal{U}}_a$ with $\kappa=-\omega_0\,\log^{-1}(a)$ is in fact “infinite dimensional” as a metric noncommutative space. Theorem \[existence\] is an attempt to restore a metric.
For instance, the distances on the state space ${\mathcal{S}}({\mathcal{U}}_a)$ generated by Connes’ formula $$d(\omega,\omega'):=\sup {\{\,\vert \omega(a)-\omega'(a)\vert \, :\, a\in {\mathcal{A}},\, \vert \vert [{\mathcal{D}},a] \vert \vert \leq 1\,\}} ,\quad \omega,\omega' \in {\mathcal{S}}({\mathcal{U}}_a)$$ are infinite in case $(ii)$ of Theorem \[existence\], while in case $(iii)$ some states can be at finite distances. The operator ${\mathcal{D}}$ given in Theorem \[existence\] $(iii)$ is not directly related to the group structure of $G_a$ but rather connected to the underlying dynamical system associated to the algebraic nature of $a$: it depends explicitly on the isomorphism invariant ${\{\,c_q(a) \, : \,q\in {\mathbb{N}}^*\,\}}$. Conclusion ========== We have shown that the $\kappa$-Minkowski space defined above can be reduced to a compact or discrete version. Depending on $\kappa$, or equivalently on $a$, this involves discrete amenable groups $G_a$, in particular the well-known Baumslag–Solitar ones. The associated $C^*$-algebras ${\mathcal{U}}_a$ can be viewed as a deformation of the two-torus. They are different when $a$ varies within the rational numbers (of zero Lebesgue measure) because of the structure of the underlying dynamical system. For transcendental values of $a$, which are dense in ${\mathbb{R}}_+$ and of full Lebesgue measure, all these algebras are isomorphic to each other. Due to the exponential growth of $G_a$, we have proved that the algebras ${\mathcal{U}}_a$ are not only quasidiagonal but also residually finite dimensional. They admit different spectral triples: the ones built from representations quasi-equivalent to the left regular one are never $p$-summable but only $\theta$-summable, *i.e.* they are of “infinite metric dimension”.
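Connes’ distance formula is easily evaluated in the simplest toy case, the standard two-point space with $A={\mathbb{C}}^2$ and an off-diagonal ${\mathcal{D}}$: the distance between the two pure states is $1/\vert\Lambda\vert$. This textbook example is not taken from the present setting; the sketch below scans the supremum numerically.

```python
import numpy as np

Lam = 2.0
D = np.array([[0.0, Lam], [Lam, 0.0]])

def commutator_norm(a1, a2):
    """Operator norm of [D, a] for a = diag(a1, a2)."""
    a = np.diag([a1, a2])
    return np.linalg.norm(D @ a - a @ D, 2)   # here equals |Lam| * |a1 - a2|

# The two pure states are evaluations at the two points: w_i(a) = a_i.
# Connes' distance: sup |a1 - a2| subject to ||[D, a]|| <= 1.
best = 0.0
for t in np.linspace(-1.0, 1.0, 2001):
    if commutator_norm(t, 0.0) <= 1.0:
        best = max(best, abs(t))
print(abs(best - 1.0 / Lam) < 1e-2)  # True: d(w1, w2) = 1 / |Lam|
```

In case $(ii)$ the commutators all vanish, so the constraint set is unbounded and the supremum, hence the distance, is infinite; the toy case shows how a non-zero commutator cuts it down to a finite value.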
This situation reminds us of the passage from non-relativistic to relativistic quantum mechanics: in quantum field theory, $\theta$-summability (and not $p$-summability) naturally occurs due to the behaviour of $\Tr(e^{-tH})$ (when $t\rightarrow 0$) where $H$ is the Hamiltonian (or ${\mathcal{D}}^2$), see for instance [@CHKL]. The other faithful representations can generate fancy spectral triples which can have arbitrary summability (or “dimension”) depending on the algebraic properties of the real parameter $a$, but are in fact degenerate to some extent. It is also not entirely clear what the topological content of these unbounded Fredholm modules is (i.e. whether they correspond to nontrivial elements of $K$-homology). The dimension of these spectral triples is unrelated to the number of coordinates defining the $\kappa$-deformed Minkowski spaces. The nonexistence theorem, though powerful, does not preclude the possible existence of a genuine, non-degenerate, nontrivial spectral geometry on the $\kappa$-deformation spaces presented here; it only restricts the possible algebra representations that could be used in the construction. This shows how delicate the notion of spectral or metric dimension of $\kappa$-Minkowski space is, and how subtle its analysis through noncommutative geometry. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Alain Connes, Michael Puschnigg, Adam Skalski and Shinji Yamashita for helpful discussions or correspondence. B. I. and T. S. acknowledge the warm hospitality of the Institute of Physics at the Jagiellonian University in Krakow where this work was started under the Transfer of Knowledge Program “Geometry in Mathematical Physics". [60]{} A. Agostini, “$\kappa$-Minkowski representations on Hilbert spaces", J. Math. Phys. [**48**]{} (2007), 052305. G. Amelino-Camelia, “Relativity in spacetimes with short-distance structure governed by an observer-independent (Planckian) length scale”, Int. J. Mod.
Phys. D [**11**]{} (2002), 35–59. G. Amelino-Camelia, A. Marciano and D. Pranzetti, “On the 5D differential calculus and translation transformations in 4D $\kappa$-Minkowski noncommutative spacetime”, Int. J. Mod. Phys. [**A 24**]{} (2009), 5445–5463. G. Amelino-Camelia, G. Gubitosi, A. Marciano, P. Martinetti and F. Mercati, “A no-pure-boost uncertainty principle from spacetime noncommutativity”, Phys. Lett. B [**671**]{} (2009), 298–302.\ G. Amelino-Camelia, G. Gubitosi, A. Marciano, P. Martinetti, F. Mercati, D. Pranzetti and R. A. Tacchi, “First results of the Noether theorem for Hopf-algebra spacetime symmetries", Prog. Theor. Phys. Suppl. [**171**]{} (2007), 65–78. G. Baumslag and D. Solitar, “Some two-generator one-relator non-Hopfian groups", Bull. Amer. Math. Soc. [**68**]{} (1962), 199–201. M. B. Bekka and N. Louvet, “Some properties of $C^*$-algebras associated to discrete linear groups", in *$C^*$-Algebras*, J. Cuntz and S. Echterhoff (Eds), Springer-Verlag, Berlin Heidelberg New York 2000, 1–22. B. Blackadar, *Operator Algebras, Theory of $C^*$-Algebras and von Neumann Algebras*, Springer-Verlag, Berlin Heidelberg New York, 2006. B. Brenken, “Isomorphism classes of solenoidal algebras I", Can. J. Math. [**36**]{} (1993), 414–418. B. Brenken, “$K$-Groups of solenoidal algebras I", Proc. Amer. Math. Soc. [**123**]{} (1995), 1457–1464. B. Brenken, “The local product structure of expansive automorphisms of solenoids and their associated $C^*$-algebras", Can. J. Math. [**48**]{} (1996), 692–709. B. Brenken and P. E. T. Jørgensen, “A family of dilation crossed product algebras", J. Operator Theory [**25**]{} (1991), 299–308. N. P. Brown and N. Ozawa, *$C^*$-algebras and Finite Dimensional Approximations*, Graduate Studies in Mathematics, 88 Amer. Math. Soc., Providence, RI, 2008. A.L. Carey, J. Phillips, I.F. Putnam and A.
Rennie, “Families of type III KMS states on a class of $C^*$-algebras containing $O_n$ and $\mathcal{Q}_{\mathbb{N}}$", arXiv:1001.0424 \[math.OA\]. S. Carpi, R. Hillier, Y. Kawahigashi and R. Longo, “Spectral triples and the super-Virasoro algebra", Commun. Math. Phys. [**295**]{} (2010), 71–97. A. Connes, “Compact metric spaces, Fredholm modules, and hyperfiniteness", Ergod. Th. & Dynam. Sys., [**9**]{} (1989), 207–220. A. Connes, *Noncommutative geometry*, Academic Press, 1994. A. Connes, “Noncommutative geometry and reality”, J. Math. Phys. [**36**]{} (1995), 6194–6231. A. Connes, “Gravity coupled with matter and the foundation of non-commutative geometry”, Commun. Math. Phys.  [**182**]{} (1996), 155–176. A. Connes, “On the spectral characterization of manifolds”, arXiv:0810.2088 \[math.OA\]. A. Connes and M. Marcolli, *Noncommutative Geometry, Quantum Fields and Motives*, Colloquium Publications, Vol. 55, American Mathematical Society, 2008. A. Connes and H. Moscovici, “Cyclic cohomology, the Novikov conjecture and hyperbolic groups", Topology [**29**]{} (1990), 345–388. T. Coulhon, A. Grigor’yan and C. Pittet, “A geometric approach to on-diagonal heat kernel lower bounds on groups", Ann. Inst. Fourier, Grenoble, [**51**]{} (2001), 1763–1827. F. D’Andrea, “Spectral geometry of $\kappa$-Minkowski space”, J. Math. Phys.  [**47**]{} (2006), 062105. L. Dabrowski and G. Piacitelli, “The $\kappa$-Minkowski spacetime: Trace, classical limit and uncertainty relations", arXiv:0909.3215 \[hep-th\]. M. Daszkiewicz, J. Lukierski and M. Woronowicz, “$\kappa$-deformed oscillators, the choice of star product and free $\kappa$-deformed quantum fields”, arXiv:0807.1992 \[hep-th\]. J. Dixmier, *les $C^*$-algèbres et leurs représentations*, Gauthier-Villars, Paris, 1964. D. Dutkay, D. Han, G. Picioroaga and Q. Sun, “Orthogonal dilations of Parseval wavelets", Mathematische Annalen [**341**]{} (2008), 483–515. D. Dutkay, P. E. T. Jørgensen and G. 
Picioroaga, “Unitary representations of wavelet groups and encoding of iterated function systems in solenoids", in *Ergodic Theory and Dynamical Systems*, Published online by Cambridge University Press 2009. D. Dutkay and P. E. T. Jørgensen, “A duality approach to representations of Baumslag–Solitar groups", Contemporary Mathematics [**449**]{} (2008), 99–127. L. Freidel and E. R. Livine, “Effective 3d quantum gravity and non-commutative quantum field theory”, Phys. Rev. Lett. [**96**]{} (2006) 221301. V. Gayral, B. Iochum, J. M. Gracia-Bondía, T. Schücker and J. C. Várilly, “Moyal planes are spectral triples”, Commun. Math. Phys. [**246**]{} (2004), 569–623. J. M. Gracia-Bondía, J. C. Várilly and H. Figueroa, *Elements of Noncommutative Geometry*, Birkhäuser Advanced Texts, Birkhäuser, Boston, 2001. P. de la Harpe, *Topics in Geometric Group Theory*, The University of Chicago Press, Chicago, 2000. N. Higson and G. Kasparov, “Operator $K$-theory for groups which act properly and isometrically on Hilbert space", Electron. Res. Announc. Amer. Math. Soc. [**3**]{} (1997), 131–142. N. A. Ivanov, “On the structure of some reduced amalgamated free product $C^*$-algebras", arXiv:0705.3919 \[math.OA\]. R. Ji, “Smooth dense subalgebras of reduced group $C^*$-algebras, Schwartz cohomology of groups, and cyclic cohomology", J. Funct. Anal. [**107**]{} (1992), 1–33. P. Jolissaint, “Rapidly decreasing functions in reduced $C^*$-algebras of groups", Trans. Amer. Math. Soc. [**317**]{} (1990), 167–196. P. E. T. Jørgensen, “Ruelle operators: Functions which are harmonic with respect to a transfer operator", Memoirs of the A. M. S., [**152**]{} (2001), number 720. T. Katsura, “A class of $C^*$-algebras generalizing both graph algebras and homeomorphism $C^*$-algebras IV, pure infiniteness", J. Functional Analysis [**254**]{} (2008), 1161–1187. S. Kawamura, J. Tomiyama and Y.
Watatani, “Finite-dimensional irreducible representations of $C^*$-algebras associated with topological dynamical systems", Math. Scand. [**56**]{} (1985), 241–248. W. Lawton, “The structure of compact connected groups which admit an expansive automorphism", in *Recent advances in Topological Dynamics*, A. Beck Ed., Lecture Notes in Mathematics, vol 318 (1973), 182–196. P. Kosiński, P. Maślanka, J. Lukierski and A. Sitarz, “Towards $\kappa$-deformed d=4 relativistic field theory", Czech. J. Phys. [**48**]{} (1998), 1407–1414. L. T. Lim, J. A. Packer and K. F. Taylor, “A direct integral decomposition of the wavelet representation", Proc. Amer. Math. Soc. [**129**]{} (2001), 3057–3067. J. Lukierski, H. Ruegg, A. Nowicki and V. N. Tolstoy, “$q$-deformation of Poincaré algebra,” Phys. Lett.  B [**264**]{} (1991), 331–338. J. Lukierski, A. Nowicki and H. Ruegg, “New quantum Poincaré algebra and $\kappa$-deformed field theory”, Phys. Lett.  B [**293**]{} (1992), 344–352. J. Lukierski, H. Ruegg and W. J. Zakrzewski, “Classical quantum mechanics of free $\kappa$ relativistic systems”, Annals Phys.  [**243**]{} (1995), 90–116. S. Majid and H. Ruegg, “Bicrossproduct structure of $\kappa$-Poincaré group and non-commutative geometry”, Phys. Lett.  B [**334**]{} (1994), 348–354. F. Martin and A. Valette, “Markov operators on the solvable Baumslag–Solitar groups", Experimental Math., [**9**]{} (2000), 291–300. D. I. Moldavanskii, “Isomorphism of the Baumslag–Solitar groups", Ukrainian Math. J. [**43**]{} (1991),1569–1571. M. V. Pimsner, “Embedding transformation group $C^*$-algebras into AF-algebras", Ergodic Theory Dynam. System [**3**]{} (1983), 613–626. C. Pittet and L. Saloff-Coste, “Random walks on abelian by cyclic groups", Proc. Amer. Math. Soc., [**131**]{} (2002), 1071–1079. A. Sitarz, “Noncommutative differential calculus on the $\kappa$-Minkowski space,” Phys. Lett.  B [**349**]{} (1995), 42–48. A. Skalski and J. 
Zacharias, “A note on spectral triples and quasidiagonality", Expositiones Math. [**27**]{} (2009), 137–141. Y. Stalder, “Convergence of Baumslag–Solitar groups", Bull. Belgian Math. Soc. [**13**]{} (2006), 221–233. C. Svensson and J. Tomiyama, “On the commutant of $C(X)$ in $C^*$-crossed products by ${\mathbb{Z}}$ and their representations", J. Funct. Anal. [**256**]{} (2009), 2367–2386. D. Voiculescu, “Some results on norm-ideal perturbations of Hilbert space operators", J. Operator Theory [**2**]{} (1979), 3–37. D. Voiculescu, “On the existence of quasicentral approximate units relative to normed ideals. Part I", J. Funct. Anal. [**91**]{} (1990), 1–36. D. P. Williams, *Crossed Products of $C^*$-Algebras*, Math. Surveys and Monographs, Vol 134, 2007. S. Yamashita, “Circle correspondence $C^*$-algebras", to appear in Houston J. Math. arXiv:0808.1403v1 \[math.OA\]. S. Zakrzewski, “Quantum Poincaré group related to the $\kappa$-Poincaré algebra”, J. Phys. A [**27**]{} (1994) 2075–2082. [^1]: Unité Mixte de Recherche du CNRS et des Universités Aix-Marseille I, Aix-Marseille II et de l’Université du Sud Toulon-Var [^2]: Talk presented by B. I. at the conference “Geometry and Physics in Cracow", September 21-25, 2010, iochum@cpt.univ-mrs.fr [^3]: The Smoluchowski Institute of Physics, Jagiellonian University, Reymonta 4, 30-059 Kraków, Poland, Partially supported by MNII grant 189/6.PRUE/2007/7 and N 201 1770 33 [^4]: A $C^*$-algebra $A$ is said to be CCR or liminal if $\pi(A)$ is equal to the set of compact operators on the Hilbert space ${\mathcal{H}}_\pi$ for every irreducible representation $\pi$. The algebra $A$ is called NGCR if it has no nonzero CCR ideals.\ An $AF$-algebra is an inductive limit of sequences of finite-dimensional $C^*$-algebras.\ The algebra $A$ is residually finite-dimensional if it has a separating family of finite-dimensional representations.
--- abstract: 'We still do not understand how planets form, or why extra-solar planetary systems are so different from our own solar system. But the last few years have dramatically changed our view of the discs of gas and dust around young stars. Observations with the Atacama Large Millimeter/submillimeter Array (ALMA) and extreme adaptive-optics systems have revealed that most — if not all — discs contain substructure, including rings and gaps [@ALMA_HLTau; @Long2018a; @Huang2018a], spirals [@Benisty2015; @Stolker2016a; @Huang2018b], azimuthal dust concentrations [@van-der-Marel2013], and shadows cast by misaligned inner discs [@Marino2015; @Stolker2016a]. These features have been interpreted as signatures of newborn protoplanets, but the exact origin is unknown. Here we report the kinematic detection of a planet of a few Jupiter masses located in a gas and dust gap at 130au in the disc surrounding the young star HD 97048. An embedded planet can explain both the disturbed Keplerian flow of the gas, detected in CO lines, and the gap detected in the dust disc at the same radius. While gaps appear to be a common feature in protoplanetary discs[@Long2018a; @Huang2018a], we present a direct correspondence between a planet and a dust gap, indicating that at least some gaps are the result of planet-disc interactions.' author: - 'C. Pinte$^{1,2}$, G. van der Plas$^{2}$, F. Ménard$^{2}$, D. J. Price$^{1}$, V. Christiaens$^{1}$, T. Hill$^{3}$, D. Mentiplay$^{1}$, C. Ginski$^{4}$, E. Choquet$^{5}$, Y. Boehler$^{2}$, G. Duchêne$^{6,2}$, S. Perez$^{7}$, S. Casassus$^{8}$' bibliography: - 'biblio.bib' title: Kinematic detection of a planet carving a gap in a protoplanetary disc --- Monash Centre for Astrophysics (MoCA) and School of Physics and Astronomy, Monash University, Clayton Vic 3800, Australia, Univ.
Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France Atacama Large Millimeter/Submillimeter Array, Joint ALMA Observatory, Alonso de Córdova 3107, Vitacura 763-0355, Santiago, Chile Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France Astronomy Department, University of California, Berkeley, CA 94720-3411, USA Universidad de Santiago de Chile, Av. Libertador Bernardo O’Higgins 3363, Estación Central, Santiago Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile A variety of mechanisms have been proposed to explain the formation of rings and gaps in discs, *e.g.* snow-lines, non-ideal MHD effects, zonal flows, and self-induced dust-traps [@Takahashi2014; @Gonzalez2015; @Loren2015; @Zhang2015; @Bethune2016]. The most straightforward explanation is that the gaps are the result of forming planets interacting with the disc [@Dipierro15b]. Recent ALMA surveys suggest that planets could indeed be responsible for carving out several of the observed gaps [@Long2018a; @Huang2018a; @Zhang2018a], but until now definite evidence has remained elusive. Despite much effort, direct imaging of planets in young discs remains difficult. Many of the claimed detections have been refuted or require confirmation [@Rameau2017; @Ligi2018; @Currie2019]. The most promising detection to date is a companion imaged in the cleared inner disc around PDS 70 [@Keppler2018; @Muller2018; @Christaens2019; @Haffert2019]. However, the mass estimate from photometry remains uncertain, and it is not yet clear whether PDS 70 b falls within the planetary regime. Our approach is to search for the dynamical effect of a planet on the surrounding gas disc. Disc kinematics are dominated by Keplerian rotation. Embedded planets perturb the gas flow in their vicinity, launching spiral waves at Lindblad resonances both inside and outside their orbit.
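As a rough numerical illustration of the Keplerian flow just described (a sketch, not the authors' analysis pipeline; the stellar mass and inclination are the values quoted in the text for HD 97048, while the azimuth convention is ours):

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]

def v_los(r_au, phi, mstar=2.4, incl_deg=40.0):
    """Line-of-sight component (km/s) of the Keplerian velocity at radius
    r_au [au] and azimuth phi [rad, measured from the disc major axis],
    for a disc inclined by incl_deg."""
    v_kep = np.sqrt(G * mstar * M_SUN / (r_au * AU))  # m/s
    return v_kep * np.sin(np.radians(incl_deg)) * np.cos(phi) / 1e3

# Full projected Keplerian speed at the 130 au gap radius (~2.6 km/s),
# and the azimuth at which a +0.96 km/s channel crosses that radius:
v_max_130 = v_los(130.0, 0.0)
phi_kink = np.degrees(np.arccos(0.96 / v_max_130))
```

With these numbers, a channel at +0.96 km s$^{-1}$ from systemic, where the kink is reported in the text, intersects the 130 au orbit at an azimuth of roughly 70$^\circ$ from the major axis.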
The disturbed velocity pattern is detectable by high spectral and spatial resolution ALMA line observations [@Perez2015b]. This technique was used to detect embedded planets in the disc surrounding HD 163296 [@Pinte2018b; @Teague2018a]. Here, we used ALMA to observe the disc surrounding the young ($\approx$3Myr) intermediate-mass (2.4M$_\mathrm{\odot}$) star HD 97048 in Band 7 continuum (885$\mu$m) and in the $^{13}$CO J=3-2 transition, with a spectral resolution of 220ms$^{-1}$. Observations were performed with 3 interferometer configurations sampling baselines from 15 to $\approx$ 8,500m, resulting in final images with a spatial resolution of 0.07”$\times$0.11” (13$\times$20au). We report the detection of a localised deviation from Keplerian flow in the disc. The velocity kink is spatially associated with the gap seen in dust continuum emission (Fig. \[fig:obs\] and S\[sup\_fig:SPHERE\]). The most plausible explanation for these two independent features, taken together, is the presence of an embedded body of a few Jupiter masses which carves a gap in the dust disc and locally perturbs the gas flow. The continuum emission shows a system of two rings detected up to $\approx$ 1” from the star. The $^{13}$CO emission extends further in radius ($\approx$ 4”), displaying the typical butterfly pattern of a disc in Keplerian rotation (Fig. \[fig:obs\] and \[fig:schema\]). No significant brightness variation of the $^{13}$CO emission is detected at the location of the gap. In a given spectral channel, the emission is distributed along the corresponding isovelocity curve, *i.e.* the region of the disc where the projected velocity towards the observer is equal to the channel velocity. The observed East-West asymmetry is characteristic of an optically thick emitting layer located above the midplane. The lower, fainter, disc surface is also detected to the West of the upper disc surface.
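The quoted velocity resolution follows directly from the channel width and the $^{13}$CO J=3-2 rest frequency (330.588 GHz); a one-line check:

```python
C_LIGHT = 299792458.0  # speed of light [m/s]

def channel_velocity_width(dnu_hz, nu0_hz):
    """Velocity width (m/s) of a channel of width dnu_hz at rest frequency
    nu0_hz (radio definition, dv = c * dnu / nu0)."""
    return C_LIGHT * dnu_hz / nu0_hz

# 244 kHz channels at the 13CO J=3-2 rest frequency (330.588 GHz)
dv = channel_velocity_width(244e3, 330.588e9)  # ~220 m/s
```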
The CO emission displays a kink in the upper isovelocity curve, highlighted by the dotted circle in Fig. \[fig:obs\]. The velocity kink is detected consistently in channels between +0.7 and +1.1km/s from the systemic velocity. It is also seen in images reconstructed from individual observing nights, *i.e.* before combining the data sets. The morphology of the emission around the velocity kink is the same with and without continuum subtraction, indicating that the kink is not the result of optical depth effects (see Supplementary Figure \[sup\_fig:executions\]). The sensitivity of the ALMA observations allows us to detect the continuum in each individual channel, revealing that the velocity kink is located just above the gap seen in continuum emission, at the same radius. This spatial coincidence points to a common origin for both features. The deformation of the emission is localised to a diameter of $\approx$ 0.3”. Notably, the emission on the opposite side of the disc (and at opposite velocity) displays a smooth profile, with no kink. This excludes a large-scale perturbation of the disc or an azimuthally symmetric mechanism. The perturbation is similar to the one detected in HD 163296 [@Pinte2018b]. In both cases, the kink is only detected over a small range in both radial extent and velocity. A corresponding velocity kink in the lower surface of the disc is not seen, as the emission there is weaker and masked by the continuum and by the brighter upper CO surface (Fig. \[fig:obs\] and \[fig:schema\]). Using the same procedure as in ref. [@Pinte2018a], we measured the altitude of the $^{13}$CO layer to be 17$\pm$1au near the velocity kink (at a distance of 130au). Assuming that the planet is located in the disc midplane and exactly below the center of the velocity kink, it would be at a projected distance of 0.45$\pm$0.1" and PA = $-55\pm$10$^\circ$ from the star.
To infer the mass of the putative planet, we performed a series of 3D global gas and multi-grain dust hydrodynamics simulations, in which we embedded a planet on a circular orbit at 130au with a mass of 1, 2, 3 or 5M$_\mathrm{Jup}$ (gas disc mass of $10^{-2}$M$_\mathrm{\odot}$). Simulations were performed for approximately 800 orbits ($\approx$ 1Myr), and then post-processed to compute the thermal structure and resulting continuum emission and CO maps. The presence of a planet of a few Jupiter masses produces distinct signatures in the gas and dust (Fig. \[fig:phantom\]). The embedded planet generates a gap and spirals in the gas, resulting in a non-axisymmetric velocity field. The dynamics of the dust depends on the *Stokes number*, i.e. the ratio of the gas drag stopping time to the orbital time, which depends on the grain size and dust properties. When the Stokes number is close to unity — corresponding roughly to millimetre-sized grains at the gas surface densities considered here if grains are compact and spherical — dust grains form axisymmetric rings inside and outside of the planet's orbital radius [@Dipierro15b]. Fig. \[fig:model\] shows the predicted emission for the various planet masses, in the continuum and for the $^{13}$CO line. The channel maps are best reproduced with an embedded planet of 2-3M$_\mathrm{Jup}$, giving a velocity kink with amplitude matching the observations. For the 1M$_\mathrm{Jup}$ planet, the kink is too small. The most massive planet, with 5M$_\mathrm{Jup}$, creates a kink that is too large and remains detectable over too wide a velocity range ($\pm$ 1km/s from the 0.96km/s channel where the deviation is the strongest). Embedded planets have also been predicted to generate vertical bulk motions and turbulence, which should result in detectable line broadening when the planet is massive enough ($>$ a few Jupiter masses) [@Dong2019].
Analysis of the moment-2 map does not reveal significant line broadening at the location of the gap; the measured line widths are consistent with thermal broadening and Keplerian shearing within the beam. This also rules out the upper end of the mass range tested in our simulations. HD 97048 was observed with SPHERE on the VLT, resulting in a point source detection limit of $\approx$2M$_\mathrm{Jup}$ [@Ginski2016] at the location where we detect the velocity kink. This upper limit assumes a hot-start model and an unattenuated planet atmosphere. Our simulations show that the planet is embedded, with an optical depth of $\gtrsim$0.5 towards the observer at 1.6$\mu$m, *i.e.* it would appear about twice as faint as an unobscured planet. This is consistent with a 2-3M$_\mathrm{Jup}$ planet not being detected by SPHERE. Our kinematic mass determination is also consistent with the planet mass range of 0.4 to 4M$_\mathrm{Jup}$ estimated from the width of the scattered light gap, for a viscosity between $10^{-4}$ and $10^{-2}$ [@Dong2017]. All of the planet masses explored in our models result in azimuthally symmetric gaps in continuum emission at 885$\mu$m, as detected by ALMA. At this wavelength, the thermal emission is dominated by dust grains a few hundred microns in size, which decouple from the gas and form axisymmetric rings, even if the gas flow is locally non-axisymmetric. The width and/or depth of a gap in sub-millimetre thermal emission depends on the planet mass as well as on the Stokes number of the dust grains that contribute most at the observed wavelength [@Zhang2018a]. In most cases, the Stokes number is unknown, as the local gas density and dust properties are poorly constrained by observations. The continuum gap width may therefore not provide a reliable estimate of planet masses [@Rosotti2016].
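For orientation, in the Epstein drag regime the midplane Stokes number of a compact spherical grain is approximately St $\approx (\pi/2)\,\rho_s a / \Sigma_\mathrm{gas}$. A minimal sketch, in which the material density and the local gas surface density are illustrative assumptions rather than values from this work:

```python
import numpy as np

def stokes_number(a_cm, sigma_gas, rho_s=3.0):
    """Midplane Stokes number of a compact spherical grain in the Epstein
    drag regime: St = (pi/2) * rho_s * a / Sigma_gas (cgs units)."""
    return np.pi / 2.0 * rho_s * a_cm / sigma_gas

# With an illustrative gas surface density of 0.1 g/cm^2, compact
# 200-micron grains sit near St ~ 1 ...
st_compact = stokes_number(0.02, 0.1)
# ... while enhancing the area-to-mass ratio by ~50 (as invoked in the
# text for fluffy aggregates) lowers St to a few 1e-2:
st_fluffy = st_compact / 50.0
```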
Conversely, when the mass of the planet is known, for instance via the kinematics of the gas as in this work, the dust continuum observations can be used to directly measure the Stokes number, thus constraining the gas density and dust properties in the vicinity of the gap. For HD 97048, the dust grains dominating the thermal emission must have a Stokes number of around a few $10^{-2}$ to reproduce the observed gap profile. This excludes compact dust grains with the density assumed in our models. One way to reduce the Stokes number is to increase the gas density, but this causes significant accretion onto the planet (see supplementary material). Another possibility is that dust grains of a few hundred microns or millimetres in size consist of fluffy aggregates, as suggested by sub-millimetre polarisation studies [@Kataoka2016; @Dent2019]. Aggregates have a larger projected area and experience stronger gas drag than equal-mass compact grains. They have a smaller Stokes number, and can reproduce the observed dust continuum gap width for a 2M$_\mathrm{Jup}$ planet (Fig. \[fig:model\] and S\[sup\_fig:porous\_grains\]). The coincident location of the velocity kink and gap demonstrates that protoplanets are responsible for *at least some* of the observed gaps in discs. Most of the alternative mechanisms for creating dust gaps in discs — including snow lines, non-ideal MHD, zonal flows and self-induced dust traps — rely on the formation of a pressure bump where the dust grains can be trapped and grow further. While those pressure bumps produce deviations from Keplerian velocity, they are axisymmetric. That is, they do not cause a localised, non-axisymmetric velocity deviation as observed. Other mechanisms might be imagined to create a non-azimuthally symmetric velocity pattern in the disc. Gravitational instabilities, or an outer companion or flyby, create spirals, but these are large-scale structures and would not produce a velocity kink localised to a small region of the disc.
Neither will they result in azimuthally symmetric dust gaps. The interaction of a planet of a few Jupiter masses with its surrounding disc is, to our knowledge, the only plausible mechanism that can produce both a localised velocity kink and an azimuthally symmetric gap. More systematic kinematic mass estimates may allow us to better connect the population of young embedded planets in discs with the known exoplanet population. ![ALMA observations of the dust and gas disc surrounding HD 97048. a) $^{13}$CO 3-2 emission at velocity +0.96km.s$^{-1}$ from the systemic velocity. The velocity kink revealing the presence of an embedded perturber is marked by a dotted circle and the cyan dot represents the location of the putative planet. The kink is located above the gap detected in continuum. b) 885$\mu$m continuum emission using all the continuum channels, and c) $^{13}$CO 3-2 emission at the opposite velocity -0.96km.s$^{-1}$ from the systemic velocity, where the emission displays a smooth profile. Line observations were not continuum subtracted. The ALMA beam is 0.07”$\times$0.11” and is indicated by the grey ellipse. \[fig:obs\]](kink.pdf){width="\linewidth"} ![Schematic view of the disc as seen by ALMA in a single channel. The CO emission originates from the disc surfaces, while the continuum is mostly emitted from the disc midplane.\[fig:schema\]](schema.pdf){width="0.7\linewidth"} ![Hydrodynamical model of a 2M$_\mathrm{Jup}$ planet interacting with the disc of HD 97048. *Top row:* Gas surface density (left), gas radial velocity (middle) and gas azimuthal velocity offset compared to Keplerian velocity (right). *Bottom row:* Dust surface density for 1.5$\mu$m (left), 6.25$\mu$m (middle) and 200$\mu$m (right) compact dust grains. They have a Stokes number of $\approx 10^{-2}$, $5 \times 10^{-2}$, and $1$, respectively. Fluffy aggregates and/or porous grains will have a smaller Stokes number for the same grain mass.
Sink particles are marked by cyan dots with size corresponding to their accretion radii. \[fig:phantom\]](hd97048-dens.pdf "fig:"){height="0.3\hsize"} ![](hd97048-vr.pdf "fig:"){height="0.3\hsize"} ![](hd97048-deltavphi.pdf "fig:"){height="0.3\hsize"} ![](hd97048-dust1.pdf "fig:"){height="0.3\hsize"} ![](hd97048-dust3.pdf "fig:"){height="0.3\hsize"} ![](hd97048-dust8.pdf "fig:"){height="0.3\hsize"} ![Comparison of ALMA observations (top row) with hydrodynamic simulations of discs with different embedded planet masses, post-processed with radiative transfer. The left column shows the continuum data, while the following panels show the line data. The 2M$_\mathrm{Jup}$ case corresponds to the model displayed in Fig. \[fig:phantom\].
Porous grains and/or aggregates are required to match the continuum gap width. The surface-to-mass ratio of the dust grains has been increased by a factor of 50 compared to compact spheres.\[fig:model\]](phantom+mcfost_model.pdf){width="\linewidth"} The authors declare that they have no competing financial interests. Correspondence and requests for materials should be addressed to C.P. (email: christophe.pinte@monash.edu). C.P. analysed the data, carried out the modelling and wrote the manuscript. G. vdP wrote the observing proposal and reduced the data. D.J.P. provided advice on running the smoothed particle hydrodynamics simulations and made some of the figures. All co-authors provided input on the manuscript. C.P., D.J.P. and V.C. acknowledge funding from the Australian Research Council via FT170100040 and DP180104235. F.M., GvdP and C.P. acknowledge funding from the ANR of France (ANR-16-CE31-0013). This work was performed on the OzSTAR national facility at Swinburne University of Technology. OzSTAR is funded by Swinburne and the Australian Government’s Education Investment Fund. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.1.00826.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
| UTC Date    | Time on source \[minutes\] | N$_{\mathrm{ant}}$ | Baselines \[m\] | pwv \[mm\] | Bandpass   | Flux       | Phase      |
|-------------|----------------------------|--------------------|-----------------|------------|------------|------------|------------|
| 2016 Nov 18 | 26.3                       | 45                 | 15 to 919       | 0.60       | J0538-4405 | J1107-4449 | J1058-8003 |
| 2017 Nov 24 | 42.7                       | 49                 | 92 to 8548      | 0.56       | J0522-3627 | J0904-5735 | J1058-8003 |
| 2017 Nov 30 | 42.7                       | 47                 | 79 to 8283      | 0.56       | J0635-7516 | J0904-5735 | J1058-8003 |

: Log of the ALMA observations. \[table\_obs\]

HD 97048.
---------

HD 97048 is located in the Chameleon I cloud, at a distance of 185pc [@Gaia-Collaboration2018]. It has a spectral type B9.5/A0, an effective temperature of 10,000K, and a luminosity of 40 solar luminosities. The star is surrounded by a large disc [@Lagage06; @van-der-Plas2017], extending up to at least 850au in $^{12}$CO J=2-1 emission, which is seen at an inclination of $\approx 40^\circ$. Previous ALMA observations of the dust sub-disc (probing dust grains a few hundred microns in size) revealed a central cavity and two bright rings separated by a gap centered at $\approx$130au from the star [@van-der-Plas2017]. The two bright rings are also detected in scattered light with VLT/SPHERE [@Ginski2016] (Supplementary Figure \[sup\_fig:SPHERE\]), where the observations probe the distribution of sub-micron sized dust grains. Such small grains experience a high gas drag and tend to closely follow the gas spatial distribution, suggesting that the gap is also present in the gas disc. Two extra rings extending up to 2.2” from the star are also detected in scattered light. The disc is detected in PAH emission up to $\approx$ 650au, revealing a flaring surface [@Lagage06]. Scattered light images confirmed that the disc surface shows significant flaring [@Ginski2016]. Observations and data reduction.
--------------------------------

We observed HD 97048 with ALMA in band 7 in the C40-4 (1 execution) and C40-7 (2 executions) configurations, reaching a total time on source of 112 minutes (ALMA program \#2016.1.00826.S, PI: G. van der Plas). The details of the observations can be found in Table \[table\_obs\]. One of the spectral windows for each observation was centered at the $^{13}$CO J=3-2 rest frequency with an individual channel width of 122kHz, resulting in a 244kHz spectral resolution (220m s$^{-1}$) after Hanning smoothing. The three other spectral windows were used for continuum with a bandwidth of 1.875GHz each. We performed one round of phase self-calibration on the continuum data set observed November 24th, and two rounds on the other two data sets, and applied the self-calibration solutions to the line data. We imaged the visibilities at a 120ms$^{-1}$ velocity spacing using Briggs weighting, resulting in a beam size of 0.11” $\times$ 0.07” at a PA of -38$^\circ$. Typical RMS values obtained are 3.6mJy/beam per channel for the combined data. Channel maps are presented in Supplementary Figure \[sup\_fig:channel\_maps\]. The kink in CO emission discussed in this manuscript is present in all 3 individual executions and in both the continuum subtracted and non-subtracted images (Supplementary Figure \[sup\_fig:executions\]).

3D modelling procedure.
-----------------------

We performed a series of 3D global simulations using the [phantom]{} Smoothed Particle Hydrodynamics (SPH) code [@Price2018a]. We ran multi-grain gas+dust simulations using the algorithm described in [@Hutchison2018; @Ballabio2018], using 2 million SPH particles and following the dust fraction of particles of sizes ranging from 1.5625 to 1600$\mu$m (11 bins in total, each bin doubling the grain size). Each dust species experiences a different gas drag depending on the grain size, resulting in differential vertical settling and radial migration.
All 11 populations of dust grains were evolved simultaneously with the gas. We thus self-consistently took into account the cumulative backreaction on the gas. We assumed a central mass of 2.4M$_\odot$ and a distance of 185pc [@Gaia-Collaboration2018]. We set the initial disc inner and outer radii to 40au and 700au, respectively. We set the gas mass to $10^{-2}$M$_\odot$, and used an exponentially tapered power-law surface density profile with a critical radius of 500au and a power-law index of $-0.5$. The disc aspect ratio was set to 0.06 at 40au (consistent with the scattered light images [@Ginski2016]), with a vertically isothermal equation of state and a sound speed power-law index of $-0.25$. We set the artificial viscosity in the code to obtain an average Shakura-Sunyaev [@ShakuraSunyaev1973] viscosity of $10^{-3}$ [@Lodato2010]. We embedded two planets in the disc: one orbiting at 30au with a mass of 1M$_{\rm Jup}$, and one at 130au with a mass of either 1, 2, 3, or 5M$_{\rm Jup}$. The inner planet is used to carve a central cavity in the disc, as seen in the ALMA observations, but we did not vary its mass in this work. The presence of an inner planet is also suggested by the torqued intra-cavity HCO$^+$ velocity field [@van-der-Plas2017]. We used sink particles [@Bate95] to represent the star and planets. We set the accretion radius of the planets to 0.125 times the Hill radius, and the accretion radius of the central star to 10au. The model surface density is plotted in the top left panel of Fig. \[fig:phantom\] for the 2M$_\mathrm{Jup}$ planet, along with the radial velocity (top centre), the predicted deviation from Keplerian flow (top right panel), and the dust column density for 1.56, 6.25 and 200$\mu$m grains (bottom row). We evolved the models for 800 orbits of the outer planet ($\approx 1$ million years).
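The initial surface density set up above can be sketched as follows; the taper exponent ($2-\gamma$ for the standard self-similar profile) is an assumption of this illustration, as the text only quotes the power-law index and critical radius:

```python
import numpy as np

M_SUN = 1.989e33  # solar mass [g]
AU = 1.496e13     # astronomical unit [cm]

def sigma_shape(r_au, rc=500.0, gamma=0.5):
    """Unnormalised tapered power law; the 2 - gamma taper exponent is
    an assumed convention (quoted values: gamma = 0.5, rc = 500 au)."""
    x = r_au / rc
    return x**(-gamma) * np.exp(-x**(2.0 - gamma))

# Normalise so the disc holds 1e-2 Msun of gas between 40 and 700 au
r = np.linspace(40.0, 700.0, 100_000)
dr = r[1] - r[0]
mass_shape = np.sum(2.0 * np.pi * (r * AU) * sigma_shape(r)) * dr * AU
sigma0 = 1e-2 * M_SUN / mass_shape       # scale factor [g/cm^2]
sigma_gap = sigma0 * sigma_shape(130.0)  # surface density at the planet
```

Under these assumptions the gas surface density at 130 au comes out at a few tenths of g cm$^{-2}$, the order of magnitude at which compact grains of a few hundred microns approach St $\sim 1$, as stated in the main text.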
The flow pattern around the planet establishes itself over a much shorter timescale, but the gap carving is set by the viscous timescale, and the establishment of dust gaps at low Stokes numbers requires a large number of orbits. The planets accrete a moderate amount of gas from the disc. The final masses after 800 orbits are 1.38, 2.52, 3.63 and 5.77M$_\mathrm{Jup}$ for the 1, 2, 3 and 5M$_\mathrm{Jup}$ planets, respectively. Migration is negligible, with all planets migrating by less than 1au by the end of the calculations. Additional simulations were also performed with a disc gas mass of $10^{-1}$M$_\odot$. As they result in significant accretion onto the outer planet, we explored a range of planet masses. Planets with initial masses of 0.1, 0.15, 0.2, 0.25, and 0.5M$_\mathrm{Jup}$ reach masses of 0.11, 0.18, 0.30, 2.0, 3.6, and 5.9M$_\mathrm{Jup}$, respectively, after 800 orbits. Migration remains limited to less than 3au for all planet masses, except for the most massive planet, which migrated by 9au. We can produce a velocity kink matching the observations using a planet with a final mass of 2M$_\mathrm{Jup}$, giving us a planet mass estimate similar to that from the lower disc gas mass models. However, in the high gas mass models the accretion rate onto the planet increases with time, leading to a runaway accretion process and a high final planet mass. To compute the disc thermal structure, continuum images and synthetic line maps, we used the [mcfost]{} Monte Carlo radiative transfer code [@Pinte06; @Pinte09], assuming $T_\mathrm{gas} = T_\mathrm{dust}$ and local thermodynamic equilibrium, as we are looking at low-$J$ CO lines. The central star was represented by a sphere of radius 2.25R$_\odot$, radiating isotropically with a Kurucz spectrum at 10,000K [@Woitke2018a]. To avoid interpolating the density structure between the SPH and radiative transfer codes, we used a Voronoi tesselation in which each [mcfost]{} cell corresponds to an SPH particle.
We set the $^{13}$CO abundance to $7\times10^{-7}$ and followed the prescription described in Appendix B of [@Pinte2018a] to account for freeze-out where T $< 20$K, and for photo-dissociation and photo-desorption in locations where the UV radiation is high. We adopted a turbulent velocity of 50 ms$^{-1}$. We adopted a fixed dust mixture composed of 60% silicate and 15% amorphous carbon [@Woitke2016]. Each grain size is represented by a distribution of hollow spheres with a maximum void fraction of $0.8$. We used a grain population with 100 logarithmic bins in size, ranging from 0.03 to 3000$\mu$m. At each point in the model, the density of a given grain size is obtained by interpolating between the SPH dust grain sizes, assuming that grains smaller than half the smallest SPH grain size, *i.e.* 0.78$\mu$m, follow the gas distribution, and that grains larger than 1.6mm follow the distribution of the 1.6mm grains. The total grain size distribution is normalised by integrating over all grain sizes, assuming a power-law $\mathrm{d}n(a) \propto a^{-3.5}\mathrm{d}a$, and over all cells, where we set the total dust mass to 1/100 of the total SPH gas mass. We computed the dust optical properties using Mie theory. CO maps were generated at a spectral resolution of 50m/s, binned at the observed resolution and Hanning smoothed to match the observed spectral resolution. Continuum and CO maps were then convolved with a Gaussian beam matching the ALMA CLEAN beam. The spatial distribution of the dust grains is set by their Stokes number. In [phantom]{}, we assume compact spheres, but grains with different shapes and masses will follow the same spatial distribution if they have the same Stokes number. To study the impact of the Stokes number on the thermal emission maps, we can mimic fluffiness by shifting the SPH grain sizes before interpolating the grain size distribution in [mcfost]{}. Thermal emission at 885$\mu$m is dominated by dust grains a few hundred microns in size.
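Under the $a^{-3.5}$ power law above, the dust mass per size interval scales as $a^{-0.5}\,\mathrm{d}a$, so most of the mass resides in the largest grains; a quick sketch:

```python
def mass_fraction(a1, a2, amin=0.03, amax=3000.0, p=3.5):
    """Fraction of the total dust mass held in grains with a1 < a < a2
    (sizes in microns) for dn(a) ∝ a^{-p} da, i.e. dm(a) ∝ a^{3-p} da."""
    q = 4.0 - p  # exponent of the cumulative mass integral (0.5 here)
    return (a2**q - a1**q) / (amax**q - amin**q)

# Grains above 100 microns carry most of the dust mass:
frac_large = mass_fraction(100.0, 3000.0)  # ~0.82
```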
For our model with a 10$^{-2}$M$_\odot$ gas mass, compact grains of this size would have a Stokes number close to 1. The corresponding gap width appears larger than in the observations. Supplementary Figure \[sup\_fig:porous\_grains\] shows that if the emitting grains have a Stokes number around $10^{-2}$ instead of $1$, for instance if they are fluffy aggregates, the sub-millimetre gap and ring widths are in much better agreement with the observations for the same planet mass.

Radiative-equilibrium hydrodynamics calculations.
-------------------------------------------------

The $^{13}$CO emission originates from between 1 and 2 hydrostatic scale heights [@Pinte2018a], at an altitude where the temperature is higher than in the midplane. To test the validity of the vertically isothermal structure used in the SPH calculations, we also performed a set of [phantom]{} simulations in which the temperature is regularly updated by [mcfost]{}. The two codes have been interfaced to run simultaneously. Thanks to the fast mapping between the distribution of SPH particles and the radiative transfer Voronoi mesh, we can perform frequent radiative transfer calculations within the SPH simulation. At a specified time interval, [phantom]{} passes the local density, grain distribution, and sink particle properties to [mcfost]{}, which returns the 3D disc temperature (assuming the gas temperature is equal to the dust temperature). Between two [mcfost]{} calls, [phantom]{} evolves the disc with the temperature of each particle held constant. The main advantage of this method is that we include the full frequency dependence, as well as light scattering, which are critical for accurate temperature calculations in protoplanetary discs. We assume radiative equilibrium at each call of [mcfost]{}, which is only a valid approximation if the radiative timescale is much smaller than the dynamical timescale.
Due to the limited optical depths of the models of HD 97048, these conditions are satisfied here. The temperature structure is updated every $1/10^{th}$ of the outer planet orbit. Simulations were also performed with calls to [mcfost]{} once per orbit, producing almost indistinguishable results. Supplementary Figures \[sup\_fig:vphi\], \[sup\_fig:vr\] and \[sup\_fig:vz\] compare the velocity fields for the 2M$_\mathrm{Jup}$ model between the vertically isothermal and vertically stratified (*i.e.* with regular [mcfost]{} temperature updates) cases. The isothermal simulation was designed to have a similar midplane temperature to the [mcfost+phantom]{} model, *i.e.* with $h/r = 0.06$ at $r=40$au. This is the same simulation as presented in the rest of the paper. As expected, differences increase with altitude, with larger deviations in the vertically stratified case. In this model, they remain limited to $\approx$0.1 and 0.05km/s for the azimuthal and radial velocities, respectively. In the case of the $^{13}$CO emission of HD 97048, the vertically isothermal structure appears to be a reasonable approximation, as long as $h/r$ is chosen sensibly. Hence vertical temperature stratification does not significantly affect our planet mass estimate.

Impact of observational noise and $uv$ plane sampling.
------------------------------------------------------

As we do not aim to perform a detailed fitting of the data, all models so far were presented with a simple Gaussian convolution to compare with observations, *i.e.* as noise-free synthetic maps with a fully sampled $uv$-plane. To assess whether observational artefacts could affect the images, and in particular the detection of the kink, we also post-processed the 2M$_\mathrm{Jup}$ model through a modified version of the [CASA]{} ALMA simulator. Synthetic visibilities were computed at the same ($u$,$v$) coordinates as the data. A precipitable water vapor of 0.6mm was used to set the thermal noise.
The resulting synthetic visibilities were CLEANed using the same parameters as the observed visibilities. A comparison of the Gaussian-convolved and CLEANed synthetic images is shown in supplementary figure \[sup\_fig:test\_conv\]. The shape of the velocity kink is not affected by observational effects, indicating that a simple convolution is a good approximation for qualitatively comparing models to data.

Data availability {#data-availability .unnumbered}
=================

Raw data are publicly available via the ALMA archive under project id 2016.1.00826.S. Final reduced and calibrated data cubes are available with the DOI 10.6084/m9.figshare.8266988.

Code availability {#code-availability .unnumbered}
=================

[Phantom]{} is publicly available at <https://bitbucket.org/danielprice/phantom>. [mcfost]{} is currently available upon request and will be made open-source soon. Figures were generated with [splash]{}[@Price2007] (<http://users.monash.edu.au/~dprice/splash/>) and [pymcfost]{} (<https://github.com/cpinte/pymcfost>), which are both open-source. ![VLT/SPHERE dual-polarisation Q$_\phi$ image in J band (data from [@Ginski2016]), with the cyan dot marking the location of the putative planet. The white circle represents an uncertainty of 0.1” on the planet location. The data were convolved with a Gaussian of FWHM 2 pixels (24.5mas) to reduce noise.\[sup\_fig:SPHERE\]](SPHERE.pdf){width="0.7\linewidth"} ![$^{13}$CO channel maps of HD 97048. The velocity kink is only visible in a few channels between 0.7 and 1.1km/s from the systemic velocity (marked by a dashed circle).\[sup\_fig:channel\_maps\]](channel_maps.pdf){width="\linewidth"} ![Imaging with and without continuum subtraction, and of the two individual datasets with extended ALMA configuration.
The velocity kink is consistently detected in both data sets, and the continuum subtraction does not affect the detection.\[sup\_fig:executions\]](executions.pdf){width="0.7\linewidth"} ![Continuum emission at 885$\mu$m as a function of the Stokes number of the 200$\mu$m grains. *Left:* The 200$\mu$m grains have a Stokes number close to 1 in the rings. This is the reference model presented in this paper. *Center:* The 200$\mu$m grains have a Stokes number close to $10^{-2}$. *Right:* ALMA continuum observations. \[sup\_fig:porous\_grains\]](porous_grains.pdf){width="0.7\linewidth"} ![Azimuthal velocity perturbation induced by a 2M$_\mathrm{Jup}$ planet in a vertically isothermal calculation (top) compared to a calculation with stratified vertical temperature (bottom). We show slices taken in the midplane (left) and at heights of 10 and 20 au above the midplane (middle and right, respectively). \[sup\_fig:vphi\]](hd97048-vphi-vs-z-iso-vs-mcfost.pdf){width="\linewidth"} ![Radial velocity perturbation induced by a 2M$_\mathrm{Jup}$ planet in a vertically isothermal calculation (top) compared to a calculation with stratified vertical temperature (bottom). We show slices taken in the midplane (left) and at heights of 10 and 20 au above the midplane (middle and right, respectively). \[sup\_fig:vr\]](hd97048-vr-vs-z-iso-vs-mcfost.pdf){width="\linewidth"} ![Vertical velocity perturbation induced by a 2M$_\mathrm{Jup}$ planet in a vertically isothermal calculation (top) compared to a calculation with stratified vertical temperature (bottom). We show slices taken in the midplane (left) and at heights of 10 and 20 au above the midplane (middle and right, respectively). 
\[sup\_fig:vz\]](hd97048-vz-vs-z-iso-vs-mcfost.pdf){width="\linewidth"} ![Comparison of the same channel map of the 2M$_\mathrm{jup}$ model with convolution by a Gaussian beam (left) or by sampling the synthetic visibilities at the same *uv* points as the data, and re-imaging using the same parameters as the data (right). \[sup\_fig:test\_conv\]](test_conv.pdf){width="0.7\linewidth"}
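The Stokes-number comparison in Supplementary Fig. \[sup\_fig:porous\_grains\] rests on the midplane Epstein-regime scaling $\mathrm{St} = (\pi/2)\,\rho_{\rm grain} a/\Sigma_{\rm gas}$, so lowering the internal grain density (fluffy aggregates) lowers St at fixed size. The sketch below is illustrative only: the $\Sigma \propto 1/r$ profile, disc extent, evaluation radius, and internal densities are assumptions, not the actual HD 97048 disc model.

```python
import math

# Midplane Epstein-regime Stokes number: St = (pi/2) * rho_grain * a / Sigma_gas.
# Toy disc (assumed): M_gas = 1e-2 M_sun spread as Sigma ~ 1/r between r_in and r_out.
M_SUN = 1.989e33          # g
AU = 1.496e13             # cm

def sigma_gas(r_au, m_gas=1e-2 * M_SUN, r_in=1.0, r_out=400.0):
    """Gas surface density (g/cm^2) for Sigma(r) = c/r_au, normalized to m_gas."""
    # Disc mass: integral of 2*pi*r*Sigma dr = 2*pi*c*AU^2*(r_out - r_in)
    c = m_gas / (2.0 * math.pi * AU**2 * (r_out - r_in))
    return c / r_au

def stokes(a_cm, rho_grain, r_au):
    return (math.pi / 2.0) * rho_grain * a_cm / sigma_gas(r_au)

a = 200e-4                                          # 200 micron grain radius in cm
st_compact = stokes(a, rho_grain=3.0, r_au=100.0)   # compact silicate
st_fluffy = stokes(a, rho_grain=0.03, r_au=100.0)   # ~1% filling-factor aggregate

print(f"St(compact) ~ {st_compact:.3f}, St(fluffy) ~ {st_fluffy:.5f}")
```

With these toy numbers the compact grains sit at St of order 0.1 to 1, while a 100-fold reduction of the internal density brings St down by the same factor, illustrating why porous aggregates of the same size behave like St $\sim 10^{-2}$ particles.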
--- abstract: 'Topological defects in Bloch bands, such as Dirac points in graphene, and their resulting Berry phases play an important role in the electronic dynamics of solid state crystals. Such defects can arise in systems with a two-atomic basis due to the momentum-dependent coupling of the two sublattice states, which gives rise to a pseudo-spin texture. The topological defects appear as vortices in the azimuthal phase of this pseudo-spin texture. Here, we demonstrate a complete measurement of the azimuthal phase in a hexagonal optical lattice employing a versatile method based on time-of-flight imaging after off-resonant lattice modulation. Furthermore, we map out the merging transition of the two Dirac points induced by beam imbalance. Our work paves the way to accessing geometric properties in optical lattices also with spin-orbit coupling and interactions.' author: - Matthias Tarnowski - Marlon Nuske - Nick Fläschner - Benno Rem - Dominik Vogel - Lukas Freystatzky - Klaus Sengstock - Ludwig Mathey - Christof Weitenberg title: 'Observation of topological Bloch-state defects and their merging transition' --- [^1] [^2] The motion of a particle in a crystal is not only affected by the band dispersion, but also by the geometry of the Bloch states. This geometrical property of the states was famously pointed out by Thouless and Berry [@thouless_quantized_1982; @berry_quantal_1984], and constitutes a fundamental feature of crystalline structures in their tremendously diverse forms. Furthermore, this geometric structure, which is captured by the Berry curvature, can have singular features, which have a topological nature. The paradigmatic example of such topological defects is given by the Dirac points in graphene, shown in Fig. \[fig:1\_topological\_defects\]. In a two-band model, the geometry of the eigenstates can be visualized as a pseudo-spin 1/2 texture in momentum space.
The topological defects are quantized vortices in the azimuthal phase, located at the Dirac points. These features are indeed responsible for the special electronic transport properties of graphene, see [@xiao_berry_2010]. Beyond graphene and its properties, topological defects also control numerous other intriguing phenomena in solid state physics, such as the integer quantum Hall effect [@klitzing_new_1980] or topological insulators [@hasan_colloquium_2010]. ![Topological defects in Bloch bands. a) The dispersion relation in momentum space of the two lowest bands of a graphene lattice. The bands touch linearly at the two Dirac points, i.e. the K and $\rm K'$ points, at the edge of the Brillouin zone (hexagon). The corresponding eigenstates form a pseudo-spin texture in momentum space with the azimuthal phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ (projection plane below). The vortices of the azimuthal phase show that the Dirac points are topological defects in the pseudo-spin texture. b) When the inversion symmetry is broken by an energy offset between the sublattices, a gap opens at the Dirac points and the bands become massive. The topological defects remain unchanged.[]{data-label="fig:1_topological_defects"}](fig1.pdf){width="\linewidth"} In recent years, ultracold atoms in optical lattices have emerged as a versatile model system with tunable topological properties [@goldman_topological_2016]. Besides the measurement of the band gap at topological defects [@tarruell_creating_2012; @weinberg_breaking_2016], they also offer detection tools, which access the eigenstates similarly to ARPES measurements in graphene [@gierz_graphene_2012]. For example, accelerating wave packets through the lattice gives access to diverse geometric phases via interferometry [@atala_direct_2013; @duca_2014; @li_bloch_2016] or to global topological invariants such as the Chern number via differential drift measurements [@price_mapping_2012; @jotzu_experimental_2014; @aidelsburger_measuring_2015].
Finally, when working with a filled lowest band, a projection onto flat bands yields a full state tomography, from which topological defects and global topology can be determined [@hauke_tomography_2014; @flaschner_experimental_2016]. These methods are, however, either inefficient at covering the full Brillouin zone, unable to resolve the positions of the defects, or limited to specific systems such as Floquet systems. ![Detection of the azimuthal phase structure via lattice modulation. a) The interference pattern of three laser beams with intensities $I_1$, $I_2$ and $I_3$ forms a hexagonal lattice with lattice sites A and B on two triangular sublattices. b) In a tight-binding model, it can be described by tunneling between nearest and next-nearest neighbors with amplitude $J_{AB}$ and $J_{AA}, J_{BB}$, respectively, and by a sublattice energy offset $\Delta_{AB}$ (indicated by different colors), which is created by using appropriate polarizations of the laser beams [@flaschner_experimental_2016]. c) The laser beam intensities are modulated with a driving frequency $\omega$ to probe the temporal response of the momentum states. d) Sketch of the energy distance between the two lowest bands. The driving frequency is chosen red-detuned to the band gap. e) We measure the momentum space density $n({{\mathbf{k}}},t)$ after ToF expansion for various modulation times $t$ and obtain an oscillation with distinct phase $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ for each momentum ${\mathbf{k}}$. The measured phase $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ is directly related to the azimuthal phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ of the eigenstates.[]{data-label="fig:2_detectionScheme"}](fig2.pdf){width="\linewidth"} Here, we present a measurement of the topological defects of the Bloch states of a hexagonal optical lattice. We map out the azimuthal phase profile of the pseudo-spin texture across the entire momentum space and identify the phase windings as topological defects.
We introduce a new versatile method to measure this phase for all momentum states simultaneously, which also works for a completely filled first band. Off-resonant modulation of the laser beam intensities leads to a modulation of the momentum space density. We show that the phase of this modulation is related to the azimuthal phase. As a compelling example, we realize the merging transition of two topological defects by varying the lattice beam intensities [@zhu_simulation_2007] and map out the transition by following their position. We consider a system of ultracold atoms in a hexagonal optical lattice [@becker_ultracold_2010; @flaschner_experimental_2016] (see Fig. \[fig:2\_detectionScheme\](a)). The lattice consists of two triangular sublattices, labeled as A and B, with the annihilation operators for the momentum states ${{\hspace{0.5pt}{\mathbf{k}}}}$ on the respective sublattices $c_{{{\hspace{0.5pt}{\mathbf{k}}}},A}$ and $c_{{{\hspace{0.5pt}{\mathbf{k}}}},B}$. Neglecting a momentum-dependent energy offset, the tight-binding Hamiltonian is given by $H_{\rm tb}=\sum_{{\hspace{0.5pt}{\mathbf{k}}}} H_{\rm tb,{{\hspace{0.5pt}{\mathbf{k}}}}}$, where $$\begin{aligned} H_{\rm tb,{{\hspace{0.5pt}{\mathbf{k}}}}}&=\epsilon_{{\hspace{0.5pt}{\mathbf{k}}}} {\begin{pmatrix}}c_{{{\hspace{0.5pt}{\mathbf{k}}}},A}^\dag & c_{{{\hspace{0.5pt}{\mathbf{k}}}},B}^\dag {\end{pmatrix}}{\begin{pmatrix}}\cos(\theta_{{\hspace{0.5pt}{\mathbf{k}}}}) & \sin(\theta_{{\hspace{0.5pt}{\mathbf{k}}}})e^{-i\phi_{{\hspace{0.5pt}{\mathbf{k}}}}} \\ \sin(\theta_{{\hspace{0.5pt}{\mathbf{k}}}})e^{i\phi_{{\hspace{0.5pt}{\mathbf{k}}}}} & -\cos(\theta_{{\hspace{0.5pt}{\mathbf{k}}}}) {\end{pmatrix}}{\begin{pmatrix}}c_{{{\hspace{0.5pt}{\mathbf{k}}}},A} \\ c_{{{\hspace{0.5pt}{\mathbf{k}}}},B}{\end{pmatrix}}\label{eq:hamiltonianThetaPhi} \end{aligned}$$ and describes a pseudo-spin with azimuthal phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ and polar angle $\theta_{{\hspace{0.5pt}{\mathbf{k}}}}$ for each 
momentum. The band distance is $2\epsilon_{{\hspace{0.5pt}{\mathbf{k}}}}$. The eigenstates with annihilation operators $$\begin{aligned} \begin{split} c_{{{\hspace{0.5pt}{\mathbf{k}}}},+}&=\cos(\theta_{{\hspace{0.5pt}{\mathbf{k}}}}/2)c_{{{\hspace{0.5pt}{\mathbf{k}}}},A}+\sin(\theta_{{\hspace{0.5pt}{\mathbf{k}}}}/2)e^{-i\phi_{{\hspace{0.5pt}{\mathbf{k}}}}}c_{{{\hspace{0.5pt}{\mathbf{k}}}},B}\\ c_{{{\hspace{0.5pt}{\mathbf{k}}}},-}&=-\sin(\theta_{{\hspace{0.5pt}{\mathbf{k}}}}/2)e^{i\phi_{{\hspace{0.5pt}{\mathbf{k}}}}}c_{{{\hspace{0.5pt}{\mathbf{k}}}},A}+\cos(\theta_{{\hspace{0.5pt}{\mathbf{k}}}}/2)c_{{{\hspace{0.5pt}{\mathbf{k}}}},B} \label{eq:lowerUpperBandOperators} \end{split} \end{aligned}$$ describe the two lowest bands of the lattice. The dependence of $\theta_{{\hspace{0.5pt}{\mathbf{k}}}}$, $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ and $\epsilon_{{\hspace{0.5pt}{\mathbf{k}}}}$ on the tight-binding parameters $J_{AB}$, $J_{AA}$, $J_{BB}$, $\Delta_{AB}$ (see Fig. \[fig:2\_detectionScheme\]) is given in [@supmat]. The vortices of $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ indicate the topological defects of the Bloch states. In order to access the phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$, we employ a new method, which is based on the interference of the A and B sublattices in time-of-flight (ToF) expansion after off-resonant lattice modulation. The density distribution after ToF is up to a Wannier envelope given by $$\begin{aligned} n({\mathbf{k}})=n_{{{\hspace{0.5pt}{\mathbf{k}}}},A}+n_{{{\hspace{0.5pt}{\mathbf{k}}}},B} + ({\langle c_{{{\hspace{0.5pt}{\mathbf{k}}}},A}^\dag c_{{{\hspace{0.5pt}{\mathbf{k}}}},B} \rangle}+c.c.) 
{\addtocounter{equation}{1}\tag{\theequation}}\label{effTofExp}{\quad\text{,}}\end{aligned}$$ where $n_{{{\hspace{0.5pt}{\mathbf{k}}}},A}={\langle c_{{{\hspace{0.5pt}{\mathbf{k}}}},A}^\dag c_{{{\hspace{0.5pt}{\mathbf{k}}}},A} \rangle}$ and $n_{{{\hspace{0.5pt}{\mathbf{k}}}},B}={\langle c_{{{\hspace{0.5pt}{\mathbf{k}}}},B}^\dag c_{{{\hspace{0.5pt}{\mathbf{k}}}},B} \rangle}$ are the occupations of the two sublattices. Rewriting it in terms of the eigenstates corresponding to the upper and lower bands yields $$\begin{aligned} n({\hspace{0.5pt}{\mathbf{k}}}) &= A_{{\mathbf{k}}, +} n_{{\hspace{0.5pt}{\mathbf{k}}}, +} + A_{{\mathbf{k}}, -} n_{{\hspace{0.5pt}{\mathbf{k}}}, -} + (B_{{\hspace{0.5pt}{\mathbf{k}}}} \langle c_{{\hspace{0.5pt}{\mathbf{k}}}, +}^{\dagger} c_{{\hspace{0.5pt}{\mathbf{k}}}, -}\rangle + c.c.) \label{eq:nofkthetaphi} \end{aligned}$$ where $n_{{{\hspace{0.5pt}{\mathbf{k}}}},+}={\langle c_{{{\hspace{0.5pt}{\mathbf{k}}}},+}^\dag c_{{{\hspace{0.5pt}{\mathbf{k}}}},+} \rangle}$ and $n_{{{\hspace{0.5pt}{\mathbf{k}}}},-}={\langle c_{{{\hspace{0.5pt}{\mathbf{k}}}},-}^\dag c_{{{\hspace{0.5pt}{\mathbf{k}}}},-} \rangle}$ are the occupations of the two bands. The prefactors $A_{{\mathbf{k}}, \pm} = (1 \mp \cos \phi_{{\hspace{0.5pt}{\mathbf{k}}}} \sin \theta_{{\hspace{0.5pt}{\mathbf{k}}}})/2$ and $B_{{\hspace{0.5pt}{\mathbf{k}}}}=(\cos(\theta_{{\hspace{0.5pt}{\mathbf{k}}}})\cos(\phi_{{\hspace{0.5pt}{\mathbf{k}}}})+i\sin(\phi_{{\hspace{0.5pt}{\mathbf{k}}}}))/2$ contain information about the azimuthal phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$, which however cannot be disentangled by a single measurement of the density. The crucial idea is to extract the phase from a measurement of the temporal response to a driving, which affects the terms differently. We employ off-resonant modulation of the laser beam intensities with a red-detuned driving frequency $\omega<\Delta_{AB}$ (see Fig. \[fig:2\_detectionScheme\](b)). 
For a discussion of other driving regimes see the supplementary material [@supmat]. For weak off-resonant driving, the band occupations remain largely unaffected, while the correlator ${\langle c_{{{\hspace{0.5pt}{\mathbf{k}}}},+}^\dag c_{{{\hspace{0.5pt}{\mathbf{k}}}},-} \rangle}$ becomes time dependent. The macroscopic occupation of the lowest band serves as a homodyne amplification of the very small induced occupation of the upper band, such that the correlator can be measured. To first order in perturbation theory, and neglecting a fast oscillating part (see [@supmat]), the resulting density oscillation after ToF is: $$\begin{aligned} n({\mathbf{k}},t)& \approx n_{{\rm eq},{{\hspace{0.5pt}{\mathbf{k}}}}}- \delta n_{{\hspace{0.5pt}{\mathbf{k}}}} \sin(\omega t +\chi_{{\hspace{0.5pt}{\mathbf{k}}}}) {\quad\text{,}}\label{eq:ntof} \end{aligned}$$ where $n_{{\rm eq},{{\hspace{0.5pt}{\mathbf{k}}}}}$ is the ToF density without lattice modulation and $$\begin{aligned} \chi_{{\hspace{0.5pt}{\mathbf{k}}}}&={\rm Arg}{\left[ \cos(\phi_{{\hspace{0.5pt}{\mathbf{k}}}}) + i P_{\hspace{0.5pt}{\hspace{0.5pt}{\mathbf{k}}}} \sin(\phi_{{\hspace{0.5pt}{\mathbf{k}}}}) \right]}{\quad\text{.}}\label{eq:chik} \end{aligned}$$ The oscillation amplitude $\delta n_{{\hspace{0.5pt}{\mathbf{k}}}}$ and the coefficient $P_{{\hspace{0.5pt}{\mathbf{k}}}}$ are defined in [@supmat]. The measured phase $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ is closely related to the azimuthal phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$. In particular, $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ depends smoothly on $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ and the positions of their vortices are identical. Therefore, the measurement of $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ gives a reliable determination of the positions and winding numbers of the topological defects of the Bloch states.
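That the measured phase shares the vortex positions and winding numbers of the azimuthal phase can be checked directly from the relation $\chi = {\rm Arg}[\cos\phi + i P \sin\phi]$: for any $0 < P \le 1$, $\chi$ is a smooth, monotonic deformation of $\phi$. A minimal numerical sketch (the loop discretization and the value of $P$ are illustrative):

```python
import numpy as np

# Measured phase chi as a function of the azimuthal phase phi and distortion P.
def chi(phi, P):
    return np.angle(np.cos(phi) + 1j * P * np.sin(phi))

# Walk a closed loop around a vortex of phi: phi winds by 2*pi along the loop.
phi_loop = np.linspace(0.0, 2.0 * np.pi, 2001)

def winding(angles):
    # Accumulated (unwrapped) phase along the loop, in units of 2*pi.
    return np.round(np.sum(np.diff(np.unwrap(angles))) / (2.0 * np.pi))

w_phi = winding(phi_loop)
w_chi = winding(chi(phi_loop, P=0.5))   # even a strong distortion (P far from 1)
print(w_phi, w_chi)                     # -> 1.0 1.0
```

The winding number extracted from $\chi$ equals that of $\phi$ for any positive $P$, which is why the vortex detection is insensitive to the exact choice of driving frequency.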
The distortion factor $P_{{\hspace{0.5pt}{\mathbf{k}}}}$ is given by $P_{{\hspace{0.5pt}{\mathbf{k}}}}\approx \omega/\Delta_{AB}$, and can be made close to one for near-resonant driving ($\omega\approx\Delta_{AB}$). ![Measurement of the azimuthal phase profile. a) Expected azimuthal phase $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ in momentum space. The black hexagon marks the first Brillouin zone. b) Experimentally obtained phase $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$. The positions and winding numbers of the topological defects are correctly determined. c) Quantitative comparison of the phases along the high-symmetry path indicated by the solid, blue lines in a). The data are averaged over the three equivalent paths, of which only one is shown in a). The parameters are $\omega=2\pi\cdot 5500$ Hz, $J_{AB}=h\cdot 520$ Hz, $J_{AA}=h\cdot 99$ Hz, $J_{BB}\approx 0$ and $\Delta_{AB}=h\cdot 6056$ Hz (see [@supmat]). The laser beam intensities are modulated by $\pm 20\%$, leading to a modulation of $J_{AB}$ by $\pm 18\%$ and of $\Delta_{AB}$ by $\pm 22\%$. The experimentally measured $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ matches the numerical expectation well, which we also find for other red-detuned driving frequencies. Furthermore, for the chosen parameters, the difference between $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ and $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ is experimentally indiscernible.[]{data-label="fig:3_comparisonExperimentTheory"}](fig3.pdf){width="\linewidth"} In Fig. \[fig:3\_comparisonExperimentTheory\]b we show the full phase profile obtained with the method described above. We start with a spin-polarized cloud of $10^5$ fermionic potassium atoms. We adiabatically ramp up a hexagonal optical lattice in a boron-nitride configuration with finite offset $\Delta_{AB}$, such that the lowest band is completely filled.
We then suddenly apply a modulation of the laser beam intensities and measure the resulting ToF density distribution after variable modulation times. We extract $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ from a pixel-wise fit to the data and thus obtain the phase information with high resolution throughout momentum space. We compare the measurement to the prediction from perturbation theory for $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ and find very good agreement (Fig. \[fig:3\_comparisonExperimentTheory\]). Due to the near-resonant choice of the driving frequency, the distortion between $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$ and $\chi_{{\hspace{0.5pt}{\mathbf{k}}}}$ is very small and we effectively measure $\phi_{{\hspace{0.5pt}{\mathbf{k}}}}$. This phase profile illustrates the threefold symmetry of the eigenstates, which stems from the high symmetry of the lattice in real space [@lim_2015; @flaschner_experimental_2016; @li_bloch_2016]. ![Mapping out the merging transition of topological defects. a) Phase profile for two different beam imbalances ($I^{(1)}/I^{(2)}=1.14$ and $I^{(1)}/I^{(2)}=1.23$). The hexagon marks the Brillouin zone. The blue boxes show the cut along $k_x$, which is displayed in b). b) Movement of the topological defects (visible as a $\pi$ phase jump in the cut) as a function of beam imbalance. The defects move along the K-M-$\rm K'$ path and merge at the M point for a critical beam imbalance $I^{(1)}/I^{(2)}$ = 1.18. Lattice parameters are as in Fig. \[fig:3\_comparisonExperimentTheory\], and the driving frequency is $\omega = 2\pi\cdot 4500$ Hz. The black line shows the positions of the defects as obtained from the local minima in a band structure calculation. []{data-label="fig:4_mergingTransition"}](fig4.pdf){width="\linewidth"} In a second set of experiments, we use the same method to fully map out the merging transition of the topological defects.
This can be achieved by changing the relative beam intensities of the optical lattice [@zhu_simulation_2007] or via lattice shaking [@koghee_2012], which is equivalent to applying strain in a solid state material such as graphene [@amorim_novel_2016]. While strain is limited in a solid state system, and in particular graphene cannot withstand the strain needed to reach the merging transition [@montambaux_merging_2009], these limitations do not apply to optical lattices, and the merging transition was previously observed [@tarruell_creating_2012; @duca_2014]. In a graphene lattice with inversion symmetry, the band gap is protected by symmetry and does not open until the topological defects merge [@zhu_simulation_2007]. In this case, the merging transition is accompanied by a transition from a semimetal to an insulator, and the clear effect on the band structure can be used to detect the transition [@tarruell_creating_2012]. In a lattice with broken inversion symmetry as in our case, there is always a band gap. The merging transition is then purely topological in nature, with a dramatic change only in the phase profile. Our method is therefore well suited to map out this phase transition. The positions of the topological defects agree very well with the expectation from band structure calculations (Fig. \[fig:4\_mergingTransition\]). We find the transition at a critical beam imbalance of $I^{(1)}/I^{(2)}=1.18$ ($I^{(3)}=I^{(2)}$ throughout), which corresponds to a critical imbalance of the next-neighbor tunneling elements of $J_{AB}^{(1)}/J_{AB}^{(2)}=2.05$. This critical ratio of tunneling elements is close to the prediction of $J_{AB}^{(1)}/J_{AB}^{(2)}=2.0$, which is valid if the next-nearest-neighbour tunneling can be ignored. This can be seen from a generalization of the argument in Refs. [@hasegawa_zero_2006; @zhu_simulation_2007; @pereira_tight-binding_2009] to a lattice with sublattice offset.
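The critical ratio of 2.0 can be reproduced in a minimal nearest-neighbour honeycomb sketch: Dirac points exist as long as the three bond amplitudes satisfy the triangle inequality, and they annihilate at the M point once one amplitude reaches the sum of the other two. The amplitudes below are illustrative, not our calibrated lattice parameters, and next-nearest-neighbour tunneling and the sublattice offset are neglected:

```python
import numpy as np

# Nearest-neighbour honeycomb Bloch factor f(k) = sum_j t_j exp(i k . delta_j);
# in the inversion-symmetric model the gap is 2*min_k |f(k)|, vanishing at Dirac points.
DELTAS = np.array([[0.0, 1.0],
                   [np.sqrt(3) / 2, -0.5],
                   [-np.sqrt(3) / 2, -0.5]])

def min_coupling(t1, t2, t3, n=400):
    ks = np.linspace(-np.pi, np.pi, n)
    kx, ky = np.meshgrid(ks, ks)
    f = np.zeros_like(kx, dtype=complex)
    for t, d in zip((t1, t2, t3), DELTAS):
        f += t * np.exp(1j * (kx * d[0] + ky * d[1]))
    return np.abs(f).min()

# Below the transition (t1 < t2 + t3) the Dirac points survive and |f| reaches ~0;
# above it (t1 > t2 + t3) a gap of size |t1 - t2 - t3| opens at the M point.
print(min_coupling(1.9, 1.0, 1.0))   # ~0 (gapless)
print(min_coupling(2.1, 1.0, 1.0))   # ~0.1 (gapped)
```

With $t_2 = t_3$, the condition $t_1 = t_2 + t_3$ is exactly the ratio $t_1/t_2 = 2$ quoted above; the experiment finds the transition slightly above this value because of the finite $J_{AA}$ and $\Delta_{AB}$.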
Our measurements show that the topological defects are robust under the change of lattice parameters and can only be destroyed by annihilation of the two defects of opposite winding. Due to its versatility our method could be extended in several directions. It should be applicable to Floquet systems with a large Floquet driving frequency, such that a stroboscopic but still time-resolved measurement is possible. As our method does not couple to the spin degree of freedom, it could be used to detect spin-dependent Dirac points or non-Abelian gauge fields [@osterloh_cold_2005; @ruseckas_non-abelian_2005] by combining it with Stern-Gerlach separation. Our method is also suited to detect higher winding numbers of the pseudospin textures, which are predicted for driven graphene [@sentef_theory_2015] and bilayer graphene [@min_chiral_2008; @kumar_flat_2013]. Finally, using two driving frequencies, an extension to lattices with a three-atomic basis such as the Lieb lattice [@taie_coherent_2015] or generally to multiband systems is conceivable. As our method works with a completely filled band instead of accelerated wave packets, it might be a suitable starting point for the characterization of the topology in interacting systems. We thank Jean-Noel Fuchs and Lih-King Lim for fruitful discussions. We acknowledge funding from the Deutsche Forschungsgemeinschaft through the excellence cluster The Hamburg Centre for Ultrafast Imaging - Structure, Dynamics and Control of Matter at the Atomic Scale, the GrK 1355 and the SFB 925. B. R. acknowledges financial support from the European Commission (Marie Curie Fellowship). 
References (entries recoverable from the extracted bibliography by DOI only, in their original order):

1. doi:10.1103/PhysRevLett.49.405
2. doi:10.1098/rspa.1984.0023
3. doi:10.1103/RevModPhys.82.1959
4. doi:10.1103/PhysRevLett.45.494
5. doi:10.1103/RevModPhys.82.3045
6. doi:10.1038/nphys3803
7. doi:10.1038/nature10871
8. doi:10.1088/2053-1583/3/2/024005
9. doi:10.1021/nl300512q
10. doi:10.1038/nphys2790
11. doi:10.1126/science.1259052
12. doi:10.1126/science.aad5812
13. doi:10.1103/PhysRevA.85.033620
14. doi:10.1038/nature13915
15. doi:10.1038/nphys3171
16. doi:10.1103/PhysRevLett.113.045303
17. doi:10.1126/science.aad4568
18. doi:10.1103/PhysRevLett.98.260402
19. doi:10.1088/1367-2630/12/6/065025
20. (no DOI available)
21. doi:10.1103/PhysRevA.92.063627
22. doi:10.1103/PhysRevA.85.023637
23. doi:10.1016/j.physrep.2015.12.006
24. doi:10.1103/PhysRevB.80.153412
25. doi:10.1103/PhysRevB.74.033413
26. doi:10.1103/PhysRevB.80.045401
27. doi:10.1103/PhysRevLett.95.010403
28. doi:10.1103/PhysRevLett.95.010404
29. doi:10.1038/ncomms8047
30. doi:10.1103/PhysRevB.77.155416
31. doi:10.1103/PhysRevB.87.241108
32. doi:10.1126/sciadv.1500854

[^1]: These authors have contributed equally to this work

[^2]: These authors have contributed equally to this work
--- abstract: 'Elementary Algebraic Geometry can be described as the study of zeros of polynomials with integer degrees; this idea can be naturally carried over to ‘polynomials’ with rational degrees. This paper explores affine varieties, tangent space and projective space for such polynomials and notes the differences and similarities between rational and integer degrees. The line bundles $\curly{O}(n),n\in\ds{Q}$ are also constructed and their Čech cohomology computed.' author: - 'Harpreet Singh Bedi   [bedi@alfred.edu]{}' date: 15 Aug 2019 title: '$\deg \ds{Q}$ Algebraic Geometry' ---
--- abstract: '\[sec:Abstract\] A new approach to solving a class of rank-constrained semi-definite programming (SDP) problems, which appear in many signal processing applications such as transmit beamspace design in multiple-input multiple-output (MIMO) radar, downlink beamforming design in MIMO communications, generalized sidelobe canceller design, phase retrieval, etc., is presented. The essence of the approach is the use of the underlying algebraic structure enforced in such problems by other practical constraints such as, for example, a null-shaping constraint. According to this approach, instead of relaxing the non-convex rank-constrained SDP problem to a feasible set of positive semidefinite matrices, we restrict it to a space of polynomials whose dimension is equal to the desired rank. The resulting optimization problem is then convex, as its solution is required to be full rank, and can be efficiently and exactly solved. A simple matrix decomposition is needed to recover the solution of the original problem from the solution of the restricted one. We show how this approach can be applied to solving some important signal processing problems that contain null-shaping constraints. As a byproduct of our study, the conjugacy of beamforming and parameter estimation problems leads us to the formulation of a new and rigorous criterion for signal/noise subspace identification. Simulations are performed for the problem of rank-constrained beamforming design and show exact agreement of the solution with the proposed algebraic structure, as well as significant performance improvements in terms of sidelobe suppression compared to the existing methods.' author: - 'Matthew W. Morency  and Sergiy A.
Vorobyov,  [^1]' title: 'An Algebraic Approach to a Class of Rank-Constrained Semi-Definite Programs With Applications' --- [**[*Index Terms –*]{} Semi-definite programming, rank-constrained optimization, algebra, polynomial ideals.**]{}

Introduction {#sec:Intro}
============

According to the signal space concept, a signal is represented as a vector in a vector space [@Wozencraft], [@VanTrees]. Working in vector spaces remains the most popular approach when designing signal processing models and algorithms. As a consequence, much of signal processing is written in the language of linear algebra, that is, the algebra that deals with vectors and vector spaces, and whose operations are vector addition and composition of linear maps. This language has two appealing features. First, many problems are well described or approximated by linear models. Second, linear algebra is well behaved in the sense that most questions which can be posed within linear algebra can generally be easily answered within this framework. As a further development of signal processing toward the use of advanced algebra, frame theory has found fruitful applications in efficient signal/image representation and filter-bank design [@Frames], [@Mallat]. More recently, graph signal processing (GSP) has been introduced to address the need for signal processing methods for large-scale social network, big data, or biological network analysis [@Shuman]–[@Moura2]. Since signals in GSP are defined over a discrete domain whose elementary units (vertices) are related to each other through a graph, algebra beyond linear algebra is gaining more and more attention. In fact, new applications demonstrate that not all questions that are of interest to signal processing engineers can be readily solved by the tools of linear algebra, even if they have their genesis in linear models.
To this end, some classic and more recent signal processing problems that have been addressed within a linear algebra framework can be more accurately, more efficiently, and more completely addressed with the use of advanced algebraic tools, which are not yet widely known and used within signal processing. Examples of such problems are matrix completion [@MC], [@Franz1], which has received an enormous amount of attention, especially since the advent of the Netflix prize, and the related phase retrieval problem [@VoroCandes], [@Franz2]. Interestingly, the formulation of the matrix completion problem has much in common with some other important problems in signal processing. To name a few, optimal downlink beamforming design in multiple-input multiple-output (MIMO) wireless communications [@BengtOtt], [@HuangPalomar]; robust adaptive beamforming design for the general-rank signal model [@ArashSergiy1]; transmit beamspace design in MIMO radar [@Fuhrmann]–[@ArashJournal]; interference alignment [@Dimakis]; and even the classic generalized sidelobe canceller (GSC) design [@Frost], [@GriffithsJim] are all related to each other in the sense that they can be formulated as rank-constrained semi-definite programming (SDP) problems. The rank constraint makes such problems non-convex and, in general, very difficult to solve. In the case of the rank-one constraint, SDP relaxation and randomization techniques have been used to address such problems following the landmark work of Goemans and Williamson [@GW]. That is, the rank-one constraint can be dropped, the resulting convex SDP problem solved, and a rank-one solution recovered through a randomization procedure.
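The relax-and-randomize procedure just described can be sketched as follows. To keep the example self-contained, no SDP solver is called: for max-cut on the 4-cycle graph, the optimal solution of the relaxation is the rank-one matrix $X = xx^T$ with $x = (1,-1,1,-1)^T$, so we hand-construct it (an assumption made for illustration; in general $X$ would come from a solver) and demonstrate only the random-hyperplane rounding step that recovers a feasible rank-one solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Max-cut on the 4-cycle C4; the optimal cut value is 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Hand-constructed optimal solution of the SDP relaxation (rank one for C4).
x = np.array([1.0, -1.0, 1.0, -1.0])
X = np.outer(x, x)                       # PSD with unit diagonal

# Factor X = V^T V; the columns of V are the relaxed unit vectors.
w, U = np.linalg.eigh(X)
V = np.sqrt(np.clip(w, 0, None))[:, None] * U.T

def random_round(V, rng):
    """Goemans-Williamson rounding: split vertices by a random hyperplane."""
    r = rng.standard_normal(V.shape[0])
    s = np.sign(V.T @ r)
    s[s == 0] = 1.0                      # break ties deterministically
    return s

def cut_value(s, edges):
    return sum((1.0 - s[i] * s[j]) / 2.0 for i, j in edges)

best = max(cut_value(random_round(V, rng), edges) for _ in range(50))
print(best)   # 4.0: rounding recovers the optimal cut for this instance
```

Because this toy relaxation is already rank one, the rounding is exact; for a general (higher-rank) solver-produced $X$, the identical rounding step yields the cut with the expected 0.878 approximation guarantee of [@GW].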
However, the guarantees proven for the problem of finding the maximum cut in a graph in [@GW] (see also [@TomMagazin], [@KevinMe] for a more generic exposition and other applications) apply to rank-one constrained problems, while similar guarantees have not, to our knowledge, been shown to hold for general rank-constrained problems. Fortunately, it may not be necessary to rely on relaxation in order to solve some rank-constrained SDP problems of high practical importance. Indeed, it is typical, at least in many of the aforementioned problems, that a strong underlying algebraic structure is enforced by design requirements and other constraints in the problems. Such underlying algebraic structures allow one to significantly simplify the problems. In this paper,[^2] the general rank constraint in a class of rank-constrained SDP problems is made implicit by restricting the dimension of the problem to be equal to the desired rank, while requiring the solution to be full-rank. Specifically, we develop an approach by which the dimension of the rank-constrained SDP problems can be restricted by exploiting the underlying polynomial structure of the problems. In this respect, advanced algebra (beyond linear algebra) is instrumental and allows one to solve a class of rank-constrained SDP problems in a space of polynomials. The space of polynomials has desirable properties by its definition and forms a feasible and convex restriction of the original optimization problem. The resulting SDP problem is then convex and can be efficiently solved. An additional advantage is the lower dimension of the resulting problem, which is equal to the rank required by the rank constraint. In this way, many of the aforementioned problems can be solved very efficiently. The rest of the paper is organized as follows. Section II defines and explores some important algebraic preliminaries such as groups, subgroups, rings, and ideals, which are instrumental in developing our approach.
The considered type of rank-constrained SDP problems is introduced in Section III, and a basic motivating result is proved. Section IV revisits some classical signal processing problems such as phase retrieval, downlink beamforming design for MIMO communications, and transmit beamspace design for MIMO radar, and shows that the corresponding optimization problems fit the structure of the considered type of rank-constrained SDP problems perfectly. The underlying algebraic structure enforced in the considered type of rank-constrained SDP problems by the null-shaping constraint is investigated in detail in Section V. In Section VI, it is shown that the celebrated GSC is a special case of the addressed type of rank-constrained SDP problems, and some generalizations of the GSC are presented. Finally, the reformulation of such rank-constrained SDP problems into corresponding convex problems is given in Section VII, and algebraic consequences of the conjugacy of beamforming and parameter estimation problems are discussed in Section VIII within the insightful framework of this paper. Our simulation results for the rank-constrained beamforming design are given in Section IX, followed by conclusions in Section X. Algebraic Preliminaries ======================= This paper makes key use of several algebraic structures which are not widely used within the engineering literature. An overview of these structures, their corresponding notations, and the relationships between them is thus included to aid in the understanding of the content of the paper [@Barvinok]–[@IVA]. A group is a set of elements $\cal{G}$ with a binary operation $\bullet$ with the following properties: 1. $a \bullet b \in \cal{G}$, $\forall a,b \in \cal{G}$ 2. $\exists e \in \cal{G}\ |$ $\ a \bullet e = e \bullet a = a,\ \forall a \in \cal{G}$ 3. $\forall a \in \cal{G}$, $\exists b \in \cal{G} |$ $a \bullet b = b \bullet a = e$ 4.
$(a \bullet b)\bullet c = a\bullet(b \bullet c)$ \[def:group\] If a group $\cal{G}$ is commutative with respect to $\bullet$, then $\cal{G}$ is said to be Abelian. As an example, consider the set of all permutations of an $n$-tuple. This set is a group (in fact, it is known as the symmetric group, $S_n$), with the operation of composition of functions.[^3] In this case, the identity element is the identity permutation, which takes each element of the $n$-tuple to itself. This is not an Abelian group, as composition of functions depends, generally speaking, on the order of the composition. An example of an Abelian group is the set of integers under addition. Every vector space is an Abelian group with respect to vector addition, a property that will be used later. Building on this definition, we next introduce the concept of a subgroup. A subgroup $H < \cal{G}$ is a subset of $\cal{G}$ for which all of the group properties hold with respect to $\bullet$. Thus, every subgroup is a group unto itself which is also contained in $\cal{G}$. A subgroup of $\cal{G}$ which does not contain every element of $\cal{G}$ is said to be a proper subgroup of $\cal{G}$. A subgroup of an Abelian group is Abelian by definition. Since all vector spaces are Abelian groups with respect to vector addition, it follows that every subspace of a vector space is an Abelian subgroup of that space with respect to addition. Next we define a ring, which will then be used for defining an ideal and then a polynomial ideal. Polynomial ideals, in turn, will be used in the paper for developing our approach. A ring is a set $\cal{R}$ with operations $\bullet$ and $\diamond$ with the following properties: 1. $(\cal{R},\bullet)$ is an Abelian group 2. $\exists 1 \in \cal{R}\ |$ $\ 1\diamond r = r \diamond 1 = r,\ \forall r \in \cal{R}$ 3. $(a \diamond b) \diamond c = a \diamond (b \diamond c), \forall a,b,c \in \cal{R}$ 4. $a\diamond(b \bullet c) = (a\diamond b)\ \bullet (a\diamond c), \forall a,b,c \in \cal{R}$ 5.
$(a \bullet b) \diamond c = (a\diamond c) \bullet (b\diamond c), \forall a,b,c \in \cal{R}$ \[def:ring\] Properties $2$ and $3$ of Definition \[def:ring\] taken together mean that a ring $\cal{R}$ is a *monoid* under the multiplication operation $\diamond$. Properties $4$ and $5$ of Definition \[def:ring\], again taken together, simply state that in a ring, multiplication $\diamond$ is left and right distributive over addition $\bullet$. We will only consider commutative rings, which have the additional property that $a \diamond b = b\diamond a, \forall a,b \in \cal{R}$.[^4] The major substructures of rings are known as ideals. An ideal $\mathcal{I}$ in a commutative ring $R$ is a subset of $R$ with the following properties: 1. $\mathcal{I}$ is a subgroup of $R$ 2. $\forall a \in \mathcal{I},\ r \in R, a \diamond r \in \mathcal{I}, r \diamond a \in \mathcal{I}$ \[def:ideal\] A relevant example of a ring with a non-trivial ideal, which will be used in the contribution of this paper, is the [*ring of univariate polynomials over a field $\mathbb{K}$*]{}, which we will denote as $\mathbb{K}[x]$ (read as “K adjoin x”). To show that this is a commutative ring, we need to show that the relevant properties of Definitions \[def:group\] and \[def:ring\] hold. The fact that the polynomials are an Abelian group under addition is obvious, since $P(x) + Q(x) = Q(x) + P(x) = R(x) \in \mathbb{K}[x]$, $0 \in \mathbb{K}[x]$, and the additive inverse of a polynomial $P(x)$ is trivially $-P(x) \in \mathbb{K}[x]$. To show that it is a commutative ring, we note that $A(x)B(x) = B(x)A(x) = C(x) \in \mathbb{K}[x]$ for any polynomials $A(x), B(x)$ with coefficients in $\mathbb{K}$. The set $\mathbb{K}[x]$ is associative with respect to both multiplication and addition, and polynomial multiplication is distributive over polynomial addition since $A(x)(B(x) + C(x)) = A(x)B(x) + A(x)C(x)$ and $(A(x) + B(x))C(x) = A(x)C(x) + B(x)C(x)$.
Finally, we note that the set $\mathbb{K}[x]$ has the multiplicative identity $1$, thus completing the proof. It is important to mention that polynomials in $\mathbb{K}[x]$ also form a vector space over $\mathbb{K}$. Without restriction on the degree of the polynomials, this space has infinite dimension, and as such, a finite dimensional vector space of polynomials implies a restriction of the degree of the polynomials to a finite number. This vector space has a basis of $\{x^i,\ 0\leq i \leq N-1\}$. We denote the space of polynomials with degree strictly less than $N$ by $\mathbb{K}_{N}[x]$. Fig. \[fi:coeffspace\] demonstrates that the subscript $N$ corresponds to the dimension of the vector space, where $N-1$ is the restriction on the degree of the polynomials. Thus, there should be no confusion with the definition of $\mathbb{K}_N[x]$ being the space of polynomials of degree strictly less than $N$. Hereafter in the paper, we consider the case $\mathbb{K} = \mathbb{C}$. ![Depiction of an element of $\mathbb{C}_3[x]$. Notice that a polynomial of degree 2 corresponds to a 3-dimensional vector. Hence, $\mathbb{C}_3[x]$ denotes the space of all polynomials of degree strictly less than 3.[]{data-label="fi:coeffspace"}](coeffspace.pdf){width="5.9cm"} Building on Remark $2$ and the definition of an ideal (Definition \[def:ideal\]), the question arises of when a subset of elements in $\mathbb{C}[x]$ forms an ideal. Let $\mathcal{I}$ be the set of all univariate polynomials in $\mathbb{C}[x]$ with a root at $x_0 \in \mathbb C$. This set forms an ideal in the ring $\mathbb{C}[x]$, which we refer to hereafter as a [*polynomial ideal*]{}. To see this, consider a polynomial with a single root at $x_0$. By Euclid’s division algorithm, a univariate polynomial has a root at a point $x_0$ if and only if it can be written as $P(x) = Q(x)(x - x_0)$. Consider Definition \[def:ideal\] and let $P(x) = Q(x)(x - x_0)$. Then $P(x)R(x) = Q(x)R(x)(x-x_0)$, which is again in $\mathcal{I}$.
The polynomials with a root at $x_0$ are also clearly a subgroup of $\mathbb{C}[x]$ since $P_1(x) - P_2(x) \in \mathcal I$, where $P_1(x) \triangleq Q_1(x)(x-x_0)$, $P_2(x) \triangleq Q_2(x)(x-x_0)$ for any $Q_1(x),Q_2(x) \in \mathbb{C}[x]$. This ideal is also an infinite dimensional vector space over $\mathbb{C}$. Let $A(x) = P(x)(x-x_0)$ and $B(x) = Q(x)(x-x_0)$. It is easy to show that $$\begin{aligned} \alpha(A(x) + B(x)) &= \alpha A(x)+\alpha B(x), \alpha \in \mathbb{C}, A(x),B(x) \in \mathcal{I} \nonumber \\ &= \alpha(P(x) + Q(x))(x-x_0) \nonumber \\ &= \alpha R(x)(x-x_0),\ R(x) = P(x) + Q(x). \nonumber\end{aligned}$$ Furthermore, it is easy to show that if the polynomials $A(x)$ and $B(x)$ are coprime, that is, if their greatest common divisor (gcd) is a constant, then the ideal generated by them is the entire ring. To show this, we invoke Bezout’s identity $$\begin{aligned} ap + bq &= \mathrm{gcd}(p,q) \nonumber\end{aligned}$$ for some $a,b \in \cal{R}$, where $\cal{R}$ is a principal ideal domain (a ring in which every ideal is generated by a single element). Since every univariate polynomial ideal is generated by a single element $f \in \mathcal{I}$ [@Vinberg], [@IVA], the univariate polynomials form a principal ideal domain. Moreover, if $1 \in \mathcal{I}$, then $\mathcal{I}$ is the whole ring since $r\diamond 1, 1 \diamond r \in \mathcal{I}, \forall r\in R$. Thus, the only non-trivial subspaces which form an ideal in $\mathbb{C}[x]$ are the subspaces of polynomials with roots in common. Since every ideal in $\mathbb{C}[x]$ is principal, i.e., can be generated by a single element $f \in \mathbb{C}[x]$, we denote $\mathcal{I} \subset \mathbb{C}[x]$ as $\langle f \rangle \triangleq \{ f \cdot g,\ \forall g \in \mathbb{C}[x]\}$, read as “the ideal generated by” $f$. The generator of an ideal $\mathcal{I} \subset \mathbb{C}[x]$ is given straightforwardly as $f = \prod_{k} (x - x_k)$.
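Since the generator of a univariate polynomial ideal is found in practice via the gcd, the principal-ideal property can be illustrated with a short Euclid's-algorithm sketch in Python. This is a toy illustration with hypothetical coefficient values, using numpy's convention of listing coefficients from lowest to highest degree:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def poly_gcd(a, b, tol=1e-9):
    """Euclid's algorithm on coefficient arrays (lowest degree first)."""
    a = np.trim_zeros(np.asarray(a, dtype=complex), 'b')
    b = np.trim_zeros(np.asarray(b, dtype=complex), 'b')
    while b.size and np.max(np.abs(b)) > tol:
        _, r = P.polydiv(a, b)          # remainder of a divided by b
        a, b = b, np.trim_zeros(r, 'b')
    return a / a[-1]                    # normalize to a monic generator

# Two polynomials sharing the single common root x0 = 2:
p1 = P.polymul([-2, 1], [1, 1])         # (x - 2)(x + 1)
p2 = P.polymul([-2, 1], [-3, 1])        # (x - 2)(x - 3)

g = poly_gcd(p1, p2)
# g holds the coefficients of the monic generator f = x - 2 of the ideal
# of polynomials vanishing at x0 = 2, in line with f = prod_k (x - x_k).
```

Because the gcd of any two elements of the ideal with exactly the common roots $x_k$ recovers the generator, the computation mirrors the statement that $\mathbb{C}[x]$ is a principal ideal domain.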
The restriction of this ideal to polynomials with degree less than $N$ is written as $\langle f \rangle |_N$, which is a subspace of $\mathbb{C}^N$. To clarify the notation, it is important to note that $\mathcal{I}|_N \subset \mathbb{C}[x]$ is an algebraic object as a subset of the univariate polynomial ring, while $\langle f \rangle |_N$ is a vector subspace of $\mathbb{C}^N$. The key difference between the two is the notion of scalar multiplication under which each object is closed. Scalars for the former are elements of the polynomial ring itself, while scalars for the latter are elements of $\mathbb{C}$. The structure which unifies these two concepts is known as a [*module*]{}, but it is not defined or used here as it is outside of the scope of the paper. The Problem {#sec:Problem Formulation} =========== We are interested in solving the following homogeneous quadratically constrained quadratic programming problem $$\begin{aligned} \min_{\mathbf {W}}\ &\;\mathrm{tr} \{ \mathbf {W}^H \mathbf {C} \mathbf{W} \} \label{InitialProblema} \\ \mathrm{s.t.} &\;\mathrm{tr} \{ \mathbf{W}^H \mathbf{B}_j \mathbf{W} \} = \delta_j, \; j \in \mathfrak{J} \label{InitialProblemb}\end{aligned}$$ where $\mathbf W \in \mathbb{C}^{N \times K}$ is a matrix (a vector when $K=1$) of optimization variables, $\mathbf C \in \mathbb{C}^{N\times N}$ and $\mathbf B_j \in \mathbb{C}^{N\times N}$ are matrices of coefficients, $\delta_j \in \mathbb{R}$ is a problem specification parameter, $\mathfrak{J}$ is an index set, $\mathrm{tr} \{\cdot\}$ denotes the trace of a square matrix, and $( \cdot)^H$ stands for the Hermitian transpose of a vector or matrix. Practical problems which can be formulated in this form are nowadays ubiquitous in signal processing and its applications.
By introducing the new optimization variable $\mathbf X \triangleq \mathbf W \mathbf W^H$, and using the cyclic property of the trace operator, the problem – can be equivalently rewritten as the following SDP problem $$\begin{aligned} \min_{\mathbf X}\ &\;\mathrm{tr} \{ \mathbf X \mathbf C \} \label{eq:SDP_obj} \\ \mathrm{s.t.} &\;\mathrm{tr} \{\mathbf X \mathbf B_j \} = \delta_j, \; j \in \mathfrak{J} \label{eq:affine} \\ &\;\mathrm{rk} \{\mathbf X \} = K \label{eq:rank}\\ &\;\mathbf X \succeq 0 \label{eq:PSD}\end{aligned}$$ where $\mathrm{rk} \{\cdot\}$ stands for the rank of a matrix and $\mathbf X \succeq 0$ means that matrix $\mathbf X$ is positive semi-definite (PSD). Assuming that $\mathbf W$ is required to be full rank,[^5] the constraint must hold with equality, as follows from the definition of $\mathbf X$ and the constraint . The problem – is a rank-constrained SDP problem. It is non-convex as a direct consequence of the non-convex constraint . The non-convexity of this constraint extends simply from the geometry of the PSD cone. The interior of the cone is exactly the cone of positive definite matrices [@Barvinok], meaning that all rank-deficient solutions must lie on the boundary. The boundary of a convex set is usually non-convex, a notable exception being an affine half-space, whose boundary is a hyperplane. The dominant approach to addressing this problem, following the work of [@GW], has been semidefinite relaxation (SDR). That is, the constraint is dropped altogether from the problem, and the resulting convex SDP problem is solved. If the solution to the resulting convex problem, denoted as $\mathbf X^{\star}$, is not rank-$K$, it must then be mapped back to the manifold of rank-$K$ $N \times N$ PSD matrices. Obviously, the solution to the original problem – is contained in the feasible set of the relaxed problem. However, what is sacrificed in the relaxation process is the guarantee that the solution to the relaxed problem will be “close” to the solution of the original problem.
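As a quick numerical sanity check of the lifting step, the following Python sketch (with hypothetical random data) verifies that the quadratic objective $\mathrm{tr}\{\mathbf W^H \mathbf C \mathbf W\}$ coincides with the linear objective $\mathrm{tr}\{\mathbf X \mathbf C\}$ once $\mathbf X = \mathbf W \mathbf W^H$ is substituted, and that the rank constraint is implicit in the factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 2

# Hypothetical Hermitian PSD cost matrix C and full-rank N x K variable W.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
C = A @ A.conj().T                      # Hermitian PSD by construction
W = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))

# Lifting: X = W W^H turns the quadratic objective into a linear one in X.
X = W @ W.conj().T
obj_quadratic = np.trace(W.conj().T @ C @ W).real
obj_linear = np.trace(X @ C).real       # equal by the cyclic property of the trace

assert np.isclose(obj_quadratic, obj_linear)
assert np.linalg.matrix_rank(X) == K    # rank constraint holds by construction
```

The point of the check is that the rank-$K$ constraint is not an extra condition on the lifted variable but a structural consequence of the substitution.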
The optimality bound proven to hold in [@GW] applies only to rank-one constrained SDP problems. Then the rank-one solution, i.e., the mapping of the solution of the relaxed SDP problem back to the manifold of rank-one matrices, can be obtained with the guarantees proven in [@GW] through a randomization procedure. Similar guarantees have not been shown to hold for general rank-$K$ constrained SDP problems. In the case when the problem – is rank-$K$ constrained, but separable, i.e., can be separated into $K$ rank-one constrained SDP sub-problems as in [@HuangPalomar], the optimality bounds proven in [@GW] can be directly applied to each resulting sub-problem. However, separating the problem may not be necessary, desirable, or even possible in practice. In such cases, an alternative approach, which we present in this paper, is of importance. In contrast to the SDR approach, our approach is based on restriction. That is, rather than expand the feasible set of the optimization problem, we restrict it to lie on the boundary of the cone of PSD matrices. Since the boundary of the PSD cone is non-convex, it is not at all obvious that this would result in a convex problem. It is also not obvious how to perform such a restriction. Thus, some care should be taken with respect to how such a restriction can be performed. The following basic theorem gives a condition for the restriction to be convex. Let $\cal M$ be a $K$-dimensional subspace of $\mathbb{C}^N$, let $\mathcal{R}$ be the set of rank-$K$ $N \times N$ matrices, and let $\mathcal{X} \triangleq \{\mathbf X \in \mathcal{R} \, \rvert \, \mathcal{C} \{ \mathbf X \} = {\cal M}\}$, where $\mathcal{C} \{ \cdot \}$ denotes the column space of a matrix. Then the intersection $\mathcal{X} \cap \mathbb{S}_N^+$, where $\mathbb{S}_N^+$ is the cone of $N \times N$ PSD matrices, is convex. [*Proof:*]{} We must first characterize a general element in the intersection $\mathcal{X} \cap \mathbb{S}_N^+$.
Consider the matrix $\mathbf X \triangleq \mathbf W \mathbf W^H$ where $\mathcal{C} \{ \mathbf W \} = {\cal M}$ and $\mathbf W \in \mathbb{C}^{N \times K}$. The matrix $\mathbf X$ is clearly in $\mathcal{X}$ since $\mathcal{C} \{ \mathbf X \} = {\cal M}$, and also in $\mathbb{S}_N^+$ by construction. Conversely, any element in $\mathcal{X} \cap \mathbb{S}_N^+$ must have this form, since every element in $\mathcal{R}$ must have a factorization $\mathbf X = \mathbf A \mathbf B^H$ where $\mathbf A$ and $\mathbf B$ are both $N \times K$ matrices, and any matrix in $\mathbb{S}_N^+$ must be Hermitian, implying that $\mathbf A = \mathbf B$. Now consider two similarly defined matrices in the intersection $\mathcal{X} \cap \mathbb{S}_N^+$, denoted as $\mathbf X_1 \triangleq \mathbf W_1 \mathbf W_1^H$ and $\mathbf X_2 \triangleq \mathbf W_2 \mathbf W_2^H$. Since the column space of $\mathbf W_1$ is the same as the column space of $\mathbf W_2$, i.e., $\mathcal{C} \{ \mathbf W_1 \} = \mathcal{C} \{ \mathbf W_2 \}$, there exists an invertible matrix $\mathbf P$ such that $\mathbf W_2 = \mathbf W_1 \mathbf P$. Taking the convex combination of $\mathbf X_1$ and $\mathbf X_2$, it is easy to see that the following chain of equalities holds $$\begin{aligned} \lambda \mathbf X_1 + (1 - \lambda) \mathbf X_2 &= \lambda \mathbf W_1 \mathbf W_1^H + (1 - \lambda) \mathbf W_2 \mathbf W_2^H \nonumber \\ &= \lambda \mathbf W_1 \mathbf W_1^H + (1 - \lambda)\mathbf W_1 \mathbf P \mathbf P^H \mathbf W_1^H \nonumber \\ &= \mathbf W_1 \big(\lambda \mathbf I + (1 - \lambda) \mathbf P \mathbf P^H \big)\mathbf W_1^H \nonumber\end{aligned}$$ for $\lambda \in [0,1]$. The matrix $\mathbf P \mathbf P^H$ is PSD by construction, and specifically, it is positive definite (PD), since $\mathbf P$ is invertible.
The PD matrices form a cone, and thus, $\mathbf P' = \lambda \mathbf I + (1 - \lambda) \mathbf P \mathbf P^H$ is also PD, admitting a further factorization $\mathbf P' = \mathbf R \mathbf R^H$, with $\mathbf R$ invertible. Hence, $\mathbf W_1 \mathbf P' \mathbf W_1^H \in \mathcal{X} \cap \mathbb{S}_N^+$. This completes the proof. $\blacksquare$ It is worth observing that the intersection $\mathcal{X} \cap \mathbb{S}_N^+$ introduced in Theorem $1$ is isomorphic to the cone of PSD matrices of size $K$, and thus, it has dimension $K(K+1)/2$. Therefore, the set of all matrices of the form $\mathbf W \mathbf P \mathbf W^H$ forms a $K(K+1)/2$ dimensional face of the PSD cone. Indeed, it can be shown that a set is a face of the PSD cone if and only if it satisfies the parametrization given in Theorem $1$ [@Barvinok], [@ParriloPermenter]. Summarizing, if one can glean the subspace $\cal V$ which contains the column space of the matrices $\mathbf X$ that satisfy the affine constraints in the problem –, among others, Theorem $1$ provides a clear and direct method to restrict the problem to a smaller dimension. Such a procedure of restricting the problem to a smaller dimension consists of three steps. First, one needs either to estimate or to know the subspace $\cal V$; then, produce a basis for it of the appropriate dimension; and finally, take this basis to be the columns of $\mathbf W$. Section \[sec:Null Shape\] provides a complete example of the solution for the subspace $\cal V$ (the first step of the procedure highlighted above) defined by the constraints of a rank-constrained SDP including the null-shaping constraint, and Section \[sec:dimreduce\] shows how, by way of understanding the algebraic structures laid out in the preliminaries, we can arbitrarily restrict the dimension of this space to coincide with the desired rank of the solution, $K$. However, any other method which produces a basis for the space $\cal V$ is compatible with the findings of this paper.
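The convexity argument in the proof of Theorem $1$ is easy to probe numerically. The sketch below (hypothetical random data) builds two rank-$K$ PSD matrices with the same column space and confirms that their convex combination remains PSD with rank exactly $K$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, lam = 5, 2, 0.3

# Same column space: W2 = W1 P with P invertible (almost surely for random P).
W1 = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
P = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
W2 = W1 @ P

X1 = W1 @ W1.conj().T
X2 = W2 @ W2.conj().T

# Convex combination: equals W1 (lam*I + (1-lam) P P^H) W1^H, and the middle
# factor is positive definite, so rank and column space are preserved.
Xc = lam * X1 + (1 - lam) * X2
eigs = np.linalg.eigvalsh(Xc)

assert np.all(eigs > -1e-10)            # still PSD
assert np.linalg.matrix_rank(Xc) == K   # still rank K: the face is convex
```

Had the two column spaces differed, the convex combination would generically have rank $2K$, which is exactly why the restriction to a fixed column space is what makes the feasible set convex.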
Before proceeding to such studies, we would like to stress the importance and ubiquitous nature of the problem – by explaining some currently important applications in terms of the problem –. Applications {#sec:Applications} ============ Many classical and currently important signal processing problems can be expressed in the form –. The most popular of these problems are listed here, and it is shown how they can be formulated in the form –. Phase Retrieval {#subsec:Phase Retrieval} --------------- The simplest form of the objective function arises when $\mathbf C = \mathbf I_N$, in which case the objective function becomes $\mathrm{tr} \{\mathbf X \}$. For PSD matrices, this is equivalent to the nuclear norm, which is nowadays widely used as a proximal operator for rank reduction. For example, consider the problem of phase retrieval [@VoroCandes], in which the goal is to retrieve a vector $\mathbf x_0 \in \mathbb{C}^N$ about which we only have quadratic measurements $\lvert \mathbf a_j^H \mathbf x_0 \rvert^2 = b_j$ where $\mathbf a_j \in \mathbb{C}^N$ and $b_j \in \mathbb {R}$. That is, we observe the intensity measurement $b_j$, while $\mathbf a_j$ is a sampling vector. Simple manipulations using the trace operator show that $\lvert \mathbf a_j^H \mathbf x_0 \rvert^2 = \mathrm{tr} \{ \mathbf a_j^H \mathbf x_0 \mathbf x_0^H \mathbf a_j \} = \mathrm{tr} \{ \mathbf X \mathbf A_j \}$ where $\mathbf A_j \triangleq \mathbf a_j \mathbf a_j^H$ and $\mathbf X \triangleq \mathbf x_0 \mathbf x_0^H$. Hence, the phase retrieval problem can be equivalently rewritten as the following feasibility problem $$\begin{aligned} \mathrm{find}\ \ \ \ &\mathbf X \label{eq:feas_obj}\\ \mathrm{s.t.} \ \ \ &\mathrm{tr} \{ \mathbf X \mathbf A_j \} = b_j,\ \forall \ j \\ &\mathrm{rk} \{\mathbf X \} = 1 \\ &\mathbf X \succeq 0. \label{eq:feasSDP}\end{aligned}$$ It is easy to check that the problem – differs from the problem – only in the objective function.
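The identity $\lvert \mathbf a_j^H \mathbf x_0 \rvert^2 = \mathrm{tr}\{\mathbf X \mathbf A_j\}$ underlying the lifted feasibility problem can be illustrated with a small numerical example (a hypothetical random signal and sampling vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
N, J = 4, 10

x0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # unknown signal
X = np.outer(x0, x0.conj())                                # lifted rank-one variable

for _ in range(J):
    a = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # sampling vector
    b = np.abs(a.conj() @ x0) ** 2                            # intensity measurement
    A_j = np.outer(a, a.conj())
    # The phaseless quadratic measurement is linear in the lifted variable X:
    assert np.isclose(b, np.trace(X @ A_j).real)

assert np.linalg.matrix_rank(X) == 1    # the rank-one constraint of the lifting
```

Each intensity measurement thus becomes one affine constraint on $\mathbf X$, which is what turns the phaseless recovery problem into the rank-one constrained feasibility SDP above.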
However, the difficulty with the problems – and – lies not with the objective function, but with the feasibility sets, which are of identical form. If the feasibility set of – is non-empty, then it is relatively easy to minimize a linear function over it, and thus, to define a problem in the form of –. Therefore, insights about one problem provide insights about the other. Optimal Downlink Beamforming {#subsec:OBP} ---------------------------- Another example of – is the optimal downlink beamforming problem in MIMO wireless communication systems. In this problem, the objective is to minimize the total transmitted power while maintaining an acceptable quality of service for all users. Assuming constant modulus waveforms and a total number of $J$ users, the total transmit power is given as $\sum_{j=1}^{J} \|\mathbf w_j\|^2$ where $\mathbf w_j \in \mathbb{C}^N$ is the beamforming vector corresponding to the $j$-th user. Then the total transmitted power can be rewritten as $$\begin{aligned} \mathrm{tr} & \left\{ \sum_{j=1}^J \mathbf w_j \mathbf w_j^H \right\} = \mathrm{tr} \{ \mathbf W^H \mathbf W \} \nonumber \\ & = \sum_{j=1}^J \mathrm{tr} \left\{ \mathbf w_j \mathbf w_j^H \right\} = \sum_{j=1}^J \mathrm{tr} \{ \mathbf X_j \} \label{TotalP}\end{aligned}$$ where $\mathbf W \triangleq [\mathbf w_1,\cdots,\mathbf w_J]$ has rank $J$ and $\mathbf X_j \triangleq \mathbf w_j \mathbf w_j^H$ is a rank-one matrix.
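The chain of power identities above can be checked directly; the sketch below uses hypothetical random beamforming vectors $\mathbf w_j$ stacked as the columns of $\mathbf W$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, J = 4, 3

# Hypothetical beamforming vectors w_j, stacked as the columns of W.
W = rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J))

# Total transmit power three ways: directly, via tr{W^H W}, and via the
# per-user lifted variables X_j = w_j w_j^H.
power_direct = sum(np.linalg.norm(W[:, j]) ** 2 for j in range(J))
power_trace = np.trace(W.conj().T @ W).real
power_lifted = sum(np.trace(np.outer(W[:, j], W[:, j].conj())).real
                   for j in range(J))

assert np.isclose(power_direct, power_trace)
assert np.isclose(power_trace, power_lifted)
```

The last form is the one that matters for the SDP: the total power is linear in the per-user lifted variables $\mathbf X_j$.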
Constraints on the quality of service for all users are written in terms of the signal-to-interference-plus-noise ratio in the following form $$\begin{aligned} \frac{\mathbf w_j^H \mathbf R_j \mathbf w_j}{\sum_{i\neq j} \mathbf w_i^H \mathbf R_{i,j} \mathbf w_i + \sigma_j^2} \geq \gamma_j, \; \forall j \label{QofS}\end{aligned}$$ where $\mathbf R_j$ is the signal auto-correlation matrix corresponding to the $j$-th user, $\mathbf R_{i,j}$ is the cross-correlation matrix corresponding to the interference between the $i$-th and $j$-th users, $\gamma_j$ is a quality-of-service specification for the $j$-th user, and $\sigma_j^2$ is the noise variance corresponding to the $j$-th user’s channel, assuming the noise observed on each channel is an independent Gaussian random variable. Using and expressed in terms of $\mathbf X_j$, the optimal downlink beamforming problem can be written as $$\begin{aligned} \min_{\mathbf X_j} \ & \; \sum_{j=1}^{J}\mathrm{tr} \{\mathbf X_j \} \label{eq:OBP_SDP} \\ \mathrm{s.t.} &\; \mathrm{tr} \{ \mathbf X_j \mathbf R_j \} - \gamma_j \sum_{i\neq j} \mathrm{tr} \{ \mathbf X_i \mathbf R_{i,j} \} = \gamma_j \sigma_j^2,\ \forall \ j \\ &\; \mathrm{rk} \{ \mathbf X_j \} = 1,\ \forall \ j \\ &\; \mathbf X_j \succeq 0,\ \forall \ j \label{eq:OBP_SDPc}\end{aligned}$$ which fits the description of the problem –. The problem – is separable, as the constraints for every $j$-th user are independent of the constraints for the remaining users, and the objective consists of $J$ summands, one per user. Then $J$ rank-one constrained SDP problems can be solved for finding $\mathbf X_j$ for all $J$ users. Moreover, it is clear from Theorem $1$ that the optimal solution for each separate problem with respect to the corresponding $\mathbf X_j$ must lie in a one-dimensional face of the PSD cone. These one-dimensional faces need not coincide.
However, as we will show later, by introducing new constraints, it can happen that each of these one-dimensional faces must exist within a larger $J(J+1)/2$-dimensional face of the cone, thus granting great insight into the optimal solution to the problem, and significant computational savings. Transmit Beamspace in MIMO Radar {#subsec:TBMIMORadar} -------------------------------- One more important application of the form – is the transmit beamspace design for MIMO radar [@Fuhrmann]–[@ArashJournal]. The goal of this optimization problem is to match an ideal transmit beampattern as closely as possible with an achievable one, subject to the physical constraints imposed by a transmit antenna array. The ideal beampattern is based on presumed knowledge of the locations of targets of interest. Due to the nature of the radar problem, the most that can be assumed about the target locations is a prior distribution. If one has knowledge of the prior distribution, it makes sense to transmit energy in regions where targets are likely to be located, with less or no energy transmitted in the regions where targets are unlikely to appear or cannot be located. For example, if the prior knowledge is that the targets lie within an angular sector $\Theta = [\theta_1, \theta_2]$, then the optimal strategy is to transmit all of the available power into this sector, while transmitting as little energy as possible elsewhere [@TransmitEnergyFocusing], [@ArashJournal]. In a MIMO radar system, a linear combination of $K$ orthogonal baseband waveforms $\boldsymbol \psi (t) \triangleq [\psi_1(t), \ldots, \psi_K(t)]^T$ is transmitted from an antenna array (we assume a uniform linear array (ULA) for simplicity) of $N$ elements, where $K < N$, in order to concentrate energy over the desired sector $\Theta$.
The signal at the transmitter at the time instant $t$ in the direction $\theta$ is given by $$\label{eq:TxSignal} s (t,\theta) = \mathbf a^H(\theta) \mathbf W \boldsymbol \psi(t)$$ where $[\mathbf a(\theta)]_n \triangleq e^{j2\pi (n-1) d_x\sin(\theta)/\lambda_c},\ \theta \in [-\pi/2,\pi/2]$ denotes the $n$-th ($n = 1,\cdots,N$) element of the array response vector. Using and the orthogonality of the waveforms, the magnitude of the beampattern in any direction $\theta$ is given by the inner product of $s(t,\theta)$ with itself integrated over a pulse duration $T$, that is, $$\begin{aligned} G(\theta) &= \int_{T} s (t,\theta) s^* (t,\theta)dt \nonumber\\ &= \mathbf a^H(\theta)\mathbf W \left(\int_{T} \boldsymbol \psi(t) \boldsymbol \psi^H(t)dt \right) \mathbf W^H \mathbf a(\theta) \nonumber\\ &= \mathbf a^H(\theta)\mathbf W \mathbf W^H \mathbf a(\theta) = {\mathrm tr} \left\{ \mathbf W \mathbf W^H \mathbf a(\theta) \mathbf a^H(\theta) \right\} \nonumber \\ &= {\mathrm tr} \{ \mathbf X \mathbf C \} \label{Beampattern}\end{aligned}$$ where $\mathbf X \triangleq \mathbf W \mathbf W^H$ and $\mathbf C \triangleq {\mathbf a} (\theta) {\mathbf a}^H (\theta)$. Denoting the desired ideal beampattern by $G_{\mathrm d} (\theta)$ and using , the transmit beamspace design problem for MIMO radar can be written as $$\begin{aligned} \min_{\mathbf X} \max_{\theta} &\; \left| G_{\mathrm d} (\theta) - {\mathrm tr} \{ \mathbf X \mathbf C \} \right| \label{eq:opt2} \\ \mathrm{s.t.} &\; {\mathrm tr}\{ \mathbf X \mathbf B_j \} = \frac{E}{N},\ j = 1,\cdots,N \label{eq:constraintmatrix} \\ &\; \mathrm{rk} \{ \mathbf X \} = K \\ &\; \mathbf X \succeq 0 \label{eq:PSD1} \end{aligned}$$ where $E$ is the total power budget at the transmit antenna array and $| \cdot |$ stands for the magnitude. The constraint serves to control the distribution of power among antenna elements.
The matrix $\mathbf B_j$ is a selection matrix consisting of all zeros except for a $1$ in the $j$-th diagonal position. For example, taking $j = 1$ in yields $\sum_{k=1}^{K} \left| w_{1,k} \right |^2$, which is the amount of power used by all waveforms transmitted from the first antenna. This need not necessarily equal $E/N$. For instance, certain antennas could be reserved for use in communications, in which case one would constrain that antenna not to use any energy for the purposes of transmission of radar signals. Underlying Algebraic Structure Forced by Null-Shaping Constraint {#sec:Null Shape} ================================================================ In this section, we consider one frequently used constraint, often referred to as the null-shaping constraint, which forces a certain algebraic structure on the feasibility set of the corresponding optimization problems. Based on this constraint, we show how our restriction-based approach to solving problems of the form – can be realized. As shown above, in several array processing and signal processing applications, the issue of where not to transmit energy can be as important as where to transmit energy. For example, we may know the angular locations of users in adjacent cells of a cellular communication network with whom we wish not to interfere. Another example is the transmit beamspace-based MIMO radar, where we assume a prior distribution of targets. The strategy is to concentrate energy in the areas of highest probability of target location, and mitigate energy transmission to areas of low (possibly zero) probability of target location. Thus, the null-shaping constraints of the form $$\begin{aligned} \mathbf a^H(\theta_l)\mathbf W \mathbf W^H \mathbf a(\theta_l) &= 0,\ \; l = 1,\cdots,L \label{eq:SOS}\end{aligned}$$ are of high practical importance. Here $\theta_l$, $l = 1,\cdots,L$ are the locations in which we do not wish to transmit energy.
It can be shown that the vectors $\mathbf w$ which satisfy all lie in the same face of the PSD cone, and thus, a reduction to this face can be easily derived by way of Theorem $1$. First, however, let us consider what equations imply. Note that equations are sums of squares (SOS) by construction. Defining $y_k(\theta) \triangleq \mathbf a^H (\theta) \mathbf w_k$, we can rewrite each equation as $\mathbf a^H(\theta_l)\mathbf W \mathbf W^H \mathbf a(\theta_l) = \sum_{k=1}^{K} \left| y_k(\theta_l) \right|^2$ from which the SOS nature of is apparent. The relation $\sum_{k=1}^{K} \left| y_k(\theta_l) \right|^2 = 0$ obviously implies that each $y_k(\theta_l) = 0$. Therefore, by introducing the matrix $\mathbf A \triangleq [\mathbf a(\theta_1), \cdots, \mathbf a(\theta_L)]$ and rewriting equations as $$\begin{aligned} {\mathrm diag}\bigg\{ \mathbf A^H\mathbf W \mathbf W^H \mathbf A \bigg\} &= \mathbf 0 \label{eq:SOS2}\end{aligned}$$ we see that (or equivalently ) can be satisfied if and only if the column space of $\mathbf W$ is a subspace of the nullspace of $\mathbf A^H$, i.e., $\mathcal{C} \{ \mathbf W\} \subset \mathcal{N} \{\mathbf A^H \} \triangleq \{\mathbf w \in \mathbb{C}^N | \mathbf A^H \mathbf w = \mathbf 0\}$. Here ${\mathrm diag}\{ \cdot \} $ is the operation that takes the diagonal elements of a square matrix and writes them in a vector, $\mathcal{N} \{\cdot \}$ denotes the nullspace of a matrix, and $\mathbf 0$ is the vector of all zeros. Equivalently, the columns of $\mathbf A$ are in the nullspace of $\mathbf W^H$; however, $\mathbf W$ is the design variable, and thus, we consider $\mathcal{N} \{\mathbf A^H \}$. From the definition of $\mathcal{N} \{\mathbf A^H \}$ it is clear that every $\mathbf w \in \mathcal{N} \{\mathbf A^H \}$ describes the coefficients of a polynomial with roots at the generators of $\mathbf A^H$, denoted as $\alpha_l^*$, that is, $$\begin{aligned} \mathbf A^H \mathbf w = \mathbf 0 &\iff \sum_{i=0}^{N-1}(\alpha^*_{l})^i [\mathbf w]_i = 0,\ \forall l \in 1,\cdots,L . 
\label{eq:Nulldef1}\end{aligned}$$ A polynomial $P(x)$ has a root at some point $\alpha$ if and only if $(x-\alpha)$ is a factor of $P(x)$ [@Vinberg]. By induction, it can be seen that a polynomial $P(x)$ has roots at the points $\alpha_1^*, \cdots, \alpha_L^*$ if and only if $P(x) = Q(x)B(x)$ for some polynomial $B(x)$, where $$\begin{aligned} Q(x) \triangleq \prod_{l=1}^{L} (x - \alpha_l^*). \label{eq:Qdef}\end{aligned}$$ From and , the nullspace $\mathcal{N} \{\mathbf A^H \}$ can be expressed as $$\begin{aligned} \mathcal{N} \{\mathbf A^H \} &= Q(x) \mathbb{C}_{N-L}[x] = \langle Q(x) \rangle |_{N} \nonumber \end{aligned}$$ where $\mathbb{C}_{N-L}[x]$ denotes the space of all polynomials of degree strictly less than $N-L$. The degree is strictly less than $N-L$ since a constant polynomial is defined to have degree $0$. $\mathbb{C}_{N-L}[x]$ has the standard polynomial basis $\{1,x,x^2,\cdots,x^{N-L-1}\}$, and thus a basis for $\mathcal{N} \{\mathbf A^H \}$ is $\mathcal{B} \triangleq Q(x)\{1,x,x^2,\cdots,x^{N-L-1}\}$. Every ideal is first of all an additive Abelian group, as are vector spaces, and thus, the sum of any two elements of the ideal is another element of the ideal. Since it is a vector space as well, scaling by a scalar in $\mathbb{C}$ yields another element of the ideal. So, if we have a matrix representation of the basis $\mathcal{B}$ denoted by $\mathbf Q$, the columns of any matrix of the form $\mathbf Q \mathbf P$ will remain in the ideal $\mathcal {I} |_{N}$. Taking $\mathbf P$ to be invertible, we can conclude that, having fixed $L$, the polynomial ideal $\mathcal {I} |_{N}$ describes an entire proper face of the PSD cone of dimension $(N-L)(N-L+1)/2$, by direct comparison with the result of Theorem $1$. Thus, the inclusion of constraints of the form in any of the problems described in Subsection \[sec:Applications\] requires that the feasible set be restricted to this face. 
Construction of $\mathbf Q$ {#subsec:ConstQ} --------------------------- Let $\mathbf q \triangleq [(-1)^{L}s_{L},(-1)^{L-1}s_{L-1},\cdots,(-1)s_1,1]^T \in \mathbb{C}^{L+1}$ where $s_1,\cdots,s_{L}$ are the elementary symmetric functions of $\alpha_1^*,\cdots,\alpha_L^*$ and $[ \cdot ]^T$ stands for the transpose. The $k$-th elementary symmetric function in $L$ variables (in this case, $\alpha_1^*,\cdots,\alpha_L^*$) is the sum of the products of all $k$-element subsets of those $L$ variables. For example, if $L = 3$ then $$\begin{aligned} s_1 &= \alpha_1^* + \alpha_2^* + \alpha_3^* \nonumber \\ s_2 &= (\alpha_1\alpha_2)^* + (\alpha_2\alpha_3)^* + (\alpha_1\alpha_3)^*\nonumber \\ s_3 &= (\alpha_1\alpha_2\alpha_3)^* . \nonumber\end{aligned}$$ Let $\mathbf q' \triangleq [\mathbf q^T,0,\cdots,0]^T\ \in \mathbb{C}^N$. Then a basis of $\mathcal{N} \{ \mathbf A^H \}$ is represented by the columns of the $N \times (N-L)$ banded Toeplitz matrix with $\mathbf q'$ as its first column, $$\begin{aligned} \mathbf Q &= \begin{bmatrix} q_0 & 0 & \cdots & 0 \\ q_1 & q_0 & \ddots & \vdots \\ \vdots & q_1 & \ddots & 0 \\ q_L & \vdots & \ddots & q_0 \\ 0 & q_L & & q_1 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & q_L \end{bmatrix} \nonumber \end{aligned}$$ where $q_0,\cdots,q_L$ denote the entries of $\mathbf q$. For a polynomial $Q(x) = a_0 + a_1x + \cdots + a_Lx^L$ with roots $\alpha_1^*,\cdots,\alpha_L^*$, Viète’s formulas yield the coefficients $a_0, \cdots, a_{L-1}$ as $$\begin{aligned} s_1(\alpha_1^*,\cdots,\alpha_L^*) &= -\frac{a_{L-1}}{a_L} \nonumber \\ s_2(\alpha_1^*,\cdots,\alpha_L^*) &= \frac{a_{L-2}}{a_L} \nonumber \\ &\ \vdots \nonumber \\ s_L(\alpha_1^*,\cdots,\alpha_L^*) &= (-1)^L\frac{a_{0}}{a_L}. \nonumber\end{aligned}$$ Thus, the elements of the vector $\mathbf q$ are the coefficients of $Q(x)$, which are given as a function of the roots of $Q(x)$ by Viète’s formulas, with $a_L = 1$. On Dimensionality Reduction {#sec:dimreduce} --------------------------- Given a set ${\mathfrak{A}} = \{ \alpha_1, \alpha_2, \cdots, \alpha_L \}$, the previous subsection provides an exact method for the construction of the matrix $\mathbf Q$, the columns of which span $\langle Q(x) \rangle |_{N}$. 
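The construction of $\mathbf Q$ can be sketched numerically. In the illustrative snippet below (ULA parameters and sizes are assumptions, not from the paper), `np.poly` returns the monic coefficients of $\prod_l (x-\alpha_l^*)$, which are exactly the signed elementary symmetric functions of the roots; stacking shifted copies of $\mathbf q'$ yields the banded Toeplitz matrix, and $\mathbf A^H \mathbf Q = \mathbf 0$ holds by construction:

```python
import numpy as np

def steering_vector(theta, N, d_over_lambda=0.5):
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(N) * np.sin(theta))

def build_Q(null_thetas, N, d_over_lambda=0.5):
    """Banded Toeplitz basis of N{A^H}, i.e. of the ideal <Q(x)>|_N."""
    # Roots alpha_l^* are the conjugated generators of A^H.
    roots = np.conj(np.exp(1j * 2 * np.pi * d_over_lambda
                           * np.sin(np.asarray(null_thetas))))
    # np.poly: monic coefficients, highest degree first -> reverse to constant-first q
    q = np.poly(roots)[::-1]
    L = len(roots)
    Q = np.zeros((N, N - L), dtype=complex)
    for k in range(N - L):          # column k holds the coefficients of Q(x) * x^k
        Q[k:k + L + 1, k] = q
    return Q

N = 8
nulls = np.deg2rad([-40.0, 25.0, 60.0])          # L = 3 null directions
Q = build_Q(nulls, N)
A = np.column_stack([steering_vector(t, N) for t in nulls])
residual = np.max(np.abs(A.conj().T @ Q))        # A^H Q vanishes up to round-off
assert residual < 1e-10
```

Each column carries the $L+1$ coefficients of $Q(x)$ shifted down by one row, so $\mathbf Q$ is $N \times (N-L)$, matching the dimension count of the previous subsection.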
From this construction, it is clear that there is a direct link between the cardinality of $\mathfrak{A}$ and the dimensionality of the column space of $\mathbf Q$. Specifically, if $|{\mathfrak A}| = L$, then $K \triangleq \dim (\mathcal{C} \{\mathbf Q \}) = N - L$. As such, the construction of $\mathbf Q$ allows for exact control of the dimension of the problem. That is, given a desired solution of rank $K$, it is possible to use $\mathbf Q$ in tandem with the parametrization given in Theorem 1 to restrict the problem to a $K(K+1)/2$ dimensional face of the PSD cone by choosing $K = N - L$. Note that $\mathfrak{A}$ is a set on which a collection of polynomials, namely the ideal $\mathcal{I}$, vanishes. Such sets are called [*algebraic varieties*]{}. Formally, the algebraic variety associated to a particular ideal $\mathcal I$ is defined as $$\begin{aligned} \mathfrak{V}(\mathcal I) &\triangleq \{ p \in \mathbb{K}^n | f(p) = 0\ \forall f \in \mathcal I \} \nonumber\end{aligned}$$ and the ideal associated to a given variety is defined as $$\begin{aligned} \mathcal I(\mathfrak{V}) &\triangleq \{ f \in \mathbb{K}[x_1,\cdots,x_n] | f(p) = 0\ \forall p \in \mathfrak{V} \}. \nonumber \end{aligned}$$ Given a set of varieties and the inclusion relationship $$\begin{aligned} \mathfrak{V}_0 \subset \mathfrak{V}_1 \subset \cdots \subset \mathfrak{V}_M \nonumber\end{aligned}$$ it is easy to see that taking the ideals associated to these varieties reverses the order of inclusion. Specifically, for the above inclusion relationship $$\begin{aligned} \mathcal{I} (\mathfrak{V}_0) \supset \mathcal{I} (\mathfrak{V}_1) \supset \cdots \supset \mathcal{I} (\mathfrak{V}_M) \nonumber\end{aligned}$$ holds [@IVA]. For example, take $\mathbb{K}[x_1,\cdots,x_n] = \mathbb{C}[x]$, and let $\mathcal I = \langle (x-1) \rangle$. Clearly, the variety is $\mathfrak{V} (\mathcal I) = \{1\}$. Now, consider the variety $\mathfrak{V}' = \{1,2\}$, which clearly contains $\mathfrak{V}$. 
Its associated ideal is $\mathcal{I} (\mathfrak{V}') = \langle (x-1) (x-2) \rangle$, which is obviously contained in $\mathcal{I} (\mathfrak{V})$. Using the definitions provided, it is not difficult to arrive at a formal proof of the inclusion-reversing relationship for general varieties and ideals. This relationship becomes practically important when the number of constraints $L$ of type does not satisfy the relationship $N - L = K$. Consider, for example, in the optimal downlink beamforming problem, the case of a communication system consisting of a transmit array of $N = 20$ antennas at the base station in the cell of interest, $K = 3$ single-antenna users in the cell, and $L = 4$ users in adjacent cells at the given directions $\theta_1, \theta_2, \theta_3, \theta_4$. By using constraints corresponding to the directions of the users in the adjacent cells, we constrain the problem dimension to $20-4 = 16$. However, we would like to restrict the problem dimension further to $3$. Using the inclusion-reversing relationship between ideals and varieties, it is clear that if $\mathfrak{V}_0 = \{\alpha_1, \alpha_2, \cdots, \alpha_4\}$ and $\mathfrak{V}_M = \mathfrak{V}_0 \cup \{\alpha_{5}, \cdots, \alpha_{17} \}$, then $\mathcal{I} (\mathfrak{V}_M) \subset \mathcal{I} (\mathfrak{V}_0)$. Thus, the inclusion-reversing relationship between ideals and their associated varieties allows us to set roots at arbitrary locations, so long as the new variety $\mathfrak{V}'$ contains the variety described by the equations . In this way, it is possible to restrict the dimension of the problem to the desired dimension $K$ while remaining in the feasible set of the problem. It is possible, for example, to set multiple roots in the directions of the users in adjacent cells. As will be shown in the simulations section, this leads to extremely deep nulls in these directions. There is a direct, but incomplete, parallel with the rank-nullity theorem from linear algebra. 
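The inclusion-reversing relationship can also be verified numerically. In the illustrative sketch below (helper names and parameters are assumptions), a triple root at one direction produces a basis $\mathbf Q_3$ whose column space is contained in that of the single-root basis $\mathbf Q_1$, so the restriction remains feasible while the dimension drops from $N-1$ to $N-3$:

```python
import numpy as np

def ideal_basis(roots, N):
    """Columns span <Q(x)>|_N, where Q(x) = prod_l (x - roots[l])."""
    q = np.poly(np.asarray(roots))[::-1]      # constant-first coefficients of Q(x)
    L = len(roots)
    Q = np.zeros((N, N - L), dtype=complex)
    for k in range(N - L):
        Q[k:k + L + 1, k] = q
    return Q

N = 10
theta0 = np.deg2rad(-13.0)
r = np.conj(np.exp(1j * np.pi * np.sin(theta0)))   # alpha(theta0)^*, half-wavelength ULA

Q1 = ideal_basis([r], N)           # variety {alpha^*}:       dimension N - 1 = 9
Q3 = ideal_basis([r, r, r], N)     # triple root at alpha^*:  dimension N - 3 = 7

# I({alpha^*, alpha^*, alpha^*}) subset I({alpha^*})  <=>  C(Q3) subset C(Q1):
# every column of Q3 is exactly representable in the column space of Q1.
coeffs, *_ = np.linalg.lstsq(Q1, Q3, rcond=None)
containment_error = np.max(np.abs(Q1 @ coeffs - Q3))
assert containment_error < 1e-9
```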
Given constraints , we can construct the matrix $\mathbf A^H$, find an orthonormal basis of its nullspace, and use it as the basis $\mathbf Q$. Adding new, distinct directions to will reduce the dimension of the nullspace. However, adding a repeated constraint will not restrict the nullspace of $\mathbf A^H$ further, since the rank of $\mathbf A^H$ will not change. Thus, there are many distinct sub-faces which can be described using the relationship between varieties and ideals but which are not readily described by way of the rank-nullity theorem. Generalization of the Generalized Sidelobe Canceller ================================================ In [@GriffithsJim], Griffiths and Jim proposed an algorithm that has become the standard approach to linearly constrained adaptive beamforming. Their approach, referred to within [@GriffithsJim] as the GSC, uses a two-step procedure in order to produce a beampattern with a fixed mainlobe and suppressed sidelobes. In the first step, a beampattern with a fixed response in the “look” direction is produced by convolving a vector of constraints with a normalized beamforming vector with the desired mainlobe response. In the second step, the signals in the “look” direction are blocked out, while the output power is minimized. If $y_{\mathrm w} (t)$ is the signal corresponding to the first part of the Griffiths-Jim beamformer, and $y_{\mathrm n} (t)$ is the signal corresponding to the second part, then the overall beamformed signal is expressed as $y(t) = y_{\mathrm w} (t) - y_{\mathrm n} (t)$, where $t$ is the discrete time index. In order to block the signal in the “look” direction, the authors use the assumption of ideal steering. To wit, they assume that the signal impinges on the broadside of the array. If we assume a ULA of $M$ antennas, the signal at the $m$-th antenna is $x_m(t) = s(t) + n_m(t)$. 
The assumption of ideal steering allows us to state that the desired signal $s(t)$ will be identical at each antenna (differing only by noise), and thus, a sufficient condition for blocking the desired signal is $\mathbf w^T \mathbf{1} = 0$, where $\mathbf w$ is the blocking beamforming vector and $\mathbf{1}$ is the vector of all ones. Using the definition of the steering vector, it can be seen that $\mathbf a(0) = \mathbf 1$, and thus, any beamforming vector satisfying $\mathbf w^T \mathbf 1 = 0$ will have a null at $\theta = 0$. Equivalently, $\mathbf w$ contains the coefficients of a polynomial with at least one root at $\alpha(0) = e^{j2\pi / \lambda_c d_x\sin(0)} = 1$. The $M - 1$ vectors $\mathbf w_m$ are compiled into an $(M-1) \times M$ matrix $\mathbf W_B$ with rows $\mathbf w_m^T$. It is clear that all $\mathbf w_m \in (x-1)\mathbb{C}_{M-1}[x]$, and thus, lie in the polynomial ideal $\mathcal{I} (1)$. The underlying algebraic structure allows several generalizing statements to be made. Instead of requiring ideal steering, we can simply require that $\mathbf w^T \mathbf a(\theta_0) = 0$, as $\mathbf w \in (x-\alpha(\theta_0))\mathbb{C}_{M-1}[x]$ is a necessary and sufficient condition for $s(t)$ to be blocked. Requiring that the vectors $\mathbf w_i$, $1\leq i \leq M-1$, be linearly independent implies that all the polynomials share only a single root at $\alpha(\theta_0)$. If multiple signals impinge upon the array from directions $\theta_l, \; 1\leq l \leq L$, and we wish to simultaneously block each of them in order to implement the GSC, then the row space of $\mathbf W_B$ must lie in $Q(x)\mathbb{C}_{M-L}[x]$, where $Q(x) = \prod_{l=1}^{L} (x-\alpha(\theta_l))$. In this case, we can have as many as $M - L$ vectors $\mathbf w_m$. In [@GriffithsJim], the authors give only an example of the matrix $\mathbf W_B$ for $M = 4$. 
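A minimal sketch of this generalization (hypothetical helper names, half-wavelength ULA assumed): each row of the blocking matrix carries the coefficients of $(x-\alpha(\theta_0))\,x^k$, so $\mathbf W_B \mathbf a(\theta_0) = \mathbf 0$ without any ideal-steering assumption, and for $\theta_0 = 0$ the rows reduce to the first differences of the classical Griffiths-Jim construction.

```python
import numpy as np

def steering(theta, M, d_over_lambda=0.5):
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

def blocking_matrix(theta0, M, d_over_lambda=0.5):
    """(M-1) x M blocking matrix; row k holds the coefficients of (x - alpha) * x^k,
    so w^T a(theta0) = 0 for every row w."""
    alpha0 = np.exp(1j * 2 * np.pi * d_over_lambda * np.sin(theta0))
    W_B = np.zeros((M - 1, M), dtype=complex)
    for k in range(M - 1):
        W_B[k, k] = -alpha0       # constant term of (x - alpha0) shifted by x^k
        W_B[k, k + 1] = 1.0
    return W_B

M = 4
# Broadside look direction: alpha(0) = 1, so each row is a first difference
# [-1, 1, 0, ...], i.e. the classical Griffiths-Jim blocking matrix.
block_broadside = np.max(np.abs(blocking_matrix(0.0, M) @ steering(0.0, M)))
assert block_broadside < 1e-12

# Arbitrary look direction: the same structure blocks s(t) with no ideal steering.
theta0 = np.deg2rad(20.0)
block_steered = np.max(np.abs(blocking_matrix(theta0, M) @ steering(theta0, M)))
assert block_steered < 1e-12
```

Note that the blocking condition uses the plain transpose $\mathbf w^T \mathbf a(\theta_0)$, which is what the unconjugated matrix product above computes.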
Moreover, as was shown in the previous subsection, the method proposed in this paper allows for further dimensionality reduction of the problem if computational complexity is an issue. Reformulation of Rank-Constrained SDP {#eq:reformulation} ===================================== Given the construction of $\mathbf Q$ obtained, for example, as in Subsection \[subsec:ConstQ\], it is now possible to reformulate the rank-constrained SDP such that it will be convex and thus solvable in polynomial time, yet have a solution which is rank-$K$ by construction. By introducing constraints of the form to the problem – and using the findings of Section \[sec:Null Shape\], the problem can be equivalently rewritten as $$\begin{aligned} \min_{\mathbf X}\ &\; \mathrm{tr} \{ \mathbf X \mathbf Q^H \mathbf C \mathbf Q \} \label{eq:SDP_obj1c} \\ \mathrm{s.t.} &\; \mathrm{tr} \{ \mathbf X \mathbf Q^H \mathbf B_j \mathbf Q \} = \delta_j, \; j \in \mathfrak{J} \label{eq:affine1} \\ &\; \mathbf X - \gamma \mathbf I \succeq 0 \label{eq:PSD2c}, \end{aligned}$$ where $\gamma$ is some arbitrarily small positive real number and $\mathbf I$ denotes the identity matrix. From Theorem 1, it can be easily observed that the objective function and the constraints are evaluated on a face of the PSD cone $\mathbb S_n^+$. As we have seen in Section \[sec:Null Shape\], every point in the feasible set of this reformulated problem satisfies by construction. Moreover, the rank constraint has disappeared, and with it, the non-convexity of the problem. In addition, selecting a variety $\mathfrak{V}$ of correct size has resulted in a reduced problem dimension $K$. The constraint ensures that the solution to the optimization problem is full rank. Thus, the problem will have an optimal solution which is exactly of the desired rank $K$. 
If $\gamma$ is allowed to be $0$, the solution to the problem –, $\mathbf X^{\star}$, may lie on the boundary of the cone $\mathbb{S}_K^+$ (note the dimension), and thus cause $\mathbf X^{\star}$ to drop rank. This serves as the main motivation for introducing the constraint . Moreover, the restriction has transformed the feasible set of the original problem – into an affine slice of the positive definite cone. To wit, if there exists a feasible point of the problem at all, then the relative interior of the set is non-empty, and thus strong duality holds. Using these observations, we can reformulate, for example, the transmit beamspace design problem for MIMO radar, that is, the problem –. Introducing the new matrices $\mathbf D \triangleq \mathbf Q^H \mathbf a(\theta) \mathbf a^H(\theta) \mathbf Q$ and $\mathbf H_j \triangleq \mathbf Q^H \mathbf B_j \mathbf Q$, and also using the cyclic property of the trace operator, the problem – can be equivalently rewritten as $$\begin{aligned} \min_{\mathbf X} \max_{\theta} &\; \left| G_{\mathrm d} (\theta) - {\mathrm tr} \{ \mathbf X \mathbf D \} \right| \label{eq:idealobj} \\ \mathrm{s.t.} &\; {\mathrm tr}\{ \mathbf X \mathbf H_j \} = \frac{E}{N},\ j = 1,\cdots,N \label{eq:idealpow} \\ &\; \mathbf X - \gamma \mathbf I \succeq 0 \label{eq:idealrank} . \end{aligned}$$ Finally, since the solution to the reformulated problem will necessarily be PD as a consequence of the constraint , it can be decomposed using the Cholesky decomposition as $\mathbf X^{\star} = \mathbf R \mathbf R^H$, giving us a simple way to recover the beamspace matrix $\mathbf W$ as $\mathbf W \triangleq \mathbf Q \mathbf R$. Obviously, any unitary rotation of $\mathbf W$ recovered this way would also be a valid beamspace matrix. 
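The recovery step can be sketched as follows. In place of an actual SDP solver, a random Hermitian matrix shifted by $\gamma\mathbf I$ stands in for the PD solution $\mathbf X^\star$ (all parameters below are illustrative); the recovered $\mathbf W = \mathbf Q\mathbf R$ then satisfies the null-shaping constraints exactly by construction, independently of $\mathbf X$:

```python
import numpy as np

def steering(theta, N, d_over_lambda=0.5):
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(N) * np.sin(theta))

def ideal_basis(null_thetas, N, d_over_lambda=0.5):
    roots = np.conj(np.exp(1j * 2 * np.pi * d_over_lambda
                           * np.sin(np.asarray(null_thetas))))
    q = np.poly(roots)[::-1]
    L = len(roots)
    Q = np.zeros((N, N - L), dtype=complex)
    for k in range(N - L):
        Q[k:k + L + 1, k] = q
    return Q

N = 8
nulls = np.deg2rad([-50.0, -20.0, 35.0, 65.0])   # L = 4 null directions
Q = ideal_basis(nulls, N)
K = Q.shape[1]                                   # K = N - L = 4

# Stand-in for the PD solution X* of the restricted problem (gamma-shifted):
rng = np.random.default_rng(2)
M = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
gamma = 1e-3
X = M @ M.conj().T + gamma * np.eye(K)           # X - gamma*I is PSD by construction

R = np.linalg.cholesky(X)                        # X = R R^H (R lower triangular)
W = Q @ R                                        # recovered beamspace matrix

# Every null direction is satisfied exactly, whatever X is:
null_depth = max(abs(steering(t, N).conj() @ (W @ W.conj().T) @ steering(t, N))
                 for t in nulls)
assert null_depth < 1e-8
```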
On the Conjugacy of Beamforming and Parameter Estimation ======================================================== It is well known that the beamforming and parameter estimation problems in array processing are conjugate to one another. In one, we are given the information of a target location and are tasked with fitting a beampattern to this information in an optimal way. That is, for minimum variance distortionless response beamforming, given the target location and the signal autocorrelation matrix, the variance of the array output is minimized while the distortionless response in the target direction is held constant [@Capon]. In the other, we are given a signal autocorrelation matrix and are tasked with discovering the locations of the targets. In the case of transmit beamforming, the connection is also explicit. For transmit beamforming, the objective is to design a signal cross-correlation matrix given some information about the distribution of targets within an environment, while in parameter estimation, we are given a signal autocorrelation matrix and asked to provide the target locations. Consider $L$ sources impinging upon a ULA. The observation vector can be written as $$\begin{aligned} \mathbf x(t) = \mathbf A\mathbf s(t) + \mathbf n(t) \nonumber\end{aligned}$$ where $\mathbf A = [\mathbf a(\theta_1),\cdots,\mathbf a(\theta_L)]$, $\mathbf s(t)$ is the $L \times 1$ signal vector at time instant $t$, and $\mathbf n(t)$ is the observation noise. Assuming that $\mathbf n(t) \sim \mathcal{N}(\mathbf 0,\sigma^2 \mathbf I)$ and the number of snapshots $T$ is large enough, the sample covariance matrix of $\mathbf x(t)$ can be found as $$\begin{aligned} \mathbf R_{xx} = \frac{1}{T}\sum\limits_{t=1}^{T} \mathbf x (t) \mathbf x^H (t) = \mathbf A\mathbf S \mathbf A^H + \sigma^2 \mathbf I . \nonumber \end{aligned}$$ If $T \geq N$, the matrix $\mathbf R_{xx}$ is full rank with probability $1$. 
Since $\mathbf R_{xx}$ is Hermitian, it has a full complement of real positive eigenvalues, and the associated eigenvectors must be mutually orthogonal. Thus, the sample covariance matrix can be written as $$\begin{aligned} \mathbf R_{xx} = \mathbf Q_{\mathrm s} \Lambda_{\mathrm s} \mathbf Q_{\mathrm s}^H + \mathbf Q_{\mathrm n} \Lambda_{\mathrm n} \mathbf Q_{\mathrm n}^H \nonumber\end{aligned}$$ where $\mathbf Q_{\mathrm s}$, $\mathbf Q_{\mathrm n}$ are the matrices containing the signal and noise eigenvectors, respectively, and $\Lambda_{\mathrm s}$, $\Lambda_{\mathrm n}$ are the diagonal matrices containing the signal and noise eigenvalues, respectively. Since $\mathbf Q_{\mathrm n} \perp \mathbf Q_{\mathrm s}$, it follows that $\mathbf Q_{\mathrm n} \perp \mathbf A$, and thus, $$\begin{aligned} \mathbf a^H(\theta_l) \mathbf Q_{\mathrm n} = \mathbf 0,\ \forall \theta_l. \label{orth}\end{aligned}$$ As has been shown in Section \[sec:Null Shape\], an equation of the type can be satisfied if and only if the columns of $\mathbf Q_{\mathrm n}$, when viewed as coefficient vectors of polynomials over $\mathbb{C}$, are in the univariate ideal generated by the variety given by the set of desired roots $\mathfrak{V} = \{\alpha_1^*,\cdots,\alpha_L^*\}$. At once, the conjugacy between the parameter estimation and beamforming problems becomes apparent in exact terms. In the transmit beamforming problem, we are given a variety and design a signal cross-correlation matrix, while in the parameter estimation problem, we are given a sample covariance matrix and are asked to provide the variety which best explains it. Pisarenko’s method, MUSIC, and root-MUSIC [@VanTrees] all use the orthogonality of the noise and signal subspaces. The root-MUSIC algorithm, in brief, is to find $\mathbf z$ such that $$\begin{aligned} \mathbf z^H \mathbf Q_{\mathrm n} \mathbf Q_{\mathrm n}^H \mathbf z = 0 \label{eq:rootMUSIC}\end{aligned}$$ which poses two fundamental challenges. 
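This orthogonality, and the resulting ideal structure of the noise eigenvectors, can be illustrated with the exact (infinite-snapshot) covariance of two sources; the source powers, angles, and noise level below are arbitrary choices for the sketch:

```python
import numpy as np

def steering(theta, N, d_over_lambda=0.5):
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(N) * np.sin(theta))

N, L = 8, 2
thetas = np.deg2rad([-20.0, 30.0])
A = np.column_stack([steering(t, N) for t in thetas])
S = np.diag([2.0, 1.0])                 # uncorrelated source powers
sigma2 = 0.1
R_xx = A @ S @ A.conj().T + sigma2 * np.eye(N)   # exact covariance, no sampling noise

eigvals, eigvecs = np.linalg.eigh(R_xx)          # eigenvalues in ascending order
Q_n = eigvecs[:, :N - L]                         # noise subspace: N - L smallest
orth = np.max(np.abs(A.conj().T @ Q_n))          # a^H(theta_l) Q_n = 0
assert orth < 1e-10

# Each noise eigenvector, read as constant-first polynomial coefficients,
# has roots at the conjugated generators alpha_l^* (the ideal structure):
v = Q_n[:, 0]
roots = np.roots(v[::-1])                        # np.roots wants highest degree first
targets = np.conj(np.exp(1j * np.pi * np.sin(thetas)))
root_error = max(np.min(np.abs(roots - r)) for r in targets)
assert root_error < 1e-6
```

With sampled (noisy) data the orthogonality holds only approximately, which is precisely the subspace- and root-selection difficulty discussed next.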
The first one is the signal/noise subspace selection, and the second is the root selection [@MahdiMe]. Subspace selection is typically performed by selecting the eigenvectors which correspond to the $N-L$ smallest eigenvalues. This works very well in practice in cases where there are many samples $T$ and a high signal-to-noise ratio. However, if either of these assumptions is invalid, a phenomenon may occur in which some columns of $\mathbf Q_{\mathrm s}$ are erroneously selected to form $\tilde{\mathbf Q}_{\mathrm n}$. Since is a sum of sum-of-squares polynomials, it can only be zero if each of the sum-of-squares polynomials is zero simultaneously, or equivalently, if each column of $\mathbf Q_{\mathrm n}$ is in the polynomial ideal of the variety, $\mathcal{I} (\mathfrak{V})$. This provides a new criterion for the selection of the noise eigenvectors: choose the eigenvectors which are “closest” (with respect to some appropriate measure) to being in a univariate ideal. Let $f_1,\cdots,f_N$ be the polynomials with coefficients equal to the entries of the eigenvectors of $\mathbf R_{xx}$, and assume that they are ordered such that $f_1,\cdots,f_L$ correspond to the signal eigenvectors and $f_{L+1},\cdots,f_N$ correspond to the noise eigenvectors. In the absence of noise, the polynomial ideal structure enforces that $$\begin{aligned} f_{L+1} &= gh_1 \nonumber \\ f_{L+2} &= gh_2 \nonumber \\ &\vdots \nonumber \\ f_N &= gh_{N-L} \nonumber\end{aligned}$$ where $g \triangleq \prod\limits_{l = 1}^{L} (x-\alpha_l^*)$ and $h_1,\cdots,h_{N-L}$ are coprime polynomials (due to the mutual orthogonality of the eigenvectors, these polynomials must necessarily be coprime). 
In the presence of noise, however, $g$ becomes perturbed and the system of equations becomes $$\begin{aligned} f_{L+1} &= \tilde{g}_1h_1 \nonumber \\ f_{L+2} &= \tilde{g}_2h_2 \nonumber \\ &\vdots \nonumber \\ f_{N} &= \tilde{g}_{N-L}h_{N-L} \nonumber \end{aligned}$$ where $\tilde{g}_i \triangleq \prod\limits_{l = 1}^{L} (x - \alpha_l^* + \epsilon_{l,i})$. Naturally, the identification of $g$ solves both the subspace selection problem and the root selection problem. The root-MUSIC polynomial, therefore, in actuality has no roots in the presence of noise, while the noiseless (ideal) root-MUSIC polynomial has only $L$ roots, which correspond exactly to the target locations. What is meant by the number of roots in the classical root-MUSIC algorithm is the sum of the numbers of roots of the individual sum-of-squares polynomials. However, due to the polynomial ideal structure, $L(N-L)$ of these roots are very close to one another. Thus, instead of choosing the roots which are closest to the unit circle, it makes sense to choose the roots which are closest to each other. Simulation Results {#sec:simulation} ================== To exhibit the capabilities of the proposed method, we present two distinct simulation examples based on solving the problem – in two different scenarios. In both scenarios, the underlying system setup remains the same: a ULA with $N = 20$ antenna elements spaced at multiples of $\lambda_c/2$ acts as a transmitter with the goal of closely matching a desired radiation pattern. As this is a transmit beamforming problem, noise is not present in the model. Array imperfections are not considered in the simulations. In both scenarios, the method based on SDR is offered as a comparison. Throughout all simulations, a transition region of $5^o$ is allowed on either side of the passband region. [**Example 1:**]{} In this example, a uniform prior target distribution over the interval $\Theta = [-15^o,15^o]$ is assumed. 
Thus, the goal is to transmit energy uniformly within the sector $\Theta$ while suppressing the transmitted energy as much as possible elsewhere. To design $\mathbf Q$, null directions are selected outside of the sector $\Theta$ in a roughly uniform spread. The number of transmitted waveforms is $K = 4$, and thus, the number of nulls to be set is $L = N - K = 16$. The variety chosen is $\mathfrak{V} = \{\pm \alpha(75^o),\pm \alpha(60^o),\pm \alpha(50^o),\pm \alpha(43^o),\pm \alpha(34^o),\pm \alpha(33^o), \pm \alpha(26^o),\pm \alpha(22^o)\}^*$. Given the variety $\mathfrak{V}$, the matrix $\mathbf Q$ is constructed in line with the findings of Section \[sec:Null Shape\]. After that, the problem – is solved. Fig. 2 shows the solution of – under the aforementioned system assumptions. The vertical lines mark the locations of the nulls corresponding to the set $\mathfrak{V}$. It is observed that the correspondence for the proposed method is exact. Here, ASL stands for the average sidelobe level, i.e., the average level of the sidelobes in all directions outside of the sector of interest obtained by the proposed method. A comparison between the proposed method and the solution obtained by the SDR approach to the problem – is shown in Fig. 3. The ASL corresponding to the solution of the problem – obtained via SDR is $-4.73$ dB. A gap of $19.8$ dB in terms of ASL is observed, together with a very close agreement between the passbands. Indeed, the peak sidelobe level of the proposed method is $10$ dB below the ASL of the SDR result, while the mean-squared error (MSE) between the solution of the proposed method and the desired beampattern is $229$, compared to $247$ in the case of the SDR-based method. It should be noted that a rank-$K$ approximation of $\mathbf X_{SDR}^\star$ still has to be obtained. 
However, $\mathbf X_{SDR}^\star$ is typically extremely ill-conditioned ($\kappa \approx 10^{10}$), and thus, an exact reconstruction of $\mathbf X_{SDR}^\star$ from a full-rank matrix decomposition is not possible. If constraints of the type are used to restrict the condition number of $\mathbf X$, they will be active, and the solution $\mathbf X_{SDR}^\star$ will thus degrade further. Lastly, there are no optimality guarantees for randomization techniques in the case of general-rank approximations. The comparison provided therefore represents the most favorable comparison possible with the SDR-based method, yet the results of the proposed method are dramatically better. [**Example 2:**]{} In this scenario, the situation is somewhat reversed relative to the previous example. Instead of a uniform prior distribution of targets, we are given a specific direction in which we must not transmit energy, and no other prior information about the environment. In such a case, it is most sensible to transmit energy uniformly in all directions except for the given one. In this scenario, the direction in which we wish to transmit no energy is $\theta = -13^o$. To design $\mathbf Q$, the variety is chosen to be the singleton $\mathfrak{V} = \{\alpha(-13^o)\}$. Note that in this case, $\mathbf Q$ corresponds exactly to the blocking matrix in the Griffiths-Jim beamformer without the assumption of ideal steering, for a source impinging from the direction $-13^o$. Fig. 4 shows the solution of the problem – in comparison to the one provided by the SDR-based method. It can be seen that the methods perform almost exactly the same in terms of passband ripple, null depth, and roll-off. Given the findings of Subsection \[sec:dimreduce\], it is important to investigate both the capability of the proposed method to perform dimensionality reduction given only a single null direction, and the effect that the dimensionality reduction has on the performance of the method. 
To test this capability, we set the variety by which the basis matrix $\mathbf Q'$ will be designed to be $\mathfrak{V}' = \{\alpha(-13^o),\alpha(-13^o),\alpha(-13^o)\}$. Since $\mathfrak{V}' \supset \mathfrak{V}$, we know that $\mathcal{I}(\mathfrak{V}) \supset \mathcal{I}(\mathfrak{V}')$, and thus, $\mathcal{C}(\mathbf Q) \supset \mathcal{C}(\mathbf Q')$, which in turn means that the restriction is contained in the feasible set of the original problem –. Fig. 5 exhibits a performance comparison between the proposed and SDR-based methods. It is observed that the null depth for the proposed method is several orders of magnitude deeper than that of the SDR-based method. As the proposed method only uses $17$ degrees of freedom, compared to the $20$ used by the SDR-based method, the passband ripple increases. In the case of the Griffiths-Jim beamformer of [@GriffithsJim], this would correspond to a lessened ability to minimize the noise variance. Conclusions {#sec:conclusions} =========== A new approach to solving a class of rank-constrained SDP problems has been presented. Instead of relaxing such a non-convex problem to a feasible set of positive semidefinite matrices, we restrict the problem to a space of polynomials whose dimension is equal to the desired rank of the solution. The resulting optimization problem is then convex and can be efficiently and exactly solved, while the solution of the original rank-constrained SDP problem can be exactly recovered from the solution of the restricted one through a simple matrix decomposition. We have shown how this approach can be applied to solving some important signal processing problems with so-called null-shaping constraints, which enforce the desired algebraic structure on the feasible set. Specifically, we have shown how to apply the proposed approach to such signal processing problems as transmit beamspace design in MIMO radar, downlink beamforming design in MIMO communication, and GSC design. 
As the beamforming and parameter estimation problems are known to be conjugate to each other, we have also shown, as a byproduct of the main studies here, this conjugacy in exact terms, and have formulated a new, exact, algebraically motivated criterion for signal/noise subspace identification. Simulation results performed for the problem of rank-constrained beamforming design have shown an exact agreement of the solution with the proposed algebraic structure, as well as significant performance improvements compared to the existing SDR-based method. [1]{} J. M. Wozencraft and I. M. Jacobs, *Principles of Communication Engineering*, New York: John Wiley & Sons Inc., 1965. H. L. Van Trees, *Detection, Estimation, and Modulation Theory. Part IV: Optimum Array Processing.* New York: Wiley-Interscience, 2002. H. Bölcskei, F. Hlawatsch, and H. G. Feichtinger, “Frame-theoretic analysis of oversampled filter banks,” *IEEE Trans. Signal Process.*, vol. 46, no. 12, pp. 3256–3268, Dec. 1998. S. Mallat, *A Wavelet Tour of Signal Processing,* San Diego: Academic Press, 1998. D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” *IEEE Signal Proc. Mag.*, vol. 30, no. 3, pp. 83–98, 2013. A. Sandryhaila and J. M. F. Moura, “Discrete signal processing on graphs,” *IEEE Trans. Signal Process.*, vol. 61, no. 7, pp. 1644–1656, Apr. 2013. A. Sandryhaila and J. M. F. Moura, “Discrete signal processing on graphs: Frequency analysis,” *IEEE Trans. Signal Process.*, vol. 62, no. 12, pp. 3042–3054, June 2014. E. J. Candès and B. Recht, “Exact matrix completion via convex optimization,” *Found. of Comput. Math.*, vol. 9, no. 6, pp. 717–772, Dec. 2009. F. J. Király, J. Theran, and R. Tomioka, “The algebraic combinatorial approach for low-rank matrix completion,” *J. Machine Learning Research*, vol. 16, no. 8, pp. 1391–1436, Aug. 2015. E. J. 
Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” *SIAM J. Imaging Sci.*, vol. 6, no. 1, pp. 199–225, 2013. F. J. Király and M. Ehler, “The algebraic approach to phase retrieval and explicit inversion at the identifiability threshold,” Preprint, 26 pages, arXiv:1402.4053, 2014. M. Bengtsson and B. Ottersten, “Optimal transmit beamforming using semidefinite optimization,” in *Proc. 37th Ann. Allerton Conf. Commun., Contr., Comput.*, USA, Sept. 1999, pp. 987–996. Y. Huang and D. Palomar, “A dual perspective on separable semidefinite programming with applications to optimal downlink beamforming,” [*IEEE Trans. Signal Processing*]{}, vol. 58, no. 8, pp. 4254–4271, Aug. 2010. A. Khabbazibasmenj and S. A. Vorobyov, “Robust adaptive beamforming for general-rank signal model with positive semi-definite constraint via POTDC,” *IEEE Trans. Signal Process.*, vol. 61, no. 23, pp. 6103–6117, Dec. 2013. D. R. Fuhrmann and G. San Antonio, “Transmit beamforming for MIMO radar systems using signal cross-correlation,” *IEEE Trans. Aerospace and Electronic Systems*, vol. 44, no. 1, pp. 171–186, Jan. 2008. A. Hassanien and S. A. Vorobyov, “Phased-MIMO radar: A tradeoff between phased-array and MIMO radars,” *IEEE Trans. Signal Process.*, vol. 58, no. 6, pp. 3137–3151, June 2010. A. Hassanien and S. A. Vorobyov, “Transmit energy focusing for DOA estimation in MIMO radar with colocated antennas,” *IEEE Trans. Signal Process.*, vol. 59, no. 6, pp. 2669–2682, June 2011. A. Khabbazibasmenj, A. Hassanien, S. A. Vorobyov, and M. W. Morency, “Efficient transmit beamspace design for search-free based DOA estimation in MIMO radar,” *IEEE Trans. Signal Process.*, vol. 62, no. 6, pp. 1490–1500, Mar. 2014. D. S. Papailiopoulos and A. G. Dimakis, “Interference alignment as a rank constrained rank minimization,” *IEEE Trans. Signal Process.*, vol. 60, no. 8, pp. 4278–4288, Aug. 2012. O. L.
Frost, “An algorithm for linearly constrained adaptive array processing,” *Proc. IEEE*, vol. 60, no. 8, pp. 926–934, Aug. 1972. L. J. Griffiths and C. W. Jim, “An alternative approach to linearly constrained adaptive beamforming,” *IEEE Trans. Antennas and Propagation*, vol. 30, no. 1, pp. 27–33, Jan. 1982. M. X. Goemans and D. P. Williamson, “Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming,” *Journal of the Association for Computing Machinery*, vol. 42, no. 6, pp. 1115–1145, Nov. 1995. Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems: From its practical deployments and scope of applicability to key theoretical results,” *IEEE Signal Process. Mag.*, vol. 27, no. 3, *Special Issue on Convex Optimization for Signal Processing*, pp. 20–34, May 2010. K. T. Phan, S. A. Vorobyov, N. D. Sidiropoulos, and C. Tellambura, “Spectrum sharing in wireless networks via QoS-aware secondary multicast beamforming,” *IEEE Trans. Signal Process.*, vol. 57, no. 6, pp. 2323–2335, June 2009. M. W. Morency and S. A. Vorobyov, “An algebraic approach to rank-constrained beamforming,” in *Proc. 6th IEEE Int. Workshop Computational Advances in Multi-Sensor Adaptive Processing*, Cancun, Mexico, Dec. 2015, pp. 17–20. A. Barvinok, *A Course in Convexity.* Graduate Studies in Mathematics, American Mathematical Society, vol. 54, 2002. E. B. Vinberg, *A Course in Algebra*. Moscow: Factorial Press, 2001. D. Cox, J. Little, and D. O’Shea, *Ideals, Varieties, and Algorithms*. Third Edition. Springer Science+Business Media, 2007. F. Permenter and P. A. Parrilo, “Partial facial reduction: Simplified, equivalent SDPs via approximations of the PSD cone,” http://arxiv.org/abs/1408.4685. J. Capon, “High-resolution frequency-wavenumber spectrum analysis,” *Proc. IEEE*, vol. 57, pp. 1408–1418, Aug. 1969. M. Shaghaghi and S. A.
Vorobyov, “Subspace leakage analysis and improved DOA estimation with small sample size,” *IEEE Trans. Signal Process.*, vol. 63, no. 12, pp. 3251–3265, June 2015. [^1]: M. W. Morency is with the Dept. Microelectronics, School of Electrical Engineering, Mathematics, and Computer Science, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands; email: [M.W.Morency@tudelft.nl]{}. He was with the Dept. Signal Processing and Acoustics, Aalto University, PO Box 13000, FI-00076 Aalto, Finland. S. A. Vorobyov is with the Dept. Signal Processing and Acoustics, Aalto University, PO Box 13000, FI-00076 Aalto, Finland; email: [svor@ieee.org]{}. Preliminary results of this work in application to the problem of rank-constrained transmit beamspace design in multiple-input multiple-output radar have been presented at the [*IEEE CAMSAP 2015*]{} and have been awarded the first prize Best Student Paper Award. S. A. Vorobyov is the corresponding author. [^2]: Preliminary results for the problem of rank-constrained transmit beamspace design for MIMO radar have been reported in [@CAMSAP15]. [^3]: Note that, although it is common to refer to the operation $\bullet$ as multiplication or addition, the operation $\bullet$ need not be the conventional notion of multiplication or addition. [^4]: A ring with the property that $R\setminus \{0\}$ is an Abelian group with respect to multiplication $\diamond$ is a field (e.g., the set of real numbers). [^5]: This is required in many system design problems. For example, in the transmit beamspace design for MIMO radar problem, which we will describe in detail in the next section, a rank-deficient $\mathbf W$ would correspond to a loss of degrees of freedom in the design of the transmit beampattern.
[**Low-x scaling\ in $\gamma^*p$ total cross sections${}^{*}$**]{}\ [**Dieter Schildknecht**]{}\ Fakultät für Physik, Universität Bielefeld\ D-33501 Bielefeld\ [**Bernd Surrow**]{}\ Max-Planck Institut für Physik (Werner-Heisenberg-Institut)\ Föhringer Ring 6, D-80805 München\ and\ [**Mikhail Tentyukov${}^{**}$**]{}\ Fakultät für Physik, Universität Bielefeld\ D-33501 Bielefeld [**Abstract**]{}\ We show that the experimental data for the total virtual-photon proton cross section, $\sigma_{\gamma^*p} (W^2, Q^2)$, for $x_{bj} \lsim 0.1$ lie on a universal curve, when plotted against $\eta = (Q^2 + m^2_0)/ \Lambda^2(W^2)$, where $\Lambda^2 (W^2) = C_1 (W^2+W^2_0)^{C_2}$ is determined by the parameters $C_1, C_2$ and $W^2_0$. The observed scaling law follows from the generalized-vector-dominance/colour-dipole picture (GVD/CDP) of low-x deep inelastic scattering. ------------------------------------------------------------------------ ${}^*$ Supported by the BMBF, Bonn, Germany, Contract 05 HT9PBA2\ ${}^{**}$ On leave from BLTP JINR, Dubna, Russia The present note will be concerned with deep inelastic scattering (DIS) in the kinematic range of low $x_{\rm bj} \simeq Q^{2}/W^{2} \ll 0.1$ that has been and is being explored at HERA. In particular, we will show that the data [@1; @2; @3; @4; @5; @5a] on photo- and electroproduction, for $x<0.1$, in good approximation, lie on a single curve, when plotted against the dimensionless (‘low-$x$ scaling’) variable $$\eta=\frac{Q^{2}+m_{0}^{2}}{\Lambda^{2}(W^{2})}.$$ The value of the threshold mass, $m_{0}<m_{\rho^{0}}$, as well as the parameters $C_{1}$, $C_{2}$ and $W_{0}^{2}$ contained in $$\Lambda^{2}(W^{2})=C_{1}(W^{2}+W_{0}^{2})^{C_{2}}$$ are fixed by the experimental data themselves. 
We will proceed in two steps, the first one being a purely empirical analysis of the data, while in the second step, we will show how the observed behaviour of $\sigma_{\gamma^* p}(W^2, Q^2)$ can be understood in terms of generalized vector dominance (GVD) [@6; @7][^1] or, equivalently [@8; @8a], the colour-dipole picture (CDP) [@9]. 1. In the first step, the model-independent phenomenological analysis of the experimental data, we assume the analytic form of the scaling variable $\eta$ according to (1) and (2), and, in addition, the existence of a continuous function of $\eta$ that is supposed to describe the data for $\sigma_{\gamma^* p} (W^2,Q^2)$, when these are plotted against $\eta$. For the technical analysis, we assume that this continuous function, without much loss of generality, may be represented by a piecewise linear function of $\eta$. This assumption allows us to perform a fit that determines the values of the parameters[^2] $m_{0}^{2}$, $C_{2}$ and $W_{0}^{2}$ simultaneously with the values of the piecewise linear function $\sigma_{\gamma^{*} p}(\eta )$ at a number of points, $\eta_i (i=1,...,N)$, of the variable $\eta$. 2. In the second step, we show how an approximate scaling law in terms of the variable $\eta$ follows from GVD or the CDP. We restrict ourselves to presenting only the essential theoretical assumptions and conclusions. For a detailed account, we have to refer to a forthcoming paper [@10]. Turning to step 1, in fig.1, we show the result of the model-independent analysis. Imposing the kinematic restrictions of $x \leq 0.1$ and $Q^{2} \leq 1000\,$GeV$^{2}$, all available experimental data [@1; @2; @3; @4; @5; @5a] on photo- and electroproduction are indeed seen to lie on a smooth curve that is approximated by the piecewise linear fit curve.
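The piecewise-linear representation used in the model-independent fit can be sketched in a few lines. The following is an illustrative sketch only: the node positions `eta_nodes` and values `sigma_vals` are hypothetical placeholders, not the fitted quantities of the paper.

```python
# Sketch of a piecewise linear representation, as used in the
# model-independent fit: sigma(eta) is linearly interpolated between
# free node values sigma_i at nodes eta_i, which a fit would adjust.
# Node positions and values here are hypothetical, for illustration.
import bisect

def piecewise_linear(nodes, values, x):
    """Linear interpolation on sorted nodes; extrapolates at the ends."""
    i = bisect.bisect_right(nodes, x)
    i = min(max(i, 1), len(nodes) - 1)
    x0, x1 = nodes[i - 1], nodes[i]
    y0, y1 = values[i - 1], values[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

eta_nodes = [0.1, 1.0, 10.0]        # hypothetical eta_i
sigma_vals = [100.0, 20.0, 2.0]     # hypothetical sigma(eta_i)
print(piecewise_linear(eta_nodes, sigma_vals, 0.55))
```

In the actual fit, the values at the nodes are free parameters, determined simultaneously with $m_{0}^{2}$, $C_{2}$ and $W_{0}^{2}$.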
The parameters that determine the scaling variable $\eta$ were found to be given by $$\begin{aligned} m_{0}^{2} & = & 0.125 \pm 0.027\; {\rm GeV^{2}}, \nonumber \\ C_2 & = & 0.28 \pm 0.06, \nonumber \\ W_{0}^{2} & = & 439 \pm 94\; {\rm GeV^{2}}, \end{aligned}$$ with a $\chi^{2}$ per degree of freedom ($ndf$) of $\chi^2/ndf=1.15$. We add the remark that an analogous procedure applied to the experimental data, without a restriction on $x$, does [*not*]{} lead to a universal curve. Likewise, restricting oneself to only those data points that belong to $x>0.1$, [*no*]{} universal curve is obtained either; the fitting procedure leads to entirely unacceptable results on the quality of the fit as quantified by the value of $\chi^{2}$ per degree of freedom. We turn to step 2, the theoretical interpretation of the above results in terms of GVD or the CDP. Both pictures have in common the basic concept of virtual transitions of the photon to $q\bar{q}$ (or $q\bar{q}g$) vector states with subsequent diffractive scattering from the proton. Provided the configuration of the $q\bar{q}$ states the photon is coupled to, and the generic structure of two-gluon exchange [@11] in the scattering from the proton, are taken into account, (off-diagonal) GVD becomes identical [@8] to the CDP. While GVD is conventionally formulated in terms of integrals over the masses of the propagating ($q\bar{q}$) vector states and the two-dimensional momentum transfer (carried by the gluon), the CDP involves integration over the product of the square of the photon wave function and the ($q\bar{q}$)p (‘colour-dipole’) cross-section in transverse position space.
For the subsequent discussion, it will be useful to note the relationship [@8] between the colour-dipole cross-sections in (two-dimensional) transverse position space and in momentum-transfer space that guarantees the generic structure of two-gluon exchange, $$\sigma_{(q \bar q)p} (r^2_\bot , z, W^2) = \int d^2 l_\bot \tilde\sigma_{(q \bar q)p} (l^2_\bot , z, W^2) (1 - \exp (-i \vec l_\bot \cdot \vec r_\bot)).$$ Indeed, from (4), $$\sigma_{(q\bar q)p} (r^2_\bot , z, W^2) \longrightarrow \cases{ 0, & for $r_\bot \rightarrow 0$, \cr \int d^2 l_\bot \tilde\sigma_{(q\bar q)p} (l^2_\bot , z, W^2), & for $r_\bot \rightarrow \infty$,\cr}$$ thus fulfilling what has been called ‘colour transparency’ [@9] and what indeed guarantees the generic structure of two-gluon exchange. This is explicitly seen [@8], when representing $\sigma_{\gamma^{*} p}(W^{2}, Q^{2})$ in momentum space. In connection with (4) and (5), we remind the reader of the notation being employed: the configuration of $q \bar q$ states is described by the (transverse) interquark separation, $r_\bot$, and the (light-cone) variable, $z$, that is related to the $q \bar q$-rest-frame angle by $4 z (1-z) \equiv \sin^2 \theta$. From (5), $\tilde{\sigma}_{(q\bar{q})p}(l^{2}_\bot,z,W^{2})$ should vanish sufficiently rapidly to yield a convergent integral. Accordingly, it may be suggestive to assume a Gaussian in $l^{2}_\bot$ for $\tilde{\sigma}_{(q\bar{q})p}(l^{2}_\bot ,z,W^{2})$. Actually, explicit calculations become much simpler if, without much loss of generality, instead of a Gaussian a $\delta$-function located at a finite value of $l^{2}_\bot$ is used [@8] as an effective description of $\tilde{\sigma}_{(q\bar{q})p}(l^{2}_\bot ,z,W^{2})$.
Accordingly, we will adopt the simple ansatz, $$\tilde{\sigma}_{(q\bar{q})p}(l^{2}_\bot,z,W^{2})= \sigma^{(\infty)}(W^2) \frac{1}{\pi} \delta (l^2_\bot - z (1 - z) \Lambda^2 (W^2)).$$ This ansatz associates with any given energy, $W$, an (effective) fixed value of (two-dimensional gluon) momentum transfer, $|\vec l_\bot |$, determined by the so far unspecified function $\Lambda (W^2)$. The ansatz (6) also incorporates the assumption that ‘aligned’, $z\rightarrow 0$, configurations [@12] of the $(q\bar{q})$ pair absorb vanishing, $l^{2}_\bot \rightarrow 0$, gluon momentum. For the subsequent interpretation of our results, we note the explicit form of the transverse-position-space colour-dipole cross section, obtained by substituting (6) into (4), $$\begin{aligned} & & \sigma_{(q\bar{q})p}(r^{2}_\bot,z,W^{2}) = \sigma^{(\infty)}(W^2)\left( 1 - J_0 \left( r_\bot \cdot \sqrt{z(1-z)}\Lambda (W^2)\right)\right) \\ &\cong & \sigma^{(\infty)} (W^2) \cases{ \frac{1}{4} z (1-z) \Lambda^2 (W^2) r^2_\bot , & for $\frac{1}{4} z (1-z)\Lambda^2 (W^2) r^2_\bot \longrightarrow 0$, \cr & \cr 1 & for $\frac{1}{4} z (1 - z)\Lambda^2 (W^2) r^2_\bot \longrightarrow\infty$. \cr} \nonumber\end{aligned}$$ The limit of $\sigma^{(\infty)}(W^{2})$ in the second line of the approximate equality in (7) actually stands for an oscillating behaviour of the Bessel function, $J_{0}(r_\bot\sqrt{z(1-z)}\Lambda(W^{2}))$, around $\sigma^{(\infty)}(W^{2})$, when its argument tends towards infinity. Apart from these oscillations, the behaviour of $\sigma_{(q\bar{q})p}(r^{2}_\bot,z,W^{2})$ in (7) is identical to the one obtained, if the $\delta$-function in (6) is replaced by a Gaussian. Concerning the high-energy behaviour of $\sigma_{(q\bar{q})p}(r^{2}_\bot, z,W^{2})$, we note that it is consistent with unitarity restrictions, provided a decent high-energy behaviour is imposed on $\sigma^{(\infty)}(W^{2})$. We stress that the ansatz (6) is by far not as specific as it might appear at first sight. 
It constitutes a simple effective realization, compare (7), of the underlying requirements of colour transparency, (4), (5), and hadronic unitarity for the colour-dipole cross section. The unitarity requirement enters via the decent high-energy behaviour of $\sigma^{(\infty)}(W^2)$. Referring to [@10] for details, we note that the ansatz (6) allows one to simplify the GVD expression [@8] for $\sigma_{\gamma^{*}p}(W^{2}, Q^{2})$ (by integrating over $d^{2}l$ and $dz$) to become $$\sigma_{\gamma^{*}p}(W^{2},Q^{2})= \sigma_{\gamma p} (W^2) \frac{(I_T + I_L)}{I_T |_{Q^2 = 0}} ,$$ where $\sigma_{\gamma p}(W^{2})$ denotes the photoproduction cross section, $\sigma_{\gamma^{*}p}(W^{2},Q^{2}=0)$, and the dimensionless quantities $I_{T}$ and $I_{L}$ contain integrations over the squares of the ingoing and outgoing masses, $M$ and $M^{'}$, of the $q\bar{q}$ states coupled to the ingoing and outgoing photon in the (virtual) forward-Compton-scattering amplitude. Expression (8) contains the requirement of a smooth transition of $\sigma_{\gamma^* p} (W^2, Q^2)$ to photoproduction; it allowed us to eliminate $\sigma^{(\infty)}(W^{2})$ in terms of $\sigma_{\gamma p}(W^{2})$.
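As a quick numerical cross-check of the colour-transparency behaviour in (7), one can evaluate $1-J_0$ directly. This is an illustrative sketch only (standard library, with arbitrary values chosen for $z$ and $\Lambda$), not part of the analysis:

```python
# Numerical cross-check of eq. (7): with the delta-function ansatz (6),
#   sigma_(qqbar)p / sigma_inf = 1 - J0(r * sqrt(z(1-z)) * Lambda),
# which should vanish like (1/4) z(1-z) Lambda^2 r^2 as r -> 0
# ("colour transparency") and approach 1, up to oscillations, as r grows.
import math

def bessel_j0(x, n=2000):
    """J0(x) from its integral representation (midpoint rule, stdlib only)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def dipole_ratio(r, z, lam):
    """sigma_(qqbar)p / sigma^(inf) according to eq. (7)."""
    return 1.0 - bessel_j0(r * math.sqrt(z * (1.0 - z)) * lam)

z, lam = 0.5, 2.0   # illustrative values; lam plays the role of Lambda(W^2)
r = 0.01            # small transverse separation
# Small-r value versus the quadratic colour-transparency limit of (7):
print(dipole_ratio(r, z, lam), 0.25 * z * (1 - z) * lam**2 * r**2)
```

The two printed numbers agree to high accuracy, while for large $r$ the ratio oscillates around unity, as stated below (7).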
Explicitly, the integrals $I_{T}$ and $I_{L}$, which are related to the transverse and longitudinal contributions to $\sigma_{\gamma^{*}p}(W^2, Q^2)$, respectively, are given by[^3] $$\begin{aligned} I_{T} \left( \frac{Q^2}{\Lambda^2 (W^2)}, \frac{m^2_0}{\Lambda^2(W^2)} \right) & &\hspace*{-0.5cm} = {1 \over \pi} \int^\infty_{m^2_0} dM^2 \int^{(M+\Lambda (W^2))^2}_{(M-\Lambda (W^2))^2} dM^{\prime 2}\omega (M^2, M^{\prime 2}, \Lambda^2 (W^2))\nonumber \\ & &\hspace*{-1.5cm} \times \left[ \frac{M^2}{(Q^2+M^2)^2}- \frac{M^{\prime 2}+M^2-\Lambda^2 (W^2)} {2(Q^2+M^2)(Q^2+M^{\prime 2})} \right],\end{aligned}$$ and $$\begin{aligned} I_{L} \left( \frac{Q^2}{\Lambda^2(W^2)}, \frac{m^2_0}{\Lambda^2(W^2)} \right) & &\hspace*{-0.5cm}= {1 \over \pi} \int^\infty_{m^2_0} dM^2 \int^{(M+\Lambda (W^2))^2}_{(M-\Lambda (W^2))^2} dM^{\prime 2}\, \omega (M^2, M^{\prime 2}, \Lambda^2 (W^2))\nonumber\\ & &\hspace*{-0.5cm} \times \left[ \frac{Q^2}{(Q^2+M^2)^2}- \frac{Q^2} {(Q^2+M^2)(Q^2+M^{\prime 2})} \right],\end{aligned}$$ where the integration measure $\omega (M^2, M^{\prime 2}, \Lambda^2 (W^2))$ fulfils $$\frac{1}{\pi} \int^{(M+\Lambda (W^2))^2}_{(M-\Lambda (W^2))^2} d M^{\prime 2} \omega (M^2, M^{\prime 2}, \Lambda^2 (W^2))= 1 .$$ The explicit expression for $\omega(M^2, M^{\prime 2}, \Lambda^2 (W^2))$ is given in [@8]. It is the form (8) to (10) of the theory that explicitly displays the structure [@6; @7] of (off-diagonal) GVD. The appearance of $\Lambda(W^{2})$ in the integration limits over $dM^{\prime 2}$ is worth noting. One expects that the effective mass range for off-diagonal transitions, $M^{\prime 2} \neq M^{2}$, should increase with increasing energy, $W$, thus implying that $\Lambda (W^2)$ should not be constant, but should increase with increasing energy. We were able to derive explicit analytic expressions [@10] for the integrals in (9) and (10).
In the present context, we only note that, in very good approximation, the sum of $I_{T}$ and $I_{L}$ depends only on the dimensionless combination (1). While the general explicit expressions for $I_{T}$ and $I_{L}$ are complicated, in the most important limits, they become simple. Indeed, $$I = I_{T}+I_{L} \cong \cases{ \log \left( \frac{\Lambda^2 (W^2)}{Q^2 + m^2_0} \right), & for $\Lambda^2 (W^2) \gg Q^2 + m^2_0$, \cr & \cr \frac{1}{2} \frac{\Lambda^2 (W^2)}{Q^2+m^2_0} , & for $\Lambda^2 (W^2) \ll Q^2 + m^2_0$. \cr}$$ In addition, $I_L$ vanishes as $Q^2 \rightarrow 0$. As the moderate rise of photoproduction, $\sigma_{\gamma p}(W^{2})$, with energy and the moderate logarithmic rise of the denominator in (8) approximately cancel each other, according to (12), we have indeed obtained approximate scaling of $\sigma_{\gamma^* p} (W^2, Q^2)$ in the variable $\eta$ defined in (1), i.e. $\sigma_{\gamma^* p} (W^2, Q^2) \simeq \sigma_{\gamma^* p}(\eta)$. Moreover, the theoretically expected increase of $\Lambda^2 (W^2)$ with increasing energy coincides with the above result, (3), of the phenomenological analysis of the experimental data. The theoretical results (8) to (12) for $\sigma_{\gamma^* p}(W^2,Q^2)$ were obtained by incorporating the $q\bar{q}$ configuration in the virtual photon, as known from $e^{+}e^{-}$ annihilation, as well as the generic structure of two-gluon exchange into the ansatz for the virtual Compton-forward-scattering amplitude at low $x$. As stressed before, the simplifying $\delta$-function ansatz (6) is to be seen as an effective realization of the generic two-gluon exchange structure, combined with hadronic unitarity, without much loss of generality. We turn to the analysis of the experimental data in terms of the theoretical results in (8) to (12).
This essentially amounts to introducing an empirically satisfactory parameterization for the photoproduction cross-section, $\sigma_{\gamma p}(W^{2})$, and to determining the threshold mass, $m_{0}^{2}$, and the energy dependence of $\Lambda^{2}(W^{2})$ in fits to the experimental data. Adopting a Regge parameterization for $\sigma_{\gamma p}(W^{2})$, $$\sigma_{\gamma p}(W^{2})= A_R \cdot (W^2)^{\alpha_R -1} + A_P \cdot (W^2)^ {\alpha_P-1},$$ where $W^2$ is to be inserted in units of GeV$^2$ and [@13] $$\begin{aligned} A_R & = & 145.0 \pm 2.0\; {\rm \mu b} , \nonumber \\ \alpha_R & = & 0.5 \\ A_P & = & 63.5 \pm 0.9\; {\rm \mu b} , \nonumber \\ \alpha_P & = & 1.097 \pm 0.002 , \nonumber \end{aligned}$$ we again proceed in two steps. In a first step, we do not impose any specific form for the functional dependence of $\Lambda^{2}(W^{2})$ except for the (technically necessary) assumption that $\Lambda^2 (W^2)$ can be represented by a piecewise linear function of $W^2$. A fit to the experimental data on $\sigma_{\gamma^* p} (W^2, Q^2)$ then determines the values of $\Lambda^2 (W^2_i)$, with $i = 1, ..., N$, that define the piecewise linear function, $\Lambda^{2}(W^{2})$. In fig.2, we show the result of this procedure, $\Lambda^2 (W^2_i)$ with $i=1, ..., 46$, including errors, obtained from the fit to the experimental data. For the fit, the restrictions of $x \le 0.1$ and $Q^2 \le 100$ GeV$^2$ were applied to the data. In a second step, we adopt the power-law ansatz (2), and again perform a fit to the data for $\sigma_ {\gamma^* p}(W^2, Q^2)$. The agreement of the resulting curve for $\Lambda^{2}(W^{2})$ with the piecewise linear fit result in fig.2 shows that the power-law ansatz for $\Lambda^{2}(W^{2})$ is borne out by the data within the theoretical framework for $\sigma_{\gamma^{*} p}$ in (8) to (11), that specifies the $Q^{2}$ dependence. 
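For orientation, the Regge parameterization (13) with the central values of (14) can be evaluated directly. The following is an illustrative sketch (cross sections in $\mu$b, $W^2$ in GeV$^2$), not part of the fitting machinery:

```python
# Photoproduction cross section sigma_gamma-p(W^2) from the Regge
# parameterization (13), using the central values of (14).
# W^2 in GeV^2, result in microbarns; illustrative evaluation only.

A_R, ALPHA_R = 145.0, 0.5      # Reggeon term
A_P, ALPHA_P = 63.5, 1.097     # Pomeron term

def sigma_gamma_p(w_sq):
    """Eq. (13): soft Reggeon fall-off plus slow Pomeron rise."""
    return A_R * w_sq ** (ALPHA_R - 1.0) + A_P * w_sq ** (ALPHA_P - 1.0)

# The soft rise between W = 20 GeV and W = 200 GeV:
print(sigma_gamma_p(20.0 ** 2), sigma_gamma_p(200.0 ** 2))
```

The slow growth over an order of magnitude in $W$ illustrates the "moderate rise of photoproduction with energy" that enters the scaling argument following (12).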
From the fit to the data under the restriction of $x\leq 0.01$ and $Q^{2}\leq 100\,$GeV$^{2}$, we obtained $$\begin{aligned} m_{0}^{2} & = & 0.16 \pm 0.01\; {\rm GeV^{2}}, \nonumber \\ C_{1} & = & 0.34 \pm 0.05, \\ C_{2} & = & 0.27 \pm 0.01 , \nonumber \\ W_{0}^{2} & = & 882 \pm 246\; {\rm GeV^{2}}, \nonumber\end{aligned}$$ with $\chi^{2}/ndf=1.15$. The result (15), in particular the value of the exponent $C_2$ that determines the rise of $\Lambda^2(W^2)$ with energy, is in reasonable agreement with the result (3) of the model-independent analysis[^4]. This implies that the $Q^2$ dependence of the data is correctly reproduced by the GVD/CDP in (8) to (12). In other words, our procedure, which combines the model-independent analysis of the data with the one based on the GVD/CDP, has provided us with successful tests of the $W^2$- and $Q^2$-dependence that are independent of each other. According to (7), the $W^2$ dependence of $\Lambda^2(W^2)$, displayed in fig.2, determines the energy dependence of the colour-dipole cross section. The DIS experiments at low x directly measure this quantity, in particular for $Q^2 \ge \Lambda^2(W^2)$. We have verified that the replacement of the power-law ansatz (2) for $\Lambda^2(W^2)$ by a logarithmic one, $$\Lambda^2(W^2)=C_1^\prime \log (W^2/{W^\prime}^2_0+C_2^\prime),$$ leads to an equally good fit to the data. One finds $$\begin{aligned} m_{0}^{2} & = & 0.157 \pm 0.009\; {\rm GeV^{2}}, \nonumber \\ C_1^\prime & = & 1.64 \pm 0.14, \\ C_2^\prime & = & 4.1 \pm 0.4 , \nonumber \\ {W^\prime}^2_0 & = & 1015 \pm 334\; {\rm GeV^{2}}, \nonumber\end{aligned}$$ with $\chi^{2}/ndf=1.19$. In fig.3, we show an explicit comparison of the experimental data with the GVD/CDP predictions. The (approximate) coincidence of the theoretical predictions for various values of $W^2$ demonstrates the scaling of the theory in terms of the low-x scaling variable $\eta$.
Figure 3a, with the restrictions $x<0.01$ and $Q^2<100$ GeV$^2$ imposed on the data (as in the above fit), shows the good agreement between theory and experiment. In fig.3b, we show the deviations between theory and experiment, when the data for $x\ge 0.01$ are plotted.[^5] Finally, fig.4a demonstrates agreement of the GVD/CDP with experiment in a representation of $\sigma_{\gamma^* p} (W^2, Q^2)$ against $W^2$ for fixed values of $Q^2$. A subsample of all data used in the fit is presented for illustration. The explicit analytical form of the theoretical expression for the cross-section, $\sigma_{\gamma^{*} p} (W^2,Q^2)$, allows us to investigate its behaviour at energies far beyond the ones being explored at HERA. According to (8) with (12), at any fixed $Q^{2}$, we have a strong power-like increase with energy, as $\Lambda^{2}(W^{2})$, while, finally, for sufficiently large energy, the power law turns into a logarithmic rise, implying an energy behaviour as in photoproduction. This is explicitly seen in fig.4b. The approach to the asymptotic $W$ dependence becomes slower, if the power-law ansatz for $\Lambda^2(W^2)$ in (2) is replaced by the logarithmic one in (16). The transition from a strong power-law (or ‘hard’) rise with energy to the soft rise in photoproduction is obviously related to the behaviour of the dipole cross-section (7), which in turn is largely dictated by the generic two-gluon exchange structure (4), (5) and the unitarity restriction on the growth of $\sigma^{(\infty)}(W^2)$. At any (sufficiently small) fixed value of $r_\bot$ (corresponding to an approximately fixed value of $Q^{2}$), the dipole cross-section rises rapidly, as $\Lambda^{2}(W^{2})$, to finally settle down to the limiting value of $\sigma^{(\infty)}(W^{2})$, according to the second line on the right-hand side in (7). As seen in fig.4b, the scale for this transition to occur is extremely large, however, unless $Q^{2}$ is very small.
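The location of this transition can be illustrated numerically: with the central fitted values of (15), one can check where $\eta$ drops below unity for a given $Q^2$, i.e. where, by (12), the power-like rise turns logarithmic. A sketch with an illustrative choice $Q^2 = 10\,$GeV$^2$:

```python
# Sketch: where does the hard-to-soft transition occur? Using eq. (1),
# eq. (2) and the central fitted values of (15), the scaling variable
# eta = (Q^2 + m0^2)/Lambda^2(W^2) crosses 1 only at very large W^2
# unless Q^2 is small, in line with the discussion of fig.4b.
M0_SQ, C1, C2, W0_SQ = 0.16, 0.34, 0.27, 882.0   # central values from (15)

def lambda_sq(w_sq):
    """Effective momentum-transfer scale Lambda^2(W^2), eq. (2), in GeV^2."""
    return C1 * (w_sq + W0_SQ) ** C2

def eta(q_sq, w_sq):
    """Low-x scaling variable, eq. (1)."""
    return (q_sq + M0_SQ) / lambda_sq(w_sq)

q_sq = 10.0  # GeV^2, illustrative
# eta > 1 (power-like regime of (12)) at HERA-like W^2 = 10^4 GeV^2,
# while eta < 1 (logarithmic regime) only at W^2 = 10^8 GeV^2:
print(eta(q_sq, 1.0e4), eta(q_sq, 1.0e8))
```

The crossover energy grows rapidly with $Q^2$, which is why the settling to the photoproduction-like behaviour lies far beyond HERA energies except at very small $Q^2$.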
It appears that even THERA energies of order $W^2 \cong 10^6$ GeV$^2$ may be too small to see this transition in the energy dependence, except at sufficiently small $Q^2$. The necessary extension of the present investigation to a careful treatment of charm and of the diffractively produced final states in general is beyond the scope[^6] of the present work. The closest in spirit to the present investigation is the work by Forshaw, Kerley and Shaw [@a] and by Golec-Biernat and Wüsthoff [@b][^7]. While we agree with the general picture of low-x DIS drawn by these authors, there are numerous essential differences. In our treatment, the dependence of the colour-dipole cross section on the configuration variable $z$ is taken into account, in contrast to refs.[@a] and [@b]. Our dipole cross section does not depend on $Q^2$, in agreement with the mass-dispersion relations (9), (10), but in distinction from the $Q^2$ (or rather $x$) dependence in ref.[@b]. Decent high-energy behaviour at any $Q^2$ (“saturation”) follows from the underlying assumptions of colour transparency (the generic two-gluon exchange structure) and hadronic unitarity, in distinction from the two-pomeron ansatz in ref.[@a] and in ref.[@d], which needs modification at energies beyond the ones explored at HERA[^8]. In conclusion, a unique picture, the GVD/CDP, emerges for DIS in the low-x diffraction region. In terms of the (virtual) Compton-forward-scattering amplitude, the photon virtually dissociates into $(q \bar q)$ vector states that propagate and undergo diffractive scattering from the proton, as conjectured in GVD a long time ago. Our knowledge of the photon-$(q \bar q)$ transition from $e^+ e^-$ annihilation, together with the gluon-exchange dynamics from QCD, allows for a much more detailed theoretical description of $\sigma_{\gamma^* p}(W^2, Q^2)$ than was available at the time when GVD was introduced.
In terms of the GVD/CDP, experiments on DIS at low x measure the energy dependence of the $(q \bar q)$/colour-dipole proton cross section, $\sigma_{(q \bar q)p} (r^2_\bot , z, W^2)$. A strong energy dependence of this cross section for small interquark separation (not entirely unexpected within the GVD/CDP) is extracted from the data at large $Q^2$. The combination of colour transparency (generic two-gluon-exchange structure) with hadronic unitarity then implies that for any interquark separation the strong increase of the colour-dipole cross section, at sufficiently high energy, will settle down to the smooth increase of purely hadronic interactions. As a consequence, also the strong increase with energy of $\sigma_{\gamma^* p} (W^2, Q^2)$ at large $Q^{2}$ will eventually reach the behaviour observed in $(Q^2 = 0)$ photoproduction and hadron-hadron interactions. \ [*Acknowledgement*]{}\ One of us (D.S.) thanks the theory group of the Max-Planck-Institut für Physik in München, where part of this work was done, for warm hospitality. Thanks to Wolfgang Ochs for useful discussions, and particular thanks to Leo Stodolsky for his insistence that there should exist a simple scaling behaviour in DIS at low x. We thank G. Cvetic for useful discussions and collaboration during the early stages of this work, and we also thank John Dainton and A.B. Kaidalov for useful discussions at ‘Diffraction2000’ in Cetraro (Sept. 2–7), where this work was first presented. [99]{} ZEUS 94: ZEUS Collab., M. Derrick et al., [*Z. f. Physik*]{} C72 (1996) 399.\ ZEUS SVTX 95: ZEUS Collab., J. Breitweg et al., [*Eur. Phys. J.*]{} C7 (1999) 609.\ ZEUS BPC 95: ZEUS Collab., J. Breitweg et al., [*Phys. Lett.*]{} B407 (1997) 432.\ ZEUS BPT 97: ZEUS Collab., J. Breitweg et al., [*Phys. Lett.*]{} B487 (2000) 53. H1 SVTX 95: H1 Collab., C. Adloff et al., [*Nucl. Phys.*]{} B497 (1997) 3.\ H1 94: H1 Collab., S. Aid et al., [*Nucl. Phys.*]{} B470 (1996) 3. E665 Collab., M.R. Adams et al., [*Phys.
Rev.*]{} D54 (1996) 3006. NMC Collab., M. Arneodo et al., [*Nucl. Phys.*]{} B483 (1997) 3. BCDMS Collab., A.C. Benvenuti et al., [*Phys. Lett.*]{} B223 (1989) 485. ZEUS Collab., M. Derrick et al., [*Z. Phys.*]{} C63 (1994) 391;\ H1 Collab., S. Aid et al., [*Z. Phys.*]{} C69 (1995) 27. J.J. Sakurai and D. Schildknecht, [*Phys. Lett.*]{} [**40B**]{} (1972) 121;\ B. Gorczyca and D. Schildknecht, [*Phys. Lett.*]{} [**47B**]{} (1973) 71. H. Fraas, B.J. Read and D. Schildknecht, [*Nucl. Phys.*]{} [**B86**]{} (1975) 346;\ R. Devenish and D. Schildknecht, [*Phys. Rev.*]{} D19 (1976) 93. V.N. Gribov, [*Sov. Phys. JETP*]{} 30 (1970) 709. G. Cvetic, D. Schildknecht, A. Shoshi, [*Eur. Phys. J.*]{} C13 (2000) 301;\ [*Acta Physica Polonica*]{} B30 (1999) 3265;\ D. Schildknecht, Contribution to DIS2000 (Liverpool, April 2000), hep-ph/0006153. L. Frankfurt, V. Guzey, M. Strikman, [*Phys. Rev.*]{} D58 (1998) 094093. N.N. Nikolaev, B.G. Zakharov, [*Z. Phys.*]{} C49 (1991) 607. G. Cvetic, D. Schildknecht, B. Surrow, M. Tentyukov, in preparation. J. Gunion, D. Soper, [*Phys. Rev.*]{} D15 (1977) 2617. J.D. Bjorken, hep-ph/9601363. ZEUS Collab., J. Breitweg et al., [*Eur. Phys. J.*]{} C7 (1999) 609;\ B. Surrow, [*Eur. Phys. J. direct*]{} C2 (1999) 1. D. Schildknecht, G. Schuler and B. Surrow, [*Phys. Lett.*]{} B449 (1999) 328;\ D. Schildknecht, [*Nucl. Phys.*]{} B (Proc. Suppl.) 79 (1999) 195. J. Forshaw, G. Kerley and G. Shaw, [*Phys. Rev.*]{} D60 (1999) 074012; hep-ph/0007257. K. Golec-Biernat and M. Wüsthoff, [*Phys. Rev.*]{} D59 (1999) 014017; [*Phys. Rev.*]{} D60 (1999) 114023. A.M. Stasto, K. Golec-Biernat and J. Kwiecinski, hep-ph/0007192. M.F. McDermott, DESY00-126. A. Donnachie and P.V. Landshoff, [*Phys. Lett.*]{} B437 (1998) 408. [^1]: Compare also ref.[@7a] for photo- and electroproduction off nuclei.
[^2]: In the model-independent fit, the parameter $C_1>0$ is an arbitrary input parameter, as with a scaling function $f(\eta)$ also $f(C_1^{-1}\eta)$ describes the data. The model-independent fit was carried out for several values of $C_1$ around $C_1=0.34$, as uniquely determined in the fit based on the GVD/CDP to be described below. [^3]: In (9) and (10), we have suppressed an additive (compensation) term that assures that the integration over $dM^{\prime 2}$ in the off-diagonal term has the correct lower limit of $M^{\prime 2} \ge m^2_0$, compare ref.[@8]. [^4]: When plotted, including errors, there is a significant overlap of $\Lambda^2(W^2)$ with the parameters from (3) and (15), respectively. [^5]: The fact that the model-independent analysis yields scaling for $x<0.1$, while fig.3b demonstrates violations for $x>0.01$, needs further investigation beyond the scope of the present note. [^6]: Compare, however, ref.[@15] for a treatment of vector-meson production. [^7]: While the present work was in progress, we became aware of ref.[@c], where the observation of a scaling behaviour of $\sigma_{\gamma^* p}(W^2, Q^2)$ within the framework of ref.[@b] is reported. [^8]: For additional references and a report on a recent discussion meeting on the CDP, we refer to ref.[@e].
--- author: - 'F. Aharonian' - 'A.G. Akhperjanian' - 'A.R. Bazer-Bachi' - 'M. Beilicke' - 'W. Benbow' - 'D. Berge' - 'K. Bernlöhr' - 'C. Boisson' - 'O. Bolz' - 'V. Borrel' - 'I. Braun' - 'F. Breitling' - 'A.M. Brown' - 'P.M. Chadwick' - 'L.-M. Chounet' - 'R. Cornils' - 'L. Costamante' - 'B. Degrange' - 'H.J. Dickinson' - 'A. Djannati-Ataï' - 'L.O’C. Drury' - 'G. Dubus' - 'D. Emmanoulopoulos' - 'P. Espigat' - 'F. Feinstein' - 'G. Fontaine' - 'Y. Fuchs' - 'S. Funk' - 'Y.A. Gallant' - 'B. Giebels' - 'S. Gillessen' - 'J.F. Glicenstein' - 'P. Goret' - 'C. Hadjichristidis' - 'M. Hauser' - 'G. Heinzelmann' - 'G. Henri' - 'G. Hermann' - 'J.A. Hinton' - 'W. Hofmann' - 'M. Holleran' - 'D. Horns' - 'A. Jacholkowska' - 'O.C. de Jager' - 'B. Khélifi' - 'Nu. Komin' - 'A. Konopelko' - 'I.J. Latham' - 'R. Le Gallou' - 'A. Lemière' - 'M. Lemoine-Goumard' - 'N. Leroy' - 'T. Lohse' - 'J.M. Martin' - 'O. Martineau-Huynh' - 'A. Marcowith' - 'C. Masterson' - 'T.J.L. McComb' - 'M. de Naurois' - 'S.J. Nolan' - 'A. Noutsos' - 'K.J. Orford' - 'J.L. Osborne' - 'M. Ouchrif' - 'M. Panter' - 'G. Pelletier' - 'S. Pita' - 'G. Pühlhofer' - 'M. Punch' - 'B.C. Raubenheimer' - 'M. Raue' - 'J. Raux' - 'S.M. Rayner' - 'A. Reimer' - 'O. Reimer' - 'J. Ripken' - 'L. Rob' - 'L. Rolland' - 'G. Rowell' - 'V. Sahakian' - 'L. Saugé' - 'S. Schlenker' - 'R. Schlickeiser' - 'C. Schuster' - 'U. Schwanke' - 'M. Siewert' - 'H. Sol' - 'D. Spangler' - 'R. Steenkamp' - 'C. Stegmann' - 'J.-P. Tavernet' - 'R. Terrier' - 'C.G. Théoret' - 'M. Tluczykont' - 'G. Vasileiadis' - 'C. Venter' - 'P. Vincent' - 'H.J. Völk' - 'S.J. Wagner' bibliography: - 'VelaJr.bib' date: 'Received ? / Accepted ?' title: 'Detection of TeV [$\gamma$]{}-ray Emission from the Shell-Type Supernova Remnant  with H.E.S.S.' --- Introduction ============  (also called ) is a young shell-type supernova remnant (SNR) in the line of sight to the Vela SNR. The observed X-ray emission of  extends over a roughly circular region with a diameter of $\sim \! 
2\degr$ with a brightening towards the north-western, western and southern parts of the shell and towards the centre. The observed X-ray spectrum is clearly dominated by a continuum, which indicates a non-thermal origin of the emission [@Aschenbach; @Tsunemi; @VelaJrASCA; @XMM]. Deep X-ray observations with the ASCA, CHANDRA and BeppoSAX satellites revealed a compact source in the central region of :  () [@VelaJrASCA; @VelaJrBeppoSAX; @VelaJr2; @CHANDRACentral]. This source has been suggested to be a neutron star, supported by the detection of a coincident H$\alpha$ nebula [@HaCenter]. An association of the neutron star candidate with  would point to a core-collapse supernova. However, recent X-ray observations suggest that  is the result of a sub-Chandrasekhar type Ia supernova explosion [@XMM], which would imply that the compact object is not related to . Radio observations show only weak emission from the shell and no emission from the centre [@VelaJrRadio]. The age and distance of  were calculated from the diameter seen in X-rays and the flux of $^{44}$Ti lines to be $680\pm100$years and $\sim \! 200$pc, with upper limits of 1100years and 500pc, respectively [@Progenitor; @Iyudin]. An age between 630 and 970years was estimated by @Tsunemi based on the observation of Ca lines. These estimates for distance and age would imply that  is one of the closest supernovae in recent history, whereas @VelaJrASCA argue that  might be located near the Vela Molecular Ridge at a much larger distance of 1–2kpc. Shell-type SNRs with non-thermal X-ray emission are prime candidates for accelerating cosmic rays up to very high energies [@SN1006_ASCA; @RXJ1713_ASCA; @Voelk]. Their detection in [V<span style="font-variant:small-caps;">HE</span>]{}[$\gamma$]{}-rays is expected to be possible with modern atmospheric Cherenkov telescopes [@DruryAharonianVoelk94], and to provide insight into the underlying acceleration mechanisms.
So far, only one of these SNRs, , was detected by two independent experiments [@CangarooRXJ; @Enomoto; @Nature] employing the imaging atmospheric Cherenkov technique. The CANGAROO collaboration detected [$\gamma$]{}-ray emission from the north-western part of the  SNR [@Cangaroo2]. Here we report on the detection of the entire SNR  by [H.E.S.S.]{} in a short observation campaign. [H.E.S.S.]{} (High Energy Stereoscopic System) is an array of four imaging Cherenkov telescopes dedicated to the detection of [V<span style="font-variant:small-caps;">HE</span>]{} [$\gamma$]{}-rays with energies above 100GeV [@Performance]. Each telescope has a tessellated mirror with an area of 107m$^2$ [@StatusOptics; @StatusOptics2] and a camera consisting of 960 photomultiplier tubes [@StatusCamera]. The [H.E.S.S.]{} array can detect point sources at flux levels of about 1% of the Crab nebula flux with a significance of 5$\sigma$ in 25h of observation [@Performance]. [H.E.S.S.]{} is currently the most sensitive instrument for observing [V<span style="font-variant:small-caps;">HE</span>]{} [$\gamma$]{}-ray sources. With its angular resolution of better than $0.1\degr$ per event and its large field of view ($5\degr$), it is also ideally suited to resolving the [$\gamma$]{}-ray morphology of extended sources. Data Set and Analysis Technique ===============================  was observed by [H.E.S.S.]{} for 4.5h in February 2004 in standard operation mode using all four telescopes and a trigger requiring the simultaneous observation of an air shower by at least two telescopes [@CentralTrigger]. A run quality selection based on weather conditions and system monitoring was applied. The selected data, which were taken at zenith angles between $22\degr$ and $30\degr$ with a mean of $25\degr$, represent a dead-time-corrected total exposure time (live time) of 3.16h. Due to technical problems during data taking, the data of only three telescopes could be included in the analysis.
The background level was estimated from off-source runs, observing sky regions where no [$\gamma$]{}-ray sources are known. Off-source data were recorded between April and June 2004 at zenith angles between $13\degr$ and $34\degr$ (mean: $25\degr$) with a live time of 4.71h (data set OS1). Another off-source data set (referred to as OS2), taken at a different sky position with less statistics and a possible contamination from a [$\gamma$]{}-ray source, served to verify the results obtained with OS1. The OS2 data set was recorded between January and March 2004 at zenith angles in the range of $22\degr$ to $32\degr$ with a mean of $27\degr$. The data of one telescope were excluded from the analysis of both off-source data sets to match the experimental setup of the on-source data set. In order to reject most of the low-energy cosmic rays (CR), only camera images with intensities of more than 200 photo electrons were considered for shower reconstruction. For further reduction of the CR background, cuts on scaled image parameters were applied. These cuts were optimised using Monte Carlo simulations for point-like sources with a flux on the level of 10% of the Crab flux. The directions of the air showers were reconstructed from shower images in different cameras and the [$\gamma$]{}-ray energy was determined from the image intensity and the shower geometry with a typical relative resolution of $\sim \! 15\%$. The energy threshold after all cuts is about [400GeV]{}. A detailed description of the analysis technique can be found in @PKS2155. Results ======= Figure \[Fig:ThetaSqr\] shows the radial distribution of the excess of [$\gamma$]{}-rays from  as a function of the reconstructed squared angular distance, $\theta^2$, from the nominal centre of the SNR (RA 8h52$\fm$0, Dec $-46\degr 22\arcmin$). The excess was obtained by subtracting the live-time-normalised background from the on-source data. The inset of Fig. 
\[Fig:ThetaSqr\] shows good agreement between on-source and off-source data in the range above $1~\mathrm{deg}^2$. The distributions are not flat outside the signal region since the instrument’s acceptance drops off towards greater values of $\theta^2$. In the region $\theta^2 \le 1~\mathrm{deg}^2$, approximately corresponding to the X-ray radius of the SNR, a clear excess of [($700 \pm 60$)]{} events corresponding to a photon rate of [$(3.7 \pm 0.3)$min$^{-1}$]{} was found. The significance of the signal is [12$\sigma$]{}, calculated from [2406]{} on events and [2541]{} off events with a normalisation factor of [0.671]{} using formula (17) of @LiMa. The $\theta^2$-distribution of the excess is much wider than the distribution measured for point-like sources [@PKS2155]. The source is clearly extended, with a radius of the order of $1\degr$. To address the question of emission from a compact central object, the central region of the SNR was tested for the presence of a point-like source by applying a point-source cut ($\theta^2 \le 0.02~\mathrm{deg}^2$) around . No significant excess was found; the upper limit (99.9% confidence level) on the integral photon flux above 1TeV is [$1.3 \times 10^{-12}\,\mathrm{cm}^{-2} \mathrm{s}^{-1}$]{}. A sky map of the excess is displayed in Fig. \[Fig:Excess\]. No correction for the instrument’s acceptance, which drops by 23% towards the source boundary at $1^\circ$, was applied. An excess of [$\gamma$]{}-rays from an extended region is visible. The overlaid contour plot was derived from X-ray data taken in scanning mode with the PSPC detector aboard the ROSAT satellite [@ROSAT], restricting the photon energies to above 1.3keV in analogy to the original detection by @Aschenbach. We note that the X-ray data are contaminated with emission from the Vela SNR and (east of ). 
Ignoring this and the fact that the [$\gamma$]{}-ray data were not corrected for acceptance and exposure, the correlation coefficient between the [$\gamma$]{}-ray counts and X-ray counts in bins of $0.3\degr \times 0.3\degr$ size was found to be $0.67 \pm 0.05$. The use of ASCA X-ray data [@VelaJrASCA], which does not cover the complete SNR, yields a very similar correlation coefficient. More detailed studies of the [$\gamma$]{}-ray morphology have to await future high-statistics observations. The differential photon flux spectrum of the [$\gamma$]{}-ray emission from the whole SNR is shown in Fig. \[Fig:Spectrum\]. A power law $$\varphi(E) = \frac{\mathrm{d}\Phi}{\mathrm{d}E} = \varphi_{1\,\mathrm{TeV}} \cdot \left(\frac{E}{1\,\mbox{TeV}}\right)^{-\Gamma}$$ was fitted to the data points (solid line in Fig. \[Fig:Spectrum\]) with a $\chi^2 / \mathrm{d.o.f.} = 10/7$ and results in a photon index of [$\Gamma = 2.1 \pm 0.1_{\mathrm{stat}}$]{} and a differential flux at 1TeV of [$\varphi_{1\,\mathrm{TeV}} = (2.1 \pm 0.2_{\mathrm{stat}}) \times 10^{-11} \mbox{cm}^{-2}\mbox{s}^{-1}\mbox{TeV}^{-1}$]{}. The corresponding integral flux above 1TeV is [$\Phi(E>1\,\mbox{TeV}) = (1.9 \pm 0.3_{\mathrm{stat}}) \times 10^{-11} \mbox{cm}^{-2}\mbox{s}^{-1}$]{} which is of the order of the Crab flux at these energies [@HESSCrab]. As a cross-check, the analysis was repeated using data set OS2 for background estimation and compatible results were obtained. Systematic uncertainties were estimated by varying cuts and the binning in the energy spectrum determination, and by repeating the analysis with different background data sets (OS1, OS2 and subsamples of OS1) and with different atmosphere models in the simulation. The systematic errors are $\sim 0.2$ for the photon index and $\sim 30\%$ for both the differential flux at 1TeV and the integral flux. Discussion ==========  is the second shell-type SNR which has been spatially resolved at TeV energies, following the H.E.S.S. 
detection of [@Nature]. Similar to ,  is a weak radio source and was initially discovered in X-ray observations. There are some other similarities between these two SNRs, in particular: (i) in both sources the non-thermal X-ray component strongly dominates over the thermal component and (ii) both are strong sources of extended TeV emission spatially correlated with X-rays. There are two basic mechanisms for TeV [$\gamma$]{}-ray production in young SNRs – inverse Compton scattering (IC) of multi-TeV electrons on photons of the cosmic microwave background (CMB) and other target photon fields, and $\pi^0$-decay [$\gamma$]{}-rays from inelastic interactions of accelerated protons with ambient gas. The measured [$\gamma$]{}-ray flux spectrum of  translates into an energy flux between 1 and 10TeV of [$w_\gamma(1-10\,\mathrm{TeV}) = \int_{1\,\rm{TeV}}^{10\,\rm{TeV}} E \varphi(E) \mathrm{d}E \approx 7 \times 10^{-11} \mathrm{erg}\, \mathrm{cm}^{-2} \mathrm{s}^{-1}$]{}, which is quite close to the X-ray energy flux of the entire remnant of $w_{\rm{X}}(0.5-10\,\mathrm{keV}) \sim \! 10^{-10} \mathrm{erg}\,\mathrm{cm}^{-2} \mathrm{s}^{-1}$ [@VelaJrASCA]. If the [$\gamma$]{}-ray emission is entirely due to the IC process on CMB photons, and assuming that the synchrotron and IC emissions are produced by the same electrons and the emission regions have roughly the same size ($\xi \approx 1$) then, according to $w_\gamma/w_{\rm{X}} \simeq 0.1 (B/10\mu \mathrm{G})^{-2} \xi$ [@Aharonian97], the magnetic field in the [$\gamma$]{}-ray production region cannot significantly exceed the interstellar value of several $\mu \mathrm{G}$. If one assumes a larger magnetic field in the remnant the IC scenario would therefore become less favourable. On the other hand, the TeV flux can be easily explained in terms of interactions of accelerated protons with the ambient gas. 
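Several of the numbers quoted in this section can be reproduced with a few lines of arithmetic. The sketch below (in Python) uses the on/off counts, normalisation, spectral parameters and the 200 pc distance stated in the text; the significance formula is Eq. (17) of the cited Li & Ma paper, and the TeV-to-erg conversion and parsec-in-cm values are standard physical constants:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. (17): significance of an on/off counting excess."""
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))

n_on, n_off, alpha = 2406, 2541, 0.671
excess = n_on - alpha * n_off                    # ~700 excess events
sigma = li_ma_significance(n_on, n_off, alpha)   # ~12 sigma

# Power law phi(E) = phi_1TeV * (E / 1 TeV)^-Gamma, values from the fit
phi_1, gamma_idx = 2.1e-11, 2.1    # cm^-2 s^-1 TeV^-1, photon index
erg_per_TeV = 1.602                # 1 TeV in erg

# Integral photon flux above 1 TeV: phi_1 / (Gamma - 1) -> ~1.9e-11 cm^-2 s^-1
int_flux = phi_1 / (gamma_idx - 1.0)

# Energy flux between 1 and 10 TeV: int_1^10 E phi(E) dE -> ~7e-11 erg cm^-2 s^-1
w_gamma = phi_1 / (2.0 - gamma_idx) * (10.0 ** (2.0 - gamma_idx) - 1.0) * erg_per_TeV

# Luminosity at the assumed distance of 200 pc -> ~3e32 erg/s
d_cm = 200 * 3.086e18
L_gamma = 4 * math.pi * d_cm ** 2 * w_gamma
```

The proton energy budget follows the same pattern: multiplying $L_\gamma \approx 3 \times 10^{32}$ erg/s by the quoted cooling time $t_{pp \rightarrow \pi^0} \approx 4.5 \times 10^{15}$ s gives $\approx 1.5 \times 10^{48}$ erg, as stated below.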
The total energy in accelerated protons in the range $10-100$TeV required to provide the observed TeV flux is estimated to be [$W(10-100\,\mathrm{TeV}) \approx t_{pp \rightarrow \pi^0} \times L_\gamma(1-10\,\mathrm{TeV}) \approx 1.5 \times 10^{48} (d/200\,\mathrm{pc})^2 (n/1\,\mathrm{cm}^{-3})^{-1}\,\mathrm{erg}$]{}, where $t_{pp \rightarrow \pi^0} \approx 4.5 \times 10^{15} (n/1\,\mathrm{cm}^{-3})^{-1} \mathrm{s}$ is the characteristic cooling time of protons through the $\pi^0$ production channel, and [$L_\gamma (1-10\,\mathrm{TeV})=4 \pi d^2 w_\gamma(1-10\,\mathrm{TeV}) \approx 3 \times 10^{32}(d/200\,\mathrm{pc})^2 \,\mathrm{erg/s}$]{} is the luminosity of the source in [$\gamma$]{}-rays between 1 and 10TeV. Assuming that the power-law proton spectrum with spectral index $\alpha \approx \Gamma$ continues down to $E \sim 1 \,\mathrm{GeV}$, the total energy in protons is estimated to be [$W_{\mathrm{tot}} \approx 10^{49} (d/200\,\mathrm{pc})^2 (n/1\,\mathrm{cm}^{-3})^{-1} \mathrm{erg}$]{}. Thus, for distances to the SNR of the order of $d \approx 200$pc, the conversion of several percent of the assumed mechanical explosion energy of $10^{51}$erg into the acceleration of protons up to $\geq 100$TeV would be sufficient to explain the observed TeV [$\gamma$]{}-ray flux by nucleonic interactions in a medium of density comparable to the average density of the interstellar medium, $n \sim 1\,\rm cm^{-3}$. For larger distances a correspondingly higher fraction of the explosion energy would have to be converted into the acceleration of protons. More data will be taken with [H.E.S.S.]{} in order to study the morphology of the remnant in detail, to compare the results with the CANGAROO measurement and to distinguish between electronic and hadronic acceleration scenarios. The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S.
is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS, the U.K. Particle Physics and Astronomy Research Council (PPARC), the IPNP of the Charles University, the South African Department of Science and Technology and National Research Foundation, and by the University of Namibia. We appreciate the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment.
--- author: - 'P.G. Willemsen,  C.A.L. Bailer-Jones,  T.A. Kaempf, K.S. de Boer' bibliography: - 'H4080.bib' date: ' Received 4 November 2002 / Accepted 5 February 2003 ' title: Automated Determination of Stellar Parameters from Simulated Dispersed Images for DIVA --- Introduction ============= The DIVA satellite was proposed in 1996 by a German consortium of astronomical institutes (@Roeser6, @Roeser5) and is currently foreseen for launch in 2006. DIVA will measure the positions, brightnesses and proper motions of some 35 million stars. The scientific goal is to study the Milky Way and to improve the calibration of stellar properties and parameters. This mission follows up on the HIPPARCOS satellite, which measured parallaxes for 100000 stars. For about 20000 of these stars, the accuracy in parallax was better than 10%. With the DIVA satellite this number of stars will be increased by at least a factor of 25 (@Roeser1). DIVA will perform an all-sky survey with a limiting visual magnitude of $V \simeq 15.5$ mag. Note that every observed star will be measured about 120 times in the course of the mission. The stated magnitude limits refer to the combined images of all single measurements. The measurements include the precise determination of positions, trigonometric parallaxes, proper motions, colours and magnitudes. For about 13 million stars, spectrophotometric data will also be obtained down to a visual magnitude of $V \simeq 13.5$. An additional UV telescope will perform photometry in two spectral ranges adjacent to the Balmer jump. The DIVA survey represents a large-scale and deep astrometric and photometric survey of the local part of our Galaxy. The importance of these data to modern astrophysics will be significant, with applications ranging from stellar structure and evolution to cosmological aspects.
Examples are a precise determination of the luminosity function in the solar neighbourhood, a better understanding of the structure and formation of our Galaxy, the estimation of the amount of dark matter, as well as a better calibration of the cosmological distance ladder (@Roeser1). After the mission, the photometric and spectrophotometric images will be used to obtain the brightness, the colour and the [[dispi]{}s]{} for the stars. The [[dispi]{}s]{} will allow one to derive the astrophysically relevant parameters [$T_{\rm eff}$]{}, [$\log g$]{}, [\[M/H\]]{} and [$E(B-V)$]{}. The derivation of [$\log g$]{} is especially important for objects too distant to yield an accurate parallax. With these objects in mind, we have carried out the present study. We will demonstrate that the essential parameters of the stars can be retrieved with a reasonable level of accuracy from the [[dispi]{}s]{} alone. We will show that astrophysical parameters can be derived well down to the survey limit, perhaps even adequately for stars 1 to 2 magnitudes fainter. There are good scientific arguments to reach fainter in selected fields, see e.g. [@Salim]. DIVA DISPIS =========== The concept of a DISPI ---------------------- The DIVA satellite is not only unique in its applications and abilities but also in the way it records spectra. DIVA will use a grating system yielding a dispersion of $\simeq$ 200nm/mm on the focal plane with a total efficiency of about 60%. For astrometric and other reasons, the resulting (spectral) orders of the grating are not separated. Thus “classical" spectrophotometry will not be obtained. Instead, the detector will record a pixel-related intensity function for each star, in which all orders (and thus wavelengths) overlap. Such a one-dimensional position-coded intensity distribution is called a [dispi]{} ([DISP]{}ersed [I]{}ntensity). Fig.\[disp\] shows the position-coded wavelength of the grating’s orders.
One can see that the resolution of the second and third orders increases by factors of two and three relative to the first order, respectively. In the cross-dispersion direction there are physical pixels, while in the dispersion direction there will be on-chip binning, resulting in “effective pixels". One such effective pixel corresponds to about 11.6 nm in the first order. The maximum transmission of the first order is at about 750 nm (see Fig.\[trans\]), while those of the second and third orders are at about 380 and 250 nm, respectively (see also @Scholz98, @Scholz20). Dealing with DISPIs ------------------- Given the nature of the [[dispi]{}s]{}, a classical spectrophotometric analysis – like line and continuum fitting – to derive astrophysical parameters is cumbersome. We will show that by training Artificial Neural Networks (ANNs) on simulated [[dispi]{}s]{}, we can readily access this information without any further pre-processing of the signal. Using [[dispi]{}s]{} from calibration stars, i.e. stars with known physical and apparent properties, we would initially build up a standard set of [[dispi]{}]{} data. The automated classification technique as developed here will then use this library. The calibration could be iteratively improved using [[dispi]{}s]{} and their parametrization results obtained during the mission. Note that in these simulations we did not use absolute fluxes as they will be available from the mission (see below). In this work, only the shape of a [[dispi]{}]{} and its line features were used for tests to determine basic stellar parameters. Typical [[dispi]{}s]{} can be seen in Fig.\[disp4500\] and Fig.\[disp9500\] for a cool and a hot star, respectively. One can see that the first, second and third orders contribute different amounts to the total light in a [[dispi]{}]{} for different temperatures. The first order’s transmission maximum is at about 700 nm, while the second and third orders contribute mostly at shorter wavelengths.
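The overlap of the grating orders can be illustrated with a toy model. The sketch below is purely illustrative and is not the actual DIVA instrument model of [@Scholz98]: the dispersion of 11.6 nm per first-order effective pixel and the transmission peaks near 750/380/250 nm come from the text, while the Gaussian transmission shapes, their widths and the pixel zero point are made-up placeholders.

```python
import numpy as np

# ~11.6 nm per effective pixel in first order (from the text);
# order m disperses m times more finely, so lambda scales as 1/m
NM_PER_EFFPIX_ORDER1 = 11.6

def wavelength(pixel, order):
    """Wavelength (nm) landing on `pixel` via grating order `order` (toy zero point)."""
    return pixel * NM_PER_EFFPIX_ORDER1 / order

def toy_dispi(flux_of_lambda, pixels, orders=(1, 2, 3)):
    """Sum the per-order contributions into one position-coded intensity vector."""
    intensity = np.zeros_like(pixels, dtype=float)
    # placeholder transmission peaks per order (nm); shapes are invented
    peaks = {1: 750.0, 2: 380.0, 3: 250.0}
    for m in orders:
        lam = wavelength(pixels, m)
        trans = np.exp(-0.5 * ((lam - peaks[m]) / (0.25 * peaks[m])) ** 2)
        intensity += trans * flux_of_lambda(lam)
    return intensity

pixels = np.arange(30, 81, dtype=float)          # the 51 effective pixels used later
# e.g. a smooth "blue" source whose flux rises towards short wavelengths
blue = toy_dispi(lambda lam: 1.0 / np.maximum(lam, 1.0), pixels)
```

Because each detector position receives light from all three orders at different wavelengths, a single intensity vector mixes spectral information in exactly the way described above.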
Thus, for the cooler star, the second order contributes less to the [[dispi]{}]{} than in the case of the bluer, hotter star. The third order’s contribution becomes negligible for low temperatures. Note that the “continuum" of a [[dispi]{}]{} is mainly defined by the first and partly by the second order. For a hot star, spectral line features are essentially only visible in the second and third orders due to their two- and threefold higher resolution. Only for strong molecular bands in very cool stars are features resolved in the first order.

[cc]{}
$V$ (mag) & S/N\
\
8 & 122\
9 & 77\
10 & 47\
11 & 28\
12 & 16\
\
8 & 91\
9 & 56\
10 & 34\
11 & 20\
12 & 11\
\
8 & 82\
9 & 50\
10 & 30\
11 & 17\
12 & 9\
\
8 & 95\
9 & 59\
10 & 36\
11 & 21\
12 & 12\
\
8 & 118\
9 & 74\
10 & 46\
11 & 28\
12 & 16\

The signal-to-noise ratio of a single [[dispi]{}]{}, as measured from the 10 central pixels around effective pixel position 60, is shown for different visual magnitudes and temperatures in Table \[sn\]. Artificial Neural Networks {#ANN} ========================== Neural networks have proven useful in a number of scientific disciplines for interpolating multidimensional data, and thus providing a nonlinear mapping between an input domain (in this case the [[dispi]{}s]{}) and an output domain (the stellar parameters). For an overview of Artificial Neural Networks (ANNs) and their application in astronomy for stellar classification see, for example, [@Bailer2002]. The software used in this work is that of [@soft]. A network consists of an input layer, one or two hidden layers and an output layer. Each layer is made up of several nodes. All the nodes in one layer are connected to all the nodes in the preceding and/or following layers. These connections have adaptable “weights", so that each node performs a weighted sum of all its inputs and passes this sum through a nonlinear transfer function. The result is then passed on to the next layer.
Before the network can be used for parametrisation, it needs to be trained, meaning the weights have to be set to their appropriate values to perform the desired mapping. In this process, [[dispi]{}s]{} together with known stellar parameters as target values are presented to the network. From these data, the optimum weights are determined by iteratively adjusting the weights between the layers to minimize an output error, i.e. the discrepancy between the targets and the network outputs. This is performed by a multidimensional numerical minimization, in this case with the conjugate gradients method. When this minimization converges, the weights are fixed and the network can be used in its “application" phase: now, only the [[dispi]{}]{} input flux vector is presented, and the network’s outputs are the stellar parameters for that [[dispi]{}]{}. Since we used only the central 51 effective pixels of the [[dispi]{}s]{} (range 30 to 80, see Fig.\[disp4500\] and \[disp9500\]), the input layer of the network was always made up of the same number of nodes, i.e. 51. We found that the performance was best when using two hidden layers, each containing 7 nodes. More nodes did not improve the result significantly but increased the training time considerably. With four output parameters this network then contains $51 \cdot 7 + 7 \cdot 7 + 7 \cdot 4 = 434$ weights (plus 18 bias weights). Since we wanted to classify [[dispi]{}s]{} solely based on their shapes, the absolute flux information was removed by area-normalizing each [[dispi]{}]{}, i.e. each flux bin of a given [[dispi]{}]{} was divided by the total number of counts in that [[dispi]{}]{}. Given the non-uniform distribution of the training data over [$T_{\rm eff}$]{}, we classified [[dispi]{}s]{} in terms of log[$T_{\rm eff}$]{} instead of [$T_{\rm eff}$]{}.
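The topology and normalization just described can be made concrete with a short NumPy sketch. This is not the actual network code of [@soft]; it simply verifies the quoted weight count for the 51-7-7-4 architecture and shows the area normalization and the committee averaging used for the outputs:

```python
import numpy as np

def count_weights(layers):
    """Connection and bias weights of a fully connected feed-forward net."""
    connections = sum(a * b for a, b in zip(layers, layers[1:]))
    biases = sum(layers[1:])          # one bias per non-input node
    return connections, biases

# 51*7 + 7*7 + 7*4 = 434 connection weights, 7 + 7 + 4 = 18 bias weights
connections, biases = count_weights([51, 7, 7, 4])

def area_normalize(dispi):
    """Remove the absolute flux scale: divide each bin by the total counts."""
    dispi = np.asarray(dispi, dtype=float)
    return dispi / dispi.sum()

def committee_average(outputs):
    """C(p): the classification output averaged over a committee of networks."""
    return np.mean(outputs, axis=0)
```

An area-normalized input vector always sums to one, so two stars with identical spectral shape but different apparent brightness produce identical network inputs, which is exactly the intent.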
Note that, in our tests, we have not included distance information as it eventually might be done using DIVA parallaxes, since the present goal was to test the retrieval of stellar parameters from [[dispi]{}s]{} only. The parametrization errors given below are the average (over some set of [[dispi]{}s]{}) errors for each parameter, i.e. $$\label{A} A = \frac{1}{N} \cdot \sum_{p=1}^N \left|C(p) - T(p)\right|$$ where $p$ denotes the $p^{\rm th}$ [[dispi]{}]{} and $T$ is the target (or “true") value for this parameter. Since the network’s function approximation can depend on the initial settings of the weights, it is sometimes recommended to use a “committee" of several networks with identical topologies but different initializations. The quantity $C(p)$ is the classification output averaged over a committee of three networks. Data simulation {#input} =============== Models of DISPIs ---------------- The model of the spectrophotometric output from DIVA used in this work was developed by [@Scholz98]. This software requires a spectral energy distribution as input and creates a two-dimensional signal output image on the detector, containing the dispersed intensity. These images have 114$\times$150 pixels, where the latter number, referring to the dispersion direction, is in effective pixels, and the former number, referring to the scanning direction, is in physical pixels. Fig.\[fulldisp\] shows such an image. Ultimately, only a narrow window around the [[dispi]{}]{} will be read from the focal plane data stream and transmitted to the ground, the so-called Spectroscopic (SC) window. As input spectra we used synthetic spectra from [@basel97] and [@basel98]. In total there were about 5600 spectra covering a parameter grid with 68 values for [$T_{\rm eff}$]{} between 2000 K and 50000 K (in steps of 200 K for the low temperature stars, and 2500 K for the high temperature stars), 19 possible values for [$\log g$]{} ranging from $-1.02 \leq\ $[$\log g$]{}$\ \leq 5.5$ in steps of approx.
0.1 to 0.3 dex, and 13 values for [\[M/H\]]{} with $-5 \leq\ $[\[M/H\]]{}$\ \leq 1$ in steps of 0.5 and 0.1 dex. Note that in our tests there were no input data for metallicities in the range from $-$2.5 to $-$0.3 dex.[^1] The obvious advantages of using synthetic spectra are the complete wavelength range from 200 to 1200 nm and the large number of spectra over a large parameter space. We are currently constructing a library of (previously published) real stellar spectra. However, since it combines spectra from many different available catalogues, there is considerable heterogeneity among these data. Moreover, few stars have been observed with the desired wavelength range from the UV to the IR. Interstellar extinction was modelled by using a synthetic extinction curve for $R=3.1$ given as $\frac{A(\lambda)}{E(B-V)}$ versus $\lambda$. We used the extinction curve from [@Fitzpatrick], simulating 7 different extinction values in steps of 0.15 and 0.2 in the range $0.0 \leq\ $[$E(B-V)$]{}$\ \leq 1.0$ mag. Note that the zeroth order was omitted in this and all other simulations, so that we only worked with dispersed images made up of the first to third spectral orders. Since the data were area-normalized before passing them through the neural network, the magnitude information of the zeroth order is lost anyway. For the simulation of the UV telescope (see below), the same extinction curve was applied. This procedure was done for five different visual magnitudes in the range $8 \leq\ $V$\ \leq 12$ mag. Noise was added to these two-dimensional intensity distributions by passing them through another software tool developed by Ralf Scholz. Here, a mean sky background of $sky = 0.04$ e$^-$/(pix s) and a dark current of $dark = 2$ e$^-$/(pix s) were added, with additional source and sky Poisson noise. The CCD’s read-out noise was 2 e$^-$/eff.pix.
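A noise model of this kind can be sketched in a few lines. The sky, dark-current and read-out values below are the ones quoted above; the exposure time and the signal level are made-up example numbers (the per-transit integration time is not specified here), and this is not the actual simulation tool used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

SKY = 0.04     # e-/(pix s), mean sky background (from the text)
DARK = 2.0     # e-/(pix s), dark current (from the text)
RON = 2.0      # e- rms read-out noise per effective pixel (from the text)
T_EXP = 1.0    # s, assumed example exposure time

def add_noise(signal_e):
    """Poisson noise on source + sky + dark, plus Gaussian read-out noise;
    the mean background is subtracted again, as a pipeline would do."""
    signal_e = np.asarray(signal_e, dtype=float)
    background = (SKY + DARK) * T_EXP
    noisy = rng.poisson(signal_e + background).astype(float)
    noisy += rng.normal(0.0, RON, size=signal_e.shape)
    return noisy - background

clean = np.full(51, 500.0)    # e- per effective pixel (example level)
noisy = add_noise(clean)
```

Since the Poisson variance equals the expected counts, the per-pixel noise at this example level is roughly $\sqrt{500 + 2.04} + \mathrm{RON}$ in quadrature, i.e. dominated by source shot noise.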
The size of the SC window to be cut from the on-board data stream around the [[dispi]{}]{} is crucial, as it determines the data rate, which is in turn related to the satellite’s overall performance: a smaller window permits a larger number of SC windows (objects) to be transmitted. This would, for example, permit a fainter magnitude limit. The optimum window size, i.e. the window around a dispersed image with the highest amount of important and the lowest amount of redundant information, was investigated in earlier studies. Concerning the window size in the cross-dispersion direction, it was found that the innermost 7 pixels are sufficient (@windivaH) in terms of highest S/N. However, due to the satellite’s intrinsic attitude uncertainty, the smallest acceptable window size in the scanning direction is required to be 12 pixels (see @windivaB). For our studies we therefore summed the TDI rows over the innermost 13 rows (6 pixels in each direction about the central row). Future work will use a profile fit to obtain the stellar intensity. The optimum size in the dispersion direction was evaluated by S/N studies and the (spectral) information content. This amount of information was measured by the ability of neural networks to determine the stellar parameters [$T_{\rm eff}$]{}, [$\log g$]{} and [\[M/H\]]{} for different ranges of [[dispi]{}s]{}. It was found (@report2) that these parameters can be adequately retrieved from approximately 45 effective pixels around the maximum intensity in the [[dispi]{}s]{} (which is at about effective pixel 60). However, since these earlier studies included only [[dispi]{}s]{} with [$T_{\rm eff}$]{}$\geq$ 4000 K, and since the overall intensity distribution moves to smaller effective pixel values for lower temperatures, we chose the range from 30 to 80 effective pixels in this work. This should also be appropriate for very red objects like L and M dwarfs with [$T_{\rm eff}$]{}$\simeq$ 1200–4000 K.
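The row summing and pixel selection just described can be sketched as follows. This is a minimal illustration on a toy image; the real pipeline would locate the trace from the data and, as noted above, eventually fit a profile rather than sum rows:

```python
import numpy as np

def extract_dispi(image, centre_row, n_rows=13, pix_range=(30, 80)):
    """Collapse the innermost rows (scanning direction) of the SC window
    and crop the effective-pixel range used for classification."""
    half = n_rows // 2
    rows = image[centre_row - half : centre_row + half + 1, :]
    profile = rows.sum(axis=0)            # sum over the scanning direction
    lo, hi = pix_range
    return profile[lo : hi + 1]           # 51 effective pixels (30..80)

# toy 114 x 150 detector window with a flat trace along one central row
image = np.zeros((114, 150))
image[57, :] = 1.0
dispi = extract_dispi(image, centre_row=57)
```

With the ranges quoted in the text (13 rows, effective pixels 30 to 80 inclusive), the result is the 51-element input vector fed to the networks.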
For further processing, the simulated sky was subtracted from the dispersed image by evaluating the background level from a single column in the scanning direction next to the dispersed image. The UV imaging telescope will make use of the same type of CCDs as the main instrument. The UV magnitudes in the two different passbands next to the Balmer jump were calculated from the same synthetic spectra as described above, simply by integrating the flux in the ranges from 310 to 360 nm and 380 to 410 nm. Of course, the true filters will not have exactly square transmission curves, but this approximation is sufficient for a first analysis of the influence of the UV channel. The two UV flux values were fed into the network in three different ways. First, we calculated the $asinh$ of the flux ratio, i.e. $asinh(UV_{\rm short}/UV_{\rm long}$) (note that the $asinh$ function, in contrast to the log function, is defined for negative values; negative values might occur due to noise for very low temperature stars with almost no flux in the UV). This ratio is designed to be sensitive to the Balmer jump, thus yielding additional information about gravity and temperature. Second, we summed the intensity in a [[dispi]{}]{} in the range 70 to 80 effective pixels ($\sum_{i=70}^{80}I_i$) and calculated the ratios ($UV_{\rm short}/\sum_{i=70}^{80}I_i$) and ($UV_{\rm long}/\sum_{i=70}^{80}I_i$). Since the first order’s contribution in the selected effective pixel range corresponds to a wavelength range from about 550 to 600 nm (see Fig.\[disp\]), these ratios should be a good measure of extinction due to the long “lever arm" ranging from the UV to the visual/red part of the spectrum. Noise in single DISPIs versus end of mission stacked DISPIs ----------------------------------------------------------- The results reported in this paper (Sect.\[apply\]) have been obtained using *single* [[dispi]{}s]{}.
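The three UV-based inputs described above can be sketched as follows; `dispi` is assumed to be the full effective-pixel intensity vector, and the function and variable names are made up for illustration:

```python
import numpy as np

def uv_features(uv_short, uv_long, dispi):
    """The three UV-based network inputs described in the text."""
    # asinh, unlike log, is defined for negative arguments, so noisy
    # near-zero UV fluxes of very cool stars remain usable
    balmer_ratio = np.arcsinh(uv_short / uv_long)
    # first-order light at effective pixels 70..80 (~550-600 nm)
    red_band = np.asarray(dispi, dtype=float)[70:81].sum()
    return balmer_ratio, uv_short / red_band, uv_long / red_band
```

The first feature probes the Balmer jump; the other two compare UV flux with red first-order light and so act as the extinction "lever arm" mentioned above.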
However, by the end of the mission, DIVA will have imaged each star about 120 times. Thus the final signal-to-noise ratio for any given magnitude will be much better than that of a single measurement, so the parametrization performance will also improve or, equivalently, will be achieved at a fainter magnitude. We calculate the final S/N from a sum of 100 two-dimensional intensity distributions. From the ratio of this final S/N to the single-[[dispi]{}]{} S/N, we can find the equivalent magnitude difference, which gives the magnitude to which our parametrization results for a [*single*]{} [[dispi]{}]{} can be applied, without having to run a separate set of simulations on summed [[dispi]{}s]{}. The resulting $\Delta V$ is given in Fig.\[final\] (see further Sect.\[apply\]). We see, for example, that a [[dispi]{}]{} made up of one hundred frames, each with $V=14$ mag, has the same S/N as a single [[dispi]{}]{} of a star with magnitude $V=10.8$ mag. [*Unless stated otherwise, all results below will refer to end-of-mission data quality.*]{}

Preparing the ANN input data {#data_sets}
============================

The ensemble of [[dispi]{}s]{} was divided into several smaller samples with different temperature ranges. We chose seven ranges with a broad distinction between low (the *L-samples*), mid (*M-samples*) and high temperatures (*H-samples*). The abbreviations as stated in Table \[set\] are used throughout this work. The numbers in the last column give the total number of [[dispi]{}s]{} in the training set in each temperature interval for the cases without and with extinction included (approximately the same number of [[dispi]{}s]{} in each temperature range was used in the application set).
| sample | temperature range | without/with ext. |
|--------|-------------------|-------------------|
| *L$_{1}$* | 2000 K $\leq$ [$T_{\rm eff}$]{} $<$ 4000 K | 330/2300 |
| *L$_{2}$* | 4000 K $\leq$ [$T_{\rm eff}$]{} $<$ 6000 K | 570/3980 |
| *L$_{3}$* | 6000 K $\leq$ [$T_{\rm eff}$]{} $<$ 8000 K | 500/3500 |
| *M$_{1}$* | 8000 K $\leq$ [$T_{\rm eff}$]{} $<$ 10000 K | 400/2800 |
| *M$_{2}$* | 10000 K $\leq$ [$T_{\rm eff}$]{} $<$ 12000 K | 180/1200 |
| *H$_{1}$* | 12000 K $\leq$ [$T_{\rm eff}$]{} $<$ 20000 K | 390/2700 |
| *H$_{2}$* | 20000 K $\leq$ [$T_{\rm eff}$]{} $<$ 50000 K | 450/3100 |

We found that the separation into such small temperature regions yielded improved parametrization results. This is understandable, as the classification results for the stellar parameters, especially [$\log g$]{} and [\[M/H\]]{}, depend upon the presence of spectral features in a [[dispi]{}]{}, which are in turn closely related to the temperature of a star. This effect was also found in [@Weaver2], using spectra in the near-infrared and classifying them in terms of MK stellar types and luminosity classes. For our ANN work we chose simple temperature ranges, also aiming at database subsamples of similar size. Though some of the intervals roughly correspond to the temperatures found in certain MK classes, which are characterized by certain line ratios, i.e. common physical characteristics, the chosen division was mainly motivated by the need for reasonable network training times. Another reason was to see what can, in principle, be learned from [[dispi]{}s]{} in different temperature regimes. The mid-temperature samples (*M*-samples) were defined for the range in which the Balmer jump and the H lines (e.g. H$_{\beta}$) change their meaning as indicators of temperature and surface gravity (see e.g. @Napiwotzki). Under real conditions one would have to employ a broad classifier to first separate [[dispi]{}s]{} into smaller (possibly overlapping) temperature ranges.
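Such a coarse pre-separation can be sketched as a simple binning function over the ranges of Table \[set\] (illustrative only; a real pre-classifier would first have to estimate the temperature itself):

```python
def temperature_sample(teff):
    """Assign a DISPI to one of the samples of Table [set] by its
    effective temperature in K (bin edges taken from the table)."""
    bins = [(2000, 4000, "L1"), (4000, 6000, "L2"), (6000, 8000, "L3"),
            (8000, 10000, "M1"), (10000, 12000, "M2"),
            (12000, 20000, "H1"), (20000, 50000, "H2")]
    for lo, hi, name in bins:
        if lo <= teff < hi:
            return name
    return None   # outside the simulated grid

print(temperature_sample(5777))   # L2 -- a Sun-like star
```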
This could also be based on neural networks, or on other methods such as minimum distance methods. Each temperature sample was finally divided into two *disjoint* parts, the training and the application data. This means that our classification results (see Sect.\[apply\]) are from [[dispi]{}s]{} in the gaps of our training grid. The generalization performance of a network, i.e. its ability to classify previously unseen data, is influenced by three factors: the size of the training set (and how representative it is), the architecture of the network, and the physical complexity of the specific problem, which also includes the presence of noise. Though there are distribution-free, worst-case formulae for estimating the minimum size of the training set (based on the so-called VC dimension, see also @Haykin), these are often of little value in practical problems. As a rule of thumb, it is sometimes stated (@Duda) that there should be $W \cdot 10$ different training samples in the training set, $W$ denoting the total number of free parameters (i.e. weights) in the network. In our network without extinction there were 452 weights. Thus, in some cases, there were fewer training samples than free parameters. However, we found good generalization performance (see Sect.\[apply\] and results therein). This may be due both to (1) the “similarity" of the [[dispi]{}s]{} in a specific [$T_{\rm eff}$]{} range, giving rise to a rather smooth (well-behaved) input-output function to be approximated, and (2) redundancy in the input space. Both give rise to a smaller number of effective free parameters in the network. We also tested whether there are significant differences between determining each parameter separately in different networks and determining all parameters simultaneously. In the first case each network has only one output node, while in the latter case the network has multiple outputs. If the parameters ([$T_{\rm eff}$]{}, [$\log g$]{} etc.)
were independent of each other, one could train a network for each parameter separately. However, we know that the stellar parameters influence a stellar energy distribution simultaneously, at least for certain parameter ranges (e.g. hot stars show metal lines less clearly than cool stars). Also, for specific spectral features, changes in the chemical composition [\[M/H\]]{} can sometimes mimic gravity effects (see for example @Gray92). Varying extinction can cause changes in the slope of a stellar energy distribution which are similar to those resulting from a different temperature. Recently, [@Snider2001] determined stellar parameters for low-metallicity stars from stellar spectra (wavelength range from 3800 to 4500 [[Å]{}]{}). They reported better classification results when training networks on each parameter separately. We tested several network topologies with the number of output nodes ranging from 1 to 3 (in the case of extinction, from 1 to 4) in different combinations of the parameters. It was found that single-output networks did not improve the results. We therefore classified all parameters simultaneously.

Results {#apply}
========

In this section we report the results of our parameter determination. In order to appreciate them, one has to realize that the effects of [$T_{\rm eff}$]{}, [$\log g$]{} and [\[M/H\]]{} on a spectrum differ significantly in magnitude. The strongest signal is that of temperature (the Planck function for black bodies). A much weaker signal is that of [$\log g$]{}, present in the width of spectral lines but only weakly in the continuum. Metallicity is a very weak signal, visible in individual spectral lines or perhaps in broader opacity structures (such as the G band or molecular bands). However, in a [[dispi]{}]{} essentially all line structure is washed out. These general aspects can also be found in [@Gray92]. For more see Sect.\[discussion\]. The errors given are the average errors as in Eq.(\[A\]).
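Since the networks handle temperature as $\log$[$T_{\rm eff}$]{}, an error in the logarithm translates into a fractional error in [$T_{\rm eff}$]{} itself; a minimal sketch of this conversion (illustrative numbers):

```python
def fractional_teff_error(delta_log_teff):
    """Convert an error in log10(Teff) into a fractional error in Teff:
    missing log Teff by delta means Teff is off by a factor 10**delta,
    i.e. a fractional error of 10**delta - 1 (about ln(10)*delta for
    small errors)."""
    return 10.0 ** delta_log_teff - 1.0

print(round(fractional_teff_error(0.02), 3))   # 0.047 -> roughly a 5 % error
```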
[$T_{\rm eff}$]{} was classified in terms of log [$T_{\rm eff}$]{}, but for better understanding the resulting errors were transformed to give the *fractional* error in [$T_{\rm eff}$]{}. These fractional errors are stated throughout this paper. The networks had the same overall topology for all tests, though the number of inputs was naturally larger by three for the case with UV information. We did not tune the networks to the very best performance possible. Tests showed that results in individual [$T_{\rm eff}$]{} ranges could be improved by adjusting several free parameters of the networks (e.g. the number of hidden nodes, the number of iterations etc.). In some cases we had to adjust some of these parameters, for example when the networks did not converge properly. However, such individual tests are very time-consuming. For the purpose of this paper, we used one topology. Fig.\[Fno2\_ext\] shows the classification results for the stellar parameters [$T_{\rm eff}$]{}, [$\log g$]{} and [\[M/H\]]{} in the case of no extinction, while Fig.\[G\_ext\] presents the results for simulations with extinction included. For each plot, the left column shows the results when UV data were included, while the right column refers to the cases without additional UV data. The upper magnitude scale shows the visual magnitudes for a single detection, whereas the lower magnitude scale is relevant for a sum of 100 [[dispi]{}s]{}, representative of end-of-mission quality data (see Fig.\[final\]). As a comparison, we tested the performance of random, i.e. untrained, networks; the results are presented in Table \[randomtab\]. If trained networks give parameters with uncertainties larger than the values listed there, those parameter values are not meaningful. The corresponding error for [$E(B-V)$]{} was about 0.25 mag for all temperature ranges and magnitudes.

| Param. | $A_{L1}^{*}$ | $A_{L2}$ | $A_{L3}$ | $A_{M1}$ | $A_{M2}$ | $A_{H1}$ | $A_{H2}$ |
|--------|------|------|------|------|------|------|------|
| [$T_{\rm eff}$]{} | 0.15 | 0.1 | 0.07 | 0.06 | 0.05 | 0.15 | 0.17 |
| [$\log g$]{} | 1.9 | 1.4 | 1.3 | 1.0 | 0.8 | 0.8 | 0.6 |
| [\[M/H\]]{} | 1.2 | 1.8 | 1.8 | - | - | - | - |

$^{*}$ Temperature ranges as in Table \[set\]

A variation in [\[M/H\]]{} changes a [[dispi]{}]{} only very subtly for higher temperature stars, due to the simple fact that metallicity features are very weak in the energy distribution of hotter stars.

Discussion and conclusions {#discussion}
==========================

As can be seen from Figs.\[Fno2\_ext\] and \[G\_ext\], our results show interesting trends related to the temperature of the stars. In general, temperature is the dominating factor for the shape of, and even the details in, a spectral energy distribution. Overall, [$T_{\rm eff}$]{} can be retrieved to very acceptable accuracy, even without additional UV information: the classification is better than 10 % even for very faint stars ($V=14$ mag, no extinction included; throughout this section the $V$ magnitude refers to end-of-mission quality data). Including UV fluxes improves the [$T_{\rm eff}$]{} results most noticeably for hot stars ([$T_{\rm eff}$]{}$>$ 9000 K), and here especially for the fainter ones. For example, in the case of no extinction, the error for [[dispi]{}s]{} with $V=14$ mag in the [$T_{\rm eff}$]{} range 12000–20000 K (H$_{1}$ sample) drops from 5 % to about 3 % when the UV data are included. When extinction is included, the temperature error drops from 9 % to about 4 % for this temperature range at this magnitude. Though the information in the short wavelength range is already available in the [[dispi]{}]{}, the higher sensitivity of the UV telescope obviously contributes essential information. Concerning [$\log g$]{}, we see from the figures that the classification performance can be improved when additional UV information is included.
For example, in the case of no extinction and for temperatures in the range 8000 K $\leq$ [$T_{\rm eff}$]{} $<$ 10000 K (M$_{1}$) and visual magnitude $V=14$, the error in [$\log g$]{} reduces from about 0.4 to 0.15 dex with additional UV information. At this magnitude, but for temperatures in the range 6000 K $\leq$ [$T_{\rm eff}$]{} $<$ 8000 K (L$_{3}$), the error reduces considerably, from about 0.85 dex to 0.15 dex. These results emphasize the benefit of UV telescope data in the classification process. In general, the [$\log g$]{} results are poorer for temperatures in the range 4000 K $\leq$ [$T_{\rm eff}$]{} $<$ 6000 K (L$_{2}$ sample) than for other ranges. This is understandable, since in this temperature range hardly any atomic/molecular signatures sensitive to the density (and thus $\log g$) of the gases in a stellar atmosphere are present. In contrast, for the very low temperature ranges the numerous, mostly molecular, spectral features provide the information about the density of the atmospheric gases, while for the higher temperatures the Balmer jump still provides gravity information. Metallicity is the most difficult parameter to derive from [[dispi]{}s]{}. This was to be expected: in data with such low spectral resolution all details of spectral line information, and thus of metal abundances, are lost. In Figs. \[Fno2\_ext\] and \[G\_ext\] only the classification results for metallicities in the range $-$0.3 dex to 1.0 dex are shown. This is due to a lack of input data in the metallicity range from $-$2.5 dex to $-$0.3 dex (see above). Very low metallicities ([\[M/H\]]{} $\leq$ $-$2.5 dex) make only a very small imprint on a [[dispi]{}]{}, so that at these resolutions the classification fails almost completely, except for very bright, cool stars ([$T_{\rm eff}$]{} $\leq$ 4000 K). The results for metallicities in the analysed range are reasonably good even for faint stars (better than about 0.3 dex for $V \leq 14$ mag, no extinction).
When extinction is included, the metallicity performance declines considerably, except for the very cool objects ([$T_{\rm eff}$]{} $<$ 4000 K, L$_{1}$ sample), whose metallicity can be determined to better than 0.3 dex for all simulated magnitudes. For [[dispi]{}s]{} in the temperature range 6000 K $\leq$ [$T_{\rm eff}$]{} $<$ 7000 K (L$_{3}$) we see that the error can be reduced from about 0.45 dex to 0.3 dex (no extinction) and from about 0.8 dex to 0.6 dex (with extinction) when the UV information is included. Extinction is not easy to retrieve solely from the shape of a [[dispi]{}]{}, since its overall effect mimics, to some extent, that of temperature. However, as can be seen from the figures, the determination of extinction improves when the UV information is included. This was to be expected, since the UV data give the strength of the Balmer jump, providing [$T_{\rm eff}$]{} information unblurred by extinction. One may ask whether the accuracy of the derived parameters would be better when working with the single orders of the DIVA dispersed image. We have tested this for a limited number of parameter combinations. We generally found that the accuracy is improved only for brighter objects ($V \leq\ 13$ mag), and then only by a small amount (in the range 30 to 80 effective pixels). This might be surprising at first, but it is understandable, since for fainter objects using the signal in a single order deteriorates the S/N, so the parameter extraction routine gives poorer results. Apart from this, obtaining the separate orders' signals would require a deconvolution of the [[dispi]{}]{}, which in itself leads to increased uncertainty in the intensities. Moreover, a deconvolution requires knowledge of the nature of the objects, so there is no guarantee that this process will yield unique results. Since DIVA will not have separate order images, we have not pursued this aspect further.
Several neural network approaches to stellar parametrization have been reported in astronomy. A comparison with those would have to address at least two aspects: the nature of the network and the characteristics of the data used. The networks may indeed be very different in structure, such as in learning and regularization technique and especially in topology (compare e.g. @Weaver1, @Snider2001 and this work). But because in all these cases the data were of very different nature (e.g. different resolutions and numbers of input flux bins), a general comparison of the results is not really possible. A few remarks can be made nevertheless. Projects to obtain MK classifications normally use data in the wavelength range from 3800 to 5200 Å with a high spectral resolution of about 2 to 3 Å (see e.g. @Bailer98). [@Weaver1] and [@Weaver2] classified stars in terms of MK classes in the visual to near-infrared wavelength range 5800–8900 [Å]{} with a resolution of 15 [Å]{}. However, even the resolution of these spectra is still much better (by a factor of approximately three) than the “best" one of [[dispi]{}s]{}, which is about 40 [Å]{} in the low-efficiency third order. On account of the higher resolution, we would expect such spectra to give better precision for spectral type or [$T_{\rm eff}$]{}, as well as for the line-sensitive parameters ([$\log g$]{} and [\[M/H\]]{}), than is possible with [[dispi]{}s]{}. [@Snider2001] recently classified spectra having 1 to 2 [Å]{} resolution. They determined [$T_{\rm eff}$]{}, [$\log g$]{} and [\[M/H\]]{} of low-metallicity stars to an accuracy of about 150 K in [$T_{\rm eff}$]{} in the range 4250 K $<$ [$T_{\rm eff}$]{}$<$ 6500 K, 0.30 dex in [$\log g$]{} over the range $1.0 \leq\ $[$\log g$]{}$\ \leq 5.0$ dex, and 0.20 dex in [\[M/H\]]{} for $-4 \leq\ $[\[M/H\]]{}$\ \leq 0.3$ dex.
From our results, we find for this temperature range (L$_{2}$ and L$_{3}$ samples) a classification precision in [$T_{\rm eff}$]{} of better than 5% for $V \leq 14$ mag (no extinction) without UV information, and about 2% when UV data are included. Only for brighter stars ($V \leq 13$ mag) do we find that [$\log g$]{} can be determined from [[dispi]{}s]{} to better than 0.3 dex for temperatures in the range 4000 K $\leq$ [$T_{\rm eff}$]{} $<$ 6000 K, and then only when UV data are included. Concerning metallicity, our results are comparable (better than 0.2 dex for visual magnitudes $V \leq\ 12$ mag, no extinction, UV data included). Clearly, this is because we have only used metallicities in the range from $-$0.3 dex to +1 dex (see above). A comparison with the neural network approach using synthetic data for a test of possible GAIA photometric systems (@synspec) may be of relevance. Bailer-Jones also used input data with various moderate resolutions, some of them similar to those of the spectral orders in [[dispi]{}s]{}. The effects of the quantum efficiency (QE) of the detectors were not included, so that in his tests the information provided in the vicinity of (and shortward of) the Balmer jump could be utilized in full. After shifting our results to the fainter magnitudes reachable with GAIA's larger telescope, and considering the other differences between Bailer-Jones' investigation and ours, as well as the differences between the DIVA and GAIA optics and data formats, one must conclude that these ANN analyses work to similar satisfaction. Little work has been done so far concerning the automated determination of interstellar extinction. [@Weaver1] tested the effect of extinction and found that [$E(B-V)$]{} could be determined from spectra of A-type stars with an accuracy of 0.05 mag in the [$E(B-V)$]{} range of 0–1.5 mag. [@Gulati97] used IUE low-dispersion spectra (wavelength range 1153–3201 [Å]{}, spectral resolution 6 [Å]{}) from O and B stars.
Applying reddening to their spectra in the [$E(B-V)$]{} range of 0.05–0.95 mag in steps of 0.05 mag, they were able to retrieve extinction with high accuracy, to about 0.08 mag, clearly because of the presence of the 2200 Å bump in the input data. From Fig.\[G\_ext\] we see that extinction can be determined from [[dispi]{}s]{} to better than 0.08 mag for visual magnitudes $V \leq\ 14$ mag at all temperatures, provided that UV data are included. Concerning the DIVA satellite, [@Elsner99] found interesting results with minimum distance methods. However, the optical concept of DIVA has changed considerably since then; here, too, a comparison of the quality might lead to a skewed judgement, and we therefore refrain from going into detail. A final remark deals with the effect of selecting our training sample randomly from the database. A random selection may accidentally lead to larger regions of parameter space without training data. Since our object (“application") sample is the complement of the training set, objects falling in those gaps are clearly classified worse than objects near trained points. [@Malyuto2] demonstrated such effects when using minimum distance methods. The average errors in our results are influenced by such effects, but this was not investigated further. In considering the accuracies obtained with our ANN approach, we have to note that real stellar spectra show a much more complex behaviour than synthetic ones. For example, even the more realistic approach of non-LTE models (@Hauschildt99) cannot properly describe the true behaviour of elements in a stellar atmosphere (see e.g. @Gray92, Ch. 13). Moreover, good colour calibrations in accord with observed data are still difficult to obtain, as described e.g. in [@Westera2002]. It is difficult to estimate how to properly weigh such intrinsic inconsistencies with respect to the final performance of the DIVA satellite.
The effect of such cosmic scatter is probably that the final performance of DIVA will be less accurate than our results from these ideal synthetic spectra or, equivalently, that the accuracy curves of Fig.\[Fno2\_ext\] and Fig.\[G\_ext\] are to be shifted somewhat toward brighter magnitudes. We argued that additional UV data can improve the parameter results considerably in the classification process. The spectral library of our present simulations does not, however, include changes in, e.g., alpha-process elements, which can also show up in the range of DIVA's UV channels (e.g. the CN violet system in the range from 385 to 422 nm). The conclusions of the discussion are:

1) The ANN method is well suited to obtain astrophysical parameters from DIVA [[dispi]{}s]{}.

2) The accuracy obtained is related to the strength of the signal of each parameter as present in a [[dispi]{}]{}: [$T_{\rm eff}$]{} is best, followed by [$\log g$]{}, and then [$E(B-V)$]{} and [\[M/H\]]{}.

3) The accuracy is clearly related to temperature: toward higher temperatures the signals of both [$\log g$]{} and [\[M/H\]]{} decrease considerably.

4) Our results were obtained with synthetic spectra. Real stars will not all behave like textbook objects, and the classification from real data will necessarily be poorer, albeit by an unknown amount per star.

5) The classification quality is fully adequate for selecting objects of desired characteristics from the final DIVA database for statistical analyses and/or for efficient post-mission type-related investigations.

This project is carried out in preparation for the DIVA mission and we thank the DLR for financial support (project no. 50QD0103). We thank Martin Altmann, Uli Bastian, Michael Hilker, Thibault Lejeune, Valeri Malyuto and Klaus Reif for helpful discussions. We also thank Oliver Cordes, Ole Marggraf and Sven Helmer for assistance with computer system level support.
[^1]: In the meantime we started simulations with a set of spectra with a complete range of metallicities.
--- abstract: | With the rapid development of neural architecture search (NAS), researchers have found powerful network architectures for a wide range of vision tasks. However, it remains unclear whether a searched architecture can transfer across different types of tasks as manually designed ones do. This paper puts forward this problem, referred to as **NAS in the wild**, which explores the possibility of finding the optimal architecture on a proxy dataset and then deploying it to mostly unseen scenarios. We instantiate this setting using a currently popular algorithm named differentiable architecture search (DARTS), which often suffers from unsatisfying performance when transferred across different tasks. We argue that the accuracy drop originates from the formulation that uses a super-network for search but a sub-network for re-training. The different properties of these stages result in a significant **optimization gap**, and consequently, the architectural parameters “over-fit" the super-network. To alleviate the gap, we present a progressive method that gradually increases the network depth during the search stage, which leads to the Progressive DARTS (P-DARTS) algorithm. With a reduced search cost ($7$ hours on a single GPU), P-DARTS achieves improved performance on both the proxy dataset (CIFAR10) and a few target problems (ImageNet classification, COCO detection and three ReID benchmarks). Our code is available at . author: - Xin Chen - Lingxi Xie - Jun Wu - Qi Tian bibliography: - 'egbib.bib' date: 'Received: date / Accepted: date' title: 'Progressive DARTS: Bridging the Optimization Gap for *NAS in the Wild*' ---

Introduction {#intro}
============

Recently, the research progress of computer vision has been largely boosted by deep learning [@lecun2015deep].
The core part of deep learning is to design and optimize deep neural networks, for which a few popular models were manually designed and achieved state-of-the-art performance at the time [@krizhevsky2012imagenet; @szegedy2015going; @he2016deep; @zhang2018shufflenet; @howard2017mobilenets]. However, designing neural network architectures requires both expertise and heavy computational resources. The appearance of neural architecture search (NAS) has changed this situation: it aims to discover powerful network architectures automatically and has achieved remarkable success in image recognition [@zoph2016neural; @zoph2018learning; @liu2018progressive; @tan2019efficientnet]. In the early age of NAS, researchers focused on heuristic search methods, which sample architectures from a large search space and perform individual evaluations. Such approaches, while being safe in finding powerful architectures, require massive computational overheads [@zoph2016neural; @real2018regularized; @zoph2018learning]. To alleviate this burden, researchers designed efficient approaches to reuse computation across the searched architectures [@cai2018efficient], which was later developed into constructing a super-network that covers the entire search space [@pham2018efficient]. Among them, DARTS [@liu2018darts] is an elegant solution that relaxes the discrete search space into a continuous, differentiable function, so that the search process reduces to optimizing the super-network and can be finished within GPU-hours. Despite the efficiency of super-network-based search methods, most of them suffer from the issue of instability, meaning that (i) the accuracy can be sensitive to random initialization, and (ii) the searched architecture sometimes incurs unsatisfying performance on other datasets or tasks. While directly searching over the target problem is always a solution, we argue that studying this topic may unleash the potential of NAS.
To this end, we formalize a setting named **NAS in the wild**, illustrated in Figure \[motivation\], in which an architecture searched on any proxy dataset should be easily deployable to different application scenarios. We argue that the instability issue originates from the fact that the search stage fits the *super-network* on the proxy dataset, but the re-training stage actually applies the optimal *sub-network* to either the same dataset or a different task. Even if the proxy dataset and the target dataset are the same, one cannot expect that the best super-network, after being pruned, produces the best sub-network. This is called the **optimization gap**. In this work, we explore a practical method to alleviate the gap, which involves gradually adjusting the super-network so that its properties converge to those of the sub-network by the end of the search process. Our approach, named Progressive DARTS (P-DARTS), is built on DARTS, a recently published method for differentiable NAS. As shown in Figure \[motivation\](b), the search process of P-DARTS is divided into multiple stages, and the depth of the super-network is increased at the end of each stage. This brings two technical issues, and we provide solutions accordingly. First, since heavier computational overheads are required when searching with a deeper super-network, we propose **search space approximation**, which reduces the number of candidates (operations) as the network depth increases. Second, optimizing a deep super-network may cause unstable gradients, and the search algorithm is thus biased heavily towards *skip-connect*, a learning-free operator that often lies along a direction of rapid gradient descent.
Consequently, this reduces the learning ability of the found architecture, for which we propose *search space regularization*, which (i) introduces operation-level Dropout [@srivastava2014dropout] to alleviate the dominance of *skip-connect* during training, and (ii) regularizes the appearance of *skip-connect* when determining the final sub-network. The effectiveness of P-DARTS is first verified in the standard vision setting, *i.e.*, searching and evaluating the architecture on the CIFAR10 dataset. We achieve state-of-the-art performance (a test error of $2.50\%$) on CIFAR10 with $3.4$M parameters. In addition, we demonstrate the benefits of search space approximation and regularization: the former reduces the search cost to $0.3$ GPU-days on CIFAR10, surpassing ENAS [@pham2018efficient], an approach known for its search efficiency; the latter largely reduces the fluctuation of individual search trials and thus improves reliability. Next, we investigate the application in the wild, in which the architecture searched on CIFAR10 transfers well to CIFAR100 classification, ImageNet classification, COCO detection, and three person re-identification (ReID) tasks; *e.g.*, on ImageNet, it achieves top-$1$/$5$ errors of $24.4\%$/$7.4\%$, respectively, comparable to the state-of-the-art under the mobile setting. Furthermore, architecture search is also performed on ImageNet, and the discovered architecture shows superior performance. The preliminary version of this work appeared as [@chen2019progressive]. In this journal version, we extend the original work in several aspects. First, we present the new setting of *NAS in the wild*, which provides a benchmark for evaluating the generalization ability of NAS approaches. Second, we complement a few diagnostic experiments to further reveal that bridging the optimization gap is helpful in accomplishing the goal of NAS in the wild.
Third, we extend the search method so that it can directly search on ImageNet and thus produce more powerful architectures for large-scale image recognition. The remaining part of this paper is organized as follows. Section \[rw\] briefly introduces work related to our research. Then, Section \[problem\] illustrates the problem, NAS in the wild, and Section \[method\] elaborates the optimization gap and the P-DARTS approach. After extensive experiments are presented in Section \[exp\], we conclude this work in Section \[conclusion\].

Related Work {#rw}
============

Image recognition is a fundamental task in computer vision. In recent years, with the development of deep learning, CNNs have been dominating image recognition [@krizhevsky2012imagenet; @simonyan2014very; @he2016deep]. A few elaborately designed handcrafted architectures have been proposed, including VGGNet [@simonyan2014very], ResNet [@he2016deep], DenseNet [@huang2017densely], *etc.*, all of which highlighted the importance of human experts in network design. In the era of hand-designed architectures, the main roadmap of architecture design was how to enlarge the depth of CNNs efficiently. AlexNet [@krizhevsky2012imagenet] used the ReLU activation function and Local Response Normalization (LRN) to alleviate gradient diffusion and achieved the state-of-the-art performance on ImageNet classification at the time. VGGNet [@simonyan2014very] proposed to stack convolutions with identical small kernel sizes and to initialize deeper networks with the previously learned weights of a shallower network, which resulted in a network of $19$ layers. GoogLeNet [@szegedy2015going] connected convolutions with different kernel sizes in parallel, which led to a reduction of network parameters, an increase of network depth, and better parameter utilization.
In ResNet [@he2016deep], the depth of networks was further increased to $152$ layers for ImageNet and even $1\rm{,}202$ layers for CIFAR10, with the help of the newly proposed skip connection and residual block. After that, DenseNet [@huang2017densely] inserted skip connections between all layers in the building block to formulate a densely connected CNN, which largely strengthened information propagation and feature reutilization. Apart from this depth route, network width was also a critical aspect of performance promotion. WRN [@zagoruyko2016wide] explored the possibility of scaling up the network width of ResNet and achieved brilliant results. PyramidNet [@han2017deep] extended this idea to design a pyramid-like ResNet, which further promoted the network capability. This work belongs to the emerging field of neural architecture search, the technique of automating architecture engineering [@elsken2018neural]. In the early 2000s, pioneering researchers attempted to generate better topologies automatically with evolutionary algorithms [@stanley2002evolving]. Early NAS works tried to search for the basic components and topology of neural networks to construct a complete network [@baker2016designing; @suganuma2017genetic; @xie2017genetic], while recent works focused on finding robust cells [@zoph2018learning; @real2018regularized; @dong2019searching]. Among these works, heuristic algorithms were widely adopted in the NAS pipeline. Baker [*et al.*]{} [@baker2016designing] first applied reinforcement learning (RL) to neural architecture search and adopted an RNN-based controller to guide the sampling process for the network configuration. [@xie2017genetic] encoded the architecture of a CNN into binary codes and used a general evolutionary algorithm to evolve a better global network topology. 
Considering the weak scalability of global network architectures, [@zoph2016neural] adopted RL to search for the configuration of building blocks, which are also referred to as cells. [@real2018regularized] proposed to regularize the standard evolutionary algorithm in the NAS pipeline with aging evolution and, for the first time, surpassed the best manually designed architectures on image recognition. A critical drawback of the above approaches is the expensive search cost ($3\rm{,}150$ GPU-days for EA-based AmoebaNet [@real2018regularized] and $20\rm{,}000$ GPU-days for RL-based NASNet [@zoph2016neural]), because these methods require sampling and evaluating numerous architectures by training them from scratch. There were two lines of solutions. The first one involved reducing the search space [@zoph2018learning], and the second one optimized the exploration policy (*e.g.*, by learning a surrogate model [@liu2018progressive]) in the search space so that the search process becomes more efficient. Recently, search efficiency has become one of the main concerns in NAS, and the search cost has been reduced to a few GPU-days with the help of the weight-sharing technique [@pham2018efficient; @liu2018darts]. In this pipeline, a super-network that contains all candidate architectures in the search space is trained, and sub-architectures are evaluated with shared weights from the super-network. ENAS [@pham2018efficient] proposed to adopt a parameter-sharing scheme among child models to bypass the time-consuming process of evaluating candidate architectures by training them from scratch, which dramatically reduced the search cost to less than one GPU-day. DARTS [@liu2018darts] introduced a differentiable NAS framework that relaxes the discrete search space into a continuous one by weighting candidate operations with architectural parameters, which achieved comparable performance and a remarkable efficiency improvement compared to previous approaches. 
Following DARTS, GDAS [@dong2019searching] proposed to use the Gumbel-softmax sampling trick to guide the sub-graph selection process. With the BinaryConnect scheme, ProxylessNAS [@cai2018proxylessnas] adopted the differentiable framework and proposed to search architectures on the target task instead of adopting the conventional proxy-based framework. A major drawback of DARTS-based approaches is the instability issue caused by the optimization gap depicted in Section \[intro\]. SNAS [@xie2018snas] proposed to constrain the architectural parameters to be one-hot to tackle the inconsistency between the optimization objectives of the search and evaluation scenarios, which can be regarded as an attempt at reducing the optimization gap. However, SNAS reported only comparable classification performance to DARTS on both proxy and target datasets. Problem: *NAS in the Wild* {#problem} ========================== We investigate the setting of *NAS in the wild*, which seeks a NAS algorithm that can search on a *proxy* dataset and freely transfer to a wide range of target datasets or even other types of recognition tasks. This is important for real-world scenarios, as there may not be sufficient resources, in terms of either data or computation, for a complete NAS process to be executed. Note that the community has witnessed a few recent works, sometimes referred to as *proxyless* NAS [@cai2018proxylessnas], that search neural architectures on the target dataset directly. Our setting does not contradict these efforts, and we argue that both settings have their own advantages. On the one hand, searching on the target dataset directly enables more accurate properties of the specified dataset to be captured and, most often, leads to improved performance on the target dataset. On the other hand, we desire the ability of directly transferring the searched architecture to other scenarios. 
This setting not only eases application but also raises new challenges that we believe are beneficial for the research field of NAS. The most significant difficulty brought by this setting is the enlarged gap between the search stage and the evaluation stage, which we will elaborate in detail in Section \[method:gap\]. In this paper, we present a practical solution that largely shrinks this gap and thus improves the ability of model transfer. Method: Progressive DARTS {#method} ========================= ![image](pipeline2_ext){width="0.95\linewidth"} Preliminary: Differentiable Architecture Search ----------------------------------------------- Our work is based on DARTS [@liu2018darts], which adopts a cell-based search framework that searches for robust architecture building blocks, [*i.e.*]{}, cells, and then stacks the searched cells $L$ times to construct the target network. Thus, the search space is represented in the form of cells. A cell is denoted as a directed acyclic graph (DAG) $\mathcal{G}$ composed of $N$ nodes (vertexes) and their corresponding edges. A node $x_i$ represents a feature layer, [*i.e.*]{}, the output of a specific operation. The first two nodes of a cell are the input nodes, which come from the outputs of previous cells or stem convolutions located at the beginning of the network. We denote the operation space as $\mathcal{O}$, in which each element represents a candidate operation (mathematical function) $o(\cdot)$. An intermediate node $x_j$ is connected to each of its preceding nodes $\{x_0, x_1, \ldots, x_{j-1}\}$ with an edge $\mathrm{E}_{(i, j)}$ $(i < j)$, where operations from the operation space are used to link the information flow between nodes $x_i$ and $x_j$. 
To relax the discrete search space to be continuous, operations on each edge are weighted with a set of architectural parameters $\alpha^{(i, j)}$, which is normalized with the Softmax function and thus formulated as: $$\label{eq1} \centering f_{i, j}(x_i) = \sum_{o\in\mathcal{O}_{i, j}}{\frac{\mathrm{exp}(\alpha_o^{(i,j)})}{\sum_{o'\in\mathcal{O}_{i, j}}\mathrm{exp}(\alpha_{o'}^{(i,j)})}o(x_i)}.$$ All feature maps passed into the intermediate node $x_j$ are integrated together by summation, denoted as $x_j = \sum_{i < j}{f_{i,j}(x_i)}$. The output node is defined as $x_{N-1} = \mathrm{concat}(x_2, x_3, \cdots, x_{N-2})$, where $\mathrm{concat}(\cdot)$ concatenates all input signals in the channel dimension. The architectural parameters in DARTS are jointly optimized with the network parameters, *i.e.*, the convolutional weights. The output architecture is generated by operation pruning according to the learned architectural parameters, with at most one non-*zero* operation on a specific edge and two preserved edges for each intermediate node. For more technical details, please refer to the original DARTS paper [@liu2018darts]. The Optimization Gap {#method:gap} -------------------- The most significant drawback of DARTS, especially in the scenario of *NAS in the wild*, lies in the gap between search and evaluation. To be specific, in DARTS as well as other super-network-based NAS approaches, the search goal is to optimize the objective function with respect to the network parameters, $\omega$, and the architectural parameters, $\alpha$, on the proxy dataset, $\mathcal{D}_\mathrm{S}$. The overall objective can be written as: $$\label{eq2} \centering \begin{aligned} \alpha^\star={\arg\min_{\alpha}}\left\{{\min_{\omega_\mathrm{S}}}\mathcal{L}_\mathrm{S}(\omega_\mathrm{S},\alpha;\mathcal{D}_\mathrm{S}, \mathcal{C}_\mathrm{S})\right\}, \end{aligned}$$ where $\mathcal{C}_\mathrm{S}$ denotes the network configuration for architecture search. 
The output of the search process is an optimal sub-network $\mathcal{A}$ generated according to the learned, optimal architectural parameters, $\alpha^\star$. At the evaluation phase, the goal is to train the evaluation network constructed with the searched architecture $\mathcal{A}$ on the target dataset $\mathcal{D}_\mathrm{E}$ for better performance, thus we have: $$\label{eq3} \centering \begin{aligned} \omega_\mathrm{E}^\star=\arg\min_{\omega_\mathrm{E}}\mathcal{L}_\mathrm{E}(\omega_\mathrm{E};\mathcal{A}, \mathcal{D}_\mathrm{E}, \mathcal{C}_\mathrm{E}), \end{aligned}$$ where $\omega_\mathrm{E}$ and $\mathcal{C}_\mathrm{E}$ denote the network parameters and configuration at evaluation time, respectively. Severe mismatches arise in the DARTS framework between the search and evaluation scenarios (mismatches between $\mathcal{C}_\mathrm{S}$ and $\mathcal{C}_\mathrm{E}$) in network shape, hyper-parameters, training policies, *etc.* We summarize these problems as the **optimization gap** between training the super-network and applying the sub-network to the target network. For example, a typical optimization gap comes from the inconsistency of the operation pruning process, since the objective of the super-network is to jointly optimize the network weights $\omega_\mathrm{S}$ of all candidate operations and the architectural parameters $\alpha$, while the objective of training the target network is only to optimize the network weights $\omega_\mathrm{E}$ of a few selected operations. These mismatches result in dramatic performance deterioration when the discovered architectures are applied to real-world applications. In particular, the difference between the proxy and target dataset and/or task can further enlarge the optimization gap, and thus cause difficulties in applying the searched architecture in the wild. 
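Before moving to the proposed method, the continuous relaxation of Eq. \[eq1\] can be illustrated with a minimal NumPy sketch. The toy "operations" below stand in for real convolutions and pooling layers; the names and values are illustrative assumptions, not part of the actual search space.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over a 1-D array of architectural parameters."""
    e = np.exp(a - a.max())
    return e / e.sum()

def mixed_op(x, ops, alpha):
    """f_{i,j}(x): softmax(alpha)-weighted sum of candidate operation outputs (Eq. 1)."""
    w = softmax(alpha)
    return sum(wk * op(x) for wk, op in zip(w, ops))

# Hypothetical candidate operations on one edge (stand-ins for illustration only).
ops = [
    lambda x: x,                  # skip-connect
    lambda x: np.zeros_like(x),   # zero
    lambda x: 0.5 * x,            # stand-in for a parametric op, e.g. a conv
]
alpha = np.array([0.1, -1.0, 0.8])  # architectural parameters for this edge

x = np.ones(4)
y = mixed_op(x, ops, alpha)  # a convex combination of the candidate outputs
```

Because the relaxation is a softmax-weighted sum, the output stays differentiable with respect to both the operation weights and $\alpha$, which is what allows DARTS to optimize them jointly by gradient descent.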
Progressive Search to Bridge the Depth Gap ------------------------------------------ Among the optimization gaps, the one caused by different network depths is one of the main sources of performance deterioration. We propose to alleviate it by progressively increasing the search depth, which is built upon our search space approximation scheme. Besides, the mismatch in network width, *i.e.*, the number of channels of feature maps, is also an essential factor associated with performance when searching architectures on large and complex datasets, and we tackle it by progressively increasing the search width. To be specific, the architecture search process of DARTS is performed on a super-network with $8$ cells, while the discovered architecture is evaluated on a network with either $20$ cells (on CIFAR10) or $14$ cells (on ImageNet). There is a considerable difference between the behavior of shallow and deep networks [@ioffe2015batch; @srivastava2015training; @he2016deep], which implies that the architectures discovered in the search process are not necessarily optimal for evaluation. We name this the **depth gap** between search and evaluation. To verify it, we executed the search process of DARTS multiple times and found that the normal cells of the discovered architectures tend to keep shallow connections instead of deep ones, [*i.e.*]{}, the search algorithm prefers to preserve edges connected to the input nodes instead of cascading between intermediate nodes. This is because shallow networks often enjoy faster gradient descent during the search process. However, such a property contradicts the common observation that deeper networks tend to perform better [@simonyan2014very; @szegedy2015going; @he2016deep; @huang2017densely]. 
Therefore, we propose to bridge the depth gap with a strategy that progressively increases the network depth during the search process, so that at the end of the search, the depth of the super-network is sufficiently close to the network configuration used in evaluation. Here we adopt a progressive manner, instead of directly increasing the search depth to the target level, because we expect searching in shallow networks to reduce the search space with respect to candidate operations, so as to alleviate the risk of searching in deep networks. The effectiveness of this progressive strategy will be verified in Section \[comp\_depth\]. The difficulty comes from two aspects. First, the computational overhead increases linearly with the search depth, which brings issues in both time and memory expenses. In particular, in DARTS, GPU memory usage is proportional to the depth of the super-network. The limited GPU memory forms a major obstacle, and the most straightforward solution is to trim the number of channels in each operation – DARTS [@liu2018darts] tried this but reported a slight performance deterioration, because it enlarged the mismatch in network width, another aspect of the optimization gap. To address this problem, we propose a **search space approximation** scheme that progressively reduces the number of candidate operations at the end of each stage, using the architectural parameters of the previous stage, *i.e.*, the scores of operations, as the criterion of selection. Details of search space approximation are presented in Section \[ssa\]. Second, we find that when searching on a deeper super-network, the differentiable approaches tend to bias towards the *skip-connect* operation, because it accelerates forward and backward propagation and often leads to the fastest route of gradient descent. However, this operation is parameter-free, which implies a relatively weak ability to learn visual representations. 
To this end, we propose another scheme named **search space regularization**, which adds operation-level Dropout [@srivastava2014dropout] to prevent the architecture from “over-fitting” and restricts the number of preserved *skip-connects* for further stability. Details of search space regularization are presented in Section \[ssr\]. ### Search Space Approximation {#ssa} A toy example is presented in Figure \[pipeline\] to demonstrate the idea of search space approximation. The entire search process is split into multiple stages, including an initial stage, one or a few intermediate stages, and a final stage. For each stage, $\mathfrak{S}_k$, the super-network is constructed with $L_k$ cells and the operation space consists of $O_k$ candidate operations, *i.e.*, $|\mathcal{O}_{(i, j)}^k| = O_k$. Following our motivation, the super-network of the initial stage is relatively shallow, but the operation space is large ($\mathcal{O}_{(i, j)}^1 \equiv \mathcal{O}$). After each stage, $\mathfrak{S}_{k-1}$, the architectural parameters $\alpha_{k-1}$ are optimized and the scores of the candidate operations on each edge are ranked according to the learned $\alpha_{k-1}$. We increase the depth of the super-network by stacking more cells, *i.e.*, $L_{k}>L_{k-1}$, and approximate the operation space according to the ranking scores in the meantime. As a consequence, the new operation set on each edge, $\mathcal{O}_{(i, j)}^k$, has a smaller size than $\mathcal{O}_{(i, j)}^{k-1}$, or equivalently, $O_{k}<O_{k-1}$. The criterion of approximation is to drop the less important operations on each edge, specified as those assigned lower weights during the previous stage, $\mathfrak{S}_{k-1}$. As shown in Table \[mem\_usage\], this strategy is memory-efficient, which enables the deployment of our approach on regular GPUs, *e.g.*, with a memory of $16$GB. 
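The approximation schedule above can be sketched in a few lines of Python. The stage depths ($5 \to 11 \to 17$ cells) and per-edge operation counts ($8 \to 5 \to 3$) follow the configuration reported later in the paper; the $\alpha$ scores are faked with random numbers here, whereas in practice they come from training the super-network at each stage.

```python
import numpy as np

rng = np.random.default_rng(0)
STAGE_DEPTHS = [5, 11, 17]   # L_k: cells in the super-network at stage k
OPS_PER_STAGE = [8, 5, 3]    # O_k: candidate operations kept on each edge

def approximate_search_space(num_edges=14):
    """Progressively shrink the per-edge operation set across search stages."""
    # Every edge starts with the full operation space (indices 0..7).
    edge_ops = {e: list(range(OPS_PER_STAGE[0])) for e in range(num_edges)}
    for stage, depth in enumerate(STAGE_DEPTHS):
        # Stand-in for training the depth-`depth` super-network: in reality,
        # alpha is learned; here it is random, purely for illustration.
        alpha = {e: rng.normal(size=len(ops)) for e, ops in edge_ops.items()}
        if stage + 1 < len(OPS_PER_STAGE):
            next_keep = OPS_PER_STAGE[stage + 1]
            for e, ops in edge_ops.items():
                # Rank candidates by their alpha score and keep the top ones.
                ranked = sorted(range(len(ops)), key=lambda k: -alpha[e][k])
                edge_ops[e] = [ops[k] for k in ranked[:next_keep]]
    return edge_ops

final = approximate_search_space()  # 3 surviving candidates per edge
```

Note that memory usage shrinks together with the operation set, which is why deepening the super-network from $5$ to $17$ cells remains feasible on a single GPU.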
The growth of architecture depth continues until it is sufficiently close to that used in evaluation. After the last search stage, the final cell topology (bold lines in Figure \[pipeline\](c)) is derived according to the learned architectural parameters $\alpha_K$. Following DARTS, for each intermediate node we keep the two edges whose largest non-*zero* weights are top-ranked, and preserve the most important operation on each retained edge. ### Search Space Regularization {#ssr} At the start of each stage, $\mathfrak{S}_k$, we train the (modified) architecture from scratch, *i.e.*, all network weights and architectural parameters are re-initialized randomly, because several candidates have been abandoned on each edge[^1]. However, training a deeper network is harder than training a shallow one [@srivastava2015training]. In our particular setting, we observe that information prefers to flow through *skip-connect* instead of *convolution* or *pooling*, which is arguably due to the fact that *skip-connect* often leads to rapid gradient descent, especially on small proxy datasets (CIFAR10 or CIFAR100) which are relatively easy to fit. The gradient of a *skip-connect* operation with respect to the input is always $1.0$, while that of *convolutions* is much smaller ($\left[10^{-3},10^{-2}\right]$ according to our statistics). Another important reason is that, at the start of training, weights in *convolutions* are less meaningful, which results in unstable outputs compared to the weight-free *skip-connect*, and such outputs are not likely to receive high weights in classification. Both reasons make *skip-connect* accumulate weights much more rapidly than other operations. Consequently, the search process tends to generate architectures with many *skip-connect* operations, which limits the number of learnable parameters and thus produces unsatisfying performance at the evaluation stage. This is essentially a kind of over-fitting. 
We address this problem by search space regularization, which consists of two parts. First, we insert operation-level Dropout [@srivastava2014dropout] after each *skip-connect* operation to partially “cut off” the straightforward path through *skip-connect* and facilitate the algorithm to explore other operations. However, if we constantly block the path through *skip-connect*, the algorithm will unfairly drop them by assigning lower weights to them, which is harmful to the final performance. To address this contradiction, we gradually decay the Dropout rate during the training process in each search stage. Thus, the straightforward path through *skip-connect* is blocked at the beginning and treated equally afterward, when the parameters of other operations are well learned, leaving the algorithm itself to make the decision. Despite the use of operation-level Dropout, we still observe that *skip-connect*, as a special kind of operation, has a significant impact on recognition accuracy at the evaluation stage. Empirically, we performed $3$ individual search processes on CIFAR10 with identical search settings, but found that the number of preserved *skip-connects* in the normal cell, after the final stage, varied from $2$ to $4$. In the meantime, the recognition performance at the evaluation stage is highly correlated with this number, as observed before. This motivates us to design the second regularization rule, architecture refinement, which simply restricts the number of preserved *skip-connect* operations in the final architecture to a constant $M$. This is done with an iterative process, which starts by constructing a cell topology using the standard rule described by DARTS. If the number of *skip-connects* is not exactly $M$, we search for the $M$ *skip-connect* operations with the largest architectural weights in this cell topology, set the weights of the others to $0$, and then redo cell construction with the modified architectural parameters. 
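The iterative architecture-refinement rule can be sketched as follows. The per-edge weights below are illustrative numbers, not learned values, and the per-edge "pick the operation with the largest weight" rule is a simplification of the full DARTS topology derivation.

```python
M = 2  # target number of preserved skip-connects in the final cell

def derive_cell(edges):
    """Pick, per edge, the candidate operation with the largest weight."""
    return {e: max(ops, key=ops.get) for e, ops in edges.items()}

def refine(edges, m=M):
    """Suppress all but the m strongest skip-connects, then re-derive the cell."""
    edges = {e: dict(ops) for e, ops in edges.items()}  # work on a copy
    while True:
        cell = derive_cell(edges)
        skips = [e for e, op in cell.items() if op == "skip"]
        if len(skips) <= m:
            return cell
        # Rank the chosen skip-connects by weight and zero out all but the top m.
        skips.sort(key=lambda e: -edges[e]["skip"])
        for e in skips[m:]:
            edges[e]["skip"] = 0.0

# Toy example: three edges, each initially dominated by skip-connect.
edges = {
    0: {"skip": 0.9, "conv": 0.3},
    1: {"skip": 0.8, "conv": 0.5},
    2: {"skip": 0.7, "conv": 0.6},
}
cell = refine(edges)  # edge 2's skip-connect is suppressed, so "conv" wins there
```

The loop always terminates, since each pass strictly reduces the number of selected skip-connects until the constraint is met.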
We emphasize that the second regularization technique must be applied on top of the first one; otherwise, in the situation without operation-level Dropout, the search process produces low-quality architectural weights, based on which we could not build up a powerful architecture even with a fixed number of *skip-connects*.

  | Architecture | Test Err. C10 (%) | Test Err. C100 (%) | Params (M) | Search Cost (GPU-days) | Search Method |
  |---|---|---|---|---|---|
  | DenseNet-BC [@huang2017densely] | 3.46 | 17.18 | 25.6 | - | manual |
  | NASNet-A + cutout [@zoph2018learning] | 2.65 | - | 3.3 | 1,800 | RL |
  | AmoebaNet-A + cutout [@real2018regularized] | 3.34 | - | 3.2 | 3,150 | evolution |
  | AmoebaNet-B + cutout [@real2018regularized] | 2.55 | - | 2.8 | 3,150 | evolution |
  | Hierarchical Evolution [@liu2017hierarchical] | 3.75 | - | 15.7 | 300 | evolution |
  | PNAS [@liu2018progressive] | 3.41 | - | 3.2 | 225 | SMBO |
  | ENAS + cutout [@pham2018efficient] | 2.89 | - | 4.6 | 0.5 | RL |
  | DARTS (first order) + cutout [@liu2018darts] | 3.00 | 17.76$^\dagger$ | 3.3 | 1.5$^\ddagger$ | gradient-based |
  | DARTS (second order) + cutout [@liu2018darts] | 2.76 | 17.54$^\dagger$ | 3.3 | 4.0$^\ddagger$ | gradient-based |
  | SNAS + mild constraint + cutout [@xie2018snas] | 2.98 | - | 2.9 | 1.5 | gradient-based |
  | SNAS + moderate constraint + cutout [@xie2018snas] | 2.85 | - | 2.8 | 1.5 | gradient-based |
  | SNAS + aggressive constraint + cutout [@xie2018snas] | 3.10 | - | 2.3 | 1.5 | gradient-based |
  | ProxylessNAS [@cai2018proxylessnas] + cutout | 2.08 | - | 5.7 | 4.0 | gradient-based |
  | P-DARTS (searched on CIFAR10) + cutout | 2.50 | 17.20 | 3.4 | 0.3 | gradient-based |
  | P-DARTS (searched on CIFAR100) + cutout | 2.62 | 15.92 | 3.6 | 0.3 | gradient-based |
  | P-DARTS (searched on CIFAR10, large) + cutout | 2.25 | 15.27 | 10.5 | 0.3 | gradient-based |
  | P-DARTS (searched on CIFAR100, large) + cutout | 2.43 | 14.64 | 11.0 | 0.3 | gradient-based |

\[tab\_ev\_cifar\]

Relationship to Prior Work -------------------------- Though having a similar name, Progressive NAS or PNAS [@liu2018progressive] was driven by a different motivation. PNAS explored the search space progressively by searching for operations node-by-node within each cell. Our approach shares a similar progressive search manner – we perform the search at the cell level to enlarge the architecture depth, while PNAS did it at the operation level (within a cell) to reduce the number of architectures to evaluate. There exist other efforts in alleviating the optimization gap between search and evaluation. For example, SNAS [@xie2018snas] aimed at eliminating the bias between the search and evaluation objectives of differentiable NAS approaches by forcing the architecture weights on each edge to be one-hot. Our work also eliminates this bias, which we achieve by enlarging the architecture depth during the search stage. Another example of bridging the optimization gap is ProxylessNAS [@cai2018proxylessnas], which introduced a differentiable NAS scheme to directly learn architectures on the target task (and hardware) without a proxy dataset. It achieved high memory efficiency by applying binary masks to candidate operations and forcing only one path in the over-parameterized network to be activated and loaded into GPU memory. Different from it, our approach tackles the memory overhead by search space approximation. Besides, ProxylessNAS searched for a global topology instead of a cell topology, which requires strong priors on the target task as well as the search space, while P-DARTS does not need such priors. Our approach is much faster than ProxylessNAS ($0.3$ GPU-days vs. 
$4.0$ GPU-days on CIFAR10 and $2.0$ GPU-days vs. $8.3$ GPU-days on ImageNet). Last but not least, we believe that the phenomenon that the *skip-connect* operation dominates may be caused by the mathematical mechanism that DARTS was built upon. Some recent work [@bi2019stabilizing] pointed out issues in optimization, and we look forward to exploring the relationship between these issues and the optimization gap. Experiments {#exp} =========== The CIFAR10 and CIFAR100 Datasets --------------------------------- Following the standard vision setting, we search and evaluate architectures on the CIFAR10 [@krizhevsky2009learning] dataset. To further demonstrate the capability of the proposed method, we also execute architecture search on CIFAR100. Each of CIFAR10 and CIFAR100 has $50$K/$10$K training/testing images with a fixed spatial resolution of $32\times32$, distributed over $10$/$100$ classes. In the architecture search scenario, the training set is randomly split into two equal subsets, one for learning the network parameters ([*e.g.*]{}, convolutional weights) and the other for tuning the architectural parameters ([*i.e.*]{}, operation weights). In the evaluation scenario, the standard training/testing split is applied. ### Architecture Search {#sr} (Figure \[ncells\]: normal cells discovered at search stages $1$–$3$, the cell discovered by second-order DARTS, and edge-level connection statistics.) The whole search process is split into $3$ stages. The search space and network configuration are identical to DARTS at the initial stage (stage $1$), except that only $5$ cells are stacked in the search network for acceleration (we tried the original setting with $8$ cells and obtained similar results). The number of stacked cells increases from $5$ to $11$ for the intermediate stage (stage $2$) and to $17$ for the final stage (stage $3$). The numbers of operations preserved on each edge of the super-network are $8$, $5$, and $3$ for stages $1$, $2$, and $3$, respectively. 
The Dropout probability on *skip-connect* is decayed exponentially; the initial values for the reported results are $0.0$, $0.4$, $0.7$ on CIFAR10 for stages $1$, $2$, and $3$, respectively, and $0.1$, $0.2$, $0.3$ for CIFAR100. For a proper tradeoff between classification accuracy and computational overhead, the final discovered cells are restricted to keep at most $2$ *skip-connects*, which guarantees a fair comparison with DARTS and other state-of-the-art approaches. For each stage, the super-network is trained for $25$ epochs with a batch size of $96$, where only network parameters are tuned in the first $10$ epochs while both network and architectural parameters are jointly optimized in the remaining $15$ epochs. An Adam optimizer with learning rate $\eta = 0.0006$, weight decay $0.001$, and momentum $\beta=(0.5, 0.999)$ is adopted for the architectural parameters. The limitation of GPU memory is the main concern when we determine memory-related hyper-parameters, [*e.g.*]{}, the batch size. The first-order optimization scheme of DARTS is leveraged to learn the architectural parameters for acceleration, thus the architectural parameters and network parameters are optimized in an alternating manner. The architecture search process on CIFAR10 and CIFAR100 is performed on a single Nvidia Tesla P100, which takes around $7$ hours, resulting in a search cost of $0.3$ GPU-days. When we change the GPU device to an Nvidia Tesla V100 ($16$GB), the search cost is reduced to $0.2$ GPU-days (around $4.5$ hours). Architectures discovered by P-DARTS on CIFAR10 tend to preserve more deep connections than the one discovered by DARTS, as shown in Figure \[ncells\](c) and Figure \[ncells\](d). Moreover, the connections in the architecture discovered by P-DARTS are deeper than those in DARTS, meaning that the longest path in the cell cascades more levels in depth. 
In other words, there are more serial layers in the cell instead of parallel ones, making the target network deeper and achieving better classification performance. Notably, our method also allows architecture search on CIFAR100, where prior approaches mostly failed. The evaluation results in Table \[tab\_ev\_cifar\] show that the architecture discovered on CIFAR100 outperforms those transferred from CIFAR10. We tried to perform architecture search on CIFAR100 with DARTS using the code released by the authors, but got architectures full of *skip-connects*, which results in much worse classification performance. ### Architecture Evaluation Following the convention [@liu2018darts], an evaluation network stacked with $20$ cells and $36$ initial channels is trained from scratch for $600$ epochs with a batch size of $128$. Additional regularization methods are also applied, including Cutout regularization [@devries2017improved; @zhong2017random] of length $16$, drop-path [@larsson2016fractalnet] with probability $0.3$, and auxiliary towers [@szegedy2015going] with weight $0.4$. A standard SGD optimizer with a momentum of $0.9$ and a weight decay of $0.0003$ for CIFAR10 and $0.0005$ for CIFAR100 is adopted to optimize the network parameters. The cosine annealing scheme is applied to decay the learning rate from $0.025$ to $0$. To explore the potential of the searched architectures, we further increase the number of initial channels from $36$ to $64$, which is denoted as the large setting. Evaluation results and comparison with state-of-the-art approaches are summarized in Table \[tab\_ev\_cifar\]. As demonstrated in Table \[tab\_ev\_cifar\], P-DARTS achieves a $2.50\%$ test error on CIFAR10 with a search cost of only $0.3$ GPU-days. To obtain a similar performance, AmoebaNet [@real2018regularized] spent thousands of GPU-days, four orders of magnitude more computational resources. 
Our P-DARTS also outperforms DARTS and SNAS by a large margin with a comparable parameter count. Notably, architectures discovered by P-DARTS outperform ENAS, previously the most efficient approach, in both classification performance and search cost, with fewer parameters. The architectures discovered by both DARTS and P-DARTS on CIFAR10 are transferred to CIFAR100 to test the transferability between similar datasets. An obvious superiority of P-DARTS is observed in terms of classification accuracy. As mentioned previously, P-DARTS is able to support architecture search on other proxy datasets such as CIFAR100. For a fair comparison, we tried to perform architecture search on CIFAR100 with the publicly available code of DARTS, but obtained architectures full of *skip-connect* operations. The architecture discovered on CIFAR100 significantly outperforms those transferred from CIFAR10. An interesting point is that, for both CIFAR10 and CIFAR100, directly searched architectures perform better on the search dataset than transferred ones. Such a phenomenon provides evidence of the existence of dataset bias in NAS. Diagnostic Experiments {#exp_diag} ---------------------- Before continuing to ImageNet search and in-the-wild evaluation experiments, we conduct diagnostic studies to better understand the properties of P-DARTS. ### Comparison on the Depth of Search Networks {#comp_depth} Since the search process of P-DARTS is divided into multiple stages, we perform a case study that extracts architectures from each search stage with the same rule to validate the usefulness of bridging the depth gap. Architectures from each stage are evaluated to demonstrate their capability for image classification. The topology of the discovered architectures (only normal cells are shown) and their corresponding classification performance are summarized in Figure \[ncells\]. 
To show the difference in the topology of cells searched with different depths, we add the architecture discovered by second-order DARTS (DARTS\_V2, $8$ cells in the search network) for comparison. The lowest test error is achieved by the architecture obtained from the final search stage (stage $3$), which validates the effectiveness of shrinking the depth gap. From Figure \[ncells\] we can observe that these discovered architectures share some common edges, for example *sep\_conv\_$3\times3$* at edge $\mathrm{E}_{(c_{k-2}, 2)}$ for all stages of P-DARTS, and at edge $\mathrm{E}_{(c_{k-1}, 0)}$ for stages $2$ and $3$ of P-DARTS and DARTS$\_$V2. These common edges provide solid evidence that operations with high importance are preserved by the search space approximation scheme. Differences also exist between these discovered architectures, which we believe are the key factor affecting the capability of these architectures. Architectures generated by shallow search networks tend to keep shallow connections, while with deeper search networks, the discovered architectures prefer to pick some preceding intermediate nodes as input, resulting in cells with deep connections. This is because a deep search network is harder to optimize, so the algorithm has to explore more paths to find the optimum, which results in more complex and powerful architectures. Additionally, we perform a quantitative analysis on the architectures discovered by P-DARTS in three individual runs and summarize the average levels of their connections in Figure \[ncells\](e). For comparison, we also add the architecture found by second-order DARTS to this analysis. While the preserved edges of DARTS are almost all shallow ($7$ out of $8$ at level $1$ and level $2$), P-DARTS tends to keep more deep edges. ### Effectiveness of Search Space Approximation The search process takes $\sim7$ hours on a single Nvidia Tesla P100 GPU with $16$GB memory to produce the final architectures.
We monitor the GPU memory usage of the architecture search process for $3$ individual runs and collect the peak value to verify the effectiveness of the search space approximation scheme, which is shown in Table \[mem\_usage\]. The memory usage stays stably under the limit of the adopted GPU, and out-of-memory errors rarely occur, showing the validity of the search space approximation scheme in terms of memory efficiency. We perform experiments to demonstrate the effectiveness of the search space approximation scheme in improving classification accuracy. Only the final stage of the search process is executed on two different search spaces with identical settings. The first search space is progressively approximated from previous search stages, and the other is randomly sampled from the full search space. To eliminate the influence of randomness, we repeat the whole process for the randomly sampled one three times with different seeds and pick the best one. The lowest test error for the randomly sampled search space is $3.43\%$, which is much higher than $2.58\%$, the one obtained with the approximated search space. Moreover, we performed an additional experiment with a fixed depth ($8$ cells) and shrunk sets of operations ($8\to5\to3$, as used in the paper), which results in a test error of $2.70\%$, significantly lower than the $3.00\%$ test error obtained by first-order DARTS. Such results reveal the necessity of the search space approximation scheme.
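The two search spaces compared above can be sketched in a few lines; a minimal illustration (the operation names are those of the standard DARTS space, while `approximate` and `random_baseline` are hypothetical helper names):

```python
import random

FULL_OPS = ["none", "max_pool_3x3", "avg_pool_3x3", "skip_connect",
            "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5"]

def approximate(weights, k):
    """Progressive scheme: keep the k ops with the largest architectural weights."""
    return sorted(weights, key=weights.get, reverse=True)[:k]

def random_baseline(k, seed=0):
    """Control experiment: sample k ops uniformly from the full space."""
    return random.Random(seed).sample(FULL_OPS, k)
```

The progressive variant ranks operations by their learned weights, while the control samples a subset of the same size uniformly at random.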
[lccc]{} &\ (lr)[2-4]{} & **Stage $1$** & **Stage $2$** & **Stage $3$**\ 1 & 9.8 & 14.0 & 14.2\ 2 & 9.8 & 14.4 & 14.5\ 3 & 9.8 & 14.2 & 14.3\ \[mem\_usage\] \[t\] ---------------------------------- ----------- ------------ ----------- ------------------------------------------------------------------------------------------------- ---------------- -- **Architecture** & **Test Err. (%)** & **Params** & **$\times+$** & **Search Cost** & **Search Method**\ (lr)[2-3]{} & **top-1** & **top-5** & **(M)** & **(M)** & **(GPU-days)** &\ Inception V1 [@szegedy2015going] & 30.2 & 10.1 & 6.6 & 1[,]{}448 & - & manual\ MobileNet [@howard2017mobilenets] & 29.4 & 10.5 & 4.2 & 569 & - & manual\ ShuffleNet V1 2$\times$ [@zhang2018shufflenet] & 26.4 & 10.2 & $\sim$5 & 524 & - & manual\ ShuffleNet V2 2$\times$ [@ma2018shufflenet] & 25.1 & - & $\sim$5 & 591 & - & manual\ NASNet-A [@zoph2018learning] & 26.0 & 8.4 & 5.3 & 564 & 1[,]{}800 & RL\ NASNet-B [@zoph2018learning] & 27.2 & 8.7 & 5.3 & 488 & 1[,]{}800 & RL\ NASNet-C [@zoph2018learning] & 27.5 & 9.0 & 4.9 & 558 & 1[,]{}800 & RL\ AmoebaNet-A [@real2018regularized] & 25.5 & 8.0 & 5.1 & 555 & 3[,]{}150 & evolution\ AmoebaNet-B [@real2018regularized] & 26.0 & 8.5 & 5.3 & 555 & 3[,]{}150 & evolution\ AmoebaNet-C [@real2018regularized] & 24.3 & 7.6 & 6.4 & 570 & 3[,]{}150 & evolution\ PNAS [@liu2018progressive] & 25.8 & 8.1 & 5.1 & 588 & 225 & SMBO\ MnasNet-92 [@tan2019mnasnet] & 25.2 & 8.0 & 4.4 & 388 & - & RL\ DARTS (second order) [@liu2018darts] & 26.7$^\dagger$ & 8.7 & 4.7 & 574 & 4.0 & gradient-based\ SNAS (mild constraint) [@xie2018snas] & 27.3 & 9.2 & 4.3 & 522 & 1.5 & gradient-based\ PC-DARTS (searched on CIFAR10) [@xu2019pc] & 25.1 & 7.8 & 5.3 & 586 & 0.1 & gradient-based\ PC-DARTS (searched on ImageNet) [@xu2019pc] & 24.2 & 7.3 & 5.3 & 597 & 3.8 & gradient-based\ ProxylessNAS (GPU) [@cai2018proxylessnas] & 24.9 & 7.5 & 7.1 & 465 & 8.3 & gradient-based\ P-DARTS (searched on CIFAR10) & 24.4 & 7.4 & 4.9 & 557 & 0.3 & gradient-based\ P-DARTS (searched on ImageNet) & 24.1 & 7.3 & 5.4 & 597 & 2.0 &
gradient-based\ ---------------------------------- ----------- ------------ ----------- ------------------------------------------------------------------------------------------------- ---------------- -- \[ev\_imagenet\] ### Effectiveness of Search Space Regularization We perform experiments to validate the effectiveness of search space regularization, *i.e.*, operation-level Dropout, and architecture refinement. **Effectiveness of operation-level Dropout**. First, experiments are conducted to test the influence of the operation-level Dropout scheme. Two sets of initial Dropout rates are adopted, *i.e.*, $0.0$, $0.0$, $0.0$ (without Dropout) and $0.0$, $0.3$, $0.6$ (with Dropout) for stages $1$, $2$ and $3$, respectively. To eliminate the potential influence of the number of *skip-connects*, the comparison is made across multiple values of $M$. Test errors for architectures discovered without Dropout are $2.93\%$, $3.28\%$ and $3.51\%$ for $M=2$, $3$ and $4$, respectively. When operation-level Dropout is applied, the corresponding test errors are $2.69\%$, $2.84\%$ and $2.97\%$, significantly outperforming those without Dropout. According to the experimental results, all $8$ preserved operations in the normal cell of the architecture discovered without Dropout are *skip-connects* before architecture refinement, while the number is $4$ for the architecture discovered with Dropout. The reduction in the number of *skip-connect* operations verifies the effectiveness of search space regularization in stabilizing the search process. **Effectiveness of architecture refinement**. During experiments, we observe a strong correlation between the classification accuracy of architectures and the number of *skip-connect* operations in them. We perform a quantitative experiment to analyze it.
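The operation-level Dropout scheme can be sketched concisely; a minimal illustration (the function and constant names are hypothetical, and the scalar stand-in replaces a real feature tensor):

```python
import random

def skip_connect(x, drop_rate, training=True, rng=random):
    """Identity op with operation-level Dropout: during search the output is
    zeroed with probability drop_rate, partially blocking the gradients that
    flow through the parameter-free skip path."""
    if training and rng.random() < drop_rate:
        return 0.0
    return x

# initial Dropout rate per search stage, as in the setting above
STAGE_DROP = (0.0, 0.3, 0.6)
```

At evaluation time (`training=False`) the operation is a plain identity, so only the search process is regularized.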
Architecture refinement is applied to the same search process to produce multiple architectures where the number of preserved *skip-connect* operations in the normal cell varies from $0$ to $4$. The test errors are positively correlated with the number of *skip-connects* except for $M=0$, [*i.e*]{}, $2.78\%$, $2.68\%$, $2.69\%$, $2.84\%$ and $2.97\%$ for $M=0$ to $4$, while the parameter count decreases monotonically with the *skip-connect* count, *i.e.*, $4.1$M, $3.7$M, $3.3$M, $3.0$M and $2.7$M, respectively. The reason is that, with a fixed number of operations in a cell, the eliminated parameter-free *skip-connects* are replaced by operations with trainable parameters, [*e.g.*]{}, *convolution*, resulting in more complex and powerful architectures. The above observation inspired us to propose the second search space regularization scheme, architecture refinement, whose capability is validated by the following experiments. We run another $3$ architecture search experiments, all with initial Dropout rates of $0.0$, $0.3$, and $0.6$ for stages $1$, $2$, and $3$, respectively. Before architecture refinement, the test error is $2.79\pm0.16\%$ and the discovered architectures have $2$, $3$ and $4$ *skip-connects* in their normal cells. After architecture refinement, all three searched architectures have $2$ *skip-connects* in their normal cells, resulting in a reduced test error of $2.65\pm0.05\%$. The reduction of both the average test error and the standard deviation reveals the improved stability of the search process. **Applying search space regularization to DARTS**. We apply our proposed search space regularization scheme to the original first-order DARTS, and the test error on CIFAR10 is reduced to $2.78\%$, significantly lower than the original $3.00\%$ but still considerably higher than P-DARTS ($2.50\%$).
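A minimal sketch of how such refinement could be implemented (the data layout and helper names are hypothetical; each edge stores its selected op, that op's architectural weight, and the runner-up op):

```python
def refine(edges, M=2):
    """If more than M skip-connects survive, drop the lowest-weighted ones
    and fall back to the runner-up operation on those edges."""
    skips = sorted((e for e, (op, w, alt) in edges.items() if op == "skip_connect"),
                   key=lambda e: edges[e][1])
    refined = {e: op for e, (op, w, alt) in edges.items()}
    for e in skips[:max(0, len(skips) - M)]:
        refined[e] = edges[e][2]  # replace the weak skip by the next-best op
    return refined
```

With $M=2$ and three surviving skip-connects, the one with the smallest weight is swapped for its runner-up operation.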
This reveals that the proposed regularization scheme is also effective in searching for relatively shallow architectures, while another source of improvement comes from increasing the search depth. The positive results indicate that the proposed search space regularization can also be plugged into other DARTS-based approaches. ### Discussion: Other Optimization Gaps Apart from the depth gap that we have addressed in this paper, other aspects of the optimization gap can also affect the search process of super-network-based NAS approaches. Here, we briefly discuss two of them. **The width gap.** One straightforward aspect concerns the width of the network. Note that during the search stage on CIFAR, the base channel number is set to $16$, while it is enlarged to $48$ when the searched architecture is transferred to ImageNet (see the experimental settings in the following section). This also constitutes a significant optimization gap. Therefore, it is natural to progressively increase the network width during the search stage, just like what we have done for the network depth. However, we find that the performance gain brought by this strategy is limited. Digging into the searched architectures, we find that when an increased network width is used, the search algorithm tends to find architectures with small ($3\times3$) convolutional kernels, while the original version tends to find architectures with a considerable portion of big ($5\times5$) kernels. Consequently, the comparison between these two options is not fair on CIFAR10, as the original (not progressively widened) version often has a larger number of parameters. This also delivers an important message: the value of shrinking the optimization gap becomes clearer in a relatively “fair” [@chu2019fairnas] search environment.
**The gap brought by other hyper-parameters.** In the search setting of DARTS, all the affine parameters of batch normalization are discarded, because the architectural parameters are dynamically learned across the whole search process and the affine parameters would rescale the output of each operation according to incorrect statistics. On the contrary, the affine option of batch normalization is switched on to recover the data distribution in the evaluation scenario, which forms another aspect of the optimization gap. However, this gap is hard to address because additional issues may arise if we switch it on during search. Furthermore, the data augmentation gap, *e.g.*, the inconsistent Cutout settings, is another inconsistency between search and evaluation. There may also exist other aspects of the optimization gap, [*e.g.*]{}, Dropout, the auxiliary loss tower, [*etc.*]{} [@bi2019stabilizing] briefly discussed some of the above-mentioned aspects, but the influence of these options was not clearly stated. In fact, a different setting on these aspects may cause other additional problems that disrupt qualitative and quantitative analysis. Additionally, the fluctuation on small-scale datasets like CIFAR10 may also have a dramatic impact on the analysis, while the computational burden obstructs the analysis on large-scale datasets. The ImageNet Dataset -------------------- We also search architectures directly on ImageNet to validate the applicability of our search algorithm to large-scale datasets. Experiments are performed on ILSVRC2012 [@russakovsky2015imagenet], a subset of ImageNet [@deng2009imagenet] which contains $1\rm{,}000$ object categories and $1.28$M training and $50$K validation images. Following the conventions [@zoph2018learning; @liu2018darts; @wu2019fbnet], we randomly sample a $100$-class subset of the training images for architecture search.
Similar to CIFAR10, all images and the standard dataset partition are adopted during architecture evaluation. ### Architecture Search {#architecture-search} We use a configuration similar to the one used on CIFAR10 except for some minor changes. We set the numbers of cells to $5$, $8$ and $11$ and adjust the Dropout rates to $0.0$, $0.3$ and $0.6$. Meanwhile, we progressively increase the number of initial channels to $16$, $28$ and $40$ for stages $1$, $2$ and $3$, respectively. Architecture search on ImageNet is executed on $8$ Nvidia Tesla V100 GPUs, which takes around $6$ hours, thus a search cost of $2$ GPU-days. The search cost of P-DARTS on ImageNet is even smaller than that of PC-DARTS [@xu2019pc], a memory-efficient differentiable approach proposed recently, which demonstrates the efficiency of our proposed search space approximation scheme. During the search process, the “over-fitting” phenomenon is largely alleviated and the number of *skip-connect* operations is well controlled. This comes from two aspects. On the one hand, gradients assigned to *skip-connects* are successfully suppressed by the first search space regularization method, [*i.e.*]{}, adding operation-level Dropout on *skip-connect* operations. On the other hand, the variety and complexity of ImageNet make it more difficult to fit with parameter-free operations than CIFAR10 and CIFAR100, forcing the network to train the operations with learnable parameters. Moreover, the discovered architecture also contains plenty of deep connections, even deeper than those in the one searched on CIFAR10. Such a characteristic contributes to favorable classification performance. We have also attempted to search architectures without progressively increasing the network width, but the discovered architectures achieved worse classification performance, which demonstrates the usefulness of the progressive width scheme.
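The progressive schedule just described can be summarized as a small per-stage configuration table; a sketch (the tuple layout and names are illustrative):

```python
# (num_cells, initial_channels, skip_connect_dropout) for each ImageNet search stage
IMAGENET_STAGES = [(5, 16, 0.0), (8, 28, 0.3), (11, 40, 0.6)]

def stage_config(stage):
    """Return the super-network settings for a given search stage (0-indexed)."""
    cells, channels, drop = IMAGENET_STAGES[stage]
    return {"cells": cells, "channels": channels, "skip_dropout": drop}
```

Both depth and width grow across stages, while the Dropout rate on *skip-connects* is raised to keep the deeper, wider super-network stable.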
[lccccccccc]{} **Network** &**Input Size**&**Backbone** &**$\times+$**&**$\mathrm{AP}$**&**$\mathrm{AP}_{50}$** &**$\mathrm{AP}_{75}$**&**$\mathrm{AP}_\mathrm{S}$**&**$\mathrm{AP}_\mathrm{M}$** &**$\mathrm{AP}_\mathrm{L}$**\ SSD300 [@liu2016ssd] &300$\times$300&VGG-16&31.4B&23.2&41.2&23.4&5.3 &23.2& 39.6\ SSD512 [@liu2016ssd] &512$\times$512&VGG-16&80.4B&26.8&46.5&27.8&9.0 &28.9& 41.9\ SSD513 [@liu2016ssd]$^\dagger$ &513$\times$513&ResNet-101& 43.4B &31.2&50.4&33.3&10.2&34.5&49.8\ SSDLiteV1 [@howard2017mobilenets] &320$\times$320&MobileNetV1&1.2B&22.2&-&-&-&-&-\ SSDLiteV2 [@sandler2018mobilenetv2]&320$\times$320&MobileNetV2&0.7B&22.1&-&-&-&-&-\ SSDLiteV3 [@tan2019mnasnet] &320$\times$320&MnasNet-A1&0.6B &23.0 &-& -&3.6 &20.5 &43.2\ SSD320 [@liu2016ssd] &320$\times$320& DARTS &1.1B &27.3&45.0&28.3&7.6&30.2&46.0\ SSD320 [@liu2016ssd] &320$\times$320&P-DARTS (CIFAR10) &1.1B &28.9&46.8&30.2&7.3&32.2&48.2\ SSD320 [@liu2016ssd] &320$\times$320&P-DARTS (ImageNet)&1.2B &29.9&47.8&31.5&9.0&33.2&50.0\ SSD512 [@liu2016ssd] &512$\times$512& DARTS &2.9B &31.8&50.3&33.8&11.7&37.1&49.7\ SSD512 [@liu2016ssd] &512$\times$512&P-DARTS (CIFAR10) &2.9B &33.6&52.8&35.6&13.3&39.7&51.1\ SSD512 [@liu2016ssd] &512$\times$512&P-DARTS (ImageNet)&3.1B &34.1&52.9&36.3&14.3&40.0&52.1\ \[tab\_od\] ### Architecture Evaluation The transferability of the architecture discovered on CIFAR10 to large-scale recognition datasets is first tested on ILSVRC2012. Concurrently, the capability of the architecture directly searched on ImageNet is also evaluated. We apply the mobile setting for the evaluation scenario, where the input image size is $224\times224$ and the number of multiply-add operations is restricted to less than $600$M. A network configuration identical to DARTS is adopted, *i.e.*, an evaluation network of $14$ cells and $48$ initial channels.
We train each network from scratch for $250$ epochs with a batch size of $1\rm{,}024$ on $8$ Nvidia Tesla V100 GPUs, which takes about $3$ days with our PyTorch [@paszke2017automatic] implementation. The network parameters are optimized using an SGD optimizer with an initial learning rate of $0.5$ (decayed linearly after each epoch), a momentum of $0.9$, and a weight decay of $3\times 10^{-5}$. Additional enhancements, including label smoothing [@szegedy2016rethinking] and the auxiliary loss tower, are applied during training. Since a large batch size and learning rate are adopted, we apply learning rate warmup [@goyal2017accurate] for the first $5$ epochs. Evaluation results and comparison with state-of-the-art approaches are summarized in Table \[ev\_imagenet\]. The architecture transferred from CIFAR10 outperforms DARTS, PC-DARTS and SNAS by a large margin in terms of classification performance, which demonstrates the superior transfer capability of the discovered architectures. Notably, architectures discovered by P-DARTS on CIFAR10 and CIFAR100 achieve lower test error than MnasNet [@tan2019mnasnet] and ProxylessNAS [@cai2018proxylessnas], whose search spaces are carefully designed for ImageNet. The architecture directly searched on ImageNet achieves superior performance compared to the transferred ones and is comparable to the state-of-the-art directly-searched models in the DARTS-based search space. Evaluation in the Wild ---------------------- To further test the transferability of the discovered architectures to scenarios in the wild, we embed them as backbones into two other vision tasks, *i.e.*, object detection and person re-identification. On both tasks, we observe superior performance compared to both the baseline methods and the DARTS backbone, which reveals that the desirable characteristics obtained on image recognition by P-DARTS transfer well to scenarios in the wild.
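The ImageNet schedule combines a short warmup with the linear decay; a minimal sketch (the linear shape of the warmup ramp is an assumption, as the exact form is not stated above):

```python
def imagenet_lr(epoch, base_lr=0.5, warmup_epochs=5, total_epochs=250):
    """Linear warmup over the first epochs, then linear decay to zero."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    return base_lr * (total_epochs - epoch) / (total_epochs - warmup_epochs)
```

The warmup lets the large batch size ($1{,}024$) and learning rate ($0.5$) be used without destabilizing early training.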
### Transferring to Object Detection {#transfer_od} \[t\] [lccccccccc]{} **Backbone** & **Parts** & **Feature Dim** & **$\times+$** & **Market-1501** & & **DukeMTMC-reID** & & **MSMT17** &\ (lr)[5-6]{} (lr)[7-8]{} (lr)[9-10]{} & & & **(M)** &**Rank-$1$** & **mAP** & **Rank-$1$** & **mAP** & **Rank-$1$** & **mAP**\ ResNet-50 & 1 & 2,048 & 4120 & 87.86 & 72.8 & 71.99 & 57.21 & 48.33 & 24.62\ DARTS & 1 & 768 & 573 & 91.90 & 79.3 & 82.09 & 66.74 & 61.50 & 37.93\ P-DARTS (CIFAR10) & 1 & 768 & 556 & 92.99 & 81.37 & 83.75 & 68.71 & 68.98 & 41.99\ P-DARTS (ImageNet) & 1 & 768 & 596 & 92.01 & 78.41 & 83.55 & 67.84 & 66.70 & 39.64\ ResNet-50 & 3 & 2,048 & 4120 & 92.81 & 80.34 & 84.61 & 71.51 & 71.64 & 46.23\ DARTS & 3 & 768 & 573 & 94.18 & 83.63 & 86.22 & 74.68 & 77.37 & 53.02\ P-DARTS (CIFAR10) & 3 & 768 & 556 & 94.59 & 84.78 & 87.25 & 75.53 & 79.52 & 55.98\ P-DARTS (ImageNet) & 3 & 768 & 596 & 93.67 & 83.85 & 89.98 & 75.27 & 77.19 & 53.37\ ResNet-50 & 6 & 2,048 & 4120 & 93.08 & 81.02 & 86.13 & 74.03 & 71.12 & 46.00\ DARTS & 6 & 768 & 573 & 93.40 & 83.23 & 86.35 & 74.22 & 77.14 & 53.84\ P-DARTS (CIFAR10) & 6 & 768 & 556 & 93.61 & 83.37 & 87.25 & 74.6 & 79.24 & 56.41\ P-DARTS (ImageNet) & 6 & 768 & 596 & 92.99 & 82.98 & 86.71 & 73.96 & 76.75 & 53.62\ \[tab\_reid\] Object detection is a fundamental task in the vision community and an important scenario in the wild [@Liu2019]. We plug the discovered architectures and corresponding weights pre-trained on ImageNet into Single-Shot Detectors (SSD) [@liu2016ssd], a popular light-weight object detection framework. The capability of our backbones is tested on the benchmark dataset MS-COCO [@lin2014microsoft], which contains $80$ object categories and more than $1.5$M object instances. We train the pipeline with the “trainval$35$K” set, *i.e.*, a combination of the $80$k training and $35$k validation images. The performance is tested on the *test-dev* set. Results are summarized in Table \[tab\_od\].
Equipped with the P-DARTS backbone searched on CIFAR10, P-DARTS-SSD$320$ achieves a superior AP of $28.9\%$ with only $1.1$B FLOPs, which is $5.7\%$ higher in AP with $29\times$ fewer FLOPs than SSD$300$, and even $2.1\%$ higher in AP with $73\times$ fewer FLOPs than SSD$512$. With similar FLOPs, P-DARTS-SSD$320$ outperforms DARTS-SSD$320$ by $1.6\%$ in AP. Compared to the light-weight backbones, [*i.e.*]{}, those of the MobileNet family, our P-DARTS-SSD$320$ enjoys a superior AP by a large margin, at the cost of an acceptable amount of extra FLOPs. With a larger input image size, P-DARTS-SSD$512$ surpasses SSD$513$ by $2.4\%$ in AP, while the FLOPs count of the P-DARTS backbone is $14\times$ smaller than that of their ResNet-$101$ backbone. The results with the backbone searched by P-DARTS on ImageNet are even more impressive: P-DARTS-SSD$320$ achieves an AP of $29.9\%$, and P-DARTS-SSD$512$ reaches $34.1\%$. All the above results indicate that the architectures discovered by P-DARTS transfer well to object detection and produce superior performance. ### Transferring to Person Re-Identification Person re-identification is an important practical vision task and has been attracting more and more attention from both academia and industry [@wang2018person; @li2018semi] because of its broad applications in surveillance and security. Apart from task-specific modules, the backbone architecture is a critical factor for performance promotion. We replace the previous backbones with our P-DARTS architectures (searched on both CIFAR10 and ImageNet) and test the transferability on three benchmark datasets, *i.e.*, *Market-1501* [@zheng2015scalable], *DukeMTMC-reID* [@zheng2017unlabeled] and *MSMT17* [@wei2018person]. Experiments are executed with the pipeline of the Part-based Convolutional Baseline (PCB) [@sun2018beyond], and all backbones are pre-trained on ImageNet.
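The FLOPs ratios quoted above follow directly from the numbers in the detection table; a quick sanity-check sketch (`flops_ratio` is a hypothetical helper, inputs in billions of multiply-adds):

```python
def flops_ratio(baseline_b, ours_b):
    """Rounded ratio between a baseline detector's FLOPs and ours."""
    return round(baseline_b / ours_b)
```

For example, SSD$300$ ($31.4$B) against P-DARTS-SSD$320$ ($1.1$B) gives roughly $29\times$, and SSD$512$ ($80.4$B) gives roughly $73\times$.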
We set the numbers of parts to $1$, $3$ and $6$ to make an exhaustive comparison. Results are summarized in Table \[tab\_reid\]. The P-DARTS backbones outperform the ResNet-$50$ backbone by a large margin with fewer FLOPs and a smaller feature dimensionality. With a similar backbone size, P-DARTS (CIFAR10) surpasses DARTS on all three datasets with different part numbers, suggesting a superior transferability of our searched architecture. However, with the P-DARTS (ImageNet) backbone, the performance is only comparable to the DARTS backbone and worse than the P-DARTS (CIFAR10) backbone. It is worth noting that the preferences for CIFAR-searched and ImageNet-searched backbones are different between the object detection and person re-identification tasks. This is due to the domain gap between the architecture search task and the target tasks. While the original images used in ImageNet classification and COCO object detection are similar in resolution and data distribution, the images used in ReID are in worse condition, which is more similar to the situation in CIFAR10. We showcase in Figure \[data\_samples\] samples from ImageNet, COCO, CIFAR10, and Market-1501, where the domain gap between them can be visually observed. A notable phenomenon is that with the ResNet-$50$ backbone, performance keeps rising when the part number increases, while with both the DARTS and P-DARTS backbones the performance peaks at a part number of $3$. This is arguably because of the larger feature dimensionality adopted in the ResNet-$50$ backbone, which also implies the potential of further performance promotion for P-DARTS backbones with a larger feature dimensionality and part number. Conclusions {#conclusion} =========== In this work, we propose a progressive version of differentiable architecture search to bridge the optimization gap between the search and evaluation scenarios for NAS in the wild.
The core idea, based on the observation that the optimization gap is caused by the difference between the search and evaluation settings, is to gradually increase the depth of the super-network during the search process. To alleviate the issues of computational overhead and instability, we design two practical techniques to approximate and regularize the search process, respectively. Our approach reports superior performance on both proxy datasets (CIFAR and ImageNet) and target tasks (object detection and person re-identification) with significantly reduced search overheads. Our research puts forward the optimization gap in super-network-based NAS and highlights the significance of the consistency between search and evaluation scenarios. By addressing this gap in terms of network depth and width, the P-DARTS algorithm paves a new way through approximating the search space. We expect our initial work to serve as a modest spur to induce more researchers to contribute their ideas to further alleviate the optimization gap and design effective and generalized NAS algorithms. Acknowledgement {#acknowledgement .unnumbered} =============== We thank Chen Wei, Jian Zhang, Kaiwen Duan, Longhui Wei, Tianyu Zhang, Yuhui Xu and Zhengsu Chen for their valuable suggestions. [^1]: We also tried to start with the architectural parameters learned from the previous stage, $\mathfrak{S}_{k-1}$, and adjust them according to Eq. \[eq1\] to ensure that the weights of the preserved operations still sum to one. This strategy reported slightly lower accuracy. In fact, we find that only an average of $5.3$ (out of $14$ normal edges) of the most significant operations in $\mathfrak{S}_1$ continue to have the largest weight in $\mathfrak{S}_2$, and the number is only slightly increased to $6.7$ from $\mathfrak{S}_2$ to $\mathfrak{S}_3$ – this is to say, deeper architectures may have altered preferences.
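The renormalization tried in the footnote can be sketched in a few lines (a minimal illustration; the dict-based layout and function name are hypothetical):

```python
def renormalize(weights, preserved):
    """Rescale the architectural weights of the preserved operations so that
    they sum to one, as in the warm-start strategy of the footnote."""
    total = sum(weights[op] for op in preserved)
    return {op: weights[op] / total for op in preserved}
```

After pruning, the surviving operations inherit rescaled weights that again form a valid convex combination over the reduced set.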
--- abstract: 'In this paper, we are interested in the global persistence of regularity for the 2D incompressible Euler equations in some function spaces allowing unbounded vorticities. More precisely, we prove the global propagation of the vorticity in some weighted Morrey-Campanato spaces and in this framework the velocity field is not necessarily Lipschitz but belongs to the log-Lipschitz class $L^\alpha L,$ for some $\alpha\in (0,1).$' address: - | CNRS - Université de Nantes\ Laboratoire de Mathématiques Jean Leray\ 2, Rue de la Houssinière F-44322 Nantes Cedex 03, France - | IRMAR, Université de Rennes 1\ Campus de Beaulieu\ 35 042 Rennes cedex, France author: - Frédéric Bernicot - Taoufik Hmidi date: - - title: 'On the global well-posedness for Euler equations with unbounded vorticity' --- Introduction ============ The motion of incompressible perfect flows evolving in the whole space is governed by the Euler system described by the equations $$\label{E} \left\{ \begin{array}{ll} \partial_t u+u\cdot\nabla u+\nabla P=0,\qquad x\in \mathbb R^d, t>0, \\ \textnormal{div }u=0,\\ u_{\mid t=0}= u_{0}. \end{array} \right.$$ Here, the vector field $u$ denotes the velocity of the fluid particles and the scalar function $P$ stands for the pressure. It is a classical fact that the incompressibility condition leads to a closed system and the pressure can be recovered from the velocity through some singular operator. The literature on the well-posedness theory for the Euler system is very abundant and a lot of results have been obtained in various function spaces. For instance, it is well known, according to the work of Kato and Ponce [@Kato], that the system admits a unique maximal solution in the framework of Sobolev spaces, namely for $u_0\in W^{s,p}$. This result was extended to Hölder spaces $\mathcal{C}^s$, $s>1$, by Chemin [@Ch1] and later by Chae [@Cha2] to the critical and sub-critical Besov spaces; see also [@Pak].
We point out that the common technical ingredient of these contributions is the use of commutator theory, but with slightly different difficulties. Even though the local theory for classical solutions is well established, the global existence of such solutions is still an outstanding open problem, due to the poor knowledge of the conservation laws. However, this problem has been solved affirmatively in some special cases, such as dimension two and axisymmetric flows without swirl. It is worth pointing out that for these known cases the geometry of the initial data plays a central role through the special structure of the vorticity. Historically, we can fairly say that Helmholtz was the first to point out, in the seminal paper [@Helm], the importance of the vorticity $\omega\triangleq \textrm{curl}\, u$ in the study of incompressible inviscid flows. In that paper he provided the foundations of vortex motion theory by establishing some basic laws governing the vorticity. Some decades later, in the 1930s, Wolibner proved in [@Wolib] the global existence of sufficiently smooth solutions in space dimension two. Much later, in the mid-1980s, a rigorous connection between the vorticity and the global existence was established by Beale, Kato and Majda in [@Beale]. They proved the following blow-up criterion: let $ u_0\in H^s,$ with $ s>\frac{d}{2}+1$, and denote by $T^\star$ the lifespan of the solution; then $$T^\star<+\infty\Longrightarrow \int_0^{T^\star}\|\omega(\tau)\|_{L^\infty}d\tau=+\infty.$$ An immediate consequence of this criterion is the global existence of Kato’s solutions in space dimension two.
This follows from the conservation of the vorticity along the particle trajectories; namely, the vorticity satisfies the Helmholtz equation $$\label{vorts1} \partial_t\omega+u\cdot\nabla \omega=0.$$ Recall that in this case the vorticity can be identified with the scalar $\omega=\partial_1 u^2-\partial_2 u^1$, and we derive from the equation an infinite family of conservation laws. For instance, for every $p\in[1,\infty]$, $$\forall t\geq 0,\quad\|\omega(t)\|_{L^p}=\|\omega_0\|_{L^p}.$$ It seems that the standard methods used for the local theory cease to work in the limiting space $H^{\frac{d}{2}+1}$ due to the lack of an embedding into the Lipschitz class. Nevertheless the well-posedness theory can be successfully implemented in a slight modification of this space designed to guarantee this embedding; take for example Besov spaces of type $B_{p,1}^{\frac{d}{p}+1}$. For more details see for instance [@Cha2]. In this critical framework the BKM criterion cited before is not known to work and should be replaced by the following one: $$T^\star<+\infty\Longrightarrow \int_0^{T^\star}\|\omega(\tau)\|_{B_{\infty,1}^0}d\tau=+\infty.$$ In this class of initial data the global well-posedness in dimension two is not a trivial task and was proved by Vishik in [@Vishik2] through an elegant use of the conservation of the Lebesgue measure by the flow. We mention that a simple proof of Vishik’s result, which has the advantage of working in the viscous case, was given in [@HK]. By using the formal $L^p$ conservation laws it seems that we can go beyond the limitation fixed by the general theory of hyperbolic systems and construct global weak solutions for $p>1$, but for uniqueness we require in general the vorticity to be bounded.
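These conservation laws can be checked directly for smooth solutions: multiplying the vorticity equation by $|\omega|^{p-2}\omega$ (for $1<p<\infty$) and integrating by parts gives

$$\frac{d}{dt}\int_{\mathbb R^2}|\omega|^p\,dx=-\int_{\mathbb R^2}u\cdot\nabla\big(|\omega|^p\big)\,dx=\int_{\mathbb R^2}(\textnormal{div }u)\,|\omega|^p\,dx=0,$$

since $\textnormal{div }u=0$; the endpoint cases $p=1$ and $p=\infty$ follow by the conservation of the vorticity along the measure-preserving flow.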
This was carefully done by Yudovich in his paper [@Y1], following the clever observation that the gradient of the velocity belongs to all the spaces $L^p$ with slow growth with respect to $p$: $$\sup_{p\geq2}\ \frac{\|\nabla u(t)\|_{L^p}}{p}<\infty.$$ The uniqueness part is obtained by performing an energy estimate and choosing the parameter $p$ suitably. In this setting the velocity belongs to the class of log-Lipschitz functions, and this is sufficient to establish the existence and uniqueness of the flow map; see for instance [@Ch1]. The real difficulty at this level of regularity concerns only the uniqueness part, which requires minimal regularity for the velocity, and the assumption of bounded vorticity is almost necessary in the scale of Lebesgue spaces. However, slight improvements have been carried out during the last decades by allowing the vorticity to be unbounded. For example, in [@Y2] Yudovich proved the uniqueness when the $L^p$-norms of the initial vorticity do not grow much faster than $\ln p:$ $$\sup_{p\geq2}\ \frac{\|\omega_0\|_{L^p}}{\,\ln p}<\infty.$$ We refer also to [@De; @DM] for other extensions on the construction of global weak solutions. In [@Vishik1], Vishik carried out significant studies of the existence and uniqueness problem with unbounded vorticities.
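The slow growth in $p$ is a consequence of the Calderón-Zygmund theory applied to the Biot-Savart law: the map $\omega\mapsto\nabla u$ is bounded on $L^p$ with operator norm of order $p$ for large $p$, so that

$$\|\nabla u(t)\|_{L^p}\le C\, p\,\|\omega(t)\|_{L^p}= C\, p\,\|\omega_0\|_{L^p},\qquad 2\le p<\infty,$$

with $C$ an absolute constant, where the last equality uses the conservation of the $L^p$ norms of the vorticity recalled above.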
He gave various results when the vorticity lies in the space $B_\Gamma\cap L^{p_0}\cap L^{p_1}$, where $p_0<2<p_1$ and $B_\Gamma$ is the borderline Besov space defined by $$\label{space-vis} \sup_{n\geq 1}\ \frac{1}{\Gamma(n)}{\sum_{q=-1}^{n}\|\Delta_q\omega_0\|_{L^\infty}}<\infty.$$ As an example, it was shown that for $\Gamma(n)=O(\ln n)$ there exists a unique local solution, but global existence is only proved when $\Gamma(n)=O(\ln^{\frac12} n).$ Nevertheless the propagation of the initial regularity is not well understood, and Vishik was only able to prove that for positive times the vorticity belongs to the larger class $B_{\Gamma_1}$. We point out that the persistence of regularity in spaces which are embedded neither in the Lipschitz class nor in the spaces associated with the conservation laws is in general a difficult subject. Recently, in [@BK2] the first author and Keraani were able to find a suitable space of initial data, called the log-$BMO$ space, for which there is global existence and uniqueness without any loss of regularity. This space is strictly larger than the space $L^\infty$ and much smaller than the usual $BMO$ space. The main goal of this paper is to continue this investigation and try to generalize the result of [@BK2] to a new collection of spaces which are not comparable to the bounded class. To state our main result we need to introduce the following spaces. Let $\alpha \geq0$ and $f:{{\mathbb R}}^2\to {{\mathbb R}}$ be a locally integrable function. 1. We say that $f$ belongs to the space ${{ {\it L^{\alpha}mo}}}$ if $$\|f\|_{{{ {\it L^{\alpha}mo}}}}\triangleq\sup_{\substack{B\, \hbox{\tiny ball}\\ 0<r\le \frac12}}|\ln {r}|^\alpha \av_{B} \left|f-\av_{B}f \right|<\infty.$$ 2. Let $F:[1,+\infty[\to[0,+\infty[$.
We say that $f$ belongs to ${{{\it L^{\alpha}mo}_F}}$ if $$\|f\|_{{{{\it L^{\alpha}mo}_F}}}\triangleq\|f\|_{{{ {\it L^{\alpha}mo}}}}+\sup_{\substack{B_1, B_2\\ 2r_2\le r_1\le\frac12}}\frac{|\av_{B_2}f-\av_{B_1}f|}{ F\left(\frac{|\ln(r_2)|}{|\ln(r_1)|}\right)}<\infty,$$ where $ r_i$ denotes the radius of the ball $B_i$, $|B|$ denotes the Lebesgue measure of the ball $B$ and the average $\displaystyle{\av_{B}f}$ is defined by $$\av_{B}f\triangleq\frac1{|B|}\int_Bf(x)dx.$$ For the sake of a clear presentation we will first state a partial result; the general one will be given in Section \[sec:per\], Theorem \[apriori\]. \[main\] Take $F(x)=\ln x$ and assume that [$\omega_0\in L^p\cap {{{\it L^{\alpha}mo}_F}}$]{} with $p\in ]1,2[$. Then the $2d$ Euler equations admit a unique global solution $$\omega\in L^\infty_{loc}([0,+\infty[,L^p\cap{{{\it L^{\alpha}mo}_{1+F}}}).$$ Some remarks are in order. The regularity of the initial vorticity measured in the space ${{ {\it L^{\alpha}mo}}}$ is preserved globally in time. However we point out a slight loss of regularity in the second part of the ${{{\it L^{\alpha}mo}_F}}$ norm: instead of $F$ we need $1+F.$ This appears to be a technical artefact and we believe that it can be removed. The case $\alpha=0$ is not included in our statement since it corresponds to the result of [@BK2]. However for $\alpha>1$ the vorticity must be bounded and the velocity is Lipschitz, and in this case the propagation in the space ${{ {\it L^{\alpha}mo}}}$ can be done without the use of the second part of the space ${{{\it L^{\alpha}mo}_F}}$. The limiting case $\alpha=1$ is omitted in our main result for the sake of simplicity, but our computations can be performed as well with slight modifications, especially when we deal with the regularity of the flow in Proposition \[prop\]. The proof of Theorem \[main\] will be done in the spirit of the work [@BK2].
We establish a crucial logarithmic estimate for the composition, in the space ${{{\it L^{\alpha}mo}_F}}$, with a flow which preserves the Lebesgue measure. We prove in particular the key estimate $$\|\omega(t)\|_{{{ {\it L^{\alpha}mo}}}}\le C\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}}\left(1+V(t) \right)\ln (2+V(t)),$$ where $\displaystyle{ V(t)=\int_0^t\|u(\tau)\|_{L^{1-\alpha}L}d\tau}$ and the space $L^{1-\alpha}L$ is defined in Section \[log-lip\]. We observe from the preceding estimate that we can propagate globally in time the regularity in the space ${{ {\it L^{\alpha}mo}}}$ and that the second part of the ${{{\it L^{\alpha}mo}_F}}$ norm is not involved for positive times.\ The remainder of this paper is organized as follows. In the next section we introduce some functional spaces and prove some of their basic properties. We shall also examine the regularity of the flow map associated to a vector field belonging to the class $L^\alpha L.$ In Section $3$ we shall establish a logarithmic estimate for a transport model and we will see how to derive some of its consequences in the study of inviscid flows. The proofs of the main results will be given at the end of that section. We close this paper with an appendix covering the proof of some technical lemmata. Functional tools ================ This section is devoted to some useful tools. We first recall some classical spaces like Besov spaces and BMO spaces and give a short presentation of Littlewood-Paley operators. Secondly, we introduce the spaces ${{ {\it L^{\alpha}mo}}}$ and ${{{\it L^{\alpha}mo}_F}}$ and discuss some of their important properties. We end this section with the study of log-Lipschitz spaces.\ In the sequel we denote by $C$ any positive constant that may change from line to line and by $C_{0}$ a real positive constant depending on the size of the initial data.
We will use the following notations: for any non-negative real numbers $A$ and $B$, the notation $A\lesssim B$ means that there exists a positive constant $C$, independent of $A$ and $B$, such that $A\le C\,B$.\ Littlewood-Paley operators {#subsection1} -------------------------- To define Besov spaces we first introduce the dyadic partition of unity; for more details see for instance [@Ch1]. There are two non-negative radial functions $\chi\in\mathcal{D}(\mathbb{R}^{2})$ and $\varphi\in\mathcal{D}(\mathbb{R}^{2}\backslash\{ 0\})$ such that $$\chi(\xi)+ \displaystyle \sum_{q\ge 0}\varphi(2^{-q}\xi)=1, \quad\forall \xi\in\mathbb{R}^{2},$$ $$\displaystyle\sum_{q\in\mathbb{Z}}\varphi(2^{-q}\xi)=1, \quad\forall \xi\in\mathbb{R}^{2}\backslash\{0\},$$ $$\vert p-q\vert\ge 2\Rightarrow\mbox{supp }{\varphi}(2^{-p}\cdot)\cap\mbox{supp }{\varphi}(2^{-q}\cdot)=\emptyset,$$ $$q\ge 1\Rightarrow \mbox{supp }{\chi}\cap\mbox{supp }{\varphi}(2^{-q}\cdot)=\emptyset.$$ For $u\in\mathcal{S}^{\prime}({{\mathbb R}}^2)$, the Littlewood-Paley operators are defined by $$\begin{aligned} \Delta_{-1}u=\chi({\textnormal{D}})u,\;\;\forall q\ge 0,\;\;\Delta_{q}u=\varphi(2^{-q}{\textnormal{D}})u\;\;\textnormal{and}\;\;S_{q}u=\displaystyle \sum_{-1\le p\le q-1}\Delta_{p}u.\end{aligned}$$ We can easily check that in the distribution sense we have the identity $$u=\sum_{q\geq -1}\Delta_{q}u,\;\;\forall u \in \mathcal{S}^{\prime}({{\mathbb R}}^{2}).$$ Moreover, the Littlewood-Paley decomposition satisfies the property of almost orthogonality: for any $u,v\in\mathcal{S}^{\prime}({{\mathbb R}}^2),$ $$\Delta_{p}\Delta_{q}u=0\qquad \textnormal{if} \qquad \vert p-q \vert \geqslant 2 \qquad$$ $$\Delta_{p}(S_{q-1}u\Delta_{q}v)=0 \qquad \textnormal{if} \qquad \vert p-q \vert \geqslant 5.$$\ Let us note that the above operators $\Delta_{q}$ and $S_{q}$ map $L^{p}$ continuously into itself, uniformly with respect to $q$ and $p$. We also notice that these operators are of convolution type.
For example, for $q\geq 0$ we have $$\Delta_{-1}u=h\ast u,\quad \Delta_{q}u=2^{2q}g(2^q\cdot)\ast u,\quad\hbox{with}\quad g,h\in\mathcal{S},\quad \widehat{h}(\xi)=\chi(\xi),\quad \widehat{g}(\xi)=\varphi(\xi).$$ Now we recall Bernstein inequalities, see for example [@Ch1]. \[ber\] There exists a constant $C>0$ such that for all $q\in{\mathbb{N}}\,,\,k \in {\mathbb{N}}$ and for any tempered distribution $u$ we have $$\begin{aligned} \sup_{\vert\alpha\vert=k}\Vert\partial^{\alpha}S_{q}u\Vert_{L^{b}}\leqslant C^{k}2^{q\big(k+2\big(\frac{1}{a}-\frac{1}{b}\big)\big)}\Vert S_{q}u\Vert_{L^{a}}\quad \textnormal{for}\quad \; b\geqslant a\geqslant 1\\ C^{-k}2^{qk}\Vert{\Delta}_{q}u\Vert_{L^{a}}\leqslant \sup_{\vert\alpha\vert=k}\Vert\partial^{\alpha}{\Delta}_{q}u\Vert_{L^{a}}\leqslant C^{k}2^{qk}\Vert {\Delta}_{q}u\Vert_{L^{a}}.\end{aligned}$$ Using Littlewood-Paley operators, we can define Besov spaces as follows. For $(p,r)\in[1,+\infty]^2$ and $s\in\mathbb R,$ the Besov space $B_{p,r}^s$ is the set of tempered distributions $u$ such that $$\|u\|_{B_{p,r}^s}:=\Big\|\Big( 2^{qs} \|\Delta_q u\|_{L^{p}}\Big)_{q\geq -1}\Big\|_{\ell^{r}}<+\infty.$$ We remark that the usual Sobolev space $H^s$ coincides with $B_{2,2}^s$ for $s\in{{\mathbb R}}$ and the Hölder space $C^s$ coincides with $B_{\infty,\infty}^s$ when $s$ is not an integer. The following embeddings are an easy consequence of Bernstein inequalities, $$B^s_{p_1,r_1}\hookrightarrow B^{s+2({1\over p_2}-{1\over p_1})}_{p_2,r_2}, \qquad p_1\leq p_2\quad \textnormal{and} \quad r_1\leq r_2.$$ Our next task is to introduce some new function spaces and to study some of their useful properties that will be frequently used throughout this paper. The ${ {\it L^{\alpha}mo}}$ space --------------------------------- Here the abbreviation $Lmo$ stands for [*logarithmic bounded mean oscillation*]{}. Let $\alpha \in[0,1]$ and $f:{{\mathbb R}}^2\to {{\mathbb R}}$ be a locally integrable function.
We say that $f$ belongs to ${ {\it L^{\alpha}mo}}$ if $$\|f\|_{{ {\it L^{\alpha}mo}}}:=\sup_{0<r\le \frac12}|\ln {r}|^\alpha \av_{B} \left|f-\av_{B}f \right|+\left(\sup_{|B|=1}\int_{B}|f(x)|dx \right)<\infty,$$ where the first supremum is taken over all the balls $B$ of radius $r\leq \frac{1}{2}$. We observe that for $\alpha=0$ the space ${ {\it L^{\alpha}mo}}$ reduces to the usual ${\it{Bmo}}$ space (the local version of ${\it{BMO}}$). It is also plain that the space ${ {\it L^{\alpha}mo}}$ contains the class of continuous functions $f$ such that $$\sup_{0<|x-y|\le\frac12}{|\ln|x-y||^\alpha\,{|f(x)-f(y)|}}<+\infty,$$ that is, the functions with modulus of continuity $\mu(r)=|\ln r|^{-\alpha}.$ There are two elementary properties that we wish to mention: - For $\alpha\in]0,1[$, consider a ball $B$ of radius $r$ and take $k\geq 0$ with $2^k r\leq \frac{1}{2}$; then $$\begin{aligned} \nonumber \left| \av_{B} f - \av_{2^k B} f \right|& \lesssim& \|f\|_{{ {\it L^{\alpha}mo}}} \sum_{\ell=0}^{k} (|\ln r|-\ell)^{-\alpha}\\ & \lesssim& \|f\|_{{ {\it L^{\alpha}mo}}} |\ln r|^{1-\alpha }. \label{eq:ea} \end{aligned}$$ - For a ball $B$ of radius $1$ and $k\geq 1$, $2^k B$ can be covered by $2^{2k}$ balls of radius $1$, so $$\av_{2^k B} |f| \lesssim \|f\|_{{ {\it L^{\alpha}mo}}}. \label{eq:ea2}$$ Next, we discuss some relations between the ${{ {\it L^{\alpha}mo}}}$ spaces and the frequency cut-offs. \[prop:delta\] The following assertions hold true. 1. Let $f\in { {\it L^{\alpha}mo}},\,\alpha\in[0,1]$ and $n\in{\mathbb{N}}^*$; then $$\| \Delta_n f \|_{L^\infty} \lesssim n^{-\alpha} \|f\|_{{ {\it L^{\alpha}mo}}},$$ and if $\alpha \in(0,1)$ $$\| S_nf \|_{L^\infty} \lesssim n^{1-\alpha} \|f\|_{{ {\it L^{\alpha}mo}}}.$$ 2. We denote by $\mathcal{R}_{ij}:=\partial_{x_i}\partial_{x_j}\Delta^{-1}$ the “iterated” Riesz transform.
Then for every function $f\in { {\it L^{\alpha}mo}}\cap L^p$, with $p\in(1,\infty)$ and $\alpha\in(0,1)$, $$\| S_n\mathcal{R}_{ij}f \|_{L^\infty} \lesssim n^{1-\alpha} \|f\|_{{ {\it L^{\alpha}mo}}\cap L^p}.$$ This proposition easily yields the following corollary. We have the embedding ${{ {\it L^{\alpha}mo}}}\hookrightarrow B_\Gamma$, see the definition (\[space-vis\]), with - $\Gamma(N)= \ln(N)$ if $\alpha=1$ - $\Gamma(N)=N^{1-\alpha}$ if $\alpha\in(0,1)$. $\bf{(1)}$ The Littlewood-Paley operator $\Delta_n$ corresponds to a convolution with $2^{2n}g(2^n\cdot)$, where $g$ is a smooth function whose Fourier transform is compactly supported away from zero. Therefore, using the cancellation property of $g$, namely $\int_{{{\mathbb R}}^2}g(x)dx=0,$ we obtain $$\begin{aligned} \Delta_nf(x) &=& \int_{{{\mathbb R}}^2} 2^{2n} g( 2^{n}(x-y)) f(y) dy \\ &=& \int_{{{\mathbb R}}^2} 2^{2n} g( 2^{n}(x-y)) \left[f(y)-\av_{B(x,2^{-n})} f \right] dy.\end{aligned}$$ Denote by $B\triangleq B(x,2^{-n})$ the ball of center $x$ and radius $2^{-n}$. Hence, due to the fast decay of $g$, we get for every integer $M$ $$\begin{aligned} \left| \Delta_nf(x) \right| & \lesssim |B|^{-1} \sum_{k=0}^{n-1} 2^{-kM} \int_{2^k B} \left|f(y)-\av_{B} f \right| dy \\ &+ |B|^{-1} \sum_{k=-1}^\infty 2^{-(n+k)M} \int_{2^{n+k} B} \left|f(y)-\av_{B} f \right| dy\\ &\triangleq \hbox{I+II}.
\end{aligned}$$ To estimate the first sum $I$ we use the first inequality of (\[eq:ea\]), $$\begin{aligned} \hbox{ I }&\leq \sum_{k=0}^{n-1} 2^{-k(M-2)} \av_{2^k B} \left|f-\av_{B} f \right| \\ & \lesssim \sum_{k=0}^{n-1} 2^{-k(M-2)} \av_{2^k B} \left|f-\av_{2^k B} f \right| + \sum_{k=0}^{n-1} 2^{-k(M-2)} \left|\av_{B} f-\av_{2^k B} f \right|\\ & \lesssim \sum_{k=0}^{n-1} 2^{-k(M-2)} (|\ln(2^{k-n})|)^{-\alpha}\|f\|_{{ {\it L^{\alpha}mo}}} + \sum_{k=0}^{n} 2^{-k(M-2)}\sum_{\ell=0}^k\frac{1}{|n-\ell|^\alpha} \|f\|_{{ {\it L^{\alpha}mo}}} \\ &\lesssim \sum_{k=0}^{n-1} 2^{-k(M-2)} \frac{1}{|k-n|^\alpha}\|f\|_{{ {\it L^{\alpha}mo}}} + \sum_{k=0}^{n-1} 2^{-k(M-2)} \sum_{\ell=0}^k\frac{1}{|n-\ell|^\alpha} \|f\|_{{ {\it L^{\alpha}mo}}}\\ & \lesssim (1+n)^{-\alpha} \|f\|_{{ {\it L^{\alpha}mo}}}. \end{aligned}$$ As to the second sum we combine (\[eq:ea\]) and (\[eq:ea2\]) $$\begin{aligned} \av_{2^{n+k} B} \left|f-\av_{B} f \right| &\le&\av_{2^{n+k} B} \left|f-\av_{2^nB} f \right|+\av_{2^{n} B} \left|f-\av_{B} f \right|\\ &\lesssim& \|f\|_{{ {\it L^{\alpha}mo}}}+n^{1-\alpha}\|f\|_{{ {\it L^{\alpha}mo}}}\\ &\lesssim& \|f\|_{{ {\it L^{\alpha}mo}}} n^{1-\alpha}.\end{aligned}$$ Consequently, $$\begin{aligned} \hbox{II} & \leq \|f\|_{{ {\it L^{\alpha}mo}}} \sum_{k\geq -1} 2^{-(n+k)(M-2)}n^{1-\alpha}\\ & \lesssim n^{-\alpha} \|f\|_{{ {\it L^{\alpha}mo}}}.\end{aligned}$$ The proof is now achieved by combining these two estimates. Now let us focus on the estimate of $S_n f$. We write according to the first estimate $(1)$ of the proposition $$\begin{aligned} \|S_n f\|_{L^\infty}&\le \|\Delta_{-1}f\|_{L^\infty}+\sum_{q=0}^{n-1}\|\Delta_q f\|_{L^\infty}\\ &\lesssim \|\Delta_{-1}f\|_{L^\infty}+\sum_{q=0}^{n-1}\frac{1}{ (1+q)^\alpha}\|f\|_{{ {\it L^{\alpha}mo}}}\\ &\lesssim \|\Delta_{-1}f\|_{L^\infty}+ n^{1-\alpha}\|f\|_{{ {\it L^{\alpha}mo}}}.\end{aligned}$$ So it remains to estimate the low frequency part. 
For this purpose we imitate the proof of $\|\Delta_n f\|_{L^\infty}$ with the following slight modification: $$\begin{aligned} \Delta_{-1}(f)(x) &=& \int_{{{\mathbb R}}^2} h( x-y) f(y) dy\\ & =& \int_{{{\mathbb R}}^2} h(x-y) \Big(f(y)-\av_{B(x,1)} f \Big)dy+ \av_{B(x,1)} f.\end{aligned}$$ Therefore we get $$\begin{aligned} \|\Delta_{-1}f\|_{L^\infty}&\lesssim&\|f\|_{{ {\it L^{\alpha}mo}}}+\sup_{x\in\mathbb{R}^2} \av_{B(x,1)} |f| \\ &\lesssim&\|f\|_{{ {\it L^{\alpha}mo}}}.\end{aligned}$$ ${\bf(2)}$ This can easily be obtained by combining the first part of Proposition \[prop:delta\] with the continuity on the $L^p$ space of the localized Riesz transforms $\Delta_n \partial_i\partial_j\Delta^{-1}$, together with Bernstein's inequality, for $n\geq 1$: $$\begin{aligned} \|S_n \mathcal{R}_{ij}f\|_{L^\infty}&\lesssim& \|\Delta_{-1}\mathcal{R}_{ij}f\|_{L^\infty}+\sum_{q=0}^{n-1}\|\Delta_q f\|_{L^\infty}\\ &\lesssim& \|\mathcal{R}_{ij} f\|_{L^p}+\sum_{q=0}^{n-1}\frac{1}{ (1+q)^\alpha}\|f\|_{{ {\it L^{\alpha}mo}}}\\ &\lesssim& \|f\|_{L^p}+ n^{1-\alpha}\|f\|_{{ {\it L^{\alpha}mo}}}.\end{aligned}$$ The proof of the desired result is now completed. Now we introduce closed subspaces of the space ${{ {\it L^{\alpha}mo}}}$ which play a crucial role in the study of the Euler equations, as we shall see later. The ${{{\it L^{\alpha}mo}_F}}$ space ------------------------------------ It seems that establishing local well-posedness for the Euler equations in the framework of ${{ {\it L^{\alpha}mo}}}$ spaces is quite difficult and cannot easily be reached by the usual methods. What we are able to do here is to construct the solutions in some weighted ${{ {\it L^{\alpha}mo}}}$ spaces, whose study is the subject of this section. Before stating the definition of these spaces we need the following concepts. \[def657\] Let $F:[1,+\infty[\to[0,+\infty[$ be a non-decreasing continuous function.
$\bullet$ We say that $F$ belongs to the class $\mathcal{A}$ if there exists $C>0$ such that: 1. Divergence at infinity: $\displaystyle{\lim_{x\to+\infty}F(x)=+\infty.}$ 2. Slow growth: $\forall x,y\geq 1$ $$F(x\,y)\le C\, (1+F(x))\,(1+F(y)).$$ 3. Lipschitz condition: $F$ is differentiable and $$\sup_{x>1}|F^\prime(x)|\le C.$$ 4. Cancellation at $1$: $$\forall x\in[0, 1],\quad F(1+x)\leq C x.$$ $\bullet$ We say that $F$ belongs to the class $\mathcal{A}^\prime$ if it belongs to $\mathcal{A}$ and satisfies $$\int_{2}^{+\infty}\frac{1}{x \,F(x)}dx=+\infty.$$ \[rmq23\] 1. From the slow growth assumption we see that the function $F$ must have at most polynomial growth. 2. The assumption $(3)$ is only used through Lemma $\ref{maj12}$ and could in fact be relaxed, for example to $\|F^{(k)}\|_{L^\infty}<\infty$ for some $k\in {\mathbb{N}}.$ But for the sake of a simple presentation we limit our discussion to the case $k=1.$ Some examples are in order: 1. For any $\beta\in ]0,1]$, the function $x\mapsto x^\beta -1$ belongs to the class $\mathcal{A} \setminus {\mathcal A}^\prime.$ 2. For any $\beta\geq1$, the function $x\mapsto \ln^{\beta}(x)$ belongs to the class $\mathcal{A}$, and this function belongs to the class $\mathcal{A}^\prime$ only for $\beta=1.$ 3. The function $x\mapsto \ln x\,\ln\ln(e+x)$ belongs to the class $\mathcal{A}^\prime$. We can now introduce the weighted ${{ {\it L^{\alpha}mo}}}$ spaces. Let $\alpha \in[0,1]$ and $F$ be in the class $\mathcal{A}$. We define the space ${{{\it L^{\alpha}mo}_F}}$ as the set of locally integrable functions $f:{{\mathbb R}}^2\to{{\mathbb R}}$ such that $$\|f\|_{{{{\it L^{\alpha}mo}_F}}}\triangleq\|f\|_{{ {\it L^{\alpha}mo}}}+\sup_{B_1, B_2}\frac{|\av_{B_2}f-\av_{B_1}f|}{ F\left(\frac{|\ln(r_2)|}{|\ln(r_1)|}\right)}<+\infty,$$ where the supremum is taken over all pairs of balls $B_1$ and $B_2$ of radii $r_1$ and $r_2$ with $2r_2\le r_1\le\frac12$. Here, for a ball $B$ and $\lambda>0$, $\lambda B$ denotes the ball that is concentric with $B$ and whose radius is $\lambda$ times the radius of $B$.
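The memberships claimed in the examples above can be verified directly from the integral condition defining $\mathcal{A}^\prime$; the following elementary computation, added here as a routine check for the reader's convenience, treats the first two cases.

```latex
% F(x) = x^{\beta}-1 with \beta\in]0,1]: for x\ge 2 one has
% x^{\beta}-1 \ge c_\beta x^{\beta}, hence
\int_{2}^{+\infty}\frac{dx}{x\,(x^{\beta}-1)}
   \;\le\; C_\beta\int_{2}^{+\infty}x^{-1-\beta}\,dx \;<\;+\infty,
% so F\in\mathcal{A}\setminus\mathcal{A}^\prime.
% F(x) = \ln^{\beta}(x) with \beta\ge 1: the substitution y=\ln x gives
\int_{2}^{+\infty}\frac{dx}{x\,\ln^{\beta}x}
   \;=\;\int_{\ln 2}^{+\infty}y^{-\beta}\,dy,
% which diverges exactly when \beta\le 1; within the admissible range
% \beta\ge 1 the class \mathcal{A}^\prime is therefore reached only for \beta=1.
```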
Now we list some useful properties of these spaces that will be used later. \[rmq67\] 1. The space ${{{\rm LBMO}}}$ introduced in [@BK2] corresponds to $\alpha=0$ and $F=\ln$. 2. Let $F_1, F_2\in \mathcal{A}$ be such that $F_1\lesssim F_2$. Then we have the embedding $$\mathit{L^\alpha mo}_{F_1}\hookrightarrow \mathit{L^\alpha mo}_{F_2}.$$ 3. For every $g\in L^1({{\mathbb R}}^2)$ and $f\in {{{\it L^{\alpha}mo}_F}}$ one has $$\| g\ast f\|_{{{{\it L^{\alpha}mo}_F}}}\leq \|g\|_{L^1}\| f\|_{{{{\it L^{\alpha}mo}_F}}}.$$ Indeed, this property is just a consequence of the Minkowski inequality and the fact that the ${{{\it L^{\alpha}mo}_F}}$-norm is invariant under translations. The main goal of the following proposition is to discuss the link between the space of bounded functions and the space ${{{\it L^{\alpha}mo}_F}}$. We will see in particular that under suitable assumptions on $F$ these spaces are not comparable. More precisely we get the following. \[pro3\] Let $\alpha\in [0,1]$ and $f:{{\mathbb R}}^2\to{{\mathbb R}}$ be the radial function defined by $$f(x)=\left\{ \begin{array}{ll} \ln(1-\ln|x|), \qquad {\rm if}\quad |x|\leq 1\\ 0,\qquad \qquad {\rm if}\quad |x|\geq 1. \end{array} \right.$$ The following properties hold true.\ 1. The function $f$ belongs to ${{ {\it L^{\alpha}mo}}}$. 2. For $F(x)= \ln x,\ x\geq 1,$ we have $f\in {{{\it L^{\alpha}mo}_F}}$. 3. For $\alpha\in ]0,1]$ and $F\in \mathcal{A}$ with $\ln\lesssim F$, the spaces $L^\infty$ and ${{{\it L^{\alpha}mo}_F}}$ are not comparable. ${\bf{(1)}}$ There are at least two ways to get this result. The first one uses Spanne's criterion, see Theorem 2 of [@Spanne]; we omit the details here.
The second one relies on the Poincaré inequality, which states that for any ball $B$ of radius $r$ we have $$\av_{B} \left|f-\av_{B}f \right| \lesssim r\, \av_{B} |\nabla f|.$$ For our example, it is obvious that $|\nabla f(x)| \lesssim \frac{1}{|x|(1-\ln |x|)}\cdot$ So the quantity on the right-hand side of the Poincaré inequality is maximal for a ball $B$ centered at $0$, and consequently we get $$\begin{aligned} r \,\av_{B} |\nabla f| &\lesssim &r^{-1} \,\int_0^{r} \frac{1}{1-\ln \eta} d\eta\\ &\lesssim &\frac{1}{1-\ln r}, \end{aligned}$$ which concludes the proof of $f\in {{ {\it L^{\alpha}mo}}}$.\ ${\bf{(2)}}$ We reproduce the arguments developed in [@BK2 Proposition 3], where it is proven that $$\left|\av_{B_2}f-\av_{B_1}f\right| \lesssim \ln\left(\frac{1+|\ln(r_2)|}{1+|\ln(r_1)|}\right) + \mathcal{O}( |\ln(r_1)|^{-1}) + \mathcal{O}( |\ln(r_2)|^{-1}). \label{eq:end}$$ If $A\triangleq\frac{1+|\ln(r_2)|}{1+|\ln(r_1)|} \geq 2$ then $\mathcal{O}( |\ln(r_1)|^{-1}) + \mathcal{O}( |\ln(r_2)|^{-1})$ is bounded by $\ln(A)$ and so $$\left|\av_{B_2}f-\av_{B_1}f\right| \lesssim \ln(A).$$ If $A\leq 2$, then $\ln(A)$ is equivalent to $A-1=\frac{\ln(r_1/r_2)}{1+|\ln(r_1)|} \gtrsim (1+|\ln(r_1)|)^{-1}.$ The latter inequality follows from the fact that $r_2\leq r_1/2$. Therefore we get $$\begin{aligned} \mathcal{O}( |\ln(r_1)|^{-1}) + \mathcal{O}( |\ln(r_2)|^{-1}) \lesssim A-1 \approx \ln(A), \end{aligned}$$ which also gives $$\left|\av_{B_2}f-\av_{B_1}f\right| \lesssim \ln(A).$$ Finally this ensures that $f\in {{{\it L^{\alpha}mo}_F}}$, for every $\alpha \in[0,1]$. ${\bf{(3)}}$ According to Remark \[rmq67\] we get the embedding $\mathit{L^\alpha mo}_{\ln}\hookrightarrow {{{\it L^{\alpha}mo}_F}}$.
Now, by virtue of the second claim of Proposition \[pro3\], the function $f$, which is clearly not bounded, belongs to the space ${{{\it L^{\alpha}mo}_F}}$. It remains to construct a function which is bounded but does not belong to the space ${{{\it L^{\alpha}mo}_F}}.$ Let $D_+$ be the upper half unit disc defined by $$D_+\triangleq\big\{(x,y); x^2+y^2\le1, y\geq0 \big\}.$$ In the same way we define the lower half unit disc $D_{-}.$ For $r\le 1,$ denote by $B_r$ the disc of center zero and radius $r$, and let $g={\bf{1}}_{D_+}$ be the characteristic function of $D_+.$ Easy computations yield $$g(x)-\av_{B_{r}}g= \left\{ \begin{array}{ll} \frac12,\quad x\in B_r\cap D_+\\ -\frac12,\quad x\in B_r\cap D_{-}. \end{array} \right.$$ Thus we find for every $r\in(0,1)$ $$\begin{aligned} \av_{B_r}|g-\av_{B_r}g|&=&\frac12\cdot \end{aligned}$$ This shows that the function $g$ does not belong to ${{ {\it L^{\alpha}mo}}}$ for any $\alpha>0.$ Our next aim is to go over some refined properties of the weighted [*[lmo]{}*]{} spaces. One result that we will prove, and which may seem surprising, says that all the spaces ${{{\it L^{\alpha}mo}_F}}$ are contained in the space $\it{L^\alpha mo}_{\ln}$. This rigidity follows from the cancellation property of $F$ at the point $1$. More precisely, we shall show the following. \[prop:F\] Let $\alpha \in(0,1]$ and $F\in {\mathcal A}$. Then ${{{\it L^{\alpha}mo}_F}}\hookrightarrow {{ {\it L^{\alpha}mo}}}_{\ln}$. Let $F:[1,+\infty)\to {{\mathbb R}}_+$ be defined by $F(x)=\ln(1+\ln x)$; then $F\in{\mathcal A}$ and ${{{\it L^{\alpha}mo}_F}}\subsetneq {{ {\it L^{\alpha}mo}}}_{\ln}$. Fix a function $f\in {{{\it L^{\alpha}mo}_F}}$ and a point $x$, and set $\phi(r) = \av_{B(x,r)} f$.
From the definition of the space ${{{\it L^{\alpha}mo}_F}}$ combined with the cancellation property of $F$ and its polynomial growth, we get that for every $r\in(0,\frac{1}{2})$ and $k\geq 1$ $$\begin{aligned} \left| \phi(r)-\phi(2^{-k} r) \right| & \leq \sum_{\ell = 0}^{k-1} \left| \phi(2^{-\ell} r)-\phi(2^{-\ell-1}r) \right| \\ & \lesssim \sum_{\ell = 0}^{k-1} F\left(\frac{1+\ell + |\ln r|}{\ell +|\ln r|}\right) \\ & \lesssim \sum_{\ell = 0}^{k-1} \frac{1}{\ell +|\ln r|}\\ & \lesssim \ln\left(\frac{k + |\ln r|}{|\ln r|}\right).\end{aligned}$$ Then for $s<\frac{r}{2}$ choose $k\geq 1$ such that $2^{-k-1} r \leq s <2^{-k} r$, and so $$\left| \phi(r)-\phi(s) \right| \leq \left| \phi(r)-\phi(2^{-k} r) \right| + \left| \phi(s)-\phi(2^{-k} r) \right|.$$ As we have just seen, the first term is bounded by $$\ln\left(\frac{k + |\ln r|}{|\ln r|}\right) \approx \ln\left(\frac{1+|\ln s|}{|\ln r|}\right).$$ The second term is bounded as follows: $$\begin{aligned} \left| \phi(s)-\phi(2^{-k} r) \right| & \leq \left| \phi(s)-\phi(2^{-k+1} r) \right| + \left| \phi(2^{-k}r)-\phi(2^{-k+1} r) \right| \\ & \lesssim F \left(\frac{|\ln s|}{|\ln(2^{-k+1}r)|}\right) + F \left(\frac{1+k + |\ln(r)|}{k +|\ln(r)|}\right) \\ & \lesssim \frac{1}{|\ln s|}\\ & \lesssim \ln\left(\frac{|\ln s|}{|\ln r|}\right),\end{aligned}$$ where we used the cancellation property of $F$ and the fact that both $B(x,2s)$ and $B(x,2\cdot 2^{-k}r)$ are included in $B(x,2^{-k+1}r)$. So combining these two estimates, we obtain for every $x$, $r<\frac{1}{2}$ and $s\leq \frac{r}{2}$ $$\left| \av_{B(x,r)} f - \av_{B(x,s)} f\right| \lesssim \ln\left(\frac{|\ln s|}{|\ln r|}\right). \label{eq:boule}$$ Now let $B_1$ and $B_2$ be two balls of radii $r_1$ and $r_2$ with $2r_2\le r_1$. We wish to estimate $\left| \av_{B_2} f - \av_{B_1} f \right|$. First, it is clear that the interesting case is when at least the radius $r_2$ is small; otherwise $r_1$ and $r_2$ are comparable to $1$ and there is nothing to prove.
So assume that $r_2\le \frac{1}{100}$; then it suffices to study the case where $r_1\leq \frac{1}{10}$. So let us only consider this situation: $r_2 \leq \frac{1}{100}$ and $r_1\leq \frac{1}{10}$.\ Then we have $$\begin{aligned} \left| \av_{B_2} f - \av_{B_1} f \right| & \lesssim \left| \av_{B_2} f - \av_{\frac{r_1}{r_2} B_2} f \right| + \left| \av_{\frac{r_1}{r_2} B_2} f - \av_{B_1} f \right|.\end{aligned}$$ Applying (\[eq:boule\]), the first term is bounded by $\ln\left(\frac{\ln r_2}{\ln r_1} \right)$. The second term can easily be bounded by $|\ln(r_1)|^{-1}$. Indeed, the two balls $\frac{r_1}{r_2} B_2$ and $B_1$ are comparable and of radius $r_1\leq \frac{1}{10}$. So there exists a ball $B$ of radius $r=5 r_1$ such that $\frac{2r_1}{r_2} B_2 \cup 2B_1 \subset B$. Then we have $$\begin{aligned} \left| \av_{\frac{r_1}{r_2} B_2} f - \av_{B_1} f \right| & \leq \left| \av_{\frac{r_1}{r_2} B_2} f - \av_{B} f \right| + \left| \av_{B} f - \av_{B_1} f \right| \\ & \lesssim F\left(\frac{\ln r_1}{\ln r} \right) \lesssim \frac{1}{|\ln r_1|}. \end{aligned}$$ Now, since $r_2\le\frac12 r_1$, we have $$\begin{aligned} \ln\left(\frac{\ln r_2}{\ln r_1}\right) & = & \ln\left(1+\frac{\ln(r_2/r_1)}{\ln r_1}\right)\\ &\geq&\ln\left(1+\frac{\ln 2}{|\ln r_1|}\right)\\ &\ge&C\frac{1}{|\ln r_1|}\cdot\end{aligned}$$ This concludes the proof of $$\left| \av_{B_2} f - \av_{B_1} f \right| \lesssim \ln\left(\frac{1-\ln(r_2)}{1-\ln(r_1)}\right).$$ Hence we get the inclusion ${{{\it L^{\alpha}mo}_F}}\subset {{ {\it L^{\alpha}mo}}}_{\ln}$.\ Now consider the specific function $F(x)=\ln(1+\ln x)$. It is easy to check that $F\in{\mathcal A}$ and that the function $f$ defined in Proposition \[pro3\] belongs to ${{ {\it L^{\alpha}mo}}}_{\ln} \setminus {{{\it L^{\alpha}mo}_F}}$. Indeed, (\[eq:end\]) becomes an equality for this specific function $f$ with balls $B_1,B_2$ centered at $0$.
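As a quick numerical illustration (not part of the proof), one can check that for the radial function $f(x)=\ln(1-\ln|x|)$ of Proposition \[pro3\] the mean oscillation over balls $B(0,r)$, scaled by $|\ln r|$, indeed stays bounded as $r\to0$, consistent with the membership $f\in{{ {\it L^{\alpha}mo}}}$ on balls centered at the origin. The script below is an illustrative sketch using a simple midpoint quadrature; all names in it are ad hoc.

```python
# Numerical illustration: for f(x) = ln(1 - ln|x|), the quantity
# |ln r| * (mean oscillation of f over the disc B(0, r)) stays bounded
# as r -> 0, consistent with the L^alpha-mo estimate of the text.
import math

def disc_average(g, r, n=20000):
    """Average of a radial function g over the disc of radius r centered
    at 0, i.e. (2/r^2) * int_0^r g(s) s ds, computed by the midpoint rule."""
    h = r / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += g(s) * s * h
    return 2.0 * total / (r * r)

def scaled_oscillation(r):
    """|ln r| times the mean oscillation of f over B(0, r)."""
    f = lambda s: math.log(1.0 - math.log(s))
    c = disc_average(f, r)
    return abs(math.log(r)) * disc_average(lambda s: abs(f(s) - c), r)

for r in (1e-2, 1e-4, 1e-8):
    print(r, round(scaled_oscillation(r), 3))
```

The printed values remain of order one for all three radii, while the unscaled oscillation decays like $1/|\ln r|$, as the Poincaré argument above predicts.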
The next proposition shows that for $\alpha=1$ the cancellation property of $F$ at $1$ (in the ${{{\it L^{\alpha}mo}_F}}$ space) can be “forgotten”, since it is already encoded in the ${\it Lmo}$ space: Let $\alpha =1$ and $F\in {\mathcal A}$. Then ${{{\it L^{\alpha}mo}_F}}= {\it Lmo}_{1+F}$. Moreover, we have ${\it Lmo}= {\it Lmo}_{\ln}$. Since $F \leq 1+F$, it follows that $ {{{\it L^{\alpha}mo}_F}}\subset {\it Lmo}_{1+F}$. Reciprocally, since $F(t) \simeq 1+F(t)$ for $t\geq 1$ (due to $F\in{\mathcal A}$), in order to prove that ${\it Lmo}_{1+F} \subset {{{\it L^{\alpha}mo}_F}}$ by following the proof of Proposition \[prop:F\], it is sufficient to check that for every function $f\in {\it Lmo}$ and every ball $B$ of radius $r<\frac{1}{4}$ we have $$\left| \av_{B} f -\av_{2B} f \right| \lesssim \frac{1}{|\ln(r)|}. \label{eq:to}$$ Indeed, the only difference between ${\it Lmo}_{1+F}$ and ${{{\it L^{\alpha}mo}_F}}$ (where $F$ is replaced by $1+F$) is the loss of the cancellation property of $F$ at the point $1$, and this property was used in the previous proposition to check (\[eq:to\]).\ However, here, since $\alpha=1$, (\[eq:to\]) automatically holds because the function $f$ belongs to ${\it Lmo}$. Then, by the same reasoning as for Proposition \[prop:F\], we deduce that ${\it Lmo} \subset {\it Lmo}_{\ln}$, which yields ${\it Lmo}= {\it Lmo}_{\ln}$, since the other embedding is obvious. Regularity of the flow map {#log-lip} -------------------------- We shall continue in this section our excursion into function spaces by introducing the log-Lipschitz class with exponent $\beta \in (0,1]$, denoted by $L^\beta L$, and showing some links with the foregoing ${{ {\it L^{\alpha}mo}}}$ spaces. We next examine the regularity of the flow map associated to a vector field belonging to this class. We start with the following definition.
We say that a function $f$ belongs to the class $L^\beta L$ if $$\|f\|_{L^\beta L}\triangleq\sup_{0<|x-y|<\frac12} \ \frac{|f(x)-f(y)|}{|x-y|\big|\ln|x-y|\big|^\beta}<\infty.$$ Take now a smooth divergence-free vector field $u=(u^1,u^2)$ on ${{\mathbb R}}^2$ and let $\omega=\partial_1 u^2-\partial_2 u^1$ be its vorticity. It is apparent from straightforward computations that $$\label{bs} \Delta u=\nabla^\perp\omega.$$ This identity leads, through the use of the fundamental solution of the Laplacian, to the so-called Biot-Savart law. Now we shall solve the equation $(\ref{bs})$ when the source term belongs to the space $ {{ {\it L^{\alpha}mo}}}\cap L^p.$ Without going further into the details, we restrict ourselves to the a priori estimates required for the resolution of this equation. \[coro\] Let $\alpha \in(0,1)$, $p\in(1,\infty)$ and $\omega\in { {\it L^{\alpha}mo}}\cap L^p$ be the vorticity of the velocity $u$ given by the equation $(\ref{bs})$. Then $u\in L^{1-\alpha} L$ and there exists an absolute constant $C>0$ such that $$\|u \|_{L^{1-\alpha} L} \leq C \|\omega\|_{{ {\it L^{\alpha}mo}}\cap L^p}.$$ Let $N\in {\mathbb{N}}^\star$ be a number to be fixed later and let $0<|x-y|<\frac12$. Using the mean value theorem combined with Bernstein's inequality gives $$\begin{aligned} |u(x)-u(y)|&\le& |S_Nu(x)-S_Nu(y)|+2\sum_{q\geq N}\|\Delta_q u\|_{L^\infty}\\ &\lesssim& |x-y|\|\nabla S_N u\|_{L^\infty}+\sum_{q\geq N}2^{-q}\|\Delta_q \omega\|_{L^\infty}.\end{aligned}$$ From Proposition \[prop:delta\], it follows that $$\begin{aligned} |u(x)-u(y)|&\lesssim& N^{1-\alpha}\|\omega\|_{{ {\it L^{\alpha}mo}}\cap L^p}|x-y|+\|\omega\|_{{ {\it L^{\alpha}mo}}}\sum_{q\geq N}2^{-q}q^{-\alpha}\\ &\lesssim&\|\omega\|_{{ {\it L^{\alpha}mo}}\cap L^p}\,N^{1-\alpha}\left(|x-y| +2^{-N}\right).\end{aligned}$$ By choosing $2^{-N}\approx |x-y|$ we find $$|u(x)-u(y)|\lesssim |x-y|\big|\ln|x-y|\big|^{1-\alpha}\,\|\omega\|_{{ {\it L^{\alpha}mo}}\cap L^p}.$$ This completes the proof of the proposition.
We recall the Osgood lemma, whose proof can be found for instance in [@bah-ch-dan], page 128. \[osgood1\] Let $a, A>0$, $\Gamma: [a,+\infty[\to{{\mathbb R}}_+$ be a non-decreasing function and $\gamma:[t_0, T]\to{{\mathbb R}}_+$ be a locally integrable function. Let $\rho:[t_0,T]\to [a,+\infty[$ be a measurable function such that $$\rho(t)\le A+\int_{t_0}^t \gamma(\tau) \Gamma(\rho(\tau))\,\rho(\tau) \, d\tau.$$ Let $\displaystyle{\mathcal{M}(y)=\int_{a}^{y} \frac{1}{x\Gamma(x)}dx}$ and assume that $\displaystyle{\lim_{y\to+\infty}\mathcal{M}(y)=+\infty}$. Then $$\forall t\in[t_0, T],\quad \rho(t)\le \mathcal{M}^{-1}\Big(\mathcal{M}(A)+ \int_{t_0}^t\gamma(\tau)d\tau\Big).$$ In what follows we discuss the regularity of the flow map associated to a vector field belonging to the log-Lipschitz class. This precise description will be of great interest in the proof of the main result. \[prop\] Let $u$ be a smooth divergence-free vector field belonging to $L^{1-\alpha} L,$ with $\alpha\in(0,1)$, and let $\psi$ be its flow, that is, the solution of the differential equation $$\partial_t{\psi}(t,x)=u(t,\psi(t,x)),\qquad {\psi}(0,x)=x.$$ Then, there exists $C\triangleq C(\alpha)>1$ such that for every $t\geq0$ and $x,y\in{{\mathbb R}}^2$, $$|x-y|< \ell(t)\Longrightarrow |\psi^{\pm1}(t,x)-\psi^{\pm1}(t,y)|\leq |x-y| e^{C V(t)|\ln|x-y||^{1-\alpha}},$$ where $\ell(t)\in(0,\frac12)$ is given by $$\ell(t) e^{C V(t)|\ln(\ell(t))|^{1-\alpha}}=\frac12\quad\hbox{and}\quad V(t)\triangleq\int_0^t \|u(\tau)\|_{L^{1-\alpha} L}d\tau.$$ Here we denote by $\psi^1$ the flow $\psi$ and by $\psi^{-1}$ its inverse. It is well known that for every $t\geq0$ the mapping $\psi(t,\cdot)$ is a Lebesgue measure preserving homeomorphism (see [@Ch1] for instance). We fix $x,y\in{{\mathbb R}}^2$ such that $|x-y|<\frac12$ and we define for $t\geq0$, $$z(t)\triangleq|\psi(t,x)-\psi(t,y)|.$$ Clearly the function $z$ is strictly positive and satisfies $$z(t)\leq z(0)+C\int_0^t \|u(\tau)\|_{L^{1-\alpha} L}|\ln z(\tau)|^{1-\alpha} z(\tau)d\tau,$$ as soon as $z(\tau)\leq \frac{1}{2}$ for all $\tau\in[0,t)$.
Let $T>0$ and $I\triangleq\big\{t\in [0,\ell(T)]\backslash \,\forall \tau\in[0,t], z(\tau)\leq \frac12\big\},$ where the value of $\ell(T)$ has been defined in Proposition \[prop\]. We aim to show that the set $I$ is the full interval $[0,\ell(T)]$. First, $I$ is a non-empty set since $0\in I$, and it is an interval according to its definition. The continuity in time of the flow guarantees that $I$ is closed. It remains to show that $I$ is an open set of $[0,\ell(T)]$. From the differential inequality above, $$\forall t\in I,\quad z(t)\leq z(0)+C\int_0^t \|u(\tau)\|_{L^{1-\alpha} L}(-\ln z(\tau))^{1-\alpha} z(\tau)d\tau.$$ Accordingly, we infer $$-|\ln z(t)|^{\alpha}+|\ln z(0)|^{\alpha}\le C\alpha V(t),$$ and this yields $$|\ln z(t)|\geq \big(|\ln z(0)|^{\alpha}-C\alpha V(t)\big)^{\frac1\alpha},$$ provided that $$\label {c01} C\alpha V(t)\le|\ln z(0)|^\alpha.$$ Consequently $$z(t)\le e^{-\big(|\ln z(0)|^{\alpha}-C\alpha V(t)\big)^{\frac1\alpha}}.$$ By virtue of the Taylor formula and since $\frac1\alpha-1>0$ we get $$\begin{aligned} -\big(|\ln z(0)|^{\alpha}-C\alpha V(t)\big)^{\frac1\alpha}&=&-|\ln z(0)|+\frac1\alpha\int_0^{C\alpha V(t)}\big(|\ln z(0)|^{\alpha}-x\big)^{\frac1\alpha-1}dx\\ &\le&\ln z(0)+ CV(t)|\ln z(0)|^{1-\alpha}.\end{aligned}$$ It follows that $$z(t)\le z(0) e^{CV(t)|\ln z(0)|^{1-\alpha}}.$$ Therefore, to show that $I$ is open it suffices to make the assumption $$z(0) e^{CV(t)|\ln z(0)|^{1-\alpha}}<\frac12,$$ which is satisfied when $z(0)< \ell(T).$ This last claim follows from the increasing property of the function $x\mapsto x e^{CV(t)|\ln x|^{1-\alpha}}$ on the interval $[0, x_c]$, where $x_c<1$ is the unique real number satisfying $|\ln x_c|^\alpha=C(1-\alpha) V(t).$ From the definition of $\ell(t)$ we can easily check that $\ell(T)<x_c$ and that (\[c01\]) is satisfied.
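The key step above, integrating the differential inequality for $|\ln z|^{\alpha}$, can be sanity-checked numerically in the saturated case $z'=C|\ln z|^{1-\alpha}z$ (constant speed, so that $V(t)=t$ after normalization), where the identity $|\ln z(t)|^{\alpha}=|\ln z(0)|^{\alpha}-C\alpha t$ holds exactly. A sketch with illustrative values:

```python
import math

# Saturated case of the Gronwall-type inequality: if
#   z'(t) = C * (-ln z)^(1 - alpha) * z,   0 < z < 1,
# then d/dt (-ln z)^alpha = -C * alpha, hence
#   (-ln z(t))^alpha = (-ln z0)^alpha - C * alpha * t   exactly.
alpha, C, z0 = 0.5, 1.0, 1e-8   # illustrative values; C*alpha*T < |ln z0|^alpha

def rhs(z):
    return C * (-math.log(z)) ** (1 - alpha) * z

# classical RK4 integration on [0, T]
T, n = 1.0, 20000
h, z = T / n, z0
for _ in range(n):
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2)
    k4 = rhs(z + h * k3)
    z += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

lhs = (-math.log(z)) ** alpha
ref = (-math.log(z0)) ** alpha - C * alpha * T
print(lhs, ref)
```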
The proof of the assertion for $\psi^{-1}$ can be derived by performing similar computations for the generalized flow defined by $$\partial_t\psi(t,s,x)=u(t,\psi(t,s,x)),\quad\psi(s,s,x)=x$$ and the flow $\psi^{-1}$ is nothing but $x\mapsto\psi(0,t,x).$ Regularity persistence {#sec:per} ====================== The main object of this section is to examine the propagation of the initial regularity measured in the spaces ${{{\it L^{\alpha}mo}_F}}$ for the following transport model governed by a divergence-free vector field, $$\label{T23} \left\{ \begin{array}{ll} \partial_t {{w}}+u\cdot\nabla w=0,\qquad x\in \mathbb R^2, t>0, \\ \textnormal{div }u=0,\\ w_{\mid t=0}=f. \end{array} \right.$$ Along the first part of this study we shall not prescribe any relationship between the solution $w$ and the vector field $u$. Once this study is achieved, we will apply this result to the inviscid vorticity equation, where the vector field is induced by the vorticity. This will enable us not only to prove Theorem \[main\] but also to state more general results on the local and global theory extending the special case $F(x)=\ln x$. Composition in the space ${{{\it L^{\alpha}mo}_F}}$ ---------------------------------------------------- We begin with the following observation concerning the structure of the solutions to (\[T23\]). Under reasonable assumptions on the regularity of the velocity, the solution can be recovered from its initial data and the flow $\psi$ according to the formula $ w(t)=f\circ\psi^{-1}(t)$. Thus the study of the propagation in the space ${{ {\it L^{\alpha}mo}}}$ reduces to the composition by a measure preserving map. We should note that this latter problem can be easily solved as soon as the map is bi-Lipschitz (see [@BK] for composition in some BMO-type spaces by a bi-Lipschitz measure preserving map). In our context the flow is not necessarily Lipschitz, but it is in some sense very close to this class.
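The representation formula $w(t)=f\circ\psi^{-1}(t)$ can be illustrated on the simplest divergence-free example, the rotation field $u(x,y)=(-y,x)$, whose flow is the rotation $R_t$; then $w(t,x)=f(R_{-t}x)$ and the transport equation can be verified pointwise by finite differences. An illustrative sketch (not from the paper):

```python
import math

def f(x, y):
    # smooth initial datum
    return math.sin(x) * math.exp(-y * y)

def w(t, x, y):
    # w(t) = f o psi^{-1}(t), with psi(t) = rotation R_t for u(x, y) = (-y, x)
    c, s = math.cos(t), math.sin(t)
    return f(c * x + s * y, -s * x + c * y)   # f(R_{-t}(x, y))

# check d_t w + u . grad w = 0 at a sample point by central differences
t, x, y, h = 0.7, 0.3, -0.5, 1e-5
wt = (w(t + h, x, y) - w(t - h, x, y)) / (2 * h)
wx = (w(t, x + h, y) - w(t, x - h, y)) / (2 * h)
wy = (w(t, x, y + h) - w(t, x, y - h)) / (2 * h)
residual = wt + (-y) * wx + x * wy
print(residual)   # small residual, of the order of the discretization error
```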
It is apparent according to Proposition \[prop\] that $\psi$ belongs to the class $C^{s}$ for every $s<1.$ It turns out that working with a flow below the Lipschitz class has a profound effect and makes the composition in the space ${{ {\it L^{\alpha}mo}}}$ very hard to obtain. This is the principal reason why we need to use the weighted subspace ${{{\it L^{\alpha}mo}_F}}$: it compensates this weak regularity and consequently makes the composition well-defined. Our result reads as follows. \[decom\] Let $\alpha\in(0,1),\, F\in \mathcal{A}$ and consider a smooth solution $w$ of the equation (\[T23\]) defined on $[0,T]$. Then there exists a constant $C\triangleq C(\alpha)>0$ such that the following holds true: 1. For every $t\in [0,T]$ $$\| w(t)\|_{{ {\it L^{\alpha}mo}}}\leq C\|f\|_{{{{\it L^{\alpha}mo}_F}}} \big(1+V(t)\big) F(2+V^{\frac1\alpha}(t)),$$ with $V(t)\triangleq \displaystyle{\int_0^t\|u(\tau)\|_{L^{1-\alpha}L}d\tau}$. 2. For every $ t\in [0,T]$ $$\|w(t)\|_{{ {\it L^{\alpha}mo}}_{1+F}}\leq C\|f\|_{{{{\it L^{\alpha}mo}_F}}} F(2+V^{\frac1\alpha}(t)).$$ Before giving the proof, some remarks are in order. 1. According to the first result of the foregoing theorem, the estimate of the solution in the space ${{ {\it L^{\alpha}mo}}}$ does not involve the weighted part of the space ${{{\it L^{\alpha}mo}_F}},$ which is only required for the initial data. 2. The estimate of the second part of Theorem $\ref{decom}$ is subject to a slight loss: instead of $F$ we obtain $1+F$. This is due to the fact that we need some cancellations for the difference of two averages; to avoid this loss a more sophisticated analysis should be carried out, and we believe that this loss is a technical artifact. [**$(1)$**]{} The proof will be done in the spirit of the recent work [@BK2].
First we observe that the solution is given by $w(t)=f\circ\psi^{-1}(t)$, where $\psi$ is the flow associated to the vector field $u.$ Therefore the estimate in the space ${{ {\it L^{\alpha}mo}}}$ reduces to the stability by the right composition with a homeomorphism preserving Lebesgue measure, with the prescribed regularity given in Proposition \[prop\]. Since the flow and its inverse share the same properties and the estimates that will be involved along the proof, we prefer, for the sake of simple notation, to work with the composition with $\psi$ instead of $\psi^{-1}.$ Let $B$ be the ball of center $x_0$ and radius $r\in(0,\frac12).$ We intend to give a suitable estimate for the quantity $$\mathcal{I}_r\triangleq |\ln r|^\alpha \fint_{B}|f\circ\psi- \fint_{B}(f\circ\psi)|dx.$$ To reach this goal we use in a crucial way the local regularity of the flow stated before in Proposition \[prop\]. The estimate of $\mathcal{I}_r$ will require some discussions depending on a threshold value for $r$ denoted by $r_t$. The identification of $r_t$ is related to hidden arguments that will be clarified during the proof. To begin with, fix a sufficiently large constant $\delta > \max(\sqrt{2}, 2C)$ (where $C$ is given by Proposition \[prop\]) and define $r_t$ as the unique solution in the interval $(0,\frac12)$ of the following equation $$\label{eqss:01} \delta r_te^{ \delta V(t) |\ln(r_t)|^{1-\alpha}} =r_t^{ \frac12}.$$ The existence and uniqueness can be easily proven by studying the variations of the function $$\label{hh1}h(r)\triangleq \delta r^{\frac12}\,e^{ \delta V(t) |\ln(r)|^{1-\alpha}},\, r\in(0,1/2)$$ and using the fact that $h(\frac12)>1$ since $\delta>{\sqrt{2}}.$ We point out that $h$ is non-decreasing in the interval $[0,r_t]$ and $h(r)\le 1$ in this range.
We have also the bound $$\label{m12} C^\prime+C^\prime V^{\frac1\alpha}(t)\le |\ln r_t|\le C+C V^{\frac1\alpha}(t),\quad\hbox{for some}\quad C, C^\prime>0.$$ Indeed, set $X=-\ln r_t$; then from (\[eqss:01\]) and the Young inequality $$\begin{aligned} X&=&2\ln \delta +2\delta V(t) X^{1-\alpha}\\ &\le&C_1+C_1 V^{\frac1\alpha}(t)+\frac12 X.\end{aligned}$$ This gives the estimate of the right-hand side of (\[m12\]). For the left one, it is apparent from the equation on $X$ and its positivity that $$\begin{aligned} X&\geq &2\ln \delta \quad\hbox{and}\quad X\geq 2\delta V(t) X^{1-\alpha}.\end{aligned}$$ Thus $$X\ge \ln \delta+ \frac{1}{2} (2\delta)^{\frac{1}{\alpha}} V^{\frac1\alpha}(t),$$ which concludes the proof of (\[m12\]). Before starting the computations for $\mathcal{I}_r$ we need to introduce the radius $$\label{rnot} r_\psi\triangleq \delta\,r\, e^{\delta V(t) |\ln(r)|^{1-\alpha}}.$$ Now we will check that for $r\in(0,r_t]$ $$\label{eqss:1} 1\le\frac{|\ln r|}{|\ln r_\psi|}\le 2.$$ The inequality of the left-hand side can be deduced as follows. First, it is obvious that $0<r<r_\psi$, and it remains to show that $r_\psi<\frac12$ whenever $r\in]0,r_t].$ For this purpose we show by using simple arguments that the function ${k}: r\mapsto r_\psi$ is non-decreasing in the interval $]0,r_t]$. From this latter fact and (\[eqss:01\]) we find $r_\psi\le k(r_t)=r_t^{\frac12}\le \frac12$. Let us now move to the second inequality of (\[eqss:1\]), and for this aim we start with studying the function $$g(x)=\frac{-x}{-x+a+b x^{1-\alpha}}, x\geq -\ln r_t ;\quad a\triangleq\ln \delta,\, b\triangleq\delta V(t).$$ We observe that the quotient $\frac{|\ln r|}{|\ln r_\psi|}$ coincides with $g(-\ln r)$.
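Both the defining equation of $r_t$ and the two bounds just stated can be checked numerically in the variable $X=-\ln r$, where (\[eqss:01\]) reads $X/2=a+bX^{1-\alpha}$. The following sketch (illustrative parameter values only) solves for $X_t=-\ln r_t$ by bisection and tests (\[m12\]) and (\[eqss:1\]):

```python
import math

alpha, V, delta = 0.5, 2.0, 4.0          # illustrative values; delta > sqrt(2)
a, b = math.log(delta), delta * V

# In the variable X = -ln r, the equation defining r_t reads X/2 = a + b X^(1-alpha)
phi = lambda X: X / 2 - a - b * X ** (1 - alpha)

lo, hi = 1.0, 1e9                        # phi(lo) < 0 < phi(hi)
for _ in range(200):                     # bisection
    mid = 0.5 * (lo + hi)
    if phi(mid) < 0:
        lo = mid
    else:
        hi = mid
Xt = 0.5 * (lo + hi)                     # Xt = -ln(r_t)
print(Xt)                                # comparable to (2*delta*V)^(1/alpha), as in (m12)

# (eqss:1): for r <= r_t, i.e. X >= Xt, the quotient |ln r| / |ln r_psi|
# equals X / (X - a - b X^(1-alpha)) and should lie in [1, 2]
ratios = [X / (X - a - b * X ** (1 - alpha)) for X in (2 * Xt, 10 * Xt, 100 * Xt)]
print(ratios)
```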
By easy computations we get $$g^\prime(x)=-\frac{a+b\alpha\,x^{1-\alpha}}{\big(-x+a+b x^{1-\alpha}\big)^2}<0.$$ This yields, in view of (\[eqss:01\]), $$\begin{aligned} g(x)&\le & g(-\ln r_t) \le \frac{\ln r_t}{\frac{\ln r_t}{ 2}} \le 2.\end{aligned}$$ The estimate of $\mathcal{I}_r$ depends on whether the radius $r$ is smaller or larger than the critical value $r_t$. We first deal with the case $0<r\le r_t .$ As $\psi$ is a homeomorphism which preserves Lebesgue measure, $\psi(B)$ is an open connected set with $|\psi(B)|=|B|$. Let us consider a Whitney covering of this open set $\psi(B)$, that consists in a collection of balls $(O_j)_j$ such that: - The collection of double balls is a bounded covering: $$\psi(B)\subset \bigcup_j 2O_j.$$ - The collection $(O_j)_j$ is disjoint and for all $j$, $$O_j\subset \psi(B).$$ - The Whitney property is verified: the radius $r_j$ of $O_j$ satisfies $$r_{_j}\approx d(O_j, \psi(B)^c).$$ We set $\tilde B\triangleq B(\psi(x_0), r_\psi)$; then according to Proposition \[prop\] (and $\delta>2\rho(\alpha)$) we have $\psi(B)\subset \tilde B$. It is easy to see from the invariance of the Lebesgue measure by the flow that $$\begin{aligned} \fint_{B}\Big|f\circ\psi- \fint_{B}(f\circ\psi)\Big|dx&=&\fint_{\psi(B)}\Big|f- \fint_{\psi(B)}f\Big|dx \\ &\leq & 2 \fint_{\psi(B)}\Big|f- \fint_{\tilde B}f\Big|dx.\end{aligned}$$ Using the preceding notations $$\begin{aligned} {{|\ln r|^\alpha}}\fint_{\psi(B)}\Big|f- \fint_{\tilde B}f\Big|&\lesssim &\frac{{{|\ln r|^\alpha}}}{|B|}\sum_j |O_j| \, \fint_{2O_j}\Big|f- \fint_{\tilde B}f\Big| \\ &\lesssim & \hbox{I}_1+\hbox{I}_2,\end{aligned}$$ with $$\begin{aligned} \hbox{I}_1&\triangleq& \frac{{{|\ln r|^\alpha}}}{|B|}\sum_j |O_j| \, \fint_{2O_j}\Big|f- \fint_{2O_j} f\Big|\\ \hbox{I}_2&\triangleq& \frac{{{|\ln r|^\alpha}}}{|B|}\sum_j |O_j| \Big|\fint_{2O_j}f- \fint_{\tilde B} f\Big|.\end{aligned}$$ On one hand, since $r_j\le r<\frac12$ (due to $|O_j|\leq |\psi(B)|=|B|$), we have $|\ln r|\le|\ln(r_j)|$ and then $$\begin{aligned} \hbox{I}_1&\leq& \frac{1}{|B|}\sum_j |O_j|\frac{{{|\ln r|^\alpha}}}{|\ln(r_j)|^\alpha}\|f\|_{{{{ {\it L^{\alpha}mo}}}}} \\ &\leq & \|f\|_{{{ {\it L^{\alpha}mo}}}}
. \end{aligned}$$ On the other hand, since $d(O_j, \tilde B) \leq r_\psi$ and $r_{\tilde B}=r_\psi$, it ensures that $$O_j \subset \frac{r_\psi}{r_j} O_j$$ and hence the two balls $Q_1\triangleq\frac{r_\psi}{r_j} O_j$ and $ \tilde{B}$ are comparable[^1]. This entails $$\Big|\fint_{Q_1} f - \av_{\tilde B}f \Big| \lesssim \frac{1}{|\ln(r_\psi)|^\alpha} \|f\|_{{ {\it L^{\alpha}mo}}}.$$ We point out that we have used the fact that for $0<r<r_t$ the radius $r_\psi$ of $\tilde B$ is smaller than $\frac12$. Moreover, according to the definition of the space ${{{\it L^{\alpha}mo}_F}}$, it follows, since $r_{Q_1}\leq \frac12$, that $$\begin{aligned} \Big| \av_{ 2O_j} f - \av_{Q_1} f \Big| &\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}} F\left(\frac{\ln r_j}{\ln r_{Q_1}}\right) \\ &\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}} F\left(\frac{\ln r_j}{ \ln r_{\psi}}\right).\end{aligned}$$ It follows that $$\begin{aligned} \Big| \av_{ 2O_j} f - \av_{\tilde B} f \Big| &\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}} \left( |\ln r_\psi|^{-\alpha}+ F\left(\frac{\ln r_j}{ \ln r_{\psi}}\right) \right).\end{aligned}$$ Together with (\[eqss:1\]) this estimate yields $$\begin{aligned} \Big| \av_{ 2O_j} f - \av_{\tilde B} f \Big| &\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}} \left( |\ln r|^{-\alpha}+ F\left(\frac{\ln r_j}{ \ln r_{\psi}}\right) \right).\end{aligned}$$ Consequently, $$\begin{aligned} \hbox{I}_2&\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}}+\|f\|_{{{{\it L^{\alpha}mo}_F}}} \frac{|\ln r|^\alpha}{|B|} \left(\sum_j |O_j| F\left(\frac{\ln r_j}{\ln r_\psi}\right) \right).\end{aligned}$$ For every $k\in\mathbb{N}$ we set $$u_k\triangleq\sum_{e^{-(k+1)}r< r_j\leq e^{-k}r} |O_j|,$$ so that $$\begin{aligned} \label{eff} \hbox{I}_2&\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}} \left(1+ \frac{|\ln r|^\alpha}{|B|}\sum_{k\geq 0}u_k \ F\left(\frac{-1- k+\ln r}{\ln(r) +a+b|\ln r|^{1-\alpha}}\right)\right)\\ \nonumber&\triangleq& \|f\|_{{{{\it L^{\alpha}mo}_F}}}+ \hbox{I}_3.\end{aligned}$$ with $$\label{not11} a\triangleq\ln
\delta,\quad b\triangleq\delta V(t).$$ The numbers $a$ and $b$ appeared before in the definition of $r_\psi$ given in (\[rnot\]). Let $N$ be a real number that will be judiciously fixed later. We split the sum in the right-hand side of (\[eff\]) into two parts $$\hbox{I}_3=\sum_{k\leq N}(...)+\sum_{k> N}(.....)\triangleq\hbox{II}_{1}+\hbox{II}_{2}.$$ Since $\sum_{k\geq0}u_k\le |B|$ and $F$ is non-decreasing, then $$\begin{aligned} \label{ff} \hbox{II}_{1}\lesssim |\ln r|^\alpha\ F\left(\frac{1+N+|\ln r|}{|\ln(r)| -a-b|\ln r|^{1-\alpha}}\right).\end{aligned}$$ To estimate the term $\hbox{II}_2$ we need a refined bound for $u_k$, given below, whose proof is postponed to Lemma \[equivalence\] in the Appendix: $$\label{precise} u_k\lesssim \delta r^2 e^{-k} e^{\delta (V(t)+1)(k-\ln(r))^{1-\alpha}}.$$ By virtue of (\[precise\]) and Lemma \[maj12\] we get $$\begin{aligned} \label{fff} \nonumber \hbox{II}_{2}&\lesssim& |\ln r|^\alpha e^{-N} e^{b(N-\ln r)^{1-\alpha}} F\left(\frac{1+N+|\ln r|}{|\ln r| -a-b|\ln r|^{1-\alpha}}\right)\\ &+& e^{-N} e^{b(N-\ln r)^{1-\alpha}}\frac{|\ln r|^\alpha}{|\ln r| -a-b|\ln r|^{1-\alpha}}\cdot \end{aligned}$$ So we choose $N=N(r)$ such that $e^{-N} e^{b(N-\ln r)^{1-\alpha}}=1$.
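With this normalization, $N$ solves the fixed point equation $N=b(N+|\ln r|)^{1-\alpha}$, which can be computed by iteration since the corresponding map is a contraction for the relevant parameters. A numerical sketch (illustrative values), also confirming the bound $N\lesssim 1+b^{\frac1\alpha}+b|\ln r|^{1-\alpha}$:

```python
import math

alpha = 0.5
b = 8.0        # b = delta * V(t), illustrative value
L = 300.0      # L = |ln r|, illustrative value

# The condition e^{-N} e^{b (N - ln r)^(1-alpha)} = 1 means N = b (N + L)^(1-alpha).
# The map N -> b (N + L)^(1-alpha) is a contraction here, since its derivative
# b (1-alpha) (N + L)^(-alpha) is below 1 for these parameters.
N = 0.0
for _ in range(200):
    N = b * (N + L) ** (1 - alpha)

residual = abs(N - b * (N + L) ** (1 - alpha))
young_bound = 1 + b ** (1 / alpha) + b * L ** (1 - alpha)
print(N, residual, young_bound)
```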
Under this assumption we get $$\hbox{II}_{1}+\hbox{II}_{2}\lesssim |\ln r|^\alpha\ F\left(\frac{1+N+|\ln r|}{|\ln r| -a-b|\ln r|^{1-\alpha}}\right)+\frac{|\ln r|^\alpha}{|\ln r| -a-b|\ln r|^{1-\alpha}}\cdot$$ The condition on $N$ is also equivalent to $$N= b(N+|\ln r|)^{1-\alpha}.$$ Then from the Young inequality $$\begin{aligned} \label{ineqss} N&\lesssim& 1+b^{\frac1\alpha}+b|\ln r|^{1-\alpha}\end{aligned}$$ and therefore $$\begin{aligned} \frac{1+N+|\ln r|}{|\ln r| -a-b|\ln r|^{1-\alpha}} -1& =\frac{1+N+a+b|\ln r|^{1-\alpha}}{|\ln r_\psi|}\\ &\lesssim \frac{1+b^{\frac1\alpha}+b|\ln r|^{1-\alpha}}{|\ln r_\psi|}\cdot\end{aligned}$$ Using the [*cancellation property*]{} of $F$ at the point $1$, that is $\displaystyle{\sup_{x\in(0,1)}\frac{F(1+x)}{x}<\infty}$, together with (\[ineqss\]), it follows that $$\begin{aligned} |\ln r|^\alpha F\left( \frac{1+N+|\ln r|}{|\ln r| -a-b|\ln r|^{1-\alpha}} \right) & \lesssim |\ln r|^\alpha \frac{1+b^{\frac1\alpha}+b|\ln r|^{1-\alpha}}{|\ln r_\psi|} \\ & \lesssim \frac{|\ln r|^\alpha+|\ln r|^\alpha b^{\frac1\alpha}+b|\ln r|}{|\ln r|} \\ &\lesssim1+b+b^{\frac1\alpha}|\ln r|^{\alpha-1}.\end{aligned}$$ Since $r\in(0, r_t]$, according to (\[m12\]) we find $$1+b+b^{\frac1\alpha}|\ln r|^{\alpha-1} \lesssim 1+V(t)$$and so $$\begin{aligned} |\ln r|^\alpha F\left( \frac{1+N+|\ln r|}{|\ln r| -a-b|\ln r|^{1-\alpha}} \right) & \lesssim \left(1+V(t)\right).\end{aligned}$$ It follows that $$\hbox{II}_1+\hbox{II}_2 \lesssim 1+ V(t)+\frac{|\ln r|^\alpha}{|\ln r| -a-b|\ln r|^{1-\alpha}}.$$ To estimate the last term we use $$\begin{aligned} \frac{|\ln r|^\alpha}{|\ln r| -a-b|\ln r|^{1-\alpha}}&=&\frac{|\ln r|^\alpha}{|\ln r_\psi|}\\ &\leq&{|\ln r|^{\alpha-1}}\\ &\lesssim& 1.\end{aligned}$$ Finally, we get $$\label{basse3} \sup_{0< r\le r_t}\left({{|\ln r|^\alpha}}\fint_{\psi(B)}\Big|f- \fint_{\tilde B}f\Big|+\mathcal{I}_r\right)\lesssim \|f\|_{{{{\it L^{\alpha}mo}_F}}} \left(1+V(t)\right).$$ Let us now move to the second case: $r_t\le r\leq \frac12$.
According to (\[m12\]), $$\begin{aligned} \label{tita1} \nonumber |\ln r|&\le& |\ln r_t|\\ &\lesssim& 1+V^{\frac1\alpha}(t)\end{aligned}$$ which yields in turn $$\label{basse1} {{|\ln r|^\alpha}}\fint_{B}\Big|f\circ\psi- \fint_{B}f\circ\psi\Big|\lesssim \big(1+V(t)\big)\fint_{\psi(B)}|f|.$$ Let $\tilde{O}_j $ denote the ball which is concentric to $O_j$ and whose radius is equal to $1/2$. We can write, by the definitions, $$\begin{aligned} \fint_{\psi(B)}|f|&\le &\frac{1}{|B|}\sum_{j}|O_j|\fint_{2 O_j}\Big|f-\fint_{\tilde{O}_j }f\Big| +\frac{1}{|B|}\sum_{j\in\mathbb{N}}|O_j|\fint_{\tilde{O}_j} |f| \\ &\le& \frac{1}{|B|}\sum_{j}|O_j|\fint_{2 O_j}\Big|f-\fint_{\tilde{O}_j }f\Big| + \sup_{|B|=1}\fint_{B}|f| \\ &\le&\|f\|_{{{{\it L^{\alpha}mo}_F}}} \frac{1}{|B|}\sum_{j}|O_j|F(-\ln {r_j})+\|f\|_{{{ {\it L^{\alpha}mo}}}}.\end{aligned}$$ Now reproducing the same computations as for the first case leads to $$\begin{aligned} \frac{1}{|B|}\sum_{j}|O_j| F(-\ln {r_j})&\le& \frac{1}{|B|}\sum_{k\in\mathbb{N}}u_k F(k-\ln {r})\\ &\lesssim& F(N-\ln{r})\Big(1+ e^{-N} e^{b(N-\ln(r))^{1-\alpha}}\Big).\end{aligned}$$ This computation still holds as soon as $e^{-N} r \leq \ell(t)$, since we use Proposition \[prop\] for the radii $e^{-k}r$ with $ k\geq N$. We choose $N$ such that $ e^{-N} e^{b(N-\ln(r))^{1-\alpha}}= 1$ and we check that this choice legitimates the previous estimate since $e^{-N}r \leq \ell(t)$.
Consequently, we get from (\[ineqss\]) and (\[tita1\]) $$N+|\ln r| \lesssim 1+V^{\frac1\alpha}(t).$$ We then obtain $$\label{basse113} \fint_{\psi(B)}|f|\lesssim F\big(2+V^{\frac1\alpha}(t)\big)\, \|f\|_{{{{\it L^{\alpha}mo}_F}}}$$ and $$\begin{aligned} \label{basse13} \nonumber\mathcal{I}_r&\lesssim&{{|\ln r|^\alpha}}\fint_{\psi(B)}\Big|f\circ\psi- \fint_{\tilde B}f\Big|\\ &\lesssim& \big(1+V(t)\big)F\big(2+V^{\frac1\alpha}(t)\big)\, \|f\|_{{{{\it L^{\alpha}mo}_F}}}.\end{aligned}$$ Finally, we have obtained for $r\in [r_t,1/2]$ $$\mathcal{I}_r \lesssim \big(1+V(t)\big) F\big(2+V^{\frac1\alpha}(t)\big)\, \|f\|_{{{{\it L^{\alpha}mo}_F}}}.$$ Putting together the estimates of the case $1$ and the case $2$ yields $$\label{basse213} \|f\circ \psi\|_{{ {\it L^{\alpha}mo}}}\lesssim \big(1+V(t)\big)F\big(2+V^{\frac1\alpha}(t)\big)\, \|f\|_{{{{\it L^{\alpha}mo}_F}}}.$$ [$\bf(2)$]{} To deal with the second term in the ${{ {\it L^{\alpha}mo}}}_{1+F}$-norm we will make use of the arguments developed above for the ${{ {\it L^{\alpha}mo}}}$ part. Take two balls $B_1\triangleq B(x_1,r_1)$ and $B_2\triangleq B(x_2,r_2)$ and let us see how to estimate the quantity $$\mathcal{J}\triangleq\frac{|\av_{B_2}f\circ\psi-\av_{B_1}f\circ\psi|}{ 1+F(\frac{\ln r_2}{\ln r_1})}\cdot$$ There are different cases to consider. We start with the case $r_t\le r_1\le \frac12$.
Using (\[basse113\]), $$\begin{aligned} \frac{|\av_{B_1}f\circ\psi|}{1+ F(\frac{\ln r_2}{\ln r_1})}&\le&\av_{\psi(B_1)}|f| \\ &\lesssim&F\big(2+V^{\frac1\alpha}(t)\big)\|f\|_{{{{\it L^{\alpha}mo}_F}}} .\end{aligned}$$ If $r_2> r_t$ then, by repeating the same arguments for the quantity involving $B_2$, we get $$\mathcal{J} \lesssim F\big(2+V^{\frac1\alpha}(t)\big)\|f\|_{{{{\it L^{\alpha}mo}_F}}}.$$ If $r_2\leq r_t$ then we estimate the average on $\psi(B_2)$ by using (\[basse3\]): $$\begin{aligned} \av_{\psi(B_2)}|f| & \leq \av_{\psi(B_2)}\Big|f-\av_{\tilde{B}_2} f\Big| + \Big|\av_{\tilde{B}_2} f \Big| \\ &\lesssim |\ln r_2|^{-\alpha}(1+V(t))\|f\|_{{{{\it L^{\alpha}mo}_F}}}+ \Big|\av_{\tilde{B}_2} f \Big| \\ &\lesssim \|f\|_{{{{\it L^{\alpha}mo}_F}}}+ \Big|\av_{\tilde{B}_2} f \Big|,\end{aligned}$$ where $\tilde{B}_i\triangleq B(\psi(x_i), r_{i,\psi})$ and $r_{i,\psi}$ is the radius associated to $r_i,$ which was introduced in (\[rnot\]). It remains to treat the last term of the above inequality. For this goal we write $$\begin{aligned} \Big|\av_{\tilde{B}_2} f \Big| & \lesssim \Big|\av_{\tilde{B}_2} f - \av_{{B}_{(\psi(x_2),1/2)}} f\Big|+\sup_{B, r=\frac12}\Big| \av_{B}f \Big| \\ & \lesssim F(|\ln r_{2,\psi}|) \|f\|_{{{{\it L^{\alpha}mo}_F}}}+\|f\|_{{{ {\it L^{\alpha}mo}}}}.\end{aligned}$$ This yields, in view of (\[eqss:1\]) and Definition \[def657\], $$\begin{aligned} \frac{|\av_{B_2}f\circ\psi|}{1+ F(\frac{\ln r_2}{\ln r_1})} & \lesssim \Big(1+\frac{F(|\ln r_{2,\psi}|)}{1+F(\frac{\ln r_2}{\ln r_1})}\Big) \|f\|_{{{{\it L^{\alpha}mo}_F}}}\\ &\lesssim \Big(1+\frac{F(|\ln r_2|)}{1+F(\frac{\ln r_2}{\ln r_1})}\Big) \|f\|_{{{{\it L^{\alpha}mo}_F}}}\\ &\lesssim \big( 1+F(\ln r_1)\big)\|f\|_{{{{\it L^{\alpha}mo}_F}}}.\end{aligned}$$ Since $r_1\in (r_t, \frac12)$, using (\[tita1\]) we find $$\begin{aligned} \frac{|\av_{B_2}f\circ\psi|}{1+ F(\frac{\ln r_2}{\ln r_1})} &\lesssim F(2+V^{\frac1\alpha}(t))\|f\|_{{{{\it L^{\alpha}mo}_F}}}.\end{aligned}$$ Finally we get for $r_2\le r_t$ $$\mathcal{J} \lesssim F\big(2+V^{\frac1\alpha}(t)\big)\|f\|_{{{{\it L^{\alpha}mo}_F}}}.$$ To
achieve the proof of the second part of Theorem \[decom\], it remains to analyze the last case: $0<r_1\le r_t$. We decompose $\mathcal{J}$ as follows: $$\mathcal{J}= \frac{\mathcal{J}_{1}+\mathcal{J}_{2}+\mathcal{J}_3}{{1+F(\frac{\ln r_2}{\ln r_1})}},$$ with $$\begin{aligned} \mathcal{J}_{1}&\triangleq& |\av_{\psi(B_2)}f-\av_{\tilde B_2}f|+ |\av_{\psi(B_1)}f-\av_{\tilde B_1}f| \\ \mathcal{J}_{2}&\triangleq&{|\av_{\tilde B_2}f-\av_{2\tilde B_1}f|} \\ \mathcal{J}_{3}&\triangleq&|\av_{\tilde B_1}f-\av_{2\tilde B_1}f|.\end{aligned}$$ The first term $\mathcal{J}_1$ can be handled as before, and we get by (\[basse3\]) $$\begin{aligned} \Big|\av_{\psi(B_2)}f-\av_{\tilde B_2}f\Big|+\Big|\av_{\psi(B_1)}f-\av_{\tilde B_1}f\Big| &\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}}\big(1+V(t)\big)\big(|\ln r_2|^{-\alpha}+|\ln r_1|^{-\alpha}\big)\\ &\lesssim & \|f\|_{{{{\it L^{\alpha}mo}_F}}}\big(1+V(t)\big)|\ln r_t|^{-\alpha}\\ &\lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}}\end{aligned}$$ which gives in turn $$\begin{aligned} \frac{\mathcal{J}_1}{ 1+F(\frac{\ln r_2}{\ln r_1})}& \lesssim& \|f\|_{{{{\it L^{\alpha}mo}_F}}}.\end{aligned}$$ By the definition of the space ${{{\it L^{\alpha}mo}_F}}$, we have $$\mathcal{J}_2\lesssim F\left(\frac{\ln r_{2,\psi}}{\ln r_{1,\psi}}\right) \|f\|_{{{{\it L^{\alpha}mo}_F}}}.$$ Hence we get from the property $(2)$ of Definition \[def657\] combined with (\[eqss:1\]) $$\begin{aligned} {\frac{\mathcal{J}_2 }{1+ F(\frac{\ln r_2}{\ln r_1} )}}& \leq \frac{F\left(\frac{\ln r_{2,\psi}}{\ln r_{1,\psi}}\right)}{1+F(\frac{\ln r_2}{\ln r_1} )}\|f\|_{{{{\it L^{\alpha}mo}_F}}} \\ & \lesssim \left(1+F\Big(\frac{\ln r_{2,\psi}}{\ln r_2}\frac{\ln r_1}{\ln r_{1,\psi}}\Big)\right)\|f\|_{{{{\it L^{\alpha}mo}_F}}}\\ &\lesssim \|f\|_{{{{\it L^{\alpha}mo}_F}}}.\end{aligned}$$ Since $\tilde{B_1}$ and $2\tilde{B_1}$ are comparable and $r_{2\tilde B_1}\le \frac12$, we easily have $$\frac{\mathcal{J}_3}{1+F(\frac{\ln r_2}{\ln r_1} )}\lesssim \mathcal{J}_3\lesssim \|f\|_{{{{{\it L^{\alpha}mo}_F}}}}.$$ The proof of Theorem \[decom\] is now achieved.
Application to Euler equations ------------------------------ In this section we shall deal with the local and global well-posedness theory for the two dimensional Euler equations in the space ${{{\it L^{\alpha}mo}_F}}$. This will be performed through the use of the logarithmic estimate developed in Theorem \[decom\]. We shall now state a more general result than Theorem \[main\]. \[apriori\] Let $\omega_0\in{{{\it L^{\alpha}mo}_F}}\cap L^p$ with $\alpha\in(0,1)$ and $p\in(1,2)$. Then, 1. If $F$ belongs to the class $\mathcal{A}$, there exists $T>0$ such that the system admits a unique local solution $$\omega \in L^\infty([0,T]; {\it L^\alpha mo}_{1+F}).$$ 2. If $F$ belongs to the class $\mathcal{A}^\prime$, the system admits a unique global solution $$\omega \in L^\infty_{\textnormal{loc}}({{\mathbb R}}_+; {\it L^\alpha mo}_{1+F}).$$ The proof is based on the establishment of the [*a priori estimates*]{}, which are the cornerstone for the existence and the uniqueness parts. Here we omit the details about the existence and the uniqueness, which are classical; some of their elements can be found, for example, in the paper [@BK2]. $ {\bf (1)}$ Using Theorem \[decom\] one has $$\label{formal1} \|\omega(t)\|_{{{ {\it L^{\alpha}mo}}}\cap L^p}\lesssim \|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}\left(1+V(t) F\big(2+V^{\frac{1}{\alpha}}(t) \big)\right)$$ with $ \displaystyle{V(t)=\int_0^t\|u(\tau)\|_{L^{1-\alpha} L}d\tau.}$ Combining this estimate with Proposition \[coro\] implies, after integration in time, $$\label{linea1} V(t)\lesssim\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}\left(t+\int_0^t V(\tau) F(2+V^{\frac1\alpha}(\tau))d\tau\right).$$ According to Remark \[rmq23\] the function $F$ has at most a polynomial growth: $F(2+x)\lesssim 1+x^{\beta}$.
Therefore $$V(t)\lesssim\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}\Big(t+ t\, V(t)+t \,V^{1+\frac{\beta}{\alpha}}(t) \Big)$$ and consequently we can find $T\triangleq T(\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p})>0$ such that $$\forall t\,\in [0,T],\quad V(t)\le 1.$$ Plugging this estimate into (\[formal1\]) gives $$\|\omega(t)\|_{{{ {\it L^{\alpha}mo}}}\cap L^p}\lesssim \|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}.$$ Now from Theorem \[decom\] we also get $$\|\omega(t)\|_{{\it L^\alpha mo}_{1+F}\cap L^p}\lesssim \|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}.$$ ${\bf{(2)}}$ Fix $T>0$ an arbitrary number; then from (\[linea1\]) we deduce $$\forall\, t\in [0,T],\quad V(t)\le C\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}T+C\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p} \Big( \int_0^t V(\tau) F(2+V^{\frac1\alpha}(\tau))d\tau \Big)$$ and introduce the function $\mathcal{M}:[a,+\infty[\to [0,+\infty[$ defined by $$\mathcal{M}(y)=\int_{a}^{y}\frac{1}{x\, F(2+x^{\frac1\alpha})}dx,\quad a=\inf(A_T,1)\,\quad \, A_T=C\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}\, T.$$ Since $F$ belongs to the class $\mathcal{A}^\prime$ we can easily check that $$\int_{A_T}^{+\infty}\frac{1}{x\, F(2+x^{\frac1\alpha})}dx=+\infty.$$ Therefore, applying Lemma \[osgood1\], $$\forall t\in[0,T],\quad V(t)\le \mathcal{M}^{-1}\Big( \mathcal{M}(A_T)+C\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}\, t \Big).$$ This gives the global a priori estimates $$\forall t\geq 0,\quad V(t)\le \mathcal{M}^{-1}\Big( \mathcal{M}(A_t)+C\|\omega_0\|_{{{{\it L^{\alpha}mo}_F}}\cap L^p}\, t\Big).$$ Inserting this estimate into (\[formal1\]) allows us to get a global estimate for the vorticity. Hence, there exists a continuous function $G:{{\mathbb R}}_+\to {{\mathbb R}}_+$ related to $\mathcal{M}$ such that $$\label{basse313} \|\omega(t)\|_{{{ {\it L^{\alpha}mo}}}\cap L^p}\le G(t).$$ According to Theorem \[decom\] and the preceding estimate, $$\|\omega(t)\|_{\it{L^\alpha mo}_{1+F}}\le G(t).$$ This concludes the a priori estimates.
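The Osgood mechanism behind part (2) can be illustrated in the model case $F(x)=\ln x$: the saturated inequality $\rho'=\gamma\rho\ln\rho$ integrates exactly to $\ln\rho(t)=e^{\gamma t}\ln\rho(0)$, a double-exponential but globally finite bound, reflecting the fact that $\mathcal{M}(y)\to+\infty$. A numerical sketch with illustrative constants:

```python
import math

# Saturated Osgood inequality with Gamma(x) = ln x:
#   rho'(t) = gamma * rho * ln(rho),   rho(0) > 1,
# integrates exactly (via M(y) = ln ln y) to ln rho(t) = e^{gamma t} ln rho(0).
gamma, rho0, T, n = 0.5, math.e ** 2, 2.0, 20000

def rhs(r):
    return gamma * r * math.log(r)

# classical RK4 integration on [0, T]
h, r = T / n, rho0
for _ in range(n):
    k1 = rhs(r)
    k2 = rhs(r + 0.5 * h * k1)
    k3 = rhs(r + 0.5 * h * k2)
    k4 = rhs(r + h * k3)
    r += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

exact = math.exp(math.exp(gamma * T) * math.log(rho0))
print(r, exact)   # double-exponential in t, yet finite for every t
```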
Technical lemmata ================= We will prove the following lemma, used before in the inequality (\[precise\]). \[equivalence\] There exists a universal implicit constant such that for $r\in(0,r_t]$ and for $k\in \mathbb{N}$,$$u_k\lesssim \delta r^2 e^{-k} e^{\delta (V(t)+1)(k-\ln(r))^{1-\alpha}}.$$ If we denote by $c_0$ the implicit constant appearing in the Whitney lemma, then $$u_k\leq \Big|\Big\{ y\in \psi(B)\ \backslash \ d(y, \psi(B)^c)\leq c_0e^{-k}r\Big\}\Big|.$$ The preservation of Lebesgue measure by $\psi$ yields $$\Big|\Big\{ y\in \psi(B) \ \backslash \ d(y, \psi(B)^c)\leq c_0 e^{-k}r\big\}\big|=\Big|\Big\{ x\in B \ \backslash \ d(\psi(x), \psi(B)^c)\leq c_0 e^{-k}r\Big\}\Big|.$$ Since $\psi(B)^c=\psi(B^c)$, then $$u_k\leq \Big|\Big\{ x\in B \ \backslash \ d(\psi(x), \psi(B^c))\leq c_0 e^{-k}r\Big\}\Big|.$$ We set $$D_k=\Big\{ x\in B \ \backslash \ d(\psi(x), \psi(B^c))\leq c_0 e^{-k}r\Big\}.$$ Since $\psi(\partial B)$ is the frontier of $\psi(B^c)$, then $$D_k\subset \Big\{ x\in B \ \backslash \ \exists y\in \partial B \;{\rm with}\; |\psi(x)- \psi(y)|\leq c_0 e^{-k}r\Big\}.$$ The regularity of $\psi^{-1}$ (Proposition \[prop\]) implies, since $c_0 e^{-k}r\leq c_0r_t \lesssim \ell(t)$, $$D_k\subset \Big\{ x\in B \ \backslash \ \exists y\in \partial B: |x- y|\le c_0 e^{-k}r e^{ \delta(V(t)+1)(k-\ln(r))^{1-\alpha}}\Big\}.$$ Here we choose $\delta$ large enough such that $\delta>\ln(c_0)$ and $\delta>c_0$. Thus, $D_k$ is contained in the annulus $$\mathcal A=\Big\{ x\in B \ \backslash \ d(x,\partial B) \le \delta e^{-k}r e^{ \delta (V(t)+1)(k-\ln(r))^{1-\alpha}}\Big\}$$ and so (since we are in dimension $2$) $$u_k\leq |D_k|\lesssim \delta r^2 e^{-k} e^{\delta (V(t)+1)(k-\ln(r))^{1-\alpha}},$$ as claimed. We conclude this paper with the following result, which has been used in several places.
\[maj12\] Let $\alpha\in]0,1[, A,B,C,D>0$ and $F: [1,+\infty[\to{{\mathbb R}}_+$ be a differentiable nondecreasing function such that $$C>D\quad\hbox{and}\quad \|F^\prime\|_{L^\infty}\triangleq M<+\infty.$$ Consider the sequence $$w_n\triangleq\sum_{k\geq n}e^{-k+A(k+B)^{1-\alpha}} F\left(\frac{k+C}{D}\right).$$ Assume that $$\frac{A(1-\alpha)}{(n+B)^\alpha}\leq \frac14,$$ then $$w_n\le 4 e^{-n+A(n+B)^{1-\alpha}} F(\frac{n+C}{D})+ \frac{16M}{D} e^{-n+A(n+B)^{1-\alpha}}.$$ Let $R_n:=\sum_{k\geq n}e^{-k}$ and $v_n:=e^{A(n+B)^{1-\alpha}} F(\frac{n+C}{D})$. According to Abel’s formula $$w_n = R_n v_n + \sum_{k\geq n+1}R_k(v_k-v_{k-1}).$$ It is clear that $0<R_n \le\frac{e^{-n}}{1-e^{-1}}$ and $$w_n\le\frac{1}{{1-e^{-1}}} e^{-n+A(n+B)^{1-\alpha}} F(\frac{n+C}{D})+\frac{1}{1-e^{-1}}\sum_{k\geq n+1}e^{-k}|v_k-v_{k-1}|.$$ It remains to estimate the last sum. For this purpose we use the mean value theorem combined with the nondecreasing property of $F$ $$\begin{aligned} |v_k-v_{k-1}|\le \frac{A(1-\alpha)}{(k-1+B)^\alpha} e^{A(k+B)^{1-\alpha}} F(\frac{k+C}{D})+\frac{M}{D}e^{A(k+B)^{1-\alpha}}.\end{aligned}$$ Consequently $$\begin{aligned} \sum_{k\geq n+1}R_k|v_k-v_{k-1}|&\le& \frac{A(1-\alpha)}{(1-e^{-1})(n+B)^\alpha} w_{n+1}+\frac{M}{(1-e^{-1})D}\sum_{k\geq n+1} e^{-k+A(k+B)^{1-\alpha}}\\ &\le&\frac{A(1-\alpha)}{(1-e^{-1})(n+B)^\alpha} w_{n}+\frac{M}{(1-e^{-1})D}\sum_{k\geq n} e^{-k+A(k+B)^{1-\alpha}}.\end{aligned}$$ This leads to $$w_n\le \frac{1}{{1-e^{-1}}} e^{-n+A(n+B)^{1-\alpha}} F(\frac{n+C}{D})+\frac{A(1-\alpha)}{(1-e^{-1})(n+B)^\alpha} w_n+\frac{M}{(1-e^{-1})D}\sum_{k\geq n} e^{-k+A(k+B)^{1-\alpha}}.$$ Reproducing the same computations with $F(x)=1, M=0,$ we get $$\sum_{k\geq n} e^{-k+A(k+B)^{1-\alpha}}\le \frac{1}{{1-e^{-1}}} e^{-n+A(n+B)^{1-\alpha}} +\frac{A(1-\alpha)}{(1-e^{-1})(n+B)^\alpha} \sum_{k\geq n} e^{-k+A(k+B)^{1-\alpha}}.$$ Assuming that $$\frac{A(1-\alpha)}{(1-e^{-1})(n+B)^\alpha}\leq \frac12,$$ we get $$\sum_{k\geq n} e^{-k+A(k+B)^{1-\alpha}}\le 
\frac{2}{{1-e^{-1}}} e^{-n+A(n+B)^{1-\alpha}}$$ and $$w_n\le \frac{2}{{1-e^{-1}}} e^{-n+A(n+B)^{1-\alpha}} F(\frac{n+C}{D})+ \frac{4M}{(1-e^{-1})^2D} e^{-n+A(n+B)^{1-\alpha}}.$$ Since $\frac{1}{1-e^{-1}}\le 2$ we deduce $$w_n\le 4 e^{-n+A(n+B)^{1-\alpha}} F(\frac{n+C}{D})+ \frac{16M}{D} e^{-n+A(n+B)^{1-\alpha}} .$$ [9999]{} H.  Bahouri, J-Y.  Chemin and R.  Danchin, [*Fourier Analysis and Nonlinear Partial Differential Equations*]{}, Grundlehren der mathematischen Wissenschaften 343. H. Bahouri and J-Y. Chemin, [*Equations de transport relatives à des champs de vecteurs non-lipschitziens et mécanique des fluides*]{}, Arch. Rational Mech. Anal **127** (1994) 159–181. J. T.  Beale, T.  Kato and A.  Majda, [*Remarks on the Breakdown of Smooth Solutions for the Euler Equations*]{}, Comm. Math. Phys. [**94**]{} (1984) 61–66. F.  Bernicot and S.  Keraani, [*Sharp constants for composition with a measure-preserving map*]{}, preprint arXiv:1204.6008 (2012). F.  Bernicot and S.  Keraani, [*On the global well-posedness of the 2D Euler equations for a large class of Yudovich type data*]{}, preprint arXiv:1204.6006 (2012). D.  Chae, [*Weak solutions of 2D Euler equations with initial vorticity in $L\ln L$*]{}. J. Diff. Eqs., [**103**]{} (1993), 323–337. D.  Chae, [*Local existence and blow-up criterion for the Euler equations in the Besov spaces.*]{} Asymptotic Analysis [**38**]{} (2004) 339–358. J.-Y. Chemin, [*Fluides Parfaits Incompressibles*]{}, Astérisque 230 (1995); [*Perfect Incompressible Fluids*]{}, transl. by I. Gallagher and D. Iftimie, Oxford Lecture Series in Mathematics and Its Applications, Vol. [**14**]{}, Clarendon Press-Oxford University Press, New York (1998). J.-M.  Delort, [*Existence de nappes de tourbillon en dimension deux*]{}, J. Amer. Math. Soc., Vol. [**4**]{} (1991) 553–586. R.  DiPerna and A.  Majda, [*Concentrations in regularization for 2D incompressible flow*]{}. Comm. Pure Appl. Math. [**40**]{} (1987), 301–345. P. 
Gérard, [*Résultats récents sur les fluides parfaits incompressibles bidimensionnels \[d’après J.-Y. Chemin et J.-M. Delort\]*]{}, Séminaire Bourbaki, 1991/1992, no. 757, Astérisque, Vol. [**206**]{}, 1992, 411–444. Y.  Giga, T.   Miyakawa and H.  Osada, [ *Navier-Stokes flow with measures as initial vorticity*]{}. Arch. Rat. Mech. Anal. [**104**]{} (1988), 223–250. H. von Helmholtz, [*Über Integrale der hydrodynamischen Gleichungen, welche der Wirbelbewegung entsprechen*]{}. J. reine angew. Math. [**55**]{} (1858), 25–55. T. Hmidi, S. Keraani, [*Incompressible viscous flows in borderline Besov spaces*]{}. Arch. Ration. Mech. Anal. [**189**]{} (2008), 283–300. T. Kato and G. Ponce, [*Well-posedness of the Euler and Navier-Stokes equations in the Lebesgue spaces $L^p_s({{\mathbb R}}^2)$*]{}. Rev. Mat. Iberoamericana [**2**]{} (1986), no. 1-2, 73–88. E. H. Lieb and M. Loss, [ *Analysis*]{}, Grad. Studies in Math. [**14**]{}, Amer. Math. Soc., Providence, RI, 1997. P.-L.  Lions, [*Mathematical topics in fluid mechanics. Vol. 1*]{}. The Clarendon Press Oxford University Press, New York, 1996. M. C. Lopes Filho, H. J. Nussenzveig Lopes and Z. Xin, [*Existence of vortex sheets with reflection symmetry in two space dimensions*]{}, Arch. Ration. Mech. Anal., [**158**]{}(3) (2001), 235–257. A. J.  Majda and A. L.  Bertozzi, [*Vorticity and incompressible flow*]{}, Cambridge Texts in Applied Mathematics, vol. [**27**]{}, Cambridge University Press, Cambridge, 2002. H. C. Pak, Y. J. Park, [*Existence of solution for the Euler equations in a critical Besov space* ]{}$B_{\infty,1}^1({{\mathbb R}}^n)$. Comm. Partial Differential Equations [**29**]{} (2004), no. 7-8, 1149–1166. J. Peetre, [*On convolution operators leaving $L^{p,\lambda}$ spaces invariant*]{}. Ann. Mat. Pura Appl. (4) 72 1966, 295–304. P.  Serfati, [*Structures holomorphes à faible régularité spatiale en mécanique des fluides*]{}. J. Math. Pures Appl. [**74**]{} (1995), 95–104. S. 
Spanne, [*Some function spaces defined using the mean oscillations over cubes*]{}. Annali della Scuola Normale Superiore di Pisa, Classe di Scienze 3e serie, tome 19, no 4 (1965), 593–608. Y.  Taniuchi, [*Uniformly local Lp Estimate for 2D Vorticity Equation and Its Application to Euler Equations with Initial Vorticity in [BMO]{}*]{}. Comm. Math Phys., [**248**]{} (2004), 169–186. M.  Vishik, [*Incompressible flows of an ideal fluid with vorticity in borderline spaces of Besov type*]{}. (English, French summary) Ann. Sci. École Norm. Sup. (4) [**32**]{} (1999), no. 6, 769–812. M. Vishik, [*Hydrodynamics in Besov Spaces*]{}, Arch. Rational Mech. Anal. [**[145]{}**]{} (1998), 197–214. Y.  Yudovich, [*Nonstationary flow of an ideal incompressible liquid*]{}. Zh. Vych. Mat., [**3**]{} (1963), 1032–1066. Y.  Yudovich, [*Uniqueness theorem for the basic nonstationary problem in the dynamics of an ideal incompressible fluid*]{}. Math. Res. Lett., [**2**]{} (1995), 27–38. W. Wolibner,[*Un théorème sur l’existence du mouvement plan d’un fluide parfait homogène, incompressible, pendant un temps infiniment long,*]{} Math. Z, Vol. 37, 1933, pp. 698–627. [^1]: Here we say that two balls $Q_1$ and $Q_2$ are comparable if $Q_1 \subset 4Q_2$ and $Q_2 \subset 4Q_1$.
--- abstract: 'Meshfree finite difference methods for the Poisson equation approximate the Laplace operator on a point cloud. Positive stencils, i.e. stencils in which all neighbor entries are of the same sign, are desirable. Classical least squares approaches yield large stencils that are in general not positive. We present an approach that yields stencils of minimal size, which are positive. We provide conditions on the point cloud geometry under which positive stencils always exist. The new discretization method is compared to least squares approaches in terms of accuracy and computational performance.' author: - | Benjamin Seibold\ Department of Mathematics\ Massachusetts Institute of Technology\ 77 Massachusetts Avenue\ Cambridge MA 02139, USA bibliography: - 'meshfree\_poisson.bib' title: Minimal Positive Stencils in Meshfree Finite Difference Methods for the Poisson Equation --- Introduction ============ The numerical approximation of the Poisson equation is a fundamental task encountered in many applications. Often it appears as a subproblem in a more complex computation, for instance as a projection step in the simulation of incompressible flows [@Chorin1968]. Finite difference methods approximate the equation on a finite number of points. If the points can be placed on a regular grid, the approximation is simple and yields symmetric matrices. However, in many cases a regular point distribution is not possible or desired. Examples are the explicit representation of complex geometries, or applications in which the point positions are prescribed, for instance by scattered measurements or by particle methods [@KuhnertTiwariMeshfree2002]. If the points are distributed irregularly, neighborhood relations have to be established. This could be done by constructing a mesh. However, meshing can be costly, and thus may not be desired in applications with time-dependent geometries. Instead, meshfree neighborhood criteria can be defined, and meshfree finite difference stencils constructed.
Consistency conditions for stencils are derived in Sect. \[seibold:sec:fd\_poisson\]. We are interested in approaches that yield M-matrices, as explained in Sect. \[seibold:sec:m\_matrix\_structure\]. This requires the stencils to be positive. Least squares approaches, outlined in Sect. \[seibold:sec:least\_squares\_approaches\], in general fail to yield positive stencils. In Sect. \[seibold:sec:lp\_approach\] we present a new approach, based on sign constrained linear minimization, that yields positive stencils. In Sect. \[seibold:sec:lp\_conditions\_positive\_stencil\] we provide conditions on the point cloud geometry, so that positive stencils are guaranteed to exist. Further conditions, derived in Sect. \[seibold:sec:lp\_matrix\_connectivity\], ensure an M-matrix structure. In Sect. \[seibold:sec:numerics\] the new approach is compared to classical methods by numerical experiments. Meshfree Finite Differences for the Poisson Equation {#seibold:sec:fd_poisson} ==================================================== Consider the Poisson equation to be solved inside a domain $\Omega\subset\mathbb{R}^d$ $$\begin{cases} -\Delta u = f &\mathrm{in~}\Omega \\ u = g &\mathrm{on~}\Gamma_D \\ \frac{\partial u}{\partial n} = h &\mathrm{on~}\Gamma_N \end{cases} \label{seibold:eq:poisson_equation}$$ where $\Gamma_D\cup\Gamma_N=\partial\Omega$. Let a point cloud $X=\{{\mbox{\boldmath$x$}}_1,\dots,{\mbox{\boldmath$x$}}_n\}\subset\overline{\Omega}$ be given, which consists of interior points $X_i\subset\Omega$ and boundary points $X_b\subset\partial\Omega$. The point cloud is meshfree, i.e. no information about connections between points is provided. Meshfree finite difference approaches convert the problem into a linear system $$A\cdot{\mbox{\boldmath$\hat u$}} = {\mbox{\boldmath$\hat f$}}\;, \label{seibold:eq:poisson_system}$$ where the vector ${\mbox{\boldmath$\hat u$}}$ contains approximations to the values $u({\mbox{\boldmath$x$}}_i)$.
The $i$-th row of the matrix $A$ consists of the *stencil* corresponding to the point ${\mbox{\boldmath$x$}}_i$. We assume that the problem admits a unique smooth solution. Consistent Derivative Approximation {#seibold:subsec:consistent_derivatives} ----------------------------------- Consider a function $u\in C^2(\Omega\subset\mathbb{R}^d,\mathbb{R})$. We wish to approximate $\Delta u({\mbox{\boldmath$x$}}_0)$ using the function values of a finite number of points in a circular neighborhood $({\mbox{\boldmath$x$}}_0,{\mbox{\boldmath$x$}}_1,\dots,{\mbox{\boldmath$x$}}_m)\in B({\mbox{\boldmath$x$}}_0,r)$, where $B({\mbox{\boldmath$x$}}_0,r) = \{{\mbox{\boldmath$x$}}\in\overline{\Omega}:\|{\mbox{\boldmath$x$}}-{\mbox{\boldmath$x$}}_0\|<r\}$. Define the distance vectors ${\mbox{\boldmath$\bar x$}}_i = {\mbox{\boldmath$x$}}_i-{\mbox{\boldmath$x$}}_0 \ \forall i=0,\dots,m$. The function value at each neighboring point $u({\mbox{\boldmath$x$}}_i)$ can be expressed by a Taylor expansion $$u({\mbox{\boldmath$x$}}_i) = u({\mbox{\boldmath$x$}}_0)+\nabla u({\mbox{\boldmath$x$}}_0)\cdot{\mbox{\boldmath$\bar x$}}_i +\tfrac{1}{2}\nabla^2 u({\mbox{\boldmath$x$}}_0):{\left({\mbox{\boldmath$\bar x$}}_i\cdot{\mbox{\boldmath$\bar x$}}_i^T\right)}+e_i\;.$$ We use the matrix scalar product $A:B=\sum_{i,j}A_{ij}B_{ij}$. The error in the expansion is of order $e_i = O(r^3)$. A linear combination with coefficients $(s_0,\dots,s_m)$ equals $$\begin{aligned} \sum_{i=0}^m s_i u({\mbox{\boldmath$x$}}_i) = u({\mbox{\boldmath$x$}}_0){\left(\sum_{i=0}^m s_i\right)} +\nabla u({\mbox{\boldmath$x$}}_0)\cdot{\left(\sum_{i=1}^m s_i{\mbox{\boldmath$\bar x$}}_i\right)}\;\; \\ +\nabla^2 u({\mbox{\boldmath$x$}}_0):{\left(\frac{1}{2}\sum_{i=1}^m s_i{\left({\mbox{\boldmath$\bar x$}}_i\cdot{\mbox{\boldmath$\bar x$}}_i^T\right)}\right)} +{\left(\sum_{i=1}^m s_ie_i\right)}\;.\end{aligned}$$ This approximates the Laplacian, i.e.
$\sum_{i=0}^m s_i u({\mbox{\boldmath$x$}}_i) = \Delta u({\mbox{\boldmath$x$}}_0)+O(r)$ (the stencil entries scale as $s_i=O(r^{-2})$, hence the remainders contribute $\sum_{i=1}^m s_ie_i=O(r)$), if exactness for constant, linear and quadratic functions is satisfied $$\sum_{i=0}^m s_i = 0 \quad , \quad \sum_{i=1}^m{\mbox{\boldmath$\bar x$}}_i s_i = 0 \quad , \quad \sum_{i=1}^m{\left({\mbox{\boldmath$\bar x$}}_i\cdot{\mbox{\boldmath$\bar x$}}_i^T\right)}s_i = 2I\;. \label{seibold:eq:constraints_laplace}$$ \[seibold:def:consistency\] A stencil $(s_0,\dots,s_m)$ to a set of points $({\mbox{\boldmath$x$}}_0,{\mbox{\boldmath$x$}}_1,\dots,{\mbox{\boldmath$x$}}_m)\in B({\mbox{\boldmath$x$}}_0,r)$ is called *consistent* (with the Laplace operator), if the constraints are satisfied. The linear and quadratic constraints can be formulated as a linear system of equations $$V\cdot{\mbox{\boldmath$s$}} = {\mbox{\boldmath$b$}}\;, \label{seibold:eq:linear_system}$$ where $V\in\mathbb{R}^{k\times m}$ is the Vandermonde matrix given by ${\mbox{\boldmath$\bar x$}}_1,\dots,{\mbox{\boldmath$\bar x$}}_m$, and ${\mbox{\boldmath$s$}}\in\mathbb{R}^m$ is the stencil vector. In 2d, with ${\mbox{\boldmath$\bar x$}}_i=(\bar x_i,\bar y_i)$, the system reads as $$V = {\left(\begin{array}{ccc} \bar x_1 & \dots & \bar x_m \\ \bar y_1 & \dots & \bar y_m \\ \bar x_1\bar y_1 & \dots & \bar x_m\bar y_m \\ \bar x_1^2 & \dots & \bar x_m^2 \\ \bar y_1^2 & \dots & \bar y_m^2 \end{array}\right)} \ , \quad {\mbox{\boldmath$b$}} = {\left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 2 \\ 2 \end{array}\right)}\;. \label{seibold:eq:der_consistent_system_2d}$$ The number of constraints is $k = \frac{d(d+3)}{2}$. The constant constraint yields $s_0 = -\sum_{i=1}^m s_i$. Neumann boundary points can be treated in a similar manner. Approximating ${\frac{\partial u}{\partial{\mbox{\boldmath$n$}}}}({\mbox{\boldmath$x$}}_0)$ by $\sum_{i=0}^m s_i u({\mbox{\boldmath$x$}}_i)$ leads to the constraints $$\sum_{i=0}^m s_i = 0 \quad , \quad \sum_{i=1}^m{\mbox{\boldmath$\bar x$}}_i s_i = {\mbox{\boldmath$n$}}\;.
\label{seibold:eq:constraints_Neumann}$$ For each point, a meshfree finite difference approximation consists of two steps: First, define which points are its neighbors. Typically, more neighbors than constraints are chosen. Second, select a stencil. If the system is underdetermined, a minimization problem is formulated to select a unique stencil. A set of neighbors around a central point is called in *general configuration*, if the Vandermonde matrix $V$ has full rank. For $m$ neighboring points, one has no solution if $m<k$, infinitely many solutions if $m>k$, and one solution if $m=k$. In this case, the stencil ${\mbox{\boldmath$s$}}=V^{-1}\cdot{\mbox{\boldmath$b$}}$ can be computed by elimination, or by formulas for the determinant of a multivariate Vandermonde matrix [@Lorentz1992]. If the points are not in general configuration, e.g. for regular grids, the above rules may fail. A solution can exist for $m<k$ (e.g. 5-point stencil in 2d), and no solution may exist for $m>k$ (see example in [@SeiboldDiss2006 p. 59]). The concept of general configuration is impractical and too strict. Solutions may be acceptable, even if $V$ does not have full rank. The geometric condition presented in Sect. \[seibold:sec:lp\_conditions\_positive\_stencil\] ensures the existence of stencils. Minimal and Positive Stencils {#seibold:subsec:min_pos_stencils} ----------------------------- \[seibold:def:minimal\_stencil\] A consistent stencil $(s_0,\dots,s_m)$ is called *minimal*, if $m\le k$. Minimal stencils are beneficial for the sparsity of the system matrix, resulting in a lower memory consumption and a faster solution of the system. The total number of neighboring points is proportional to the effort of applying the matrix to a vector, which is proportional to the time for one step of a (semi-)iterative linear solver. Of course, the few neighbors have to be chosen wisely to preserve good convergence rates of iterative solvers.
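The $m<k$ case mentioned above (the 5-point stencil on a regular grid) can be checked numerically. The following sketch (assuming NumPy; the spacing $h=0.1$ and the cross-shaped neighbor set are chosen purely for illustration) sets up the 2d consistency system for the four axis neighbors and recovers the classical 5-point stencil:

```python
import numpy as np

h = 0.1
# four neighbors of the central point x0 = (0,0) in a cross: m = 4 < k = 5
nbrs = np.array([(h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)])
xb, yb = nbrs[:, 0], nbrs[:, 1]

# Vandermonde matrix V and right-hand side b of the 2d consistency system
V = np.vstack([xb, yb, xb * yb, xb**2, yb**2])
b = np.array([0.0, 0.0, 0.0, 2.0, 2.0])

# the points are not in general configuration (the xy-row of V vanishes),
# yet an exact solution exists; least squares recovers it
s = np.linalg.lstsq(V, b, rcond=None)[0]
s0 = -s.sum()  # central entry from the constant constraint

# the stencil applied to u(x,y) = x^2 + y^2 reproduces Laplacian(u) = 4
u = lambda p: p[0] ** 2 + p[1] ** 2
lap = s0 * u((0.0, 0.0)) + sum(si * u(p) for si, p in zip(s, nbrs))
```

Although $V$ does not have full rank here, each neighbor entry comes out as $1/h^2$ and the central entry as $-4/h^2$, i.e. the standard 5-point stencil.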
The results in [@SeiboldMeshfree2007; @SeiboldECMI2008] indicate that this is the case with the presented approach. \[seibold:rem:no\_continuous\_dependence\] For minimal stencils, it is impossible that the stencil values depend continuously on the point positions. Consider six points around a central point (in 2d), five of which are selected neighbors. Consider a continuous movement of one of the neighbors and the sixth point, such that at the end these two points have swapped their positions, without them ever being in the same place. At some instant during this movement, the sixth point has to become a neighbor, resulting in a jump in the stencil values. If used in a particle method, the lack of smoothness in minimal stencils (Rem. \[seibold:rem:no\_continuous\_dependence\]) may lead to a non-conservative scheme. In an isolated Poisson solver and in particle methods that are not conservative by construction (such as the finite pointset method [@KuhnertTiwariMeshfree2002]) the advantage of optimal sparsity often outweighs this drawback. \[seibold:def:positive\_stencil\] A consistent stencil $(s_0,\dots,s_m)$ is called *positive*, if $s_1,\dots,s_m\ge 0$. Due to the constant and quadratic consistency constraints, this implies $s_0<0$ for the central point. Positive stencils render the system matrix an L-matrix (Def. \[seibold:def:L\_matrix\]), which gives rise to an M-matrix structure (see Sect. \[seibold:sec:m\_matrix\_structure\]). The desirability of positive stencils has been pointed out by Demkowicz, Karafiat and Liszka [@DemkowiczKarafiatLiszka1984]. Classical approaches in general do not yield positive stencils. An “optimal star selection” [@DuarteLiszkaTworzyako1996] makes positive stencils likely, but they are not guaranteed (see Fig. \[seibold:fig\_neighborhood\_four\_quadrant\]). Fürst and Sonar derive topological conditions on point clouds for positive least squares stencils in 1d [@FuerstSonar2001]. In Sect.
\[seibold:sec:lp\_approach\] we present a strategy that approximates the Poisson equation on a point cloud by minimal positive stencils. In Sect. \[seibold:sec:lp\_conditions\_positive\_stencil\] conditions on a point cloud are presented (in 2d and 3d) which guarantee the existence of positive stencils. M-Matrices {#seibold:sec:m_matrix_structure} ========== Meshfree finite difference matrices are in general non-symmetric. Consider two points ${\mbox{\boldmath$x$}}_i$ and ${\mbox{\boldmath$x$}}_j$, each being a neighbor of the other, and a third point ${\mbox{\boldmath$x$}}_k$ which is a neighbor of ${\mbox{\boldmath$x$}}_i$ but not a neighbor of ${\mbox{\boldmath$x$}}_j$. Since each stencil entry depends on *all* its neighbors, the point ${\mbox{\boldmath$x$}}_k$ influences the matrix entry $a_{ij}$, but not the matrix entry $a_{ji}$. The negative Laplace operator in the problem is positive definite. For non-symmetric matrices, we have to ask for slightly less than positive definiteness. A property which implies a maximum principle and the convergence of linear solvers is the M-matrix structure. \[seibold:def:L\_matrix\] A square matrix $A=(a_{ij})_{ij}\in\mathbb{R}^{n\times n}$ is called *Z-matrix*, if $a_{ij}\le 0 \ \forall i\neq j$. A Z-matrix is called *L-matrix*, if $a_{ii}>0 \ \forall i$. We write $A\ge 0$ for $a_{ij}\ge 0 \ \forall i,j$. The same notation applies to vectors. A regular matrix $A$ is called *inverse positive*, if $A^{-1}\ge 0$. A Z-matrix is called *M-matrix*, if it is inverse positive. We use the M-matrix property, since it yields a sufficient condition for inverse positivity. There are inverse positive matrices that are not M-matrices, so another approach would be to employ alternative characterizations of inverse positive matrices [@FujimotoRanade2004]. Benefits of an M-Matrix Structure --------------------------------- The Poisson equation satisfies maximum principles. For instance, consider the problem with Dirichlet boundary conditions only.
If $f\le 0$ and $g\le 0$, then the solution satisfies $u\le 0$ [@Evans1998]. A discretization by an M-matrix mimics this property in a discrete maximum principle. \[thm:mmatrix\_discrete\_max\_principle\] Let $A$ be an M-matrix. Then $A{\mbox{\boldmath$x$}}\le 0$ implies ${\mbox{\boldmath$x$}}\le 0$. Conversely, a Z-matrix satisfying $A{\mbox{\boldmath$x$}}\le 0\Rightarrow{\mbox{\boldmath$x$}}\le 0$ is an M-matrix. $A$ is an M-matrix, thus $A^{-1}\ge 0$ by definition. Let ${\mbox{\boldmath$y$}} = A{\mbox{\boldmath$x$}}$. Then ${\mbox{\boldmath$x$}} = A^{-1}{\mbox{\boldmath$y$}}$. The component-wise inequalities $A^{-1}\ge 0$ and ${\mbox{\boldmath$y$}}\le 0$ imply ${\mbox{\boldmath$x$}}\le 0$. The reverse statement is proved in [@QuarteroniSaccoSaleri2000 p. 29]. If $A$ is an M-matrix, and $D$ its diagonal part, then $\rho(I-D^{-1}A)<1$, thus the Jacobi and the Gauß-Seidel iteration converge. The convergence of the Jacobi iteration is given in [@Hackbusch1994]. The Gauß-Seidel convergence follows from the Stein-Rosenberg theorem [@Varga2000]. The performance of multigrid methods for meshfree finite difference matrices has been investigated in [@SeiboldMeshfree2007; @SeiboldECMI2008]. Further benefits of an M-matrix structure with respect to linear solvers can be found in [@Varga2000]. A Sufficient Condition for an M-Matrix Structure {#subsec:mmatrix_suff_cond} ------------------------------------------------ Conditions that imply the M-matrix property are required, since the inverse matrix is typically not directly available. Let the unknowns be labeled by an index set $I$. We consider square matrices $A\in\mathbb{R}^{I\times I}$. The *graph* $G(A)$ of a matrix $A$ is defined by $G(A) = \{(i,j)\in I\times I : a_{ij}\neq 0\}$. The index $i\in I$ is called *connected* to $j\in I$, if a chain $i=i_0,i_1,\dots,i_{k-1},i_k=j \ \in I$ exists, such that $(i_{\nu-1},i_\nu)\in G(A) \ \forall \nu=1,\dots,k$.
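The statements above can be illustrated on the classical one-dimensional model problem. A small sketch (assuming NumPy; the tridiagonal Dirichlet Laplacian is an illustrative stand-in for a meshfree matrix, with the $1/h^2$ row scaling omitted) checks the L-matrix property, essential diagonal dominance, inverse positivity and $\rho(I-D^{-1}A)<1$:

```python
import numpy as np

n = 6
# 1d Dirichlet Laplacian on n interior points
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# L-matrix: non-positive off-diagonal entries, positive diagonal
offdiag = A - np.diag(np.diag(A))
is_L_matrix = (offdiag <= 0).all() and (np.diag(A) > 0).all()

# essential diagonal dominance: all rows weakly dominant; the first and
# last rows (adjacent to Dirichlet points) are strictly dominant, and the
# tridiagonal graph connects every index to them
row_sums = np.abs(offdiag).sum(axis=1)
weakly_dominant = (np.diag(A) >= row_sums).all()
strict_at_boundary = np.diag(A)[0] > row_sums[0] and np.diag(A)[-1] > row_sums[-1]

# inverse positivity (the defining M-matrix property) and the spectral
# radius of the Jacobi iteration matrix I - D^{-1} A
inverse_positive = (np.linalg.inv(A) >= 0).all()
J = np.eye(n) - np.diag(1.0 / np.diag(A)) @ A
rho = max(abs(np.linalg.eigvals(J)))
```

All four checks succeed, so the Jacobi (and, by the Stein-Rosenberg theorem, the Gauß-Seidel) iteration converges for this matrix.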
For a finite difference matrix, each index $i\in I$ corresponds to a point ${\mbox{\boldmath$x$}}_i$. The index $i\in I$ being connected to $j\in I$ means that the point ${\mbox{\boldmath$x$}}_i$ connects (indirectly) to the point ${\mbox{\boldmath$x$}}_j$ via stencil entries. A finite difference matrix is called *essentially irreducible* if every point is connected to a Dirichlet boundary point. A finite difference matrix that is not essentially irreducible is singular, since the points that are not connected to a Dirichlet point form a singular submatrix. A matrix $A\in\mathbb{R}^{I\times I}$ is called *essentially diagonally dominant*, if it is weakly diagonally dominant ($\forall i\in I: |a_{ii}|\ge\sum_{k\neq i}|a_{ik}|$), and every point $i\in I$ is connected to a point $j\in I$ which satisfies the strict diagonal dominance relation $|a_{jj}|>\sum_{k\neq j}|a_{jk}|$. \[seibold:thm:ess\_irred\_is\_ess\_diagdom\] An L-matrix arising as a finite difference discretization of the problem is essentially diagonally dominant, if it is essentially irreducible. For an L-matrix the constant consistency relation implies the weak diagonal dominance relation for every interior and Neumann point. Each row corresponding to a Dirichlet point satisfies the strict diagonal dominance relation. \[seibold:thm:Lmatrix\_is\_Mmatrix\] An essentially diagonally dominant L-matrix is an M-matrix. The proof is given in [@Hackbusch1994 p. 153]. If the problem can be discretized by positive stencils and every point is connected to a Dirichlet point, then the resulting matrix is an M-matrix. Least Squares Approaches {#seibold:sec:least_squares_approaches} ======================== Classical approaches for meshfree derivative approximation are moving least squares methods, based on scattered data interpolation [@LancasterSalkauskas1981], and local approximation methods, based on generalized finite difference methods [@LiszkaOrkisz1980].
Their application to meshfree settings has been analyzed in [@DuarteLiszkaTworzyako1996; @Levin1998]. Differences between moving and local approaches have been investigated in [@SeiboldDiss2006]. Around a central point ${\mbox{\boldmath$x$}}_0$, points inside a radius $r$ are considered. A distance weight function $w(\delta)$ is defined, which is small for $\delta>r$. We consider interpolating[^1] approaches with $w(\delta)=\delta^{-\alpha}$. Each neighboring point ${\mbox{\boldmath$x$}}_i$ is assigned a weight $w_i = w{\left(\|{\mbox{\boldmath$x$}}_i-{\mbox{\boldmath$x$}}_0\|_2\right)}$. A unique stencil is defined via a quadratic minimization problem $$\min\sum_{i=1}^m\frac{s_i^2}{w_i}, \ \mathrm{s.t.} \ V\cdot{\mbox{\boldmath$s$}} = {\mbox{\boldmath$b$}}\;. \label{seibold:eq:quadratic_minimization}$$ Using $W=\mathrm{diag}(w_i)$, its solution is $${\mbox{\boldmath$s$}} = WV^T(VWV^T)^{-1}\cdot{\mbox{\boldmath$b$}}\;. \label{seibold:eq:stencil_lsq}$$ After the weights $w_i$ are evaluated, the $k\times k$ matrix $VWV^T$ has to be set up, the linear system $(VWV^T)\cdot{\mbox{\boldmath$v$}}={\mbox{\boldmath$b$}}$ to be solved, and the product ${\mbox{\boldmath$s$}} = WV^T\cdot{\mbox{\boldmath$v$}}$ to be computed. This requires $k(k+1)m+\frac{k^3}{3}$ floating point operations [@SeiboldDiss2006 p. 150]. Least squares approaches do not yield minimal stencils, unless exactly $k$ neighbors are considered. In general, they also do not yield positive stencils. \[seibold:ex:qm\_nonpos\_stencil\] Consider ${\mbox{\boldmath$x$}}_0=(0,0)$ and 6 neighbors on the unit circle ${\mbox{\boldmath$x$}}_i=(\cos(\frac{\pi}{2}\varphi_i),\sin(\frac{\pi}{2}\varphi_i))$, where $(\varphi_1,\dots,\varphi_6)=(0,1,2,3,0.1,0.2)$ (see Fig. \[seibold:fig\_lsq\_nonpos\_stencil\_qm\]). Since all neighbors have the same distance from ${\mbox{\boldmath$x$}}_0$, the distance weight function does not play a role.
The formula above yields the non-positive least squares stencil ${\mbox{\boldmath$s$}}=(0.846,1.005,0.998,1.003,0.312,-0.164)$. However, the configuration admits a positive stencil, namely ${\mbox{\boldmath$s$}}=(1,1,1,1,0,0)$. Linear Minimization Approach {#seibold:sec:lp_approach} ============================ Least squares approaches do not guarantee positive stencils. As motivated in Sect. \[seibold:subsec:min\_pos\_stencils\], we wish to allow positive stencils only. Hence, we enforce positivity, i.e. we search for solutions in the polyhedron $$P = \{{\mbox{\boldmath$s$}}\in\mathbb{R}^m : V\cdot{\mbox{\boldmath$s$}} = {\mbox{\boldmath$b$}} , \ {\mbox{\boldmath$s$}}\ge 0\}\;. \label{seibold:eq:polyhedron}$$ This is the feasibility problem of linear optimization [@Vanderbei2001]. In Sect. \[seibold:sec:lp\_conditions\_positive\_stencil\] we show under which conditions solutions exist. If $P$ is nonempty and not degenerate, there are infinitely many feasible stencils. To single out a unique stencil we formulate a *linear* minimization problem $$\min \sum_{i=1}^m \frac{s_i}{w_i}, \ \mathrm{s.t.} \ V\cdot{\mbox{\boldmath$s$}}={\mbox{\boldmath$b$}}, \ {\mbox{\boldmath$s$}}\ge 0\;, \label{seibold:eq:linear_minimization}$$ where the weights $w_i=w(\|{\mbox{\boldmath$x$}}_i-{\mbox{\boldmath$x$}}_0\|)$ are defined by an appropriately decaying (see Thm. \[seibold:thm:mps\_distance\_weight\_function\]) non-negative distance weight function $w$. The problem is a linear program (LP) in standard form. It is bounded, since we have imposed sign constraints and the weights $w_i$ are all non-negative. If the polyhedron is nonempty, then the linear minimization approach yields minimal positive stencils. The sign constraints ensure that the selected stencil is *positive*. The existence of a *minimal* solution is ensured by the *fundamental theorem of linear programming* [@Vanderbei2001].
If the LP has a solution, then it also has a basic solution, in which at most $k$ of the $m$ stencil entries $s_i$ are different from zero. In contrast to least squares methods, the LP approach yields nonzero values only for a few selected points, and a continuous dependence on the point positions is not possible (see Rem. \[seibold:rem:no\_continuous\_dependence\]). One could ask why not remain with a least squares problem, and additionally impose sign constraints. While this would be a valid approach (the solution is obtained by Karush-Kuhn-Tucker methods [@Vanderbei2001]), it would have the worst of both worlds. The solution would not depend continuously on the point cloud geometry whenever the sign constraints are active, and the resulting stencil would not be minimal. When sign constraints are imposed, linear minimization is preferable. ![Non-positive least squares stencil[]{data-label="seibold:fig_lsq_nonpos_stencil_qm"}](fig_lsq_nonpos_stencil_qm){width="99.00000%"} ![Minimal positive stencil for various values of $\alpha$[]{data-label="seibold:fig_stencil_weight_min_alpha"}](fig_stencil_weight_min_alpha_1){width="99.00000%"} ![Minimal positive stencil for various values of $\alpha$[]{data-label="seibold:fig_stencil_weight_min_alpha"}](fig_stencil_weight_min_alpha_3){width="99.00000%"} ![Minimal positive stencil for various values of $\alpha$[]{data-label="seibold:fig_stencil_weight_min_alpha"}](fig_stencil_weight_min_alpha_5){width="99.00000%"} Solving the Linear Programs {#subsec:solving_lp} --------------------------- For every interior point consider a set of candidate points ($m>k$). A basic solution of the LP is computed. Only the nonzero stencil values enter the Poisson matrix. We refer to this approach as the *minimal positive stencil* (MPS) method. The LPs are small, but they have to be solved for every interior point.
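Such a per-point LP can be sketched on the six-point configuration of the example above (assuming SciPy is available; `method="highs-ds"` is one way to obtain a basic solution from `linprog`, and all weights equal one since all neighbors lie on the unit circle):

```python
import numpy as np
from scipy.optimize import linprog

# six neighbors on the unit circle at angles (pi/2)*phi_i,
# phi = (0, 1, 2, 3, 0.1, 0.2), as in the example above
phi = np.array([0.0, 1.0, 2.0, 3.0, 0.1, 0.2]) * np.pi / 2.0
x, y = np.cos(phi), np.sin(phi)
V = np.vstack([x, y, x * y, x**2, y**2])  # k = 5 constraints, m = 6 points
b = np.array([0.0, 0.0, 0.0, 2.0, 2.0])

# least squares stencil s = W V^T (V W V^T)^{-1} b; the equal weights
# drop out (W = I) -- the result contains a negative entry
s_lsq = V.T @ np.linalg.solve(V @ V.T, b)

# MPS stencil: min sum_i s_i / w_i  s.t.  V s = b, s >= 0 (here w_i = 1);
# the dual simplex method returns a basic solution
res = linprog(np.ones(6), A_eq=V, b_eq=b, bounds=(0, None), method="highs-ds")
s_mps = res.x
```

The least squares stencil is consistent but not positive, while the LP returns a consistent stencil with $s\ge 0$ and at most $k=5$ nonzero entries.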
To our knowledge, there are no general results about efficient methods for such small LPs, especially considering the special structure of the Vandermonde matrix. A numerical comparison of various methods has been presented in [@SeiboldDiss2006 p. 148]. Simplex methods perform best for the arising LPs. A basis change corresponds to one stencil point replacing another. The theoretical worst case performance of simplex methods is not observed. Typical runs find the solution in about $1.5 k$ steps, resulting in a complexity of $O(k^2m)$, which equals the effort of least squares approaches (see Sect. \[seibold:sec:least\_squares\_approaches\]). Geometric Interpretation of Minimal Positive Stencils {#subsec:mps_laplace_geometric_interpretation} ----------------------------------------------------- The MPS method forms a compromise between selecting neighbors close by and distributed nicely (see Def. \[seibold:def:points\_distributed\_nicely\]) around the central point. How much preference is given to which objective depends on the locality parameter $\alpha$ in the distance weight function $w(\delta) = |\delta|^{-\alpha}$. \[seibold:thm:mps\_distance\_weight\_function\] The MPS method only leads to reasonable results if the distance weight function decays faster than $|\delta|^{-2}$. Summing over the diagonal in the quadratic constraints yields the relation $\sum_{i=1}^m\|{\mbox{\boldmath$\bar x$}}_i\|_2^2s_i = 2d$. If $w(\delta)$ decays faster than $|\delta|^{-2}$, points close to the central point are given preference. If $w(\delta)=|\delta|^{-2}$, the LP is degenerate. If $w(\delta)$ decays slower than $|\delta|^{-2}$, the approach selects points far away from the central point, possibly resulting in “checkerboard” instabilities. The dependence of the MPS stencil on $\alpha$ is shown in Fig. \[seibold:fig\_stencil\_weight\_min\_alpha\].[^2] From the candidate points in the circle, five neighbors are selected.
While for $\alpha=1$ far away points are selected, $\alpha\in\{3,5\}$ yields nearby points. For $\alpha=3$ smaller angles are more important, for $\alpha=5$ smaller distances. Note that the MPS method never selects neighbors which are not distributed around the central point (as defined in Sect. \[seibold:sec:lp\_conditions\_positive\_stencil\]), even if those are the $k$ closest points. For regular grids, the MPS method selects standard finite difference stencils. For instance, for a regular Cartesian grid, the standard 5-point (2d), respectively 7-point (3d) stencils are obtained. In these cases, the basic solution is degenerate, i.e. some of the basis variables are zero. Neighborhood Criteria {#seibold:subsec:neighborhood} --------------------- The circular neighborhood criterion yields a large number of neighbors (unless $r$ is very small). Also, it does not guarantee positive stencils, as the example in Fig. \[seibold:fig\_neighborhood\_circular\] shows. The selected neighbors are marked bold. Neighbors with non-positive stencil values are indicated by a white center. The presented methods can also be based on other neighborhood criteria. Defining a neighborhood via a Delaunay triangulation (tetrahedrization in 3d) [@Shewchuk1999] yields significantly fewer neighbors. However, the construction is a meshing procedure, hence it is often undesirable in a meshfree context.[^3] Fig. \[seibold:fig\_neighborhood\_delaunay\] shows that a Delaunay neighborhood constructed from a Voronoi tessellation may yield non-positive stencils. In addition, it is possible that not enough neighbors are selected. If, in the example, the two negative points were removed, only four neighbors would be defined. The *four quadrant criterion* [@DuarteLiszkaTworzyako1996] (eight sectors in 3d) defines a local coordinate system and selects the two closest points from each sector. It guarantees the neighbors to be distributed around the central point.
However, it does not guarantee positive stencils, as the example in Fig. \[seibold:fig\_neighborhood\_four\_quadrant\] shows. ![Circular neighborhood[]{data-label="seibold:fig_neighborhood_circular"}](fig_neighborhood_circular){width="99.00000%"} ![Delaunay neighborhood[]{data-label="seibold:fig_neighborhood_delaunay"}](fig_neighborhood_delaunay){width="99.00000%"} ![Four quadrant neighborhood[]{data-label="seibold:fig_neighborhood_four_quadrant"}](fig_neighborhood_four_quadrant){width="99.00000%"} ![MPS neighborhood[]{data-label="seibold:fig_neighborhood_pos_stencil"}](fig_neighborhood_pos_stencil){width="99.00000%"} The stencil selected by the MPS method is shown in Fig. \[seibold:fig\_neighborhood\_pos\_stencil\]. It is minimal and positive, here achieving this property by selecting one point further away. The MPS method can be interpreted as a neighborhood criterion that is optimal (i.e. minimal and positive) with respect to the Laplace operator. A deeper discussion of various neighborhood criteria is presented in [@SeiboldDiss2006]. Conditions for the Existence of Positive Stencils {#seibold:sec:lp_conditions_positive_stencil} ================================================= We investigate when the polyhedron is nonempty, i.e. under which conditions positive stencils exist. We place the point of approximation at the origin ${\mbox{\boldmath$x$}}_0={\mbox{\boldmath$0$}}$. A set of neighbors $\{{\mbox{\boldmath$x$}}_1,\dots,{\mbox{\boldmath$x$}}_m\}\subset\mathbb{R}^d$ is given, where $m\ge k$. In order to establish a connection between the LP space $\mathbb{R}^m$ and the actual geometry space $\mathbb{R}^d$ we consider the dual problem, which is defined in $\mathbb{R}^k$.
For a real matrix $A$ and a real vector ${\mbox{\boldmath$b$}}$, exactly one of the following two systems has a solution: - $A\cdot{\mbox{\boldmath$x$}}={\mbox{\boldmath$b$}}$ for some ${\mbox{\boldmath$x$}}\ge 0$, or - $A^T\cdot{\mbox{\boldmath$w$}}\ge 0$ for some ${\mbox{\boldmath$w$}}$ satisfying ${\mbox{\boldmath$b$}}^T\cdot{\mbox{\boldmath$w$}}<0$. The proof is given in [@Vanderbei2001]. Applying Farkas’ lemma to our problem yields that system $V\cdot{\mbox{\boldmath$s$}}={\mbox{\boldmath$b$}}$ has no solution ${\mbox{\boldmath$s$}}\ge 0$, if and only if system $V^T\cdot{\mbox{\boldmath$w$}}\ge 0$ has a solution satisfying ${\mbox{\boldmath$b$}}^T\cdot{\mbox{\boldmath$w$}}<0$. The $i^{th}$ component of $V^T\cdot{\mbox{\boldmath$w$}}$ can be written as $${\left(V^T\cdot{\mbox{\boldmath$w$}}\right)}_i = {\mbox{\boldmath$a$}}^T\cdot{\mbox{\boldmath$x$}}_i+{\mbox{\boldmath$x$}}_i^T\cdot A\cdot{\mbox{\boldmath$x$}}_i\;,$$ where ${\mbox{\boldmath$a$}}={\left(w_1,\dots,w_d\right)}^T$ and $A$ is the symmetric matrix $$A = {\left(\begin{array}{cc} w_4 & w_3 \\ w_3 & w_5 \\ \end{array}\right)} \ \mathrm{(2d)} \quad\mathrm{resp.}\quad A = {\left(\begin{array}{ccc} w_7 & w_4 & w_5 \\ w_4 & w_8 & w_6 \\ w_5 & w_6 & w_9 \\ \end{array}\right)} \ \mathrm{(3d)}\;.$$ Given ${\mbox{\boldmath$w$}}$ (respectively ${\mbox{\boldmath$a$}}$ and $A$), we consider the quadratic form $f({\mbox{\boldmath$x$}}) = {\mbox{\boldmath$a$}}^T\cdot{\mbox{\boldmath$x$}}+{\mbox{\boldmath$x$}}^T\cdot A\cdot{\mbox{\boldmath$x$}}$. Since $A$ is symmetric, an orthogonal matrix $S\in O(d)$ exists, such that $S^TAS = D$, where $D=\mathrm{diag}{\left(\lambda_1,\dots,\lambda_d\right)}$. In the new coordinates, with ${\mbox{\boldmath$d$}} = S^T{\mbox{\boldmath$a$}}$, we define $$g({\mbox{\boldmath$x$}}) = f(S{\mbox{\boldmath$x$}}) = {\mbox{\boldmath$d$}}^T\cdot{\mbox{\boldmath$x$}}+{\mbox{\boldmath$x$}}^T\cdot D\cdot{\mbox{\boldmath$x$}}\;. 
\label{seibold:eq:pos_stencil_function_d}$$ If all eigenvalues $\lambda_i\neq 0$, then $D$ is regular. With ${\mbox{\boldmath$c$}} = -\frac{1}{2}D^{-1}{\mbox{\boldmath$d$}}$ we can write $$g({\mbox{\boldmath$x$}}) = ({\mbox{\boldmath$x$}}-{\mbox{\boldmath$c$}})^T\cdot D \cdot ({\mbox{\boldmath$x$}}-{\mbox{\boldmath$c$}})-{\mbox{\boldmath$c$}}^T\cdot D\cdot{\mbox{\boldmath$c$}}\;.$$ If one or two $\lambda_i=0$, we are in a degenerate case, and stick to the representation with ${\mbox{\boldmath$d$}}$ as parameter. Choosing ${\mbox{\boldmath$w$}}\in\mathbb{R}^k$ arbitrarily is equivalent to choosing $S\in O(d)$, ${\mbox{\boldmath$\lambda$}}={\left(\lambda_1,\dots,\lambda_d\right)}\in\mathbb{R}^d$ and ${\mbox{\boldmath$c$}}\in\mathbb{R}^d$ (respectively ${\mbox{\boldmath$d$}}\in\mathbb{R}^d$) arbitrarily. For any ${\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}\in\mathbb{R}^d$ define the domain $$H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}} = \{{\mbox{\boldmath$x$}}\in\mathbb{R}^d:g({\mbox{\boldmath$x$}})\ge 0\}\;.$$ For a set of points $X=\{{\mbox{\boldmath$x$}}_1,\dots,{\mbox{\boldmath$x$}}_m\}$ define $SX=\{S{\mbox{\boldmath$x$}}_1,\dots,S{\mbox{\boldmath$x$}}_m\}$. Farkas’ lemma translates to \[seibold:cor:pos\_stencil\_geometric\_criterion\] System $V\cdot{\mbox{\boldmath$s$}}={\mbox{\boldmath$b$}}$ has no solution ${\mbox{\boldmath$s$}}\ge 0$, if and only if $S\in O(d)$, ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^d$ with $\sum_{i=1}^d\lambda_i<0$ exist, such that $SX\subset H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$. In other words, no positive Laplace stencil exists iff the set of points $X$ can be transformed (via $S\in O(d)$), such that it is contained in the set $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ for some ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^d$ with $\sum_{i=1}^d\lambda_i<0$.
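The dichotomy of Farkas’ lemma can be sanity-checked on a toy instance. The following sketch uses made-up data (the matrix $A$ and vector $b$ are illustrative, not the stencil LP) and exhibits a certificate for the second alternative:

```python
# Toy illustration of Farkas' lemma (illustrative data, not the stencil LP).
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1.0, 0.0], [0.0, 1.0]]   # identity, so A x = b forces x = b
b = [1.0, -1.0]

# First alternative fails: the unique solution x = (1, -1) is not >= 0.
x = b[:]
assert not all(xi >= 0 for xi in x)

# Second alternative holds: w = (0, 1) satisfies A^T w >= 0 and b^T w < 0,
# exactly as the lemma predicts (one and only one system is solvable).
w = [0.0, 1.0]
assert all(v >= 0 for v in matvec(transpose(A), w))
assert sum(bi * wi for bi, wi in zip(b, w)) < 0
```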
![Necessary criterion[]{data-label="seibold:fig_proof_pos_stencil_2d_necc"}](fig_proof_pos_stencil_2d_points_neg){width="81.00000%"} ![Necessary criterion[]{data-label="seibold:fig_proof_pos_stencil_2d_necc"}](fig_proof_pos_stencil_2d_points_pos){width="81.00000%"} ![Necessary criterion[]{data-label="seibold:fig_proof_pos_stencil_2d_necc"}](fig_proof_pos_stencil_2d_necc){width="81.00000%"} The setup in Fig. \[seibold:fig\_proof\_pos\_stencil\_2d\_points\_neg\] shows a set of points that is completely contained in a domain $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$. Due to Cor. \[seibold:cor:pos\_stencil\_geometric\_criterion\], no positive stencil exists. The setup in Fig. \[seibold:fig\_proof\_pos\_stencil\_2d\_points\_pos\] shows a point setup which has a positive stencil solution. It is impossible to find a domain $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ and to rotate the set of points, such that all points are contained in the domain. We have derived a geometric condition, which is equivalent to the existence of positive stencils. However, due to the nonlinearity in $g$, it is difficult to translate it directly into geometric terms. Instead, we derive a necessary (but not sufficient) as well as a sufficient (but not necessary) criterion on the point geometry for the existence of a positive Laplace stencil. To our knowledge, the latter has not been given before. A Necessary Criterion for Positive Stencils ------------------------------------------- If $V\cdot{\mbox{\boldmath$s$}}={\mbox{\boldmath$b$}}$ has a solution ${\mbox{\boldmath$s$}}\ge 0$, then for any $S\in O(d)$, ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^d$ with $\sum_{i=1}^d\lambda_i<0$, there is a point ${\mbox{\boldmath$x$}}_i$ with $S{\mbox{\boldmath$x$}}_i\notin H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$.
For the particular choice $\lambda_1=-1$, $\lambda_i=0 \ \forall i>1$ and $c_1\gg\max_i\|{\mbox{\boldmath$x$}}_i\|$ it follows that for any $S\in O(d)$ at least one point must satisfy $x_1<0$. This yields the following \[seibold:thm:mps\_pos\_stencil\_necessary\] If a set of points $X\subset\mathbb{R}^d$ around the origin admits a positive Laplace stencil, then the points must not all lie in one and the same half space (with respect to an arbitrary hyperplane through the origin). This result is well known [@DuarteLiszkaTworzyako1996]. Due to the particular choice of ${\mbox{\boldmath$\lambda$}}$, this criterion is very crude, but easy to formulate in geometric terms. More careful estimates of the condition of Cor. \[seibold:cor:pos\_stencil\_geometric\_criterion\] may yield stricter criteria. ![Case $(0-)$ type 2[]{data-label="fig_proof_pos_stencil_2d_zm2"}](fig_proof_pos_stencil_2d_mm){width="99.00000%"} ![Case $(0-)$ type 2[]{data-label="fig_proof_pos_stencil_2d_zm2"}](fig_proof_pos_stencil_2d_pm){width="99.00000%"} ![Case $(0-)$ type 2[]{data-label="fig_proof_pos_stencil_2d_zm2"}](fig_proof_pos_stencil_2d_zm1){width="99.00000%"} ![Case $(0-)$ type 2[]{data-label="fig_proof_pos_stencil_2d_zm2"}](fig_proof_pos_stencil_2d_zm2){width="99.00000%"} A Sufficient Criterion for Positive Stencils -------------------------------------------- For any ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^d$ with $\sum_{i=1}^d\lambda_i<0$ we construct a domain $G_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}\supset H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$, which is $\mathbb{R}^d$ aside from a cone centered at the origin. If for any ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^d$, $S\in O(d)$ there is at least one point $S{\mbox{\boldmath$x$}}_i\notin G_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$, then $S{\mbox{\boldmath$x$}}_i\notin H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$, thus a positive Laplace stencil exists.
We call this criterion the *cone criterion*. \[seibold:thm:pos\_stencil\_domain2d\] Let ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^2$ with $\lambda_1+\lambda_2<0$. There always exists a cone $C_{{\mbox{\boldmath$v$}}}$ defined by ${\mbox{\boldmath$v$}}\cdot{\mbox{\boldmath$x$}}>\frac{1}{\sqrt{1+\beta^2}}\|{\mbox{\boldmath$x$}}\|$, where $\beta=\sqrt{2}-1$ (a cone with total opening angle 45$^\circ$; the direction vector ${\mbox{\boldmath$v$}}$ depends on ${\mbox{\boldmath$\lambda$}}$ and ${\mbox{\boldmath$c$}}$), such that $G_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}} = \mathbb{R}^d \setminus C_{{\mbox{\boldmath$v$}}}$ satisfies $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}\subset G_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$. We show that $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ and $C_{{\mbox{\boldmath$v$}}}$ do not intersect. Since the problem is invariant under interchanging coordinates, we can w.l.o.g. assume that $\lambda_2<0$. Including the degenerate case, three cases need to be considered: - **Case $(--)$:** $\lambda_1<0$, $\lambda_2<0$\ The set $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ is the interior of an ellipse centered at ${\mbox{\boldmath$c$}}$ with ${\mbox{\boldmath$0$}}\in\partial H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$. The vector ${\mbox{\boldmath$v$}}=-(\frac{\lambda_2}{\lambda_1}c_1,\frac{\lambda_1}{\lambda_2}c_2)$ is an outer normal vector. The cone $C_{{\mbox{\boldmath$v$}}}$ touches the ellipse only at the origin, as shown in Fig. \[fig\_proof\_pos\_stencil\_2d\_mm\]. - **Case $(+-)$:** $\lambda_1>0$, $\lambda_2<0$\ Fig. \[fig\_proof\_pos\_stencil\_2d\_pm\] shows the geometry. Define $\mu_1 = \frac{|\lambda_1|}{|\lambda_2|}<1$. The domain $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ is defined by $\tilde g(x_1,x_2) = \mu_1(x_1^2-2c_1x_1)-(x_2^2-2c_2x_2) \ge 0$. Due to symmetry we can assume $c_1,c_2\ge 0$.
For all ${\mbox{\boldmath$x$}}\in B$, where $B=\{(x_1,x_2)|x_1>0,x_2<0,|x_1|<|x_2|\}$, the function $\tilde g$ satisfies $$\begin{aligned} \tilde g({\mbox{\boldmath$x$}}) &= \mu_1(|x_1|^2-2c_1|x_1|)-(|x_2|^2+2c_2|x_2|) \\ &< (\mu_1-1)|x_2|^2-2(\mu_1c_1|x_1|+c_2|x_2|) < 0\;,\end{aligned}$$ hence $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}\cap B=\emptyset$. The domain $B$ is a 2d cone with opening angle 45$^\circ$, where ${\mbox{\boldmath$v$}}=(\frac{1}{2}\sqrt{2-\sqrt{2}},\frac{1}{2}\sqrt{2+\sqrt{2}})$, which proves the claim. - **Case $(0-)$:** $\lambda_1=0$, $\lambda_2<0$\ We use representation \[seibold:eq:pos\_stencil\_function\_d\]. Define $\mu_2=|\lambda_2|$. The domain $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$d$}}}$ is defined by $g(x_1,x_2) = d_1x_1+d_2x_2-\mu_2x_2^2 \ge 0$. Due to symmetry we can assume that $d_1,d_2\ge 0$. Two subcases have to be distinguished: - **Case $d_1\neq 0$:**\ The setup is shown in Fig. \[fig\_proof\_pos\_stencil\_2d\_zm1\]. For all ${\mbox{\boldmath$x$}}\in B$, where $B=\{(x_1,x_2)|x_1<0,x_2<0,|x_1|<|x_2|\}$, one has $g(x_1,x_2) = -d_1|x_1|-d_2|x_2|-\mu_2x_2^2 < 0$, hence $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$d$}}}\cap B=\emptyset$. As above, the domain $B$ is a 2d cone with opening angle 45$^\circ$. - **Case $d_1=0$:**\ The setup is shown in Fig. \[fig\_proof\_pos\_stencil\_2d\_zm2\]. The domain $g(x_1,x_2)\ge 0$ is the set $0\le x_2\le\frac{d_2}{\mu_2}$. Any cone contained in the domain $x_2<0$ proves the claim. In other words, if every angle between two neighboring points (seen from the central point) is no more than 45$^\circ$, then a positive stencil always exists. The 2d cone criterion is sharp: For any $\varepsilon>0$ a point setup can be constructed, such that all angles are less than $\frac{\pi}{4}+\varepsilon$, and a positive stencil does not exist. The construction is given in [@SeiboldDiss2006 p. 136]. Note that the resulting configurations are very unbalanced. Some points are very close to the central point, others are far away.
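In 2d, both the necessary half-space criterion and the sufficient cone criterion reduce to a check of the largest angular gap between consecutive neighbors, seen from the central point. The following sketch (an illustration of the criteria, not the paper’s implementation) makes this explicit:

```python
import math

def max_angular_gap(points):
    """Largest angle between consecutive neighbors, seen from the origin."""
    angles = sorted(math.atan2(y, x) for x, y in points)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps)

# Gap >= pi: all points lie in one closed half-plane, so by the necessary
# criterion no positive stencil can exist.
assert max_angular_gap([(1, 0), (1, 1), (1, -1)]) >= math.pi

# All gaps <= 45 degrees (eight equispaced neighbors): the 2d cone
# criterion guarantees a positive stencil.
octagon = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4))
           for k in range(8)]
assert max_angular_gap(octagon) <= math.pi / 4 + 1e-12
```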
In practice, such extreme cases are typically avoided by the construction and management of the point cloud, yielding positive stencils also for angles significantly larger than 45$^\circ$. \[seibold:thm:pos\_stencil\_domain3d\] Let ${\mbox{\boldmath$c$}},{\mbox{\boldmath$\lambda$}}\in\mathbb{R}^3$ with $\lambda_1+\lambda_2+\lambda_3<0$. There always exists a cone $C_{{\mbox{\boldmath$v$}}}$ defined by ${\mbox{\boldmath$v$}}\cdot{\mbox{\boldmath$x$}}>\frac{1}{\sqrt{1+\beta^2}}\|{\mbox{\boldmath$x$}}\|$, where $\beta=\sqrt{\frac{1}{6}(3-\sqrt{6})}$ (a cone with total opening angle 33.7$^\circ$), such that $G_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}} = \mathbb{R}^d \setminus C_{{\mbox{\boldmath$v$}}}$ satisfies $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}\subset G_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$. The following cases need to be considered: - **Cases $(---)$, $(0--)$, $(00-)$:** $\lambda_1\le 0$, $\lambda_2\le 0$, $\lambda_3<0$\ As in 2d, we use representation \[seibold:eq:pos\_stencil\_function\_d\]. Define $\mu_i=|\lambda_i|\ \forall i=1,2,3$. Allowing $\mu_1$ and $\mu_2$ to be zero, the domain $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$d$}}}$ is defined by $$g(x_1,x_2,x_3) = d_1x_1+d_2x_2+d_3x_3-\mu_1x_1^2-\mu_2x_2^2-\mu_3x_3^2 \ge 0\;.$$ First assume that if one $\mu_i=0$, then the corresponding $d_i\neq 0$. Due to symmetry we can w.l.o.g. assume that $d_1,d_2,d_3\ge 0$. For all ${\mbox{\boldmath$x$}}\in B$, where $B=\{(x_1,x_2,x_3)|x_1,x_2,x_3<0\}$, the function $g$ satisfies $$g(x_1,x_2,x_3) = -d_1|x_1|-d_2|x_2|-d_3|x_3|-\mu_1|x_1|^2-\mu_2|x_2|^2-\mu_3|x_3|^2 < 0\;.$$ The domain $B$ is not a cone, but the corresponding domain from the case $(++-)$ is contained in it. Hence, the cone constructed in that case can be used here. In the case that $\mu_i=0$ and $d_i=0$, the geometry reduces to the 2d (or trivial 1d) case. Since in 2d the desired estimates have been shown for a larger opening angle, the constructions transfer to the 3d case.
- **Case $(++-)$:** $\lambda_1>0$, $\lambda_2>0$, $\lambda_3<0$\ Define $\mu_1 = \frac{|\lambda_1|}{|\lambda_3|}<1$ and $\mu_2 = \frac{|\lambda_2|}{|\lambda_3|}<1$. Then $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ is defined by $$\tilde g(x_1,x_2,x_3) = \mu_1(x_1^2-2c_1x_1) +\mu_2(x_2^2-2c_2x_2)-(x_3^2-2c_3x_3) \ge 0\;.$$ Due to symmetry we can assume $c_1,c_2,c_3\ge 0$. For all ${\mbox{\boldmath$x$}}\in B$, where $B=\{(x_1,x_2,x_3)|x_1,x_2>0,x_3<0,|x_1|,|x_2|<\sqrt{\frac{1}{2}}|x_3|\}$, we have $$\begin{aligned} \tilde g({\mbox{\boldmath$x$}}) &= \mu_1(|x_1|^2-2c_1|x_1|)+\mu_2(|x_2|^2-2c_2|x_2|)-(|x_3|^2+2c_3|x_3|) \\ &< (\tfrac{1}{2}(\mu_1+\mu_2)-1)|x_3|^2-2(\mu_1c_1|x_1|+\mu_2c_2|x_2|+c_3|x_3|) < 0\;.\end{aligned}$$ Note that $B$ is not a cone. However, a 3d cone can always be contained inside $B$. Some geometric considerations yield that the cone with maximum opening angle contained inside $B$ is given by $\beta=\sqrt{\frac{1}{6}(3-\sqrt{6})}$ and ${\mbox{\boldmath$v$}}=\frac{1}{\sqrt{41-16\sqrt{6}}}{\left(2(\sqrt{3}-\sqrt{2}),2(\sqrt{3}-\sqrt{2}),1\right)}$. - **Case $(+0-)$:** $\lambda_1>0$, $\lambda_2=0$, $\lambda_3<0$\ As in the preceding degenerate cases, we describe the domain by representation \[seibold:eq:pos\_stencil\_function\_d\]. Define $\mu_1=|\lambda_1|$ and $\mu_3=|\lambda_3|$. The case $d_2=0$ reduces to the 2d case $(+-)$. Hence, w.l.o.g. we consider $d_1\ge 0$, $d_2>0$, $d_3\ge 0$. For all ${\mbox{\boldmath$x$}}\in B$, where $B=\{(x_1,x_2,x_3)|x_1,x_2,x_3<0,|x_1|<|x_3|\}$, we have $$g(x_1,x_2,x_3) = -d_1|x_1|-d_2|x_2|-d_3|x_3|+\mu_1|x_1|^2-\mu_3|x_3|^2 < 0\;.$$ The estimate holds, since $\mu_1<\mu_3$ and $|x_1|<|x_3|$. As before, a 3d cone with desired opening angle can be contained in $B$. The cone from the case $(++-)$ can be used here. - **Case $(+--)$:** $\lambda_1>0$, $\lambda_2<0$, $\lambda_3<0$\ Define $\mu_2=\frac{|\lambda_2|}{|\lambda_1|}$ and $\mu_3=\frac{|\lambda_3|}{|\lambda_1|}$. Since $\mu_2+\mu_3>1$, we assume w.l.o.g. $\mu_3\ge\frac{1}{2}$.
The domain $H_{{\mbox{\boldmath$\lambda$}},{\mbox{\boldmath$c$}}}$ is defined by $$\tilde g(x_1,x_2,x_3) = (x_1^2-2c_1x_1)-\mu_2(x_2^2-2c_2x_2)-\mu_3(x_3^2-2c_3x_3) \ge 0\;.$$ Due to symmetry we can assume $c_1,c_2,c_3\ge 0$. For all ${\mbox{\boldmath$x$}}\in B$, where $B=\{(x_1,x_2,x_3)|x_1>0,x_2,x_3<0,|x_1|,|x_2|<\sqrt{\frac{1}{2}}|x_3|\}$ we have $$\begin{aligned} \tilde g({\mbox{\boldmath$x$}}) &= (|x_1|^2-2c_1|x_1|)-\mu_2(|x_2|^2+2c_2|x_2|)-\mu_3(|x_3|^2+2c_3|x_3|) \\ &= \underbrace{(|x_1|^2-\mu_2|x_2|^2-\mu_3|x_3|^2)}_{<(\frac{1}{2}-\mu_3)|x_3|^2\le 0} -2(c_1|x_1|+\mu_2c_2|x_2|+\mu_3c_3|x_3|) < 0\;.\end{aligned}$$ The domain $B$ is the same as in case $(++-)$, merely reflected in the $x_1$-$x_3$ plane. Hence, a 3d cone can be placed in the same way. Unlike the 2d case, the 3d estimate is not sharp, due to the intermediate domain $B$. With significantly more algebra, it is possible to gain an opening angle that is a couple of degrees larger. The existence of a positive stencil implies the existence of a stencil. Configurations that yield an unsolvable Vandermonde system are automatically excluded by the cone criterion. \[seibold:def:points\_distributed\_nicely\] We call points *distributed nicely* around a central point if a test cone, with opening angle given by Thm. \[seibold:thm:pos\_stencil\_domain2d\] respectively Thm. \[seibold:thm:pos\_stencil\_domain3d\], always contains points, for any possible direction the cone points to. Condition on Point Cloud Geometry {#subsec:mps_conditions_geometry} --------------------------------- The cone criterion guarantees positive stencils. We now provide conditions on the point cloud geometry and the choice of candidate points, such that the cone criterion is guaranteed to be satisfied. As in [@Levin1998], we define \[seibold:def:mesh\_size\] Let $\Omega\subset\mathbb{R}^d$ be a domain and $X=\{{\mbox{\boldmath$x$}}_1,\dots,{\mbox{\boldmath$x$}}_n\}$ a point cloud.
The *mesh size* $h$ is defined as the minimal real number such that $\bar\Omega\subset \bigcup_{i=1}^n \bar B{\left({\mbox{\boldmath$x$}}_i,\frac{h}{2}\right)}$, where $\bar B{\left({\mbox{\boldmath$x$}},r\right)}$ is the closed ball of radius $r$ centered in ${\mbox{\boldmath$x$}}$ and $\bar\Omega$ is the closure of $\Omega$. We assume that a desired maximum mesh size is preserved by management of the point cloud, e.g. by inserting points into large holes. \[seibold:thm:pos\_stencil\_mesh\_size\] Let the point cloud have mesh size $h$. Let $\gamma$ be the opening angle of the cone derived in Thm. \[seibold:thm:pos\_stencil\_domain2d\] respectively Thm. \[seibold:thm:pos\_stencil\_domain3d\]. If the radius of considered candidate points satisfies $r > \frac{1}{\sin(\gamma/2)}\frac{h}{2}$, then for every interior point which is sufficiently far from the boundary, a positive stencil exists. Having mesh size $h$ implies that there are no holes larger in diameter than $h$, i.e. $\forall{\mbox{\boldmath$x$}}\in\Omega\;\exists{\mbox{\boldmath$x$}}_i\in X:\|{\mbox{\boldmath$x$}}_i-{\mbox{\boldmath$x$}}\|<\frac{h}{2}$. Fig. \[seibold:fig\_cone\_meshsize\] shows a ball with radius $r$ around the central point and a cone with opening angle $\gamma$. If the cone contains no point, there must be a ball of radius $\frac{h}{2}$ which contains no points. The claim follows by considering the triangle ($0,{\mbox{\boldmath$x$}},{\mbox{\boldmath$x$}}_i$). The specific ratios of candidate radius to maximum hole size radius are $$\frac{r}{h/2} > \sqrt{1+\tfrac{1}{\beta^2}} = \begin{cases} \sqrt{4+2\sqrt{2}} &\approx 2.61 \quad\mathrm{in~2d} \\ \sqrt{7+2\sqrt{6}} &\approx 3.45 \quad\mathrm{in~3d} \end{cases}$$ Using sharper estimates, the 3d ratio can be lowered to $\sqrt{6+2\sqrt{6}} \approx 3.30$. In practice, point clouds are much nicer than the worst case scenario, so significantly smaller ratios lead to positive stencils.
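The opening angles and radius ratios above follow from $\beta$ via $\gamma = 2\arctan\beta$ and $r/(h/2) > \sqrt{1+1/\beta^2}$. A quick numeric sanity check (a verification sketch, not part of the method itself):

```python
import math

# 2d: beta = sqrt(2) - 1 gives a 45 degree cone and ratio sqrt(4 + 2*sqrt(2)).
beta2 = math.sqrt(2) - 1
assert abs(2 * math.degrees(math.atan(beta2)) - 45.0) < 1e-12
assert abs(math.sqrt(1 + 1 / beta2**2) - math.sqrt(4 + 2 * math.sqrt(2))) < 1e-12

# 3d: beta = sqrt((3 - sqrt(6)) / 6) gives roughly 33.7 degrees and
# ratio sqrt(7 + 2*sqrt(6)).
beta3 = math.sqrt((3 - math.sqrt(6)) / 6)
assert abs(2 * math.degrees(math.atan(beta3)) - 33.7) < 0.05
assert abs(math.sqrt(1 + 1 / beta3**2) - math.sqrt(7 + 2 * math.sqrt(6))) < 1e-12

print(round(math.sqrt(4 + 2 * math.sqrt(2)), 2))  # 2.61
print(round(math.sqrt(7 + 2 * math.sqrt(6)), 2))  # 3.45
```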
![Guaranteeing the cone criterion close to the boundary[]{data-label="seibold:fig_cone_boundary"}](fig_cone_meshsize){width="90.00000%"} ![Guaranteeing the cone criterion close to the boundary[]{data-label="seibold:fig_cone_boundary"}](fig_cone_boundary){width="90.00000%"} Thm. \[seibold:thm:pos\_stencil\_mesh\_size\] is valid for any interior point which is far enough from the boundary that the mesh size criterion guarantees points to lie between the point under consideration and the boundary. For a layer of interior points close to the boundary, the cone criterion can be enforced by the following construction (see Fig. \[seibold:fig\_cone\_boundary\]): First, place boundary points sufficiently densely. Let their maximum distance be $d_\mathrm{p}$. Second, ensure that every interior point has a minimum distance $d_\mathrm{b}$ from the boundary. In 2d, this is $d_\mathrm{b}>\frac{4}{\pi}d_\mathrm{p}$. Neumann Boundary Points ----------------------- Assume the boundary $\partial\Omega$ is $C^1$ around Neumann boundary points. Consider a local coordinate system, i.e. ${\mbox{\boldmath$n$}}=(1,0)$ in 2d, respectively ${\mbox{\boldmath$n$}}=(1,0,0)$ in 3d. We obtain Neumann stencils by solving the linear minimization problem $$\min \sum_{i=1}^m \frac{s_i}{w_i}, \ \mathrm{s.t.} \ V\cdot{\mbox{\boldmath$s$}}={\mbox{\boldmath$n$}}, \ {\mbox{\boldmath$s$}}\ge 0\;, \label{seibold:eq:mps_neumann_system}$$ where the matrix $V$ is given by $$V = {\left(\begin{array}{ccc} x_1 & \dots & x_m \\ y_1 & \dots & y_m \end{array}\right)} \mbox{~in 2d, and~} V = {\left(\begin{array}{ccc} x_1 & \dots & x_m \\ y_1 & \dots & y_m \\ z_1 & \dots & z_m \end{array}\right)} \mbox{~in 3d.}$$ We consider the 3d case. The 2d geometry is contained as a special case. For an easier analysis, we consider a locally convex domain, i.e. $x_i\ge 0\ \forall i$.
For a Neumann boundary point a positive stencil exists iff the points’ projections onto the normal plane do not all lie in one and the same half space. Due to Farkas’ lemma, system \[seibold:eq:mps\_neumann\_system\] has no solution ${\mbox{\boldmath$s$}}\ge 0$, iff the system $V^T\cdot{\mbox{\boldmath$w$}}\ge 0$ has a solution satisfying $w_x<0$, where ${\mbox{\boldmath$w$}}=(w_x,w_y,w_z)^T$. Let no positive stencil exist. Then ${\mbox{\boldmath$w$}}\in\mathbb{R}^d$ with $w_x<0$ exists, such that $V^T\cdot{\mbox{\boldmath$w$}}\ge 0$, i.e. $w_xx_i+w_yy_i+w_zz_i\ge 0 \ \forall i$. This is equivalent to ${\mbox{\boldmath$k$}}\cdot{\left(\begin{array}{c}y_i\\z_i\end{array}\right)}\ge x_i \ \forall i$, where ${\mbox{\boldmath$k$}}=(\frac{w_y}{|w_x|},\frac{w_z}{|w_x|})$. This means that the $y$-$z$ projection of all points lies in one and the same half space (in the direction of ${\mbox{\boldmath$k$}}$). Conversely, assume that the $y$-$z$ projection of all points lies in one and the same half space. Let $I$ be the indices of all points in consideration. Define $I_p=\{i\in I: \ x_i>0\}$. Consider w.l.o.g. the case $z_i\ge 0 \ \forall i$, where $z_i>0 \ \forall i\in I_p$. Choose ${\mbox{\boldmath$w$}}={\left(-1,0,\frac{\max_{i\in I} x_i}{\min_{i\in I_p} z_i}\right)}$. Then for all $i\in I_p$ we have $${\mbox{\boldmath$w$}}^T\cdot{\mbox{\boldmath$x$}}_i = -x_i+\frac{\max_{i\in I} x_i}{\min_{i\in I_p} z_i}z_i \ge -x_i+\max_{i\in I} x_i\ge 0\;,$$ and for all $i\in I\setminus I_p$ one has ${\mbox{\boldmath$w$}}^T\cdot{\mbox{\boldmath$x$}}_i\ge 0$, since $x_i=0$. Thus, no positive stencil exists. \[seibold:rem:Neumann\_first\_order\] Construction \[seibold:eq:mps\_neumann\_system\] yields a first order accurate approximation of the normal derivative. Second order accuracy could be achieved by including quadratic terms in $V$. However, in this case no positive stencil exists, since the condition $\sum_{i=1}^m (x_i^2+y_i^2)s_i = 0$ cannot be satisfied.
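The projection criterion above again amounts to a half-plane test, now for the $y$-$z$ projections of the neighbors. A small sketch (assuming the local coordinate system with normal ${\mbox{\boldmath$n$}}=(1,0,0)$; illustrative, not the paper’s code):

```python
import math

def neumann_positive_stencil_possible(points):
    """For a 3d Neumann point with local normal n = (1, 0, 0): a positive
    stencil exists iff the (y, z) projections of the neighbors do not all
    lie in one and the same half-plane."""
    angles = sorted(math.atan2(z, y) for _, y, z in points)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < math.pi

# Projections surround the origin -> a positive Neumann stencil exists.
assert neumann_positive_stencil_possible(
    [(0.1, 1, 0), (0.2, -1, 1), (0.0, -1, -1)])
# All projections satisfy z >= 0 -> no positive Neumann stencil.
assert not neumann_positive_stencil_possible(
    [(0.1, 1, 0), (0.2, -1, 0), (0.3, 0, 1)])
```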
Treatment of Cracks ------------------- A non-convex part of the domain $\Omega$ requires special treatment if it is thinner than the local neighborhood radius. Fig. \[seibold:fig\_crack\] shows a proper treatment of a crack. Stencils on one side of the crack must not use points on the other side. In the MPS method this property can be guaranteed by the following construction: For a central point ${\mbox{\boldmath$x$}}_0\in\Omega$, only circular neighbors inside the star-shaped core $\bar{\Omega}_{{\mbox{\boldmath$x$}}_0} =\{{\mbox{\boldmath$x$}}\in\bar{\Omega}: (1-\alpha){\mbox{\boldmath$x$}}_0+\alpha{\mbox{\boldmath$x$}}\in\bar{\Omega}\ \forall\alpha\in [0,1]\}$ (bold dots in Fig. \[seibold:fig\_crack\]) are considered as candidates for the linear minimization problem. If the domain is defined implicitly as $\Omega = \{{\mbox{\boldmath$x$}}:\phi({\mbox{\boldmath$x$}})<0\}$, the point ${\mbox{\boldmath$x$}}$ does not lie in $\bar{\Omega}_{{\mbox{\boldmath$x$}}_0}$ if a point ${\mbox{\boldmath$y$}}\in [{\mbox{\boldmath$x$}}_0,{\mbox{\boldmath$x$}}]$ on the connecting segment satisfies $\phi({\mbox{\boldmath$y$}})>0$. ![Computational domain in 2d and 3d[]{data-label="seibold:fig_domain"}](fig_crack){width="99.00000%"} ![Computational domain in 2d and 3d[]{data-label="seibold:fig_domain"}](fig_domain_2d "fig:"){width="50.00000%"} ![Computational domain in 2d and 3d[]{data-label="seibold:fig_domain"}](fig_domain_3d "fig:"){width="42.00000%"} Minimal Stencils and Matrix Connectivity {#seibold:sec:lp_matrix_connectivity} ======================================== Due to Thm. \[seibold:thm:ess\_irred\_is\_ess\_diagdom\] and Thm. \[seibold:thm:Lmatrix\_is\_Mmatrix\], the matrix composed of positive stencils is an M-matrix if every interior and Neumann boundary point is connected to a Dirichlet boundary point. \[seibold:thm:mps\_reach\_boundary\] Consider the Poisson problem on a domain which has no holes (i.e. only an outer boundary).
With an MPS discretization, every interior point is connected to a boundary point. Assume there is a point $i\in I$ which is not connected to a boundary point. Define $I_i=\{j\in I:\mbox{$i$ is connected to $j$}\}$. No point in $I_i$ is connected to a boundary point. Hence, the set $I_i\subset I$ forms an island inside $\Omega$ which does not reach the boundary. Consider an extreme point of the convex hull of $I_i$. It only uses points in its stencil that lie inside the island, hence these lie in one and the same half space, which contradicts the necessary condition on positive stencils given by Thm. \[seibold:thm:mps\_pos\_stencil\_necessary\]. Although Thm. \[seibold:thm:mps\_reach\_boundary\] does not extend to interior boundaries, in practice the MPS method works for these, provided enough boundary points are placed. It remains to ensure that every point connects to a *Dirichlet* boundary point. Unfortunately, this cannot be concluded from the MPS method directly. It is possible that an isolated Dirichlet point is not used in the stencils of nearby interior points. Note that this phenomenon can also happen on regular grids. A single Dirichlet point in a corner of a domain may not be used by regular five-point stencils. If Dirichlet data is prescribed only in small regions, these regions have to be equipped with a sufficient number of boundary points. In addition, the MPS implementation has to ensure that these Dirichlet points are used by nearby points. If this is ensured, the MPS method is guaranteed to generate M-matrices. ![Error convergence in 2d – LSQ vs. MPS method[]{data-label="seibold:fig_error_analysis_2d"}](fig_error_analysis_2d_LSQ){width="99.00000%"} ![Error convergence in 2d – LSQ vs. MPS method[]{data-label="seibold:fig_error_analysis_2d"}](fig_error_analysis_2d_MPS){width="99.00000%"} Numerical Experiments {#seibold:sec:numerics} ===================== We investigate the numerical accuracy of the MPS method in comparison to a least squares approach.
As test problems we consider the Poisson equation in the unit box with a ball cut out, $\Omega=[0,1]^d \setminus B({\left(\frac{1}{2},\dots,\frac{1}{2},1.1\right)},0.44)$. Fig. \[seibold:fig\_domain\] shows the computational domain in 2d and 3d. In one case the boundary conditions are Dirichlet everywhere; in the other case they are Neumann at the bottom $x_d = 0$ and Dirichlet everywhere else. Given $g$, we set $f=\Delta g$ and $h = \frac{\partial g}{\partial n}$, so that the problem has the solution $u=g$. Specifically, we choose $g(x_1,x_2) = \tfrac{1}{c_2}{\left(x_1\sin(x_2+2)+x_2\sin(2x_1+1)\right)}$ in 2d and $g(x_1,x_2,x_3) = \tfrac{1}{c_3}{\left(x_1\sin(x_2+2)+x_2\sin(2x_3+3)+x_3\sin(3x_1+1)\right)}$ in 3d, with $c_2$ and $c_3$ such that $\max g-\min g=1$. The problem is discretized by a sequence of point clouds. The point clouds have a uniform average density and a minimum separation [@Levin1998] of $\delta=0.05$. Each point cloud is managed to satisfy the conditions for the existence of positive stencils, as derived in Sect. \[subsec:mps\_conditions\_geometry\]. Since many point clouds correspond to one mesh size $h$, given by Def. \[seibold:def:mesh\_size\], we sample a number of experiments to obtain an average error convergence rate. To every point cloud we apply a weighted least squares method (Sect. \[seibold:sec:least\_squares\_approaches\]) and the MPS method (Sect. \[seibold:sec:lp\_approach\]), both with $w(\delta)=\delta^{-4}$. ![Error convergence in 3d – LSQ vs. MPS method[]{data-label="seibold:fig_error_analysis_3d"}](fig_error_analysis_3d_LSQ){width="99.00000%"} ![Error convergence in 3d – LSQ vs. MPS method[]{data-label="seibold:fig_error_analysis_3d"}](fig_error_analysis_3d_MPS){width="99.00000%"} The numerical results are shown in Fig. \[seibold:fig\_error\_analysis\_2d\] for 2d, and Fig. \[seibold:fig\_error\_analysis\_3d\] for 3d. Plotted is the error, measured in the maximum norm, versus the mesh size $h$.
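As a sanity check of the manufactured 2d solution defined above, the hand-computed Laplacian of $g$ can be verified by finite differences (a sketch; the normalization constant is set to 1 here, which only rescales both sides of $f=\Delta g$):

```python
import math

c2 = 1.0  # placeholder normalization; the paper scales so max g - min g = 1

def g(x1, x2):
    return (x1 * math.sin(x2 + 2) + x2 * math.sin(2 * x1 + 1)) / c2

def laplace_g(x1, x2):
    # Hand-computed: d^2/dx1^2 [x2 sin(2 x1 + 1)] = -4 x2 sin(2 x1 + 1),
    #                d^2/dx2^2 [x1 sin(x2 + 2)]   =   -x1 sin(x2 + 2).
    return -(x1 * math.sin(x2 + 2) + 4 * x2 * math.sin(2 * x1 + 1)) / c2

# Second order central difference approximation of the Laplacian.
x1, x2, h = 0.3, 0.7, 1e-4
fd = (g(x1 + h, x2) + g(x1 - h, x2) + g(x1, x2 + h) + g(x1, x2 - h)
      - 4 * g(x1, x2)) / h**2
assert abs(fd - laplace_g(x1, x2)) < 1e-6
```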
Solid dots represent the all-Dirichlet version of the problem, while open circles show the error with partial Neumann boundary conditions. The reference lines have slopes one and two, respectively. One can observe the following: - Both approaches show a second order convergence rate for the pure Dirichlet problem, and first order convergence if Neumann boundary conditions are involved. While the derivation in Sect. \[seibold:subsec:consistent\_derivatives\] enforces only a first order accurate approximation of the Laplacian at interior points, point clouds tend to possess enough averaged symmetry to actually yield second order error convergence. On the other hand, the first order accurate approximation (Rem. \[seibold:rem:Neumann\_first\_order\]) at Neumann boundary points carries through. - The MPS method shows a larger variation in error over the ensemble of experiments. One reason for this effect could be the discontinuous dependence on the point positions (see Rem. \[seibold:rem:no\_continuous\_dependence\]). For two similar point clouds, the MPS method may select very different stencils. - Both methods yield roughly the same error constant. The average error is slightly lower with the MPS method. Computational Cost ------------------ For a 3d test problem, the MPS method is compared with the LSQ method in terms of computational cost. For a sequence of point clouds, Fig. \[seibold:fig\_cputimes\] shows the CPU times for the setup of the system matrix (left plot), the solution of the arising system with a BiCG scheme (center plot), and the solution with an algebraic multigrid (AMG) scheme (right plot). The latter is performed using SAMG [@SAMG] by the *Fraunhofer Institute for Algorithms and Scientific Computing*.
![CPU times for setup and solve by BiCG and AMG[]{data-label="seibold:fig_cputimes"}](fig_cputimes_setup){width="99.00000%"} ![CPU times for setup and solve by BiCG and AMG[]{data-label="seibold:fig_cputimes"}](fig_cputimes_solve_bicg){width="99.00000%"} ![CPU times for setup and solve by BiCG and AMG[]{data-label="seibold:fig_cputimes"}](fig_cputimes_solve_amg){width="99.00000%"} As expected (Sect. \[subsec:solving\_lp\]), the cost of setting up the system matrix is roughly equal for MPS and LSQ method. In fact, MPS is slightly slower with the used simplex method. However, more efficient linear programming methods may turn the tide towards the MPS method. On the other hand, the cost for solving the large linear system is significantly reduced by the MPS method. The speedup factor equals the factor in sparsity, i.e. the MPS approximation does not modify the convergence rate. While the AMG solver shows a cost roughly linear in the number of unknowns, solvers further away from optimal effort (like BiCG) will greatly benefit from the sparsity of the MPS approximation as the number of unknowns increases. Conclusions and Outlook ======================= We have presented a meshfree approach that constructs minimal positive stencils for the Laplace operator on a cloud of points. We have shown that under moderate assumptions on the local resolution of the point cloud, positive stencils always exist. The method approximates the Poisson equation by M-matrices, which are optimally sparse. Both properties are beneficial, and they are in general not met by classical least squares approaches. Numerical tests show that with the presented approach, the approximation error roughly equals the error of classical approaches. The computational cost to construct the new approximation is comparable to the cost of least squares methods. On the other hand, for solving the arising linear system, both cost and memory requirements are reduced significantly due to the optimal sparsity. 
An efficient solution of the linear programs is a crucial point in the presented method, worth a deeper analysis. The application to particle methods shall be investigated. While the presented approach can stand as a method of its own, it may also increase the efficiency of other approaches. The minimal stencils can be augmented by additional neighbors and the final stencil be computed by a least squares method. For instance, a local neighborhood radius can be based on the farthest minimal stencil point, thus increasing sparsity and adding local adaptivity to existing meshfree codes. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Axel Klar, J[ö]{}rg Kuhnert, Sven Krumke, Helmut Neunzert, and Sudarshan Tiwari for helpful discussions and comments. The support by the National Science Foundation is acknowledged. The author was partially supported by NSF grant DMS–0813648. [^1]: If $\lim_{\delta\to 0} w(\delta)$ exists, the approach is called *approximating*, otherwise *interpolating*. [^2]: The figures are 2d for simplicity of presentation. The MPS method applies directly to, and shows its strength, in 3d. [^3]: Hybrid approaches exist that use a Delaunay mesh combined with meshfree methods.
--- abstract: 'This work focuses on dynamics arising from reaction-diffusion equations, where the profile of propagation is no longer characterized by a single front, but by a layer of several fronts which we call a propagating terrace. This means, intuitively, that transition from one equilibrium to another may occur in several steps, that is, successive phases between some intermediate stationary states. We establish a number of properties on such propagating terraces in a one-dimensional periodic environment, under very wide and generic conditions. We are especially concerned with their existence, uniqueness, and their spatial structure. Our goal is to provide insight into the intricate dynamics arising from multistable nonlinearities.' author: - | <span style="font-variant:small-caps;">Thomas Giletti$^{a}$[^1] and Hiroshi Matano$^b$</span>\ $^{a}$[Univ. Lorraine, Institut Elie Cartan Lorraine, UMR 7502, Vandoeuvre-lès-Nancy, F-54506, France]{}\ $^b$[Graduate School of Mathematical Sciences, University of Tokyo, Komaba, Tokyo 153-8914, Japan]{} title: '**Existence and uniqueness of propagating terraces**' --- Introduction {#sec:intro} ============ We consider the following reaction-diffusion equation in one space dimension: $$\tag{$E$}\label{eqn1} \partial_t u(t,x)= \partial_{x} (a(x) \partial_x u(t,x))+ f(x,u(t,x)), \ \forall (t,x) \in {\mathbb{R}}\times {\mathbb{R}},$$ where, throughout the paper, the functions $a$ and $f$ satisfy the following regularity and periodicity assumptions: $$\label{eqn:a} 0< a \in C^2 ({\mathbb{R}},{\mathbb{R}}) \mbox{ and } a (x+L) \equiv a(x),$$ $$\label{eqn:f} f \in C^1 ({\mathbb{R}}^2,{\mathbb{R}}) \mbox{ and } f(x+L,u) \equiv f(x,u),$$ for some $L>0$.\ Furthermore, we will always assume that there exist at least two equilibrium states, namely 0 and $p$. 
More precisely, $$\label{eqn:f_0} f(x,0) \equiv 0,$$ and there exists an $L$-periodic and positive stationary solution: $$\label{eqn:f_p} \left\{ \begin{array}{l} \partial_{x} (a(x) \partial_x p) + f(x,p) =0, \ \forall x \in {\mathbb{R}}, \vspace{3pt}\\ p ( x+L) \equiv p (x), \ p(x)>0. \end{array} \right.$$ Clearly, the function $p$ is also a stationary solution of the following equation, the $L$-periodic counterpart of (\[eqn1\]): $$\tag{$E_{per}$}\label{eqn1-per} \left\{ \begin{array}{l} \partial_t u(t,x)= \partial_{x} (a(x) \partial_x u(t,x)) + f(x,u(t,x)), \ \forall (t,x) \in {\mathbb{R}}\times {\mathbb{R}}, \vspace{3pt}\\ u (t, x+L) \equiv u (t,x), \ \forall (t,x) \in {\mathbb{R}}\times {\mathbb{R}}. \end{array} \right.$$ It is obvious that any solution of (\[eqn1-per\]) is also a solution of (\[eqn1\]).\ The goal of this paper is to investigate propagation dynamics between the two equilibria 0 and $p$, for very large classes of nonlinearities, including but not limited to the classical monostable, ignition or bistable cases. The present work is a continuation of our previous paper [@DGM], where we studied propagation from 0 to $p$ under an additional stability assumption on $p$. It also generalizes the earlier work of Fife and McLeod [@FifMcL77], in which some related problems were studied for spatially homogeneous multistable nonlinearities. Our analysis will highlight complex dynamics which, unlike in the standard cases, cannot be described by a single front. They will instead involve finite sequences of fronts, which we will call propagating terraces. We set forth this notion as a natural generalization of traveling waves, providing a new and robust framework for describing dynamics of a highly general class of periodic reaction-diffusion equations. Propagating terraces: some definitions -------------------------------------- The key concept throughout this paper is that of a propagating terrace, which we define below. 
It is a generalization of the classical notion of a traveling wave. Let us first recall the notion of pulsating traveling waves [@BH02; @Wein02]. A pulsating traveling wave solution of (\[eqn1\]) connecting two periodic stationary solutions $p_1$ and $p_2$ of (\[eqn1-per\]) with speed $c$ is a particular entire solution of the type $U(x-ct,x)$ where $U(z,x)$ is periodic in its second variable, and satisfies the asymptotics $U (+\infty,\cdot)=p_2 (\cdot)$ and $U(-\infty,\cdot) = p_1 (\cdot)$. Moreover, when the speed $c=0$, we say that it is a stationary wave. \[def:terrace\] A **propagating terrace** $\mathcal{T}$ connecting 0 to $p$ is a pair of finite sequences $(p_k)_{0 \leq k \leq N}$ and $(U_k)_{1 \leq k \leq N}$ such that: - The $p_k$ are $L$-periodic stationary solutions of $($\[eqn1\]$)$ satisfying $$p=p_0 >p_1 >...>p_N =0.$$ - For any $1 \leq k \leq N$, $U_k$ is a pulsating traveling wave solution of $($\[eqn1\]$)$ connecting $p_k$ to $p_{k-1}$ with speed $c_k \in {\mathbb{R}}$. - The sequence $(c_k)_k$ satisfies $$c_1 \leq c_2 \leq ... \leq c_N .$$ Moreover, we will refer to the sequence $(p_k)_{1\leq k \leq N}$ as the **platforms** of the terrace $\mathcal{T}$, and hence to $N$ as the number of platforms of $\mathcal{T}$. Similarly, for any pair of $L$-periodic stationary solutions $p$, $q$ satisfying $p > q$, we can define a propagating terrace connecting $q$ to $p$ in a completely analogous manner.\ Propagating terraces were already introduced under this name in our previous work [@DGM]. The definition we use here is slightly more general because the speeds $c_k$ are no longer required to be nonnegative and, therefore, the upper and lower parts of the terrace may spread in opposite directions. This is natural in some situations, such as when 0 and $p$ are both unstable (for instance, in the so-called heterozygote superior case from genetics [@AW75]). 
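A propagating terrace can be observed numerically in the simplest special case of (\[eqn1\]), with $a \equiv 1$ and a spatially homogeneous tristable $f$; the quintic nonlinearity and all grid parameters below are our own illustrative choices, not taken from the paper. Starting from a Heaviside-type initial datum, the solution develops two fronts with ordered speeds, separated by a widening plateau at the intermediate equilibrium $1/2$.

```python
import numpy as np

# Illustrative tristable nonlinearity: stable states 0, 1/2, 1; the unstable
# zeros 0.05 and 0.75 are placed so that the lower front outruns the upper one.
def f(u):
    return -4.0 * u * (u - 0.05) * (u - 0.5) * (u - 0.75) * (u - 1.0)

dx, dt, T = 0.5, 0.05, 200.0
x = np.arange(0.0, 400.0, dx)
u = np.where(x < 100.0, 1.0, 0.0)        # Heaviside-type initial datum

for _ in range(int(T / dt)):             # explicit Euler time stepping
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u += dt * (lap + f(u))
    u[0], u[-1] = 1.0, 0.0               # pin the far-field equilibria

# Width of the region where u stays near the intermediate platform 1/2:
plateau = np.sum(np.abs(u - 0.5) < 0.03) * dx
```

The plateau widens roughly linearly in time, at the rate at which the lower (faster) level sets drift away from the upper ones; this is exactly the speed ordering $c_1 \leq c_2$ required in the definition above.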
Incidentally, as mentioned in our previous paper [@DGM], the notion of terrace already appears in the work of Fife and McLeod [@FifMcL77] under the name “minimal decomposition”. However, their analysis was restricted to homogeneous multistable nonlinearities, for which phase plane methods suffice to determine the structure of the terrace. Our results will extend theirs to more general classes of nonlinearities with spatial periodicity, for which the usual ODE tools no longer work. We also refer to [@Risler1; @Risler2] for related results in the case of homogeneous systems with a gradient structure, and to [@Polacik] for convergence results in higher dimension by symmetrisation techniques. In the above definition, ordering the speeds of a terrace is fundamental for its purpose. Indeed, by analogy with single traveling waves in the standard cases [@Bramson83; @FifMcL77; @Giletti], terraces are to describe the dynamics arising in the large time behavior of solutions of the Cauchy problem associated with equation (\[eqn1\]). For instance, convergence to a terrace from Heaviside type initial data $H(a-x) p(x)$, where $a \in {\mathbb{R}}$ and $H$ is the classical Heaviside function, was already proved in [@DGM] under the additional assumption that the solution converges locally uniformly to $p$ for some compactly supported initial datum. Such solutions are close to $0$ (respectively $p$) on the far right (respectively left) of the domain for any positive time, so that lower level sets must spread faster to the right than the upper ones. In other words, a terrace should be of an intuitively appropriate shape, as illustrated in Figure \[fig:propag\_terrace\], at least as $t \rightarrow +\infty$. 
![(above) A three platforms terrace;\ (below) A terrace-shaped profile of propagation[]{data-label="fig:propag_terrace"}](propag_terrace2c.eps){width="100.00000%"} We point out to the reader that, for convenience, we always say that a wave connects $q_2$ to $q_1$ if $q_2$ is the limiting state on the right end of the spatial domain, and $q_1$ on the left. However, when the speed is negative, the front actually goes to $q_2$ in large positive time. ### Some further definitions {#some-further-definitions .unnumbered} Some particular terraces distinguish themselves and seem to play a more important role. This is to be expected from standard cases. Indeed, for a classical monostable KPP nonlinearity, it is known that there exists a continuum of admissible speeds $[c,+\infty)$ for traveling waves. However, only the wave with minimal speed attracts large classes of solutions, such as those with compactly supported initial data (see [@Bramson83; @Lau85; @Uchi78] in the simpler homogeneous case, and more recently [@Giletti; @HNRR] for the general periodic framework). Here we note that the wave with minimal speed can also be characterized as the “steepest" one among all the traveling waves connecting the same stationary states, in the sense to be defined in Definition \[def:steep\] below. The link between speed and steepness has been known in many typical cases, such as in the homogeneous framework (see for instance the seminal paper of Kolmogorov, Petrovsky and Piskunov [@KPP], as well as [@FifMcL77] for more general nonlinearities), and will be proved here in Section \[sec:minspeed\] for large classes of spatially periodic nonlinearities. Therefore, it is also natural to classify traveling waves (or more generally, in the present framework, terraces) according to their steepness, and to expect the steepest one to play the most important role in the dynamics. This was already fundamental in our previous work [@DGM], and will again be in the present work. 
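In the spatially homogeneous bistable case the link between steepness and speed evoked above is particularly concrete: for $\partial_t u = \partial_x^2 u + u(1-u)(u-a)$ with $0 < a < 1/2$, the unique (hence steepest) traveling wave connecting 0 to 1 has the classical closed-form speed $c = (1-2a)/\sqrt{2}$. The following sketch checks this numerically (the grid parameters and tracking of the $1/2$-level set are our own illustrative choices).

```python
import numpy as np

a = 0.25                                  # bistable threshold
c_exact = (1.0 - 2.0 * a) / np.sqrt(2.0)  # classical closed-form wave speed

def front_position(u, x):
    # linear interpolation of the level set u = 1/2
    i = int(np.argmax(u < 0.5))
    return x[i - 1] + (0.5 - u[i - 1]) * (x[i] - x[i - 1]) / (u[i] - u[i - 1])

dx, dt = 0.2, 0.01
x = np.arange(0.0, 200.0, dx)
u = np.where(x < 60.0, 1.0, 0.0)          # Heaviside-type initial datum

steps = int(100.0 / dt)
for n in range(1, steps + 1):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u += dt * (lap + u * (1.0 - u) * (u - a))
    u[0], u[-1] = 1.0, 0.0
    if n == int(40.0 / dt):
        p40 = front_position(u, x)

c_num = (front_position(u, x) - p40) / 60.0   # average speed over [40, 100]
```

The measurement starts at $t = 40$ so that the transient from the Heaviside datum, which decays exponentially, has already died out.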
\[def:steep\] Let $u_1 (t,x)$ and $u_2 (t,x)$ be two functions. We say that $u_1$ is **steeper than** $u_2$ if for any $x_1$, $t_1$ and $t_2$ in ${\mathbb{R}}$, $$u_1 (t_1, x_1 ) = u_2 (t_2,x_1) \Longrightarrow \left\{ \begin{array}{l} u_1 (t_1,x) \geq u_2 (t_2,x) \mbox{ for any } x \leq x_1 , \vspace{3pt} \\ u_1 (t_1,x) \leq u_2 (t_2,x) \mbox{ for any } x \geq x_1. \end{array} \right.$$ Here we remark that $u_1$ and $u_2$ are compared at two arbitrary time moments $t_1$ and $t_2$, not necessarily equal. Therefore, according to this definition, steepness is to be understood here up to any time shift. The definition is otherwise intuitive, as it simply states that a function is steeper than another if they (again, up to any time shift) can only intersect once, and that the steeper function then has to be below the other on the right and above on the left. Of course, it is trivial to extend this definition to functions that depend on $x$ only, seeing them as constant with respect to time. The reader should also note that according to this definition, if the images of $u_1$ and $u_2$ are disjoint, then they are steeper than each other.\ We now introduce, as announced above, some very steep particular propagating terraces: \[def:semi-minimal\] A propagating terrace $\mathcal{T} = \left( (p_k)_{0 \leq k \leq N},(U_k)_{1 \leq k \leq N} \right)$ is said to be **semi-minimal** if, for each $k$, $U_k$ is steeper than any other traveling wave that connects $p_{k-1}$ to $p_k$. The terrace $\mathcal{T}$ is called **minimal** if it is semi-minimal and satisfies $$\{ p_k \} \subset \{q_k \}$$ for any other propagating terrace $( (q_k)_k , (V_k)_k )$ that connects 0 to $p$. The fact that the set $\{p_k\}$ of the platforms of a minimal terrace is included in any set of platforms of any other terrace can indeed be seen as a steepness property, as it implies that any $p_k$ is steeper than any part of any terrace. 
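At fixed times, Definition \[def:steep\] admits a simple discrete analogue: on a grid, $u_1$ is steeper than $u_2$ when the sign of $u_1 - u_2$ switches at most once, from $+$ on the left to $-$ on the right. A minimal sketch (the front profiles below are illustrative, not taken from the paper):

```python
import numpy as np

def is_steeper(u1, u2, tol=1e-12):
    """Discrete steepness check at fixed times: the sign pattern of u1 - u2
    must be nonincreasing (+ ... 0 ... -) along the grid."""
    d = u1 - u2
    s = np.sign(d)
    s[np.abs(d) < tol] = 0.0
    s = s[s != 0]                 # touching points impose no constraint
    return bool(np.all(np.diff(s) <= 0))

x = np.linspace(-10.0, 10.0, 401)
sharp = 0.5 * (1.0 + np.tanh(-x))        # a steeper front profile
flat = 0.5 * (1.0 + np.tanh(-x / 2.0))   # a flatter front with the same limits
```

Consistently with the remark above, two profiles with disjoint ranges are steeper than each other under this check, since their difference never changes sign.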
We will in fact prove that such terraces have even stronger steepness properties, such as being steeper than any other entire solution of (\[eqn1\]). To convince oneself of the strength of such properties, one could already easily check that the minimal terrace, if it exists and if its speeds are all non zero, is unique up to time shifts. More precisely, all minimal terraces share the same platforms and, between each pair of consecutive platforms, consist of traveling waves that coincide up to time shifts. This is Theorem \[th:unique\_min\] below.\ Let us now give one last definition, which we draw from [@FifMcL77]. Unlike the two above, it is actually slightly weaker than our notion of propagating terrace. \[def:decomp\] A **decomposition** between 0 and $p$ is a finite sequence of $L$-periodic stationary solutions $(p_k)_{0 \leq k \leq N}$ such that $p_0 = p > p_1 > ... > p_N = 0$, and for any $1 \leq k \leq N$, there exists a pulsating traveling wave $U_k$ connecting $p_k$ to $p_{k-1}$. We say that 0 and $p$ are **connected** if there exists a single traveling wave connecting 0 to $p$. This is very similar to the definition of platforms of a propagating terrace, except that it does not require the speeds of the traveling waves to be ordered anymore. This means, as explained above, that a decomposition, unless of course it is also a terrace, does not provide a rightful candidate for the large time profile of solutions of (\[eqn1\]). However, on the other hand, existence of a decomposition is much easier to check: for instance, it exists if $f$ is homogeneous and has a finite number of zeros as a function of $u$, or more generally, if $f$ can be rewritten as some concatenation of standard nonlinearities. We will show with Theorem \[th:exists\] that existence of a decomposition is actually enough to ensure existence of terraces. 
Some assumptions {#sec:assumptions} ---------------- We have already highlighted above the particular attention we will pay to the steepness of traveling waves and propagating terraces, and to how it relates to their speeds. We are only able to establish such a relation, which we will detail in Section \[sec:minspeed\] (see in particular Theorem \[th:lem\_speeds\]), under additional assumptions. Let us briefly state them below.\ First, we will denote, for any $L$-periodic function $g$, by $\mu_g$ the principal eigenvalue of the following problem: $$\left\{ \begin{array}{l} \displaystyle \partial_{x} (a \,\partial_x \phi ) + g \phi = \mu_g \phi \ \mbox{ in } {\mathbb{R}}, \vspace{5pt}\\ \phi >0 \mbox{ and $L$-periodic}. \end{array} \right.$$ When $q$ is a stationary solution of (\[eqn1-per\]), it is commonly said to be linearly stable (respectively unstable) when $\mu_g <0$ (respectively $\mu_g >0$) with the function $g(x) = \partial_u f(x,q (x))$. Occasionally, we will be led to consider the following two assumptions on the stability of equilibria: \[assumption-speed1\] For any $L$-periodic positive stationary solution $q$, with $0 \leq q \leq p$, that is stable from below with respect to $($\[eqn1-per\]$)$, there exist $\delta >0$ and an $L$-periodic function $g$ such that $\mu_g \leq 0$ and $$\partial_u f (x,u) \leq g (x) \; \mbox{ for all } x \in {\mathbb{R}}\mbox{ and } u \in \left[ q(x)-\delta, q(x)\right].$$ For any $L$-periodic positive stationary solution $q$, with $0 \leq q \leq p$, that is stable from above with respect to $($\[eqn1-per\]$)$, there exist $\delta >0$ and an $L$-periodic function $g$ such that $\mu_g \leq 0$ and $$\partial_u f (x,u) \leq g (x)\; \mbox{ for all } x \in {\mathbb{R}}\mbox{ and } u \in \left[ q(x), q(x)+\delta\right].$$ \[assumption-speed2\] For any $L$-periodic positive stationary solution $q$, with $0 < q < p$, that is stable with respect to $($\[eqn1-per\]$)$ from either below or above, there exist $\delta >0$ and an 
$L$-periodic function $g$ such that $\mu_g \leq 0$ and $$\partial_u f (x,u) \leq g (x)\; \mbox{ for all } x \in {\mathbb{R}}\mbox{ and } u \in \left[ q(x)-\delta, q(x)+\delta \right].$$ Furthermore, there exist $\delta >0$ and an $L$-periodic function $g$ such that $\mu_g \leq 0$ and $$\partial_u f (x,u) \leq g (x)\; \mbox{ for all } x \in {\mathbb{R}}\mbox{ and } u \in \left[ 0, \delta \right] \cup \left[p(x)-\delta,p(x) \right].$$ Let us remark that, when (\[eqn1-per\]) does not admit any degenerate equilibrium, then Assumption \[assumption-speed1\] is satisfied, as well as Assumption \[assumption-speed2\] provided that 0 and $p$ are linearly stable. Indeed, for any linearly stable stationary solution $q$ of (\[eqn1-per\]), one can choose $\varepsilon$ small enough so that $\mu_g \leq 0$ with $g = \partial_u f (x,q) + \varepsilon$, and $\partial_u f (x,u) \leq g(x)$ in some set $(x,u) \in {\mathbb{R}}\times \left[q(x)-\delta,q(x)+\delta\right]$. By degenerate equilibrium, we mean here that it is neither linearly stable nor linearly unstable. Main results ------------ Before announcing our results, we remind the reader that the only standing hypotheses are the regularity and periodicity (\[eqn:a\])-(\[eqn:f\]) of $a$ and $f$, and the existence of two periodic stationary states (\[eqn:f\_0\]) and (\[eqn:f\_p\]). Whenever we make any of the additional Assumptions \[assumption-speed1\] or \[assumption-speed2\], we will state it explicitly. We now proceed to the statements of our main results, some of which we have already mentioned above. As announced, the purpose of our notion of propagating terraces is to give relevant insight into the dynamics of equation (\[eqn1\]) for complex multistable nonlinearities. This approach was already justified in [@DGM] where convergence to a minimal terrace was shown from a Heaviside type initial datum, in the case where it spreads to the right with positive speed. Among our results, we will give a new and more general convergence theorem, which further strengthens the reliability and robustness of the notion of terraces. 
However, in spite of its relevance, this notion needs to be better understood in view of its application. One could for instance wonder, for a particular $f$, what we can know in advance about the shape of terraces, and in particular whether we can predict which platforms they should contain. This challenge will be addressed by looking at important properties of terraces, in particular their uniqueness and steepness. We will illustrate, later in this paper, how those universal results provide the desired tools, and how they can be applied to particular examples.\ We begin with an existence result for the propagating terrace. Apart from the standing hypotheses (\[eqn:a\]), (\[eqn:f\]) and (\[eqn:f\_0\])-(\[eqn:f\_p\]), this result does not require any of the additional assumptions mentioned in subsection \[sec:assumptions\]: \[th:exists\] The following three properties are equivalent: 1. There exists a decomposition between 0 and $p$. 2. There exists a propagating terrace connecting 0 to $p$. 3. There exists a minimal propagating terrace $((p_k)_k,(U_k)_k)$ connecting 0 to $p$; moreover, any $p_k$ and $U_k$ is steeper than any other entire solution that lies between 0 and $p$. Note that it is immediate, from our definitions, that $(3) \Rightarrow (2) \Rightarrow (1)$. Therefore, the main point of this theorem is that existence of any decomposition is enough to guarantee existence of a minimal terrace, and that every component of this minimal terrace is steeper than any entire solution between 0 and $p$. Here, we highlight again the importance of such steepness. Indeed, the fact that any platform $p_k$ of a minimal terrace is steeper than any other entire solution implies that $p_k$ is included in any decomposition connecting 0 and $p$. 
In other words, it is an immediate corollary of the above theorem that, from any decomposition connecting 0 and $p$, one can extract the platforms of minimal terraces: \[th:exists\_cor\] If $((p_k),(U_k))$ is a minimal terrace connecting 0 to $p$, then for any decomposition $(q_k)$ between 0 and $p$, one has $$\{ p_k \} \subset \{q_k\}.$$ Note that although the minimal terrace may not be unique, it is trivial that all minimal terraces share the same platforms, so that this corollary indeed easily follows from the above theorem.\ In our previous work [@DGM], we only treated terraces that are moving to the right, that is, with $c_k >0$ for all $k$. Indeed, we assumed the existence of a solution with a compactly supported initial datum that converges to $p$ from below as $t \to +\infty$, locally uniformly on $\mathbb{R}$. In such a situation, it is clear that $c_k >0$ for every $k$. In the present paper, we do not make such an assumption. As a result, we allow $c_k <0$ for some indices $k$. Thus, our Theorem \[th:exists\] completely covers the earlier existence result of Fife and McLeod [@FifMcL77] for the homogeneous case, and generalizes their result to spatially periodic equations. Our result also covers such cases as multi-layered monostable nonlinearities that were not treated in [@FifMcL77]. It should be noted that there does not always exist a decomposition between 0 and $p$. A simple counterexample is the case where $f \equiv 0$. A slightly less trivial example is a reversed combustion nonlinearity, that is, $f(u) =0$ ($u=0$, $\theta \leq u \leq 1$) and $f(u) >0$ ($0< u < \theta$). On the other hand, apart from these highly degenerate nonlinearities, all the nonlinearities that we find in many standard physical models possess a decomposition. The following proposition, which is not entirely new, gives a simple sufficient condition for the existence of a decomposition. 
\[proposition\] Assume that there exist a finite number of stationary solutions $p =: q_0 > q_1 > ... > q_{m-1} > q_m := 0$ of (\[eqn1-per\]) such that there exists no stationary solution strictly between $q_{j-1}$ and $q_j$ for any $j=1,...,m$. Then there exists a decomposition between 0 and $p$, hence a minimal terrace. Since there exists no other stationary solution of (\[eqn1-per\]) between $q_{j-1}$ and $q_j$, we see from [@Matano79 Corollary 4.5] that either all solutions whose initial data lie between $q_{j-1}$ and $q_j$ converge to $q_{j-1}$ as $t \to +\infty$ or they converge to $q_j$ as $t \to +\infty$. Corollary 4.5 of [@Matano79] is for problems under the Dirichlet, Neumann or Robin boundary conditions, but precisely the same argument carries over to the periodic boundary conditions. Thus, by the result of Weinberger [@Wein02], there exists a pulsating traveling wave solution of (\[eqn1\]) connecting $q_j$ to $q_{j-1}$ or vice versa, $j=1,...,m$. These traveling waves constitute a decomposition between $0$ and $p$. Hence a minimal terrace exists by Theorem \[th:exists\]. \ In the spatially homogeneous case where $a$ and $f$ are independent of $x$, the assumption in Proposition \[proposition\] can be restated as saying that $f(u)$ has finitely many zeroes between 0 and $p$. We remark that this finiteness assumption is by no means necessary for the existence of a decomposition. A classical example is a combustion nonlinearity, that is, $f(u) = 0$ ($0 \leq u \leq \theta$, $u =1$) and $f (u) >0$ ($\theta < u < 1$), for which a traveling wave connecting 1 to 0 is known to exist [@AW75]. A more general sufficient condition for the existence of a decomposition in the spatially homogeneous case is found in [@Polacik2 Theorem 1.2]. 
In the spatially periodic case, our earlier paper [@DGM Theorem 1.10] gives another sufficient condition, namely the existence of a compactly supported initial datum from which the solution of (\[eqn1\]) converges to $p$ from below as $t \to +\infty$; this condition admits, for example, a spatially periodic combustion type nonlinearity, which does not satisfy the assumption of Proposition \[proposition\]. Nonetheless, Proposition \[proposition\] covers many important examples and its assumption is simple and relatively easy to verify. The existence of a pulsating traveling wave for a spatially periodic bistable nonlinearity is a particular case to which Proposition \[proposition\] applies. This example will be treated in Section \[sec:exemples\] along with other examples.\ The main idea in the proof of Theorem \[th:exists\] is first to show that the solution starting from a Heaviside-like initial datum becomes less and less steep as time passes, while remaining steeper than any entire solution that lies between 0 and $p$, including traveling waves. This idea is basically along the same lines as in our previous paper [@DGM] and is somewhat inspired by the work of Kolmogorov, Petrovsky and Piskunov [@KPP]. By slightly modifying the argument, we obtain the following theorem on the long time behavior of more general solutions: \[th:CV\] Assume that there exists a minimal terrace $\mathcal{T} = ((p_k)_k,(U_k)_k)$ such that $c_k \neq 0$ for any $k$, and let $u_0$ be a piecewise continuous function such that $$u_0 \mbox{ is steeper than any $p_k$ and $U_k$},$$ $$u_0(x)-p(x) \rightarrow 0 \mbox{ as } x\rightarrow -\infty \mbox{, and } u_0(x) \rightarrow 0 \mbox{ as } x \rightarrow +\infty.$$ Then the associated solution $u$ of the Cauchy problem for (\[eqn1\]) converges to $\mathcal{T}$ in the following sense: 1. 
There exist functions $(m_k (t))_{1 \leq k \leq N}$ with $m_k (t) = o(t)$ as $t \rightarrow +\infty$ such that $$\label{eqn:CVcor1} \begin{split} u(t,x +c_k (t-m_k (t)) )- U_k (t-m_k (t),x + c_k (t-m_k (t))) \quad \\ \to 0 \ \ \ \hbox{as} \ \ t\to +\infty, \end{split}$$ locally uniformly on ${\mathbb{R}}$. 2. For any $1 \leq k \leq N-1$, we have that $$c_{k+1} ( t - m_{k+1} (t)) - c_k (t - m_k (t)) \to +\infty, \ \mbox{ as $ t \to +\infty$}.$$ Moreover, for any $\delta >0$, there exist $C>0$ and $T>0$ such that, for any $t\geq T$ and $1 \leq k \leq N-1$: $$\| u(t,\cdot )- p_k (\cdot) \|_{L^\infty ([c_k (t-m_k (t)) + C , c_{k+1} (t-m_{k+1} (t)) - C])} \leq \delta, $$ together with $$\| u(t,\cdot +c_1 (t-m_1 (t)) )- p(\cdot) \|_{L^\infty ((-\infty,-C])} \leq \delta,$$ $$\| u(t,\cdot +c_N (t-m_N (t)) ) \|_{L^\infty ([C, +\infty))} \leq \delta.$$ The above theorem extends our previous convergence result in [@DGM] to the case when the target terrace is not necessarily moving to the right. Some speeds $c_k$ may be negative while others are positive (in which case the terrace splits into two opposite directions), or it may also happen that all speeds $c_k$ are negative (in which case the terrace “retreats" entirely). It is also possible that some speeds may be equal, in which case the limiting traveling waves still drift away from each other, as implied by the first part of statement $(ii)$. Note, however, that the theorem needs to avoid the possibility of zero speeds, the reason for which we will explain briefly below. The above theorem also relaxes the assumptions on the initial data compared with [@DGM], by not requiring $u_0$ to be of the Heaviside function type, but only assuming it to be steep enough. This shows, as announced, the importance of the notion of terraces, in particular minimal ones, which succeed in capturing the dynamics of solutions of the Cauchy problem. 
It is not known, though, whether there also exist solutions converging in large time to a non minimal terrace. Although they do not directly answer this question, our next three theorems will yield some insight, by both providing new characterizations of minimal terraces, and showing that non minimal terraces may simply not exist. Note that the theorem above, as well as those below, avoid the possibility of zero speeds occurring in terraces. Some study could still be conducted in such a situation, but it turns out to be much more complicated and to lead to an even wider range of dynamics. The main reason is that stationary traveling waves do not provide complete information, as they may not exist around any level set and any point in space, due to the space heterogeneity. Therefore, one must involve some other entire solutions, that cannot be parts of a propagating terrace as we defined it. We will discuss this in Section \[sec:zero\] and choose to focus, as far as our main results are concerned, on the non zero speeds case.\ Let us conclude this section by turning to the question of uniqueness of minimal, semi-minimal or propagating terraces. This uniqueness will be meant here up to time shifts: that is, given two terraces $((p_k)_k,(U_k)_k)$ and $((p'_k)_k,(U'_k)_k)$, we say that they are equal up to time shifts if they share the same platforms and if, for any $k$, $U'_k$ is a time shift of $U_k$ which may depend on $k$. As stationary waves are obviously not unique up to time shifts, this clearly suggests that, as in our previous theorem, we will only consider terraces moving with non zero speeds, for instance those that we have already constructed in [@DGM]. 
Our three uniqueness theorems are the following, each one dealing with a different type of propagating terraces: \[th:unique\_min\] If there exists a minimal propagating terrace $\mathcal{T}= \left( (p_k)_{0 \leq k \leq N},(U_k)_{1 \leq k \leq N} \right)$ with non zero speeds, then it is the unique minimal propagating terrace up to time shifts. Theorem \[th:unique\_min\] is actually almost trivial. All minimal terraces indeed share the same platforms, and one can easily check that, between each pair of consecutive platforms, the steepest traveling wave is unique up to time shift, as long as its speed is not zero. \[th:unique\_semi\] If there exists a semi-minimal propagating terrace $\mathcal{T} = \left( (p_k)_{0 \leq k \leq N},(U_k)_{1 \leq k \leq N} \right)$ with non zero speeds, and if either of the following two statements is satisfied: 1. the speeds of the $U_k$ are all different (i.e. $c_1 < c_2 < ... < c_N$); 2. Assumption \[assumption-speed1\] is true; then it is the unique minimal terrace up to time shifts. Under Assumption \[assumption-speed1\], it is also the unique semi-minimal terrace. \[th:unique\] Under Assumption \[assumption-speed2\], if there exists a propagating terrace $\mathcal{T}$ with non zero speeds, then it is the unique (hence minimal) propagating terrace up to time shifts. Proving Theorems \[th:unique\_semi\] and \[th:unique\] is slightly more intricate than for Theorem \[th:unique\_min\], and relies heavily on the link between steepness and moving speed. This link will be the subject of a result of independent interest, namely Theorem \[th:lem\_speeds\]. This theorem will roughly state that, if $q$ is stable from below in a sense following from Assumptions \[assumption-speed1\] or \[assumption-speed2\], then the steepest traveling wave connecting some other equilibrium to $q$ is also the unique slowest. 
This is classical for non degenerate equilibria (by looking at the asymptotic behavior of waves as in [@Hamel08]), but we prove it here with another method in order to deal with more general periodic nonlinearities. In the homogeneous case, this relation between steepness and speed is always known to hold, so that Assumptions \[assumption-speed1\] and \[assumption-speed2\] are actually not needed [@FifMcL77]. We believe that those hypotheses can also be relaxed in the periodic framework. On the one hand, Theorem \[th:unique\_semi\] deals with a somewhat general case, where the restriction of the reaction nonlinearity between consecutive platforms may be of various types, including monostable ones. In such a situation, terraces cannot be expected to be unique (the speed is not even unique for standard monostable nonlinearities), but our result still provides a characterization of the minimal terrace that is slightly easier to check. On the other hand, under the assumptions of Theorem \[th:unique\], all platforms of any terrace have to be strongly stable from both below and above, hence terraces “skip” unstable platforms in some sense. One should look at it as a generalization of the well-known uniqueness of traveling waves up to shift in the standard bistable or ignition cases. We recall that our underlying goal is to predict, in some sense, the shape of terraces, in particular minimal terraces as they seem to play the most important role. The following proposition shows how this may follow from our main results. \[proposition2\] Let $p >q>r$ be $L$-periodic stationary solutions of (\[eqn1-per\]), such that $q$ and $p$, $r$ and $q$ are connected in the sense of Definition \[def:decomp\]. Denote by $U$ a steepest traveling wave connecting $q$ to $p$, and $V$ a steepest traveling wave connecting $r$ to $q$, with respective speeds $c \neq 0 $ and $c' \neq 0$. 
Then: - if $c < c'$, then $(U,V)$ is a minimal terrace; - if $c =c'$ and Assumption \[assumption-speed1\] holds, then $(U,V)$ is a minimal terrace; - if $c >c'$, then $p$ and $r$ are connected in the sense of Definition \[def:decomp\]. Note that, from our Theorem \[th:exists\], if two $L$-periodic stationary solutions $p>q$ are connected (in the sense of Definition \[def:decomp\]), then there exists a steepest traveling wave among all traveling waves connecting $q$ to $p$, and it is unique up to time shift if its speed is not zero. The first two statements of Proposition \[proposition2\] immediately follow from Theorem \[th:unique\_semi\]. Then note that, by Corollary \[th:exists\_cor\], a minimal terrace should either have a single platform, or be made of two traveling waves connecting respectively $q$ to $p$, and $r$ to $q$. In the latter case, using again Theorem \[th:unique\_semi\] (separately between $p$ and $q$, $q$ and $r$), these two waves should be $U$ and $V$, hence $c \leq c'$. The third statement of Proposition \[proposition2\] follows. #### Plan of the paper Our paper is organized as follows. We will begin in Section \[sec:preliminaries\] by recalling some properties of the steepness of solutions of (\[eqn1\]), namely how it is preserved through time. Those properties rely heavily on the so-called intersection number (or zero number) argument. Then, in Section \[sec:existence\], we use those preliminaries to look at the large time behavior of solutions of (\[eqn1\]) with very steep initial data, such as of the Heaviside type. We will see that the solution converges to a minimal terrace, proving both our existence and convergence results. Section \[sec:minspeed\] will deal with an important result of independent interest, addressing the link between steepness and speed of pulsating traveling waves: namely, when the upper stationary state is stable, the steepest traveling wave is also the slowest. 
Our uniqueness results will quickly follow as corollaries, as we will show in Section \[sec:uniqueness\]. After we have proved our main results, we will dedicate the last two sections to a discussion, in order to provide a better understanding of the notion of propagating terraces. First, in Section \[sec:zero\], we will investigate the case of terraces moving with zero speed, which we mostly avoided in our theorems above. The dynamics will turn out to be much more complex, involving new entire solutions that are not traveling waves in the standard sense, and preventing any uniqueness result from holding in general. Lastly, in Section \[sec:exemples\], we will address a few examples ranging from standard cases to some multistable nonlinearities. Our goal will be to show how our results, in spite of their general scope, can also easily be applied to particular cases.

Steepness argument and fundamental lemma {#sec:preliminaries}
========================================

The proofs of our main results rely strongly on a zero-number argument. The application of this argument — or the “Sturmian principle" — to the convergence proof in semilinear parabolic equations first appeared in [@Matano78]; it was recently used in [@DGM] to prove not only the convergence but also the existence of the target objects, namely the terrace and pulsating traveling waves. This theory states that the number of sign changes of a solution of a second-order parabolic equation, or more generally, the number of intersections of two solutions, is nonincreasing in time. When looking at the steepness of solutions, we will only need to consider situations where a single intersection occurs. We refer the reader to [@An88; @DuMatano10] for a broader overview of the general argument and its applications.\ We now recall two lemmas from the zero-number theory that we will need. The first one is a corollary of the semi-continuity of the number of intersections with respect to pointwise topology.
\[lem:steep\_arg1\] Let $(w_n)_{n \in \mathbb{N}}: {\mathbb{R}}\to {\mathbb{R}}$ be a sequence of functions converging to $w: {\mathbb{R}}\to {\mathbb{R}}$ pointwise on ${\mathbb{R}}$, and $v$ such that $w_n$ is steeper than $v$ for any $n \in \mathbb{N}$. Then $w$ is steeper than $v$. The second lemma is a consequence of the fact that, as announced above, the number of intersections is nonincreasing in time, and also that the shape of an intersection remains the same as long as it still exists. \[lem:steep\_arg2\] Let $u_1$ and $u_2$ be solutions of for nonnegative times, such that $u_1(0,x)$ is a piecewise continuous bounded function on ${\mathbb{R}}$, steeper than $u_2(0,x)$, which is bounded and continuous on ${\mathbb{R}}$. Then for any $t >0$, the function $x \mapsto u_1 (t,x)$ is steeper than $x \mapsto u_2 (t,x)$ and furthermore, unless $u_1 \equiv u_2$, any zero of $(u_1 - u_2) (t,\cdot)$ is non-degenerate. In both lemmas above, we compare the steepness of functions which do not depend on time but on $x \in {\mathbb{R}}$ only: even in Lemma \[lem:steep\_arg2\], where the functions $u_1$ and $u_2$ depend on time, their steepness is only compared at fixed times. This can easily be understood as a particular case of Definition \[def:steep\]. The non-degeneracy of any zero of $u_1 - u_2$ follows from the fact that, whenever a degenerate zero of $u_1 - u_2$ appears for some $t$, the number of sign changes is decreasing in time around $t$. This implies, if $t>0$, that $u_1 -u_2$ changes sign at least twice at earlier times, which contradicts the property that $u_1$ is steeper than $u_2$ for any given time. We refer the reader to [@DGM] for a more detailed proof of Lemma \[lem:steep\_arg2\] (note that the same argument applies verbatim here, even though the function $a$ may not be constant).\ Before we go on, let us recall a definition of the $\omega$-limit set of a solution $u$.
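Both lemmas rest on the variation-diminishing property of parabolic equations: the number of sign changes of the difference of two solutions cannot increase. This can be observed in a quick numerical experiment; the sketch below uses a hypothetical simplified setting (a homogeneous bistable nonlinearity and constant diffusion, whereas the paper allows $x$-periodic coefficients):

```python
import numpy as np

# Sign changes of u1 - u2 are nonincreasing in time (Sturmian principle).
# Hypothetical simplified setting: homogeneous bistable f(u) = u(1-u)(u-1/4),
# constant diffusion, explicit Euler scheme (stable since dt < dx^2 / 2).

def f(u):
    return u * (1.0 - u) * (u - 0.25)

def step(u, dx, dt):
    # one explicit time step with homogeneous Neumann boundary conditions
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return u + dt * (lap + f(u))

def sign_changes(d, tol=1e-12):
    s = d[np.abs(d) > tol]      # ignore numerically degenerate zeros
    return int(np.sum(s[1:] * s[:-1] < 0))

x = np.linspace(-20.0, 20.0, 401)
dx, dt = x[1] - x[0], 0.002
u1 = 0.5 * (1.0 - np.tanh(x))                       # a steep front
u2 = (0.5 * (1.0 - np.tanh(0.3 * x))
      + 0.1 * np.sin(x) * np.exp(-(x / 8.0) ** 2))  # shallower, oscillating

counts = [sign_changes(u1 - u2)]    # several intersections initially
for n in range(5000):               # evolve up to t = 10
    u1, u2 = step(u1, dx, dt), step(u2, dx, dt)
    if (n + 1) % 1000 == 0:
        counts.append(sign_changes(u1 - u2))
```

The list `counts` being nonincreasing is a discrete analogue of the Sturmian principle: with this time step, each update is a positive combination of neighboring values, which cannot create new sign changes.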
This is slightly different from the standard one, as we add arbitrary spatial translations while taking the long time limit. The reason for adopting this definition is that, since each step of a terrace (that we want to capture by looking at the asymptotic profile of $u$) moves at a different speed, we will need multi-speed observations to be able to proceed. \[def:omegalimit\] Let $u(t,x)$ be a bounded solution of the Cauchy problem . We call $v(t,x)$ an **$\boldsymbol{\omega}$-limit orbit** of $u$ if there exist two sequences $t_j \rightarrow +\infty$ and $k_j \in \mathbb{Z}$ such that $$u(t+t_j,x+k_j L) \rightarrow v(t,x)\ \mbox{ as } j \to +\infty \; \mbox{ locally uniformly on } {\mathbb{R}}^2.$$ By parabolic estimates, the above convergence takes place in $C^2$ in $x$ and $C^1$ in $t$. Hence one can easily check that any $\omega$-limit orbit of $u$ is an entire solution of . Moreover, if $v(t,x)$ is an $\omega$-limit orbit of $u$, then so is $v(t+\tau,x+kL)$ for any $\tau \in {\mathbb{R}}$ and $k\in \mathbb{Z}$, as well as any limit as $\tau \rightarrow \pm \infty$ or $k \rightarrow \pm \infty$, when it exists. Let us now state the fundamental lemma that will be used repeatedly throughout our paper, and whose proof will easily follow from the properties we recalled above: \[lem:omega-steep\] Let $u(t,x)$ be any piecewise continuous and bounded solution of the Cauchy problem with an initial datum $0 \leq u_0 \leq p$. We assume that $u_0$ is steeper than any entire solution of lying between 0 and $p$. Then any $\omega$-limit orbit $v$ of $u$ is steeper than any other entire solution of  lying between $0$ and $p$. Note that it is not clear a priori which initial data are steeper than any other entire solution lying between 0 and $p$.
However, one can immediately see that such a property is always satisfied by a Heaviside-type initial datum, that is, a function identically equal to $p$ on the interval $(-\infty,X]$, and identically equal to 0 on $(X,+\infty)$, for some $X \in {\mathbb{R}}$. Let the sequences $t_j \rightarrow + \infty$ and $k_j \in \mathbb{Z}$ be such that $u(t+t_j,x+k_jL)\to v (t,x)$ locally uniformly as $j \to + \infty$. Let $w$ be any entire solution lying between 0 and $p$, and let $t,t' \in {\mathbb{R}}$ be arbitrary. Consider $j$ large enough so that $t_j > -t'$. By our assumption, we have that $x \mapsto u_0 (x)$ is steeper than $x \mapsto w (-t_j -t' + t, x)$, hence $x \mapsto u(t_j +t',x+k_j L)$ is steeper than $x\mapsto w (t,x)$ using Lemma \[lem:steep\_arg2\]. Using Lemma \[lem:steep\_arg1\], one can then pass to the limit as $j \rightarrow +\infty$ to get that $x\mapsto v(t',x)$ is steeper than $x \mapsto w (t,x)$, where $t$ and $t'$ are arbitrary. In other words, $(t,x) \mapsto v (t,x)$ is steeper than $(t,x) \mapsto w (t,x)$ in the sense of Definition \[def:steep\].

Existence and convergence to a minimal terrace {#sec:existence}
==============================================

In this section, we will prove our existence Theorem \[th:exists\]. As mentioned before, we only need to prove that if there exists a decomposition connecting 0 and $p$, then there exists a minimal terrace connecting 0 to $p$, with the additional powerful property that each of its parts is steeper than any other entire solution of . The sketch of the proof is to exhibit a minimal terrace as a limiting profile of a solution associated with a Heaviside initial datum, which is the steepest among functions between 0 and $p$. As shown in the previous section, such extreme steepness should be preserved through time and even in the limit as $t \rightarrow +\infty$. More precisely, by Lemma \[lem:omega-steep\], we know that any $\omega$-limit orbit is steeper than any other entire solution.
The fact that there exists a decomposition between 0 and $p$ will then ensure that those limits must be either front-like or spatially periodic stationary solutions. Once we know that such a minimal terrace exists, one will easily be able to check that the proof can be extended to any steep enough initial datum. Provided that the minimal terrace is unique, we then get a full convergence result toward it, namely Theorem \[th:CV\].

Existence of a minimal terrace
------------------------------

Denote by $(q_k)_{0 \leq k \leq N}$ the decomposition between 0 and $p$, and for each $k$, by $V_k$ a pulsating traveling wave connecting $q_k$ to $q_{k-1}$. Let $u$ be the solution of with initial datum $$u_0 (x)= H(-x) p(x),$$ where $H$ is the Heaviside function. We already know from Lemma \[lem:omega-steep\] that any $\omega$-limit $v$ of $u$ is steeper than any other entire solution lying between $0$ and $p$. In particular, $v$ is steeper than any of its time shifts, which easily implies that either $\partial_t v \equiv 0$, $\partial_t v >0$ or $\partial_t v <0$.\ First, thanks to standard parabolic estimates, take a sequence $t_k \rightarrow +\infty$ such that $u(t+t_k,x)$ converges locally uniformly to some $\omega$-limit $w (t,x)$. One can assume without loss of generality that it is stationary, that is, $w(t,x) \equiv w(x)$ does not depend on time. Indeed, if not, one can use the fact that $w$ is monotonically increasing or decreasing with respect to time, so that $w(+\infty,x)$ exists and is also an $\omega$-limit of $u$ in the non-moving frame. Because $w$ is steeper than the $q_k$ and $V_k$, it follows that $w$ is a stationary wave connecting some $q_{i}$ to $q_{j}$ with $i \geq j$ (if $i =j$, then it is trivial and $w \equiv q_{i}$). Our goal is to construct two terraces, connecting respectively $0$ to $q_{i}$ with nonnegative speeds, and $q_{j}$ to $p$ with nonpositive speeds.\ Assume that $q_{i} >0$. Let $\alpha_1$ be such that $q_{i +1} (0) < \alpha_1 < q_{i} (0)$.
One can then define, for any $n \in \mathbb{N}$: $$\tau_n (\alpha_1) = \min \{ t \geq 0 \; | \ u (t,nL) =\alpha_1 \}.$$ Consider now the $\omega$-limit $$w_\infty (t,x;\alpha_1) = \lim_{n\rightarrow +\infty} u(t+\tau_n (\alpha_1),x+nL).$$ Let us check that this limit is well-defined. Note that the family of functions $\{ u(t+\tau_n (\alpha_1),x+nL) \}_{n \in \mathbb{N}}$ is relatively compact, so that we can assume that it converges, up to extraction of some subsequence, to some $w_\infty (t,x;\alpha_1)$, which is the unique steepest entire solution of satisfying $w_\infty (0,0;\alpha_1) =\alpha_1$. Hence, $w_\infty$ does not depend on the choice of the subsequence. It follows that the whole sequence indeed converges to $w_\infty$ as $n \rightarrow +\infty$. Moreover, from the definition of $\tau_n (\alpha_1)$ and the monotonicity of any $\omega$-limit, we have that $\partial_t w_\infty \geq 0$. One can also note that $$u(t,x+L) < u(t,x),$$ hence $$w_\infty (t,x+L) \leq w_\infty (t,x),$$ for any $(t,x) \in {\mathbb{R}}^2$. This follows from the comparison principle and the fact that $u(t,\cdot+L)$ is the solution of with initial datum $H(-L-x) p(x) \leq H(-x) p(x)$. This immediately implies that the sequence $\tau_n (\alpha_1)$ is increasing. Let us now prove the following claim: The sequence $\tau_{n+1} (\alpha_1) - \tau_n (\alpha_1)$ converges to either a positive constant $T>0$, or to $+\infty$. Assume that $\tau_{n+1} (\alpha_1) - \tau_n (\alpha_1)$ does not converge to $+\infty$. Then, up to extraction of some subsequence, $\tau_{n+1} (\alpha_1) - \tau_n (\alpha_1) \rightarrow T \geq 0$ as $n \rightarrow +\infty$.
Therefore, for any $(t,x) \in {\mathbb{R}}^2$: $$\begin{aligned} w_\infty (t+T,x+L;\alpha_1)&=&\lim_{n\rightarrow +\infty} u(t+\tau_n (\alpha_1) + \tau_{n+1} (\alpha_1) - \tau_n(\alpha_1),x+(n+1)L)\\ &=&\lim_{n\rightarrow +\infty} u(t+ \tau_{n+1} (\alpha_1),x+(n+1)L)\\ &=&w_\infty (t,x;\alpha_1).\end{aligned}$$ If $T=0$, this means that $w_\infty$ is periodic in space. This is not possible. Indeed, one would then have that $w_\infty (0,nL;\alpha_1) =\alpha_1$ for any $n \in \mathbb{Z}$, which contradicts the fact that it is steeper than $V_{i +1}$ connecting $q_{i +1}$ to $q_{i}$. On the other hand, if $T >0$, then $w_\infty$ is a monotonically increasing pulsating traveling wave, with speed $c = \frac{L}{T}$. By uniqueness of the limit and of the speed of a traveling wave, it follows that the whole sequence $\tau_{n+1} (\alpha_1) -\tau_n (\alpha_1)$ converges to $T$, which concludes the proof of this claim. \ Together with the above claim, we have shown that when $\tau_{n+1} (\alpha_1) - \tau_n (\alpha_1) \rightarrow T>0$, then $w_\infty$ is a pulsating wave. As it is steeper than any other entire solution of , and from the choice of $\alpha_1$, one can easily check that it connects some $q_{i_1}$ to $q_{i_0}$ with $i_1 > i$ and $i_0 \leq i$. As $w_\infty$ and $w$ (obtained above as an $\omega$-limit in the non-moving frame) are both steeper than each other, they cannot intersect, nor can any of their translates by some space shift $kL$ with $k\in \mathbb{Z}$. Thus, one in fact has that $i_0 = i$. Now, consider the case when $\tau_{n+1} (\alpha_1) - \tau_n (\alpha_1)$ tends to $+\infty$. This means that $$w_\infty (t,L;\alpha_1) \leq \alpha_1, \ \forall t \geq 0.$$ By its monotonicity and boundedness, we know that $w_\infty$ converges to a stationary solution as $t \rightarrow +\infty$, such that $w_\infty (+\infty,0;\alpha_1) \geq \alpha_1$ and $w_\infty (+\infty,L;\alpha_1) \leq \alpha_1$, and which is still steeper than any other entire solution of .
From this and the fact that $w_\infty (+\infty,x+L;\alpha_1) \leq w_\infty (+\infty,x;\alpha_1)$ for any $x\in \mathbb{R}$, one can check as before that $w_\infty (+\infty,x;\alpha_1)$ converges to some $q_{i_1}$ with $i_1>i$ as $x\rightarrow +\infty$, and to $q_{i}$ as $x \rightarrow -\infty$. Hence, it is a pulsating traveling wave with speed 0 connecting $q_{i_1}$ to $q_i$. In both cases, we have obtained a pulsating traveling wave with nonnegative speed connecting some $q_{i_1}$ with $i_1 >i$ to $q_{i}$. If $q_{i_1}$ is positive, we can reiterate by choosing $q_{i_1 +1} (0) < \alpha_2 < q_{i_1} (0)$, and find a pulsating traveling wave connecting some $q_{i_2}$ with $i_2 > i_1$ to $q_{i_1}$. The iteration stops when $q_{i_n} \equiv 0$ for some $n \in \mathbb{N}$. This clearly happens in a finite number of steps, since there are only finitely many $q_k$. Therefore, we obtain a decreasing sequence of periodic stationary solutions $(p_k)_{0 \leq k \leq N_1}$, and for each $k$ a pulsating traveling wave $U_k$ connecting $p_{k+1}$ to $p_k$. Furthermore, from the construction of the $U_k$ above, we also have that $$\label{eqn:speed_ck} c_k = \lim_{n \to +\infty} \frac{L}{\tau_{n+1} (\alpha_k) - \tau_n (\alpha_k)} = \lim_{n \rightarrow +\infty} \frac{nL}{\tau_n (\alpha_k)}$$ for some well-chosen finite decreasing sequence $\alpha_k$. Since $\tau_n (\alpha)$ is increasing with respect to $\alpha$, the sequence $c_k$ is nondecreasing. Finally, we have constructed a propagating terrace connecting 0 to $q_{i}$ with nonnegative speeds.\ One can proceed similarly to get a propagating terrace connecting $q_{j}$ to $p$ with nonpositive speeds. We immediately get a propagating terrace connecting 0 to $p$ by combining both terraces constructed above, as well as the stationary wave $w$ when it is not trivial.
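The construction above can be watched numerically. The following sketch uses a hypothetical homogeneous tristable nonlinearity with stable states $0$, $1/2$ and $1$ (a simplification: no periodic dependence, so the period $L$ is replaced by a plain grid distance); starting from a Heaviside-type datum, a two-platform terrace emerges, and the two speeds are recovered from first-crossing times $\tau_n$ in the spirit of the formula for $c_k$:

```python
import numpy as np

# Terrace formation from a Heaviside-type datum, with speeds recovered
# from first-crossing times.  Hypothetical homogeneous tristable
# nonlinearity (stable states 0, 1/2, 1): two C^1-matched cubic pieces,
# chosen so that 1/2 invades 0 (speed > 0) while 1 retreats above 1/2
# (speed < 0), producing a widening middle platform.

def f(u):
    lower = u * (0.5 - u) * (u - 0.1)          # bistable piece on [0, 1/2]
    upper = (u - 0.5) * (1.0 - u) * (u - 0.9)  # bistable piece on [1/2, 1]
    return np.where(u <= 0.5, lower, upper)

x = np.linspace(-40.0, 40.0, 801)
dx, dt = x[1] - x[0], 0.002
u = np.where(x < 0.0, 1.0, 0.0)                # Heaviside-type datum

# first times at which u crosses 1/4 at x = 10, 20 (lower front, moving
# right) and crosses 3/4 at x = -10, -20 (upper front, moving left)
tau = {(0.25, 10.0): None, (0.25, 20.0): None,
       (0.75, -10.0): None, (0.75, -20.0): None}
idx = {key: int(round((key[1] - x[0]) / dx)) for key in tau}

for n in range(70000):                          # evolve up to t = 140
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (lap + f(u))
    u[0], u[-1] = 1.0, 0.0                      # pinned far-field values
    t = (n + 1) * dt
    for (lvl, pos), hit in list(tau.items()):
        if hit is None:
            val = u[idx[(lvl, pos)]]
            if (lvl < 0.5 and val >= lvl) or (lvl > 0.5 and val <= lvl):
                tau[(lvl, pos)] = t

c_lower = 10.0 / (tau[(0.25, 20.0)] - tau[(0.25, 10.0)])
c_upper = -10.0 / (tau[(0.75, -20.0)] - tau[(0.75, -10.0)])
```

For this piecewise-cubic choice the continuum speeds can be computed exactly from the classical cubic front formula, namely $\pm 0.3/\sqrt{2} \approx \pm 0.212$, and the middle platform $1/2$ widens linearly in time.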
We end the proof of Theorem \[th:exists\] by noting that each part of this terrace is steeper than any other entire solution, as follows from the fact that each of those parts is also an $\omega$-limit orbit of the solution of with a Heaviside-type initial datum. One can easily check that this also immediately implies minimality of the terrace.

Convergence to the unique minimal terrace
-----------------------------------------

Let us now proceed to the proof of Theorem \[th:CV\], and assume that there exists a minimal terrace $\mathcal{T} =((p_k)_k,(U_k)_k)$ such that for any $k$, the speed $c_k$ of $U_k$ is not equal to 0. According to Theorem \[th:unique\_min\] (that we already proved in Section \[sec:intro\]), the minimal terrace is unique up to time shifts. Moreover, thanks to Theorem \[th:exists\], we also know that each of the $p_k$ and $U_k$ is steeper than any other entire solution lying between 0 and $p$. Let $u$ be the solution of with initial datum $u_0$. We recall that we assume that $u_0$ satisfies the limiting conditions $u_0 (x)-p(x) \rightarrow 0$ as $x\rightarrow -\infty$ and $u_0 (x) \rightarrow 0$ as $x \rightarrow +\infty$, and also that $u_0$ is steeper than any $p_k$ and $U_k$. First note that the latter assumption implies that $u_0$ is steeper than any entire solution lying between 0 and $p$. Indeed, let $v(t,x)$ be an entire solution of , lying between 0 and $p$, such that for some $t$ and $x_1$, $u_0 (x_1) = v(t,x_1)$. Then, there exists some $k$ such that either $p_k (x_1)=u_0 (x_1)$, or $p_{k+1} (x_1) < u_0 (x_1) < p_k (x_1)$. In the former case, as $u_0$ is steeper than $p_k$ and $p_k$ is steeper than $v$, one gets that $u_0 (x) \geq v (t,x)$ for any $x \leq x_1$, and $u_0 (x) \leq v(t,x)$ for any $x \geq x_1$.
In the latter case, there exists some $t'$ such that $U_k (t',x_1)=u_0 (x_1)=v(t,x_1)$, and one can again conclude by the same argument that $u_0$ is steeper than $v$.\ We now turn back to the proof of Theorem \[th:CV\], which will share many similarities with the previous subsection. The difficulty here is to make sure that the $\omega$-limit of $u$ around a given level set is unique, which is necessary to get the desired convergence. As before, we know by Lemma \[lem:omega-steep\] that any $\omega$-limit $v$ of $u$ is steeper than any other entire solution lying between $0$ and $p$, and that it satisfies either $\partial_t v \equiv 0$, $\partial_t v >0$ or $\partial_t v <0$.\ We first prove the following claim: \[claim\_21\] The function $u(t,x)$ converges locally uniformly to the unique $p_i$ such that $c_k <0$ for all $k \leq i$ and $c_k >0$ for all $k>i$. First, take some sequence $t_j \rightarrow +\infty$ such that $u(t+t_j,x)$ converges locally uniformly to some $\omega$-limit $w (t,x)$. Recall that $w$ is either stationary, monotonically increasing or decreasing in time. If it is monotonically increasing, we can define $w_1(x):= w(-\infty,x) < w_2 (x) := w(+\infty,x)$, both being also $\omega$-limits of $u$ in the non-moving frame. Now set $\alpha := w(0,0)$, so that $w_1 (0) < \alpha < w_2 (0)$. Then, there exists a sequence $t'_j \rightarrow +\infty$ such that for any $j \in \mathbb{N}$, $u(t'_j,0)=\alpha$ and $\partial_t u(t'_j,0) \leq 0$. Up to extraction of some subsequence, the sequence $u(t+t'_j,x)$ converges to an $\omega$-limit $v$ such that $v(0,0)=\alpha=w(0,0)$ and $\partial_t v(0,0) \leq 0 < \partial_t w (0,0)$. This is a contradiction as $v$ and $w$ are both steeper than each other. Similarly, one can prove that $w$ is not monotonically decreasing in time either, hence it is stationary. Moreover, there exists some $i$ such that either $w(0) =p_i (0)$, or $w(0)=U_i (t,0)$ for some $t \in \mathbb{R}$.
Since $w$, $p_i$ and $U_i$ are all steeper than each other, it follows that either $w \equiv p_i$ or $w \equiv U_i$. The latter is not possible because $U_i$ is not stationary by assumption. Hence, $w \equiv p_i$. Using the same method as in the previous section, one can construct a minimal terrace $\mathcal{T}'$ including the platform $p_i$, so that any traveling wave below $p_i$ moves with nonnegative speed, and any wave above $p_i$ moves with nonpositive speed. By uniqueness, one immediately gets that $\mathcal{T}$ and $\mathcal{T}'$ are the same terrace up to time shifts, thus $i$ is the unique index such that $c_k < 0$ for all $k \leq i$ and $c_k >0$ for all $k>i$. In particular, the limit $w$ does not depend on the choice of the sequence $t_j \rightarrow +\infty$, so that $u(t,x)$ converges locally uniformly to $p_i$ as $t \rightarrow +\infty$. \ Let us now go back to the proof of Theorem \[th:CV\]. Take, for any $k$, $\alpha_k \in (p_k (0),p_{k-1} (0))$. Then define, for any $j \in \mathbb{N}$: $$\tau_j (\alpha_k) = \left\{ \begin{array}{l} \min \{ t \geq 0 \; | \ u (t,jL) =\alpha_k \} \mbox{ if } k > i\vspace{3pt},\\ \min \{ t \geq 0 \; | \ u (t,-jL) =\alpha_k \} \mbox{ if } k\leq i. \end{array} \right.$$ Note that those are well-defined, at least for $j$ large enough, from the assumption that $u_0 - p \rightarrow 0$ as $x \rightarrow -\infty$ and $u_0 \rightarrow 0$ as $x \rightarrow +\infty$, along with the locally uniform convergence to $p_i$ proved above. As in the previous section, we know that the following $\omega$-limit $$w_\infty (t,x;\alpha_k) = \lim_{j\rightarrow +\infty} u(t+\tau_j (\alpha_k),x+jL)$$ is well-defined. Moreover, it is steeper than $U_k$ and conversely (recall that $U_k$ is steeper than other entire solutions). As $U_k$ moves with nonzero speed, there exists some time $t$ such that $U_k (t,0)=\alpha_k = w_\infty (0,0;\alpha_k)$.
We conclude that $U_k \equiv w_\infty (\cdot,\cdot;\alpha_k)$ up to some time shift, that is, without loss of generality: $$U_k (t,x) = \lim_{j\rightarrow +\infty} u(t+\tau_j (\alpha_k),x+jL),$$ where the convergence holds locally uniformly in time and space.\ We can now proceed to the proof of the desired convergence. The proof is the same as in [@DGM], but we include it for the sake of completeness. Let us first deal with the locally uniform convergence to the pulsating traveling waves $U_k$ with $c_k \neq 0$ along the moving frames with speed $c_k$ and some sublinear drifts. Fix some $k$ such that $c_k >0$ and, for any large $t$, define $j(t) \in \mathbb{N}$ such that $$j(t) \frac{L}{c_k} \leq t < \left(j(t) +1\right) \frac{L}{c_k},$$ and $m_k (t)$ the piecewise affine function interpolating the values $$m_k (t)= \tau_{j(t)} (\alpha_k) -t \ \ \text{ at the times } \ t =j(t)\frac{L}{c_k}.$$ Recalling , the sequence $\left\{\frac{\tau_{j} (\alpha_k)}{j}\right\}_j$ converges to $\frac{L}{c_k}$, so that $m_k(t) = o(j(t))=o(t)$ as $t \to +\infty$. Furthermore, since $$U_k (t,x)= w_\infty (t,x;\alpha_k) = \lim_{j \rightarrow +\infty} u (t+\tau_{j} (\alpha_k), x +jL)$$ where the convergence is understood to hold locally uniformly with respect to $(t,x)\in\mathbb R^2$, and since $t+m_k (t)-\tau_{j(t)} (\alpha_k) \sim (t-j(t) \frac{L}{c_k}) = O_{t \rightarrow +\infty} (1)$ and $x+c_k t - j(t) L$ stay bounded, one can check that $$u (t+m_k (t),x+c_k t) - U_k \left(t-j(t) \frac{L}{c_k},x - j(t)L +c_k t \right) \to 0 \ \ \text{ as } \ t \to +\infty.$$ Thus, $$u (t,x+c_k (t-m_k (t))) - U_k \left( t- m_k (t),x+c_k (t -m_k (t) )\right) \to 0 \ \ \text{ as } \ t \to +\infty,$$ wherein both of the above convergences hold locally uniformly with respect to $x \in {\mathbb{R}}$. The case $c_k <0$ can be dealt with in the same way.\ It now remains to consider what happens “outside" of the moving frames with speed $(c_k)_{1 \leq k \leq N}$.
We first claim the following monotonicity property: $$\label{eqn:mon} u (t,x) \geq u (t,x+L), \mbox{ for any } x \in {\mathbb{R}}\mbox{ and } t \geq 0.$$ From the periodicity of the equation and the parabolic comparison principle, one only has to check the above inequality for $t =0$. Fix any $x \in {\mathbb{R}}$, and let $k$ be such that either $u_0 (x)=p_k (x)$ or $u_0 (x) = U_k (t,x)$ for some $t \in {\mathbb{R}}$. Since $u_0$ is steeper than $p_k$ and $U_k$, one has that either $$u_0 (x+L) \leq p_k (x+L)= p_k (x) = u_0 (x),$$ or $$u_0 (x+L) \leq U_k (t,x+L) \leq U_k (t,x)=u_0 (x).$$ We conclude, as announced, that the inequality is satisfied. Let us go back to the proof of convergence “outside" the moving frames. We first look near $-\infty$, that is, when $x+ c_1 (t-m_1 (t)) \to -\infty$. In that case, we use the fact that $$\label{eqn:asymp0} \lim_{x \rightarrow -\infty} U_1 (t,x+c_1t) - p(x) = 0,$$ uniformly with respect to time. Fix any small $\delta >0$, and choose $x_\delta$ such that for all $t$: $$p(x)-\frac{\delta}{2} \leq U_1(t-m_1 (t),x+c_1 (t-m_1 (t))) \leq p(x) \text{ for all } x\leq -x_\delta +L.$$ Now let $t$ be large enough so that for $x\in [-x_\delta , x_\delta]$, we have that $$| u (t,x+c_1 (t-m_1 (t))) - U_1 (t-m_1 (t),x+c_1 (t-m_1 (t)))| \leq \frac{\delta}{2}.$$ Then, using , one gets for all large $t$: $$p(x)-\delta \leq u (t,x+c_1 (t-m_1 (t))) \leq p(x) \text{ for all } x\leq -x_\delta +L.$$ One can proceed similarly to get that for any $\delta >0$, there exists $C$ such that for any $x \geq C$, $$| u (t,x+c_N (t-m_N (t))) | \leq \delta.$$ Lastly, fix any integer $1 \leq k \leq N$. Then $$\lim_{x \rightarrow +\infty} U_k (t,x+c_k t) - p_k (x) =0 \mbox{ and } \lim_{x \rightarrow -\infty} U_{k+1} (t,x+c_{k+1}t ) - p_k (x) =0,$$ where the convergences are uniform in time.
As above, one can use to show that there exists $C$ such that for large time $t$: $$p_k(x)+\delta \geq u (t,x+c_k (t-m_k (t)))\text{ for all } x\geq C.\vspace{3pt}$$ $$p_k (x)-\delta \leq u (t,x+c_{k+1} (t-m_{k+1} (t))) \text{ for all } x\leq -C.$$ This ends the proof of Theorem \[th:CV\]. In general, even when zero speeds may occur, there still exists some minimal and steepest terrace thanks to our existence Theorem \[th:exists\]. However, it is no longer unique. Although one could proceed as above to get a similar convergence result on the level sets with positive or negative speeds, the zero speed case is much more difficult to describe and would involve a wider range of entire solutions which may not be stationary waves.

Minimality of the speeds of the minimal terrace {#sec:minspeed}
===============================================

In this section, we first show a theorem of independent interest, namely Theorem \[th:lem\_speeds\] below, stating that the steepest traveling wave is also the slowest under some strong stability assumption on the upper state. We will then give as a corollary the minimality of the speeds of the minimal terrace. Furthermore, as we will see in the next section, most of our uniqueness properties of terraces will also follow from Theorem \[th:lem\_speeds\].

Steepness and minimality of the speeds
--------------------------------------

Let us state the fundamental theorem of this section: \[th:lem\_speeds\] Let $U_1$ and $U_2$ be two traveling waves connecting respectively $q_1$ and $q_2$ to $q$. Assume that there exist $\delta >0$ and $g$ such that $\mu_g \leq 0$ and $\partial_u f(x,u) \leq g$ for all $x \in {\mathbb{R}}$ and $u \in [q (x) - \delta, q(x)]$, and that $U_1$ is steeper than $U_2$. Then $c_1 \leq c_2$, where $c_1$ and $c_2$ are the speeds of respectively $U_1$ and $U_2$. Moreover, if $c_1 = c_2 \neq 0$, then $U_1 \equiv U_2$ up to some time shift.
This means that, whenever the upper state is strongly stable in some sense (including but not limited to linear stability), the steepest traveling wave is the slowest and, if its speed is not zero, it is even the unique slowest traveling wave. Moreover, one can also easily check, by turning the problem upside down (more precisely, by looking at the traveling waves $-U_1 (t,-x)$ and $-U_2 (t,-x)$), that if it is the lower state which is strongly stable, then the steepest traveling wave is the fastest. Therefore, in the bistable case, this already generalizes to the periodic framework the uniqueness of the traveling wave, which was well known for the homogeneous problem [@AW78; @FifMcL77].\ Before we prove Theorem \[th:lem\_speeds\], we immediately apply it to minimal terraces to get that, since they are the steepest, they are also the slowest in some sense: \[th:min\_speeds\] Let Assumption \[assumption-speed1\] be satisfied. Assume that there exists a minimal terrace $\mathcal{T}=((p_k)_k,(U_k)_k)$ and, for each $k$, let $c_k$ be the speed of $U_k$. Then:

1. If $c_k >0$, then $U_k$ is the unique traveling wave with minimal speed among all traveling waves connecting some $q < p_{k-1}$ to $p_{k-1}$.
2. If $c_k <0$, then $U_k$ is the unique traveling wave with maximal speed among all traveling waves connecting $p_{k}$ to some $q> p_k$.
3. If $c_k =0$, then $U_k$ is a traveling wave with either minimal or maximal speed among all traveling waves connecting $p_{k}$ to $p_{k-1}$.

Let us first note that, when $c_k \neq 0$, the traveling wave $U_k$ is not only steeper than any other traveling wave connecting $p_{k}$ to $p_{k-1}$, but also than any other entire solution of , and in particular than any traveling wave connecting some $q < p_{k-1}$ to $p_{k-1}$, or $p_k$ to some $q > p_k$. Indeed, we know from Theorem \[th:exists\] that there exists a minimal terrace $\mathcal{T}' = ((p_k)_k,(U'_k)_k)$ such that each $U'_k$ is steeper than any other entire solution.
Up to some time shift and as $c_k \neq 0$, we can assume without loss of generality that $U_k$ and $U'_k$ intersect and, as they are steeper than each other, we get that $U'_k \equiv U_k$. As announced, this implies that $U_k$ is steeper than any other entire solution. Before going back to the proof of Theorem \[th:min\_speeds\], let us note that the role of Assumption \[assumption-speed1\] is to ensure that the hypotheses of Theorem \[th:lem\_speeds\] are satisfied. This is the subject of the following claim: \[claim:42\] If there exists a pulsating traveling wave connecting some $q_2$ to $q_1$ with positive speed, then $q_1$ is isolated and stable from below. We briefly sketch the proof, which is similar to that of Lemma 4.3 in [@DGM]. We first define, for any $\lambda >0$, $\mu (\lambda)$ as the principal eigenvalue of $$\left\{ \begin{array}{rcl} -\partial_x (a \partial_x \phi_\lambda ) + 2 \lambda \partial_x \phi_\lambda - \frac{\partial f}{\partial u} (x, q_1 (x)) \phi_\lambda & = & \mu (\lambda) \phi_\lambda \ \mbox{ in } \mathbb{R},\vspace{3pt}\\ \phi_\lambda >0 \ \mbox{ and periodic.} \end{array} \right.$$ By adapting a formula from Nadin [@Nadin10], one obtains that $$\begin{array}{rcl} \mu (\lambda ) &=& \displaystyle \min_{\eta \in C^1_{per} , \eta >0} \frac{1}{\int_0^L \eta^2 } \left( \int_0^L a \eta'^2 - \int_0^L \frac{\partial f}{\partial u} (x,q_1) \eta^2 \right. \vspace{3pt} \\ & & \displaystyle \hspace{3cm} \left. + \lambda^2 \bigg( \int_0^L a \eta^2 - \frac{L^2}{\int_0^L a^{-1} \eta^{-2}} \bigg)\right). \end{array}$$ In particular, it follows that $\mu (\lambda) - \mu (0) = O (\lambda^2)$ for small enough $\lambda$. Now proceed by contradiction and assume that $q_1$ is not isolated from below. Then there exists a sequence of stationary solutions $r_j \leq q_1$, which tends to $q_1$ as $j \to +\infty$. This implies that $\mu (0)=0$ ($q_1$ is a degenerate equilibrium), thus $\mu (\lambda)= O (\lambda^2)$ for small $\lambda$.
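The quadratic behavior $\mu(\lambda)-\mu(0) = O(\lambda^2)$, together with the evenness of $\mu$ and its monotonicity in $|\lambda|$ (both visible in the variational formula above), can also be checked numerically. In the sketch below, the diffusion $a(x)$ and the potential, which stands in for $\partial_u f(x,q_1(x))$, are hypothetical choices:

```python
import numpy as np

# Principal eigenvalue mu(lambda) of  -(a phi')' + 2 lambda phi' - V phi
# with L-periodic boundary conditions, by finite differences.  Both a(x)
# and V(x) (standing in for d_u f(x, q_1(x))) are hypothetical choices.

L, N = 1.0, 200
dx = L / N
xg = np.arange(N) * dx
a_half = 1.0 + 0.5 * np.cos(2 * np.pi * (xg + dx / 2))  # a at midpoints
V = np.cos(2 * np.pi * xg)

def mu(lam):
    A = np.zeros((N, N))
    for i in range(N):
        ip, im = (i + 1) % N, (i - 1) % N
        A[i, i] = (a_half[i] + a_half[im]) / dx**2 - V[i]
        A[i, ip] = -a_half[i] / dx**2 + lam / dx   # central-difference drift
        A[i, im] = -a_half[im] / dx**2 - lam / dx
    # the off-diagonal entries are negative, so A is a shifted M-matrix and
    # its eigenvalue of minimal real part is real: the principal eigenvalue
    return np.linalg.eigvals(A).real.min()

mu0 = mu(0.0)
ratio = (mu(0.2) - mu0) / (mu(0.1) - mu0)   # close to 4 for an O(lambda^2) law
```

In this discretization the evenness is structural, since the matrix for $-\lambda$ is the transpose of the matrix for $\lambda$; the ratio $(\mu(2h)-\mu(0))/(\mu(h)-\mu(0))$ comes out close to $4$, as expected from an expansion $\mu(\lambda) \approx \mu(0) + C\lambda^2$.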
It is then straightforward to check that, for $j$ large enough and $c$ arbitrarily small, one can construct a supersolution of of the type $$\min \{ q_1 (x) , e^{-\lambda (x-ct)} \phi_\lambda (x) + r_j (x) \},$$ where $\lambda >0$. This easily contradicts the fact that there exists a pulsating traveling wave connecting some $q_2$ to $q_1$ with positive speed. A similar argument leads to the same contradiction if $q_1$ is unstable from below. The claim is proved. \ We now return to the proof of Theorem \[th:min\_speeds\]. Thanks to Claim \[claim:42\] and Assumption \[assumption-speed1\], we can apply Theorem \[th:lem\_speeds\] to immediately conclude that statement $(i)$ of Theorem \[th:min\_speeds\] holds. Statement $(ii)$ is proved similarly, by applying the same argument to $-U_k (t,-x)$. Statement $(iii)$ follows from the fact that if there existed two traveling waves $V_-$ and $V_+$ connecting $p_k$ to $p_{k-1}$ with speeds $c_- <0$ and $c_+>0$, then $p_k$ would be isolated and stable from above, and $p_{k-1}$ isolated and stable from below. Using Assumption \[assumption-speed1\] and Theorem \[th:lem\_speeds\], we would then get that $c_k$ is both the minimal and the maximal speed among traveling waves connecting $p_k$ to $p_ {k-1}$, which is a clear contradiction. This ends the proof of Theorem \[th:min\_speeds\].

Proof of Theorem \[th:lem\_speeds\]
-----------------------------------

We begin by assuming that either $c_2 < c_1$, or $c_2 = c_1 \neq 0$.
Up to some time shift, thanks to the asymptotics of $U_1$ and the characterization of pulsating traveling waves, one can find $0 <\delta' < \delta$ and $x_0$ such that $$U_1 (t,x) \in \left[q (x)-\delta,q (x) \right] \; \mbox{ for all } x \leq x_0 + c_1 t,$$ $$U_1 (t,x) \in \left[q (x)-\delta,q (x) - \delta' \right] \; \mbox{ for all } x = x_0 + c_1 t.$$ Let us also assume that $$U_2 (t,x) \in \left[q (x)-\frac{\delta'}{2},q (x) \right] \; \mbox{ for all } t \leq 0 \; \mbox{ and } x \leq x_0 + c_1 t .$$ This clearly holds whenever $c_1 \geq c_2 \neq 0$, up to some time shift of $U_2$. When $c_2 = 0 < c_1$, one can achieve the same inequality without loss of generality, by shifting $U_1$ in time so that $x_0$ is a large enough negative number. We have in particular that, for any $t \leq 0$, $$\label{compx0} U_1 (t,x_0+c_1 t) \leq U_2 (t,x_0+c_1 t) - \frac{\delta'}{2}.$$ We now place ourselves in the moving frame with speed $c_1$, and define $$\eta (t,x) := U_2 (t, x +c_1 t) - U_1 (t,x+c_1 t).$$ One can check from the above that the function $\eta$ satisfies: $$\left\{ \begin{array}{rl} \displaystyle \partial_t \eta = \partial_{x} (a (x) \partial_x \eta ) + c_1 \partial_x \eta + h(t,x) \eta, & \forall (t,x) \in (-\infty,0) \times (-\infty,x_0), \vspace{5pt}\\ \displaystyle \eta (t,x_0) \geq \frac{\delta'}{2} >0, & \forall t \in (-\infty,0),\vspace{5pt}\\ \displaystyle \lim_{x \rightarrow -\infty} \eta (t,x) = 0, & \forall t \in (-\infty,0), \end{array} \right.$$ where $$\begin{aligned} h(t,x)& := & \left\{ \begin{array}{lc} \displaystyle \frac{ f(x+c_1 t,U_2 (t, x +c_1 t)) -f(x+c_1 t,U_1 (t, x +c_1 t))}{\eta (t,x)} & \mbox{ if } \eta (t,x) \neq 0 , \vspace{5pt}\\ \displaystyle \partial_u f(x+c_1 t,U_1 (t,x+c_1 t))& \mbox{ if } \eta (t,x) =0 . \end{array}\right.\end{aligned}$$ We first proceed by contradiction and assume that $c_2 < c_1$.
Therefore, we have that $$\liminf_{t \rightarrow -\infty} \inf_{x \leq x_0} \eta (t,x) \geq 0.$$ Recall that $\mu_g \leq 0$ and $\phi$ are respectively the principal eigenvalue and principal eigenfunction of the periodic problem: $$\left\{ \begin{array}{l} \displaystyle \partial_{x} (a \partial_x \phi ) +g \phi = \mu_g \phi \ \mbox{ in } {\mathbb{R}}, \vspace{5pt}\\ \phi >0 \mbox{ and $L$-periodic}. \end{array} \right.$$ It follows from our assumptions and the choice of $x_0$ that $h(t,x) \leq g (x+c_1 t)$ for any $t \leq 0$ and $x \leq x_0$. Then for any $\kappa >0$, the function $\psi (t,x) := - \kappa \phi (x+c_1 t) <0$ satisfies: $$\begin{aligned} && \displaystyle \partial_t \psi - \partial_{x} (a \partial_x \psi) - c_1 \partial_x \psi - h(t,x) \psi \vspace{3pt}\\ & \leq & \displaystyle \partial_t \psi - \partial_{x}(a \partial_x \psi ) - c_1 \partial_x \psi - g(x+c_1 t) \psi \vspace{3pt}\\ & \leq & \displaystyle \kappa \mu_g \phi (x+c_1 t) \vspace{3pt}\\ & \leq & 0,\end{aligned}$$ hence it is a subsolution of the equation satisfied by $\eta$. Furthermore, as $\phi$ is continuous, positive and periodic, and since $\liminf_{t \rightarrow -\infty} \inf_{x \leq x_0} \eta (t,x) \geq 0$, one can find, for any choice of $\kappa >0$, a time $T <0$ such that $$\eta (T,x) \geq -\kappa \min \phi, \ \ \forall x \leq x_0.$$ It then follows, from the parabolic maximum principle and the fact that $\eta (t,x_0) \geq 0$ for all $t \leq 0$, that $$\eta (0,x) \geq -\kappa \max \phi, \ \ \forall x \leq x_0.$$ Since this holds for any $\kappa >0$, we get that $ 0 \leq \eta (0,x) \mbox{ for all } x \leq x_0$ and, since $\eta (0,x_0) >0$, one has by the strong maximum principle that $ \eta (0,x)>0 \mbox{ for all } x \leq x_0.$ Since $U_1$ is steeper than $U_2$, $\eta (0,\cdot)$ must be nonpositive on the left of any zero. Therefore, for all $x \in {\mathbb{R}}$, $\eta (0,x) = U_2 (0,x) - U_1 (0,x) >0$.
Moreover, from the parabolic comparison principle, we get that for any $t \geq 0$ and $x \in {\mathbb{R}}$, $$U_2 (t,x) \geq U_1 (t,x).$$ This clearly implies that $U_2$ has to be faster than $U_1$, that is $c_2 \geq c_1$, and we have reached a contradiction.\ Consider now the case $c_1 = c_2 \neq 0$. The function $\eta (t,x) := U_2 (t, x +c_1 t) - U_1 (t,x+c_1 t)$ is now periodic in time and converges to 0 as $x \rightarrow -\infty$ uniformly with respect to $t$. Assume by contradiction that $\eta (t'_0, x'_0) <0$ for some $t'_0 \in {\mathbb{R}}$ and $x'_0 \in {\mathbb{R}}$. Without loss of generality and by periodicity, we can of course assume that $t'_0 <0$ and, as explained above, as $U_1$ is steeper than $U_2$ we also have that $x'_0 < x_0$. Then, one can define $$\kappa^* := \min \{ \kappa > 0 \; | \ \eta (t,x) \geq -\kappa \phi (x+c_1 t) \mbox{ for all } t \leq 0 \mbox{ and } x\leq x_0\} >0 ,$$where $\phi$ is again a positive and $L$-periodic function such that $$\partial_{x} (a \partial_x \phi )+ g \phi = \mu_g \phi.$$ Indeed, this set is non empty from the boundedness of $\eta$, and the minimum is reached since $\eta (t'_0, x'_0) <0$ with $t'_0<0$ and $x'_0 < x_0$. Furthermore, it is clear that $\eta (t,x) \geq -\kappa^* \phi (x+c_1 t)$ for all $(t,x) \in (-\infty,0] \times (-\infty,x_0]$, with equality for some $(t_1,x_1) \in (-\infty,0] \times (-\infty,x_0)$. Applying the strong maximum principle, we get that $\eta (t,x) \equiv -\kappa^* \phi (x+c_1 t)$, which contradicts the fact that it goes to 0 as $ x \rightarrow -\infty$ for any $t \leq 0$. We conclude that $\eta \geq 0$. That is, we can assume, up to some time shift and without loss of generality, that $U_1 \leq U_2 $. Assume that $c_1 >0$. From the asymptotics of $U_1$, it is clear that for any time shift $\tau$ large enough, we have that $U_1 (\tau,0) > U_2 (0,0)$. 
It follows that the following time shift is well defined: $$\tau^* := \sup \left\{ \tau \geq 0 \; | \ U_1 (\tau , \cdot ) \leq U_2 (0,\cdot) \right\} < + \infty.$$ When $c_1 < 0$, one instead has that $U_1 (0,0) > U_2 (\tau,0)$ for large enough $\tau$, so that one may rather define $\tau^*$ as the largest nonnegative $\tau$ such that $U_1 (0,\cdot) \leq U_2 (\tau,\cdot)$. The proof then proceeds similarly as below, so we omit the details of the case $c_1 <0$. Let us first note that, by continuity, $U_1 (\tau^* , \cdot ) \leq U_2 (0,\cdot).$ Our aim is to show that we in fact have $$\label{eqn:tauabove} U_1 (\tau^* , \cdot ) \equiv U_2 (0,\cdot).$$ It will then immediately follow that $U_1$ is identically equal to $U_2$ up to the time shift $\tau^*$. Let us argue by contradiction and assume that there exists $x_1 \in {\mathbb{R}}$ such that $U_1 (\tau^* , x_1 ) < U_2 (0,x_1)$. One can then easily check that $$\label{eqn:tauabovecontrad1} U_1 (t+\tau^*, x) < U_2 (t,x) \mbox{ for all } (t,x)\in {\mathbb{R}}^2.$$ This indeed follows from the periodicity in time of $U_1 (t+\tau^*,x+c_1t) - U_2 (t,x+c_1t)$ and the strong maximum principle. It is also clear, by continuity, that for $\tau > \tau^*$ but close enough to $\tau^*$, one has that $U_1 (\tau, x_1) < U_2 (0,x_1)$. Besides, by construction of $\tau^*$, for any $\tau > \tau^*$, there also exists $x_2 \in {\mathbb{R}}$ such that $U_1 (\tau,x_2) > U_2 (0,x_2)$. Therefore, for some $\epsilon >0$ we have that $U_1 (\tau,\cdot)$ and $U_2 (0,\cdot)$ intersect at least once for all $\tau \in (\tau^* , \tau^* +\epsilon)$. Because $U_1$ is steeper than $U_2$, and from the periodicity in time of $U_1 (t+\tau,x+c_1t) - U_2 (t,x+c_1t)$, there can only be one (non degenerate) intersection, for each time $t \in {\mathbb{R}}$, of $ U_1 (t+\tau,\cdot)$ and $U_2 (t, \cdot)$.
Hence, we can define, for any $\tau \in (\tau^*, \tau^* + \epsilon)$, the real-valued function $x (\tau,t)$, which is periodic in its second variable and is such that $$U_1 (t+\tau,x (\tau,t) +c_1t ) = U_2 (t,x (\tau,t)+c_1t ),$$ that is the only intersection of $U_1 (t+\tau,\cdot)$ and $U_2 (t,\cdot)$.\ Let us now look at the behavior of $x (\tau,t)$ as $\tau \rightarrow \tau^*$. Consider first the case when there exist two sequences $\tau_j \rightarrow \tau^*$ and $t_j$ such that $x (\tau_j,t_j)$ converges to some $x (\tau^*) \in {\mathbb{R}}$. From the periodicity of $x(\tau,t)$ in the variable $t$, we can assume up to extraction of a subsequence that $t_j \rightarrow t_\infty \in {\mathbb{R}}$ as $j \rightarrow +\infty$. It immediately follows by passing to the limit as $j \rightarrow +\infty$ that $$U_1 (t_\infty +\tau^*,x (\tau^*)+c_1 t_\infty)= U_2 (t_\infty,x(\tau^*)+c_1 t_\infty).$$ We have reached a contradiction with (\[eqn:tauabovecontrad1\]). Consider now the case when there exist two sequences $\tau_j \rightarrow \tau^*$ and $t_j$ such that $x (\tau_j,t_j) \rightarrow +\infty$ as $j\rightarrow +\infty$. Again, we can assume without loss of generality that $t_j \rightarrow t_\infty \in {\mathbb{R}}$ as $j \rightarrow +\infty$. As $U_1$ is steeper than $U_2$, we know that $U_1 (t_j+ \tau_j,\cdot)$ is above $U_2 (t_j, \cdot)$ on the left of the point $x(\tau_j,t_j) +c_1 t_j$, which goes to $+\infty$ as $j \rightarrow +\infty$. Thus by passing to the limit as $j \rightarrow +\infty$, we get $$U_1 (t_\infty+ \tau^* , \cdot) \geq U_2 (t_\infty,\cdot),$$ which again is a clear contradiction with (\[eqn:tauabovecontrad1\]). The only remaining case is $x (\tau,t) \rightarrow -\infty$ as $\tau \rightarrow \tau^*$, uniformly with respect to its second variable. We proceed similarly as before. 
Indeed, choose $0 < \delta' < \delta$ and $x_0 \in {\mathbb{R}}$ such that for any $\tau \in (\tau^*, \tau^*+\epsilon)$, $$U_1 (t+\tau,x) \in \left[q (x)-\delta,q (x) \right] \; \mbox{ for all } x \leq x_0 + c_1 t,$$ $$U_1 (t+\tau,x) \in \left[q (x)-\delta,q (x) - \delta' \right] \; \mbox{ for all } x = x_0 + c_1 t.$$ Let also $x_1$ be such that for all $t \in \mathbb{R}$, $$\label{eqn:ineq42} U_2 (t,x) \in \left[q (x)-\delta,q (x) \right] \; \mbox{ for all } x \leq x_1 + c_1 t.$$ Up to reducing $\epsilon$, for any $\tau \in (\tau^*, \tau^* + \epsilon)$, one has $x (\tau,t) < \min \left\{ x_0 , x_1 \right \}$ for all $ t\in {\mathbb{R}}$. Thus, as $U_1$ is steeper, $$U_1 (t+\tau,x_0+c_1t) \leq U_2 (t,x_0+c_1 t),$$ and moreover, since $U_2$ lies above $U_1 (\cdot +\tau,\cdot)$ on the right of $x(\tau,t) + c_1 t$, and using the inequality (\[eqn:ineq42\]) on the left of the same point, one gets that $$U_2 (t,x) \in \left[q (x)-\delta,q (x) \right] \; \mbox{ for all } x \leq x_0 + c_1 t.$$ Similarly as before, the function $\eta(t,x)=U_2 (t,x+c_1t) - U_1 (t+\tau,x+c_1t)$ satisfies $$\left\{ \begin{array}{rl} \displaystyle \partial_t \eta = \partial_{x}( a \partial_x \eta ) + c_1 \partial_x \eta + h(t,x) \eta, & \forall (t,x) \in (-\infty,0) \times (-\infty,x_0), \vspace{5pt}\\ \displaystyle \eta (t,x_0) \geq 0, & \forall t \in (-\infty,0),\vspace{5pt}\\ \displaystyle \eta (t,x) \ \mbox{ is periodic in } \; t & \mbox{ and } \ \eta (t,x) \rightarrow 0 \ \mbox{ as } \; x \rightarrow -\infty, \end{array} \right.$$ where $h\leq g$ with $\mu_g \leq 0$. As before, we can find a positive and periodic function $\phi$ such that, for any $\kappa>0$, the function $-\kappa \phi (x+c_1t)$ is a subsolution of the equation above satisfied by $\eta$.
Again, it is clear that there exists $$\kappa^* := \min \{ \kappa > 0 \; | \ \eta (t,x) \geq -\kappa \phi(x+c_1 t) \mbox{ for all } t \leq 0 \mbox{ and } x\leq x_0\} >0.$$ Therefore, using the limiting conditions of $\eta$ near $-\infty$ and $x_0$, we get that $\eta (t,x) \geq - \kappa^* \phi (x+c_1 t)$, with equality for some $(t_1,x_1) \in (-\infty,0]\times(-\infty,x_0]$. From the strong maximum principle, it follows that $\eta (t,x) \equiv -\kappa^* \phi (x+c_1 t) < 0$ for all $t \leq 0$ and $x\leq x_0$, which is a contradiction. As all the above cases lead to a contradiction, we conclude that (\[eqn:tauabove\]) holds, that is, $U_2$ is identically equal to some time shift of $U_1$. This ends the proof of Theorem \[th:lem\_speeds\]. Uniqueness properties of terraces {#sec:uniqueness} ================================= We can now prove our uniqueness theorems. We already gave a brief proof of Theorem \[th:unique\_min\] in Section \[sec:intro\]. We therefore focus here on Theorems \[th:unique\_semi\] and \[th:unique\]. Semi-minimal terraces --------------------- Let us consider a semi-minimal terrace $$\mathcal{T}= \left( (p_k)_{0 \leq k \leq N},(U_k)_{1 \leq k \leq N} \right)$$ connecting 0 to $p$, such that all the speeds $c_k$ of the pulsating traveling waves $U_k$ are not zero. As the minimal terrace is unique when it has non zero speeds, it is enough to prove that $\mathcal{T}$ is minimal to get the first part of Theorem \[th:unique\_semi\].\ From Theorem \[th:exists\], we know that there exists a minimal terrace $\mathcal{T}'=((p'_k),(U'_k))$ connecting 0 to $p$, such that each $U'_k$ is steeper than any other entire solution. Our goal is to prove that $\mathcal{T}'$ and $\mathcal{T}$ share the same platforms, so that $\mathcal{T}$ is minimal (and then the $U_k$ and $U'_k$ even coincide up to some time shift, since $c_k \neq 0$). Proceed by contradiction and assume this is not the case.
Note first that the family $(p'_k)_k$ is included in the family $(p_k)_k$. Let $i \geq 1$ be the smallest integer such that $p_i \neq p'_i$. Then there clearly exists $j >i$ such that $$p'_{i-1}= p_{i-1} > p_i > p_j = p'_i .$$ One can now choose a real number $a$ large enough so that for all $x \in \mathbb{R}$: $$U'_i (0,x) < p_i + H(a-x) (p_{i-1} - p_i) =:r(x).$$ Denote by $R(t,x)$ the solution of the equation with initial datum $r(x)$. It is clear from our definitions that the minimal terrace connecting $p_i$ to $p_{i-1}$ is reduced to the single traveling wave $U_i$ with speed $c_i \neq 0$. From our Theorem \[th:CV\], we thus know that $R(t,x)$ converges to $U_i$ and spreads with the speed $c_i$. Since $U'_i (t,x) < R (t,x)$ for all $x \in {\mathbb{R}}$ and $t \geq 0$, it follows that $c_i \geq c'_i$, where $c'_i$ is the speed of $U'_i$. One can then proceed similarly to get that $c'_i \geq c_j$, where $c_j$ is the speed of $U_j$, the $j$-th front of the terrace $\mathcal{T}$. Hence, $c_j \leq c_i$. Under statement $(a)$ of Theorem \[th:unique\_semi\], we have already reached a contradiction, and we conclude that $\mathcal{T}$ is a minimal terrace. In general, from the definition of a terrace, we only get that $c'_i=c_i = c_j$. Furthermore, as the speeds of $\mathcal{T}$ are non zero, we have either $c'_i=c_i = c_j >0$ or $c'_i = c_i = c_j < 0$. Assume that the former holds true. As before, it is then known that $p_{i-1}$ is isolated and stable from below (see the proof of Theorem \[th:min\_speeds\]). Under statement $(b)$, one can use Assumption \[assumption-speed1\] and Theorem \[th:lem\_speeds\], as well as the fact that $U'_i$ is steeper than $U_i$, to get that $U'_i \equiv U_i$ up to some time shift. This is a contradiction as the two fronts do not share the same asymptotics, namely $$U_i (-\infty,\cdot) = p_i (\cdot) > p'_i (\cdot) = U'_i (-\infty, \cdot).$$ The other case can be dealt with in the same way.
Having reached a contradiction, we get that $U'_i \equiv U_j$, and we can again conclude that $\mathcal{T}$ is a minimal terrace.\ We now check that there is no other semi-minimal propagating terrace. Let $\mathcal{T}'' =((p''_k) , (U''_k))$ be any other semi-minimal terrace, and assume first that there is no component with zero speed. The argument above then applies, so that $\mathcal{T}''$, $\mathcal{T}'$ and $\mathcal{T}$ are one and the same terrace (up to time shifts), and thus $\mathcal{T}$ is the unique semi-minimal terrace. Let us now proceed by contradiction and assume that $c''_i =0$ for some $i$. Since $\mathcal{T}$ is minimal, there exists some $j$ such that $$p_j \leq p''_i \leq p''_{i-1} \leq p_{j-1}.$$ Furthermore, there exist two integers $i_1 < i$ and $i_2 \geq i$ such that $$p''_{i_1}=p_{j-1} \mbox{ and } p''_{i_2} = p_{j}.$$ Then the solutions of the equation with initial data $$\begin{aligned} && p''_{i_1 +1} + H(1-x) (p''_{i_1} -p''_{i_1 +1}), \vspace{3pt} \\ && p_{j} + H(-x) (p_{j-1} - p_j ),\vspace{3pt} \\ && p''_{i_2 } + H(-1-x) (p''_{i_2 -1} -p''_{i_2}),\end{aligned}$$ converge respectively, thanks to Theorem \[th:CV\], to $U''_{i_1 +1}$, $U_j$ and $U''_{i_2}$. By the comparison principle, it immediately follows that $c''_{i_2} \leq c_j \leq c''_{i_1 +1}$. Since $\mathcal{T}''$ is a terrace and $c''_i =0$, we have that $c''_{i_1 +1} \leq 0$ and $c''_{i_2} \geq 0$. Thus $c''_i = c_j =0$, which contradicts the fact that $\mathcal{T}$ has only non zero speeds. This ends the proof of Theorem \[th:unique\_semi\]. Uniqueness of the terrace ------------------------- We now prove Theorem \[th:unique\] and let $$\mathcal{T}= \left( (p_k)_{0 \leq k \leq N},(U_k)_{1 \leq k \leq N} \right)$$ be some terrace connecting 0 to $p$ with non zero speeds. Again, we first prove that it is minimal, then check that there is no other terrace.\ The proof is very similar to that of the previous subsection.
From Theorem \[th:exists\], we know that there exists a minimal terrace $\mathcal{T}'=\left( (p'_k)_{0 \leq k \leq N'},(U'_k)_{1 \leq k \leq N'}\right)$ connecting 0 to $p$, such that each $U'_k$ is steeper than any other entire solution. Our goal is again to prove that $\mathcal{T}'$ and $\mathcal{T}$ are the same terrace up to time shifts. As before, note first that the family $(p'_k)_{0 \leq k \leq N'}$ is included in the family $(p_k)_{0 \leq k \leq N}$. As each $U_k$ moves with speed $c_k \neq 0$, it follows that each $p'_k$ (for $1 \leq k \leq N'-1$) is isolated and stable from either above or below. Thanks to Assumption \[assumption-speed2\] and Theorem \[th:lem\_speeds\], we get that for each integer $1 \leq k \leq N'$, the speed $c'_k$ of $U'_k$ is minimal among the speeds of all traveling waves connecting some $q<p'_{k-1}$ to $p'_{k-1}$, but also maximal among the speeds of all traveling waves connecting $p'_k$ to some $q>p'_k$. In particular, for any $k$, there exist $i \leq j$ such that $c'_k \leq c_i$ and $c'_k \geq c_j$, $i$ and $j$ being chosen such that $U_i$ connects $p_i$ to $p_{i-1}=p'_{k-1}$ and $U_j$ connects $p_j=p'_{k}$ to $p_{j-1}$. Since $c_i \leq c_j$ by definition of a propagating terrace, we get that $c'_k =c_i =c_j$. As they cannot be zero, and using the second part of Theorem \[th:lem\_speeds\], we conclude that $i=j$ and $U_i \equiv U_j \equiv U'_k$ up to some time shifts. This immediately implies that $\mathcal{T}$ and $\mathcal{T}'$ are the same terrace up to time shifts.\ We now check that there is no propagating terrace with a zero speed component. Proceed by contradiction and assume that there exists some $\mathcal{T}'' =((p''_k) , (U''_k))$ such that $c''_i =0$ for some $i$.
As above, since we now know $\mathcal{T}$ to be minimal, there exist some integers $j$, $i_1$ and $i_2$ such that $$p''_{i_1}=p_{j-1} \geq p''_{i-1} \geq p''_i \geq p_j= p''_{i_2}.$$ Moreover, as we have proved that $\mathcal{T}$ is also the unique minimal terrace, $U_j$ is steeper than $U''_{i_1 +1}$ and $U''_{i_2}$. It also has non zero speed, and we can again use Assumption \[assumption-speed2\] to get that $U_j \equiv U''_{i_1 +1} \equiv U''_{i_2}$ up to time shifts. Thus, $i=i_2 = i_1 +1$ and $c_j =0$, which clearly contradicts our assumptions on $\mathcal{T}$. We conclude that any terrace has no zero speed and, by the argument above, is equal to the unique minimal terrace up to time shifts. Theorem \[th:unique\] is now proved. Discussion on the case with zero speeds {#sec:zero} ======================================= We now discuss the case when zero speeds occur, which was avoided in the statement of our uniqueness Theorems \[th:unique\_min\], \[th:unique\_semi\] and \[th:unique\]. Indeed, terraces can obviously no longer be unique up to time shifts if they involve stationary waves. However, one could still expect the platforms of terraces to be unique under some appropriate assumption similar to Assumption \[assumption-speed2\]. We will see that this may not be true either, even when all platforms are assumed to be linearly stable. We will end this section with a more precise statement, along with a sketch of its proof.\ Let us first look at the homogeneous case, that is, when the equation is invariant by space translation, and assume for simplicity that all platforms of any terrace are locally stable from both below and above. It was already proved in [@FifMcL77], using ODE-inspired proofs, that there exists a unique terrace up to some space shifts, whether zero speeds occur or not. In particular, platforms of terraces are also unique.
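In the classical homogeneous cubic case, this unique object is even explicit. As a purely illustrative reminder (with a normalization that is our assumption here, not taken from the works cited above), for $$\partial_t u = \partial_{xx} u + u(1-u)(u-a), \qquad 0 < a < \frac{1}{2},$$ the terrace connecting $0$ to $1$ reduces to the single front $$U(x-ct) = \frac{1}{1+e^{(x-ct)/\sqrt{2}}}, \qquad c = \frac{1-2a}{\sqrt{2}},$$ which is unique up to space shifts, in agreement with the uniqueness of platforms in the homogeneous case.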
Without giving the details, we point out that our proof of Theorem \[th:lem\_speeds\] can easily be extended, in the homogeneous framework, to stationary waves provided that we replace all time shifts by space shifts. In particular, we could use the same argument as in Section \[sec:uniqueness\] to reach the same conclusion as in [@FifMcL77], at least under a stronger assumption similar to Assumption \[assumption-speed2\]. More generally, the key idea is that if there exists a total foliation of stationary waves between any two platforms (which is given, in the homogeneous framework, by all space shifts of a single stationary wave), then all terraces share the same platforms. If not, one could find two distinct stationary waves intersecting each other arbitrarily close to a common upper stable state, and then apply the same method as in the second part of our proof of Theorem \[th:lem\_speeds\] to reach a contradiction.\ However, in the heterogeneous case, such a foliation does not always exist. What we know is the following: \[critical\] Let $q_1 < q_0$ be two stationary solutions of the equation. For any given $x_0 \in {\mathbb{R}}$ and $\alpha \in (q_1 (x_0), q_0 (x_0))$, there exists a monotonic in time entire solution $U_{x_0,\alpha}$ of the equation such that $U_{x_0,\alpha}(0,x_0)=\alpha$, which is steeper than any other entire solution lying between $q_0$ and $q_1$. This theorem was proved in [@Nadin12], where those steepest entire solutions were denoted as “critical waves". The proof is similar to the argument of Theorem \[th:exists\] in Section \[sec:existence\] except that, instead of looking at the large time behavior of one Heaviside type initial datum, the Heaviside type initial datum is shifted in order to pin the solution at the value $\alpha$ at the point $x_0$. Let us now assume that, for a given $x_0$ and $\alpha$, the critical wave $U_{x_0,\alpha}$ is not a spatially periodic solution of the equation.
It then connects, as $x \to \pm \infty$, two spatially periodic stationary solutions, which we still denote by $q_1$ and $q_0$ without loss of generality. Several situations may then occur. When the average speed of the level sets of this critical wave is not zero, then it is a pulsating traveling wave in the usual sense and, as explained before, it is the unique critical wave, up to time shifts, lying between $q_1$ and $q_0$. When the average speed of the level sets is zero and the equation is homogeneous, then it is a stationary wave, which is the unique critical wave up to space shifts. However, when the average speed of the critical wave is zero and the equation is heterogeneous, then the set of critical waves is ordered (by a straightforward steepness argument) but non trivial and may contain both stationary waves and monotonic in time entire solutions. More precisely, there may exist two critical stationary waves $U_{x,\alpha_1} < U_{x,\alpha_2}$ such that, for any $\alpha_1 < \alpha < \alpha_2$, the critical wave $U_{x,\alpha}$ is a monotonic in time entire solution which converges as $t \to \pm \infty$ to $U_{x,\alpha_1}$ and $U_{x,\alpha_2}$ (in one way or the other depending on the monotonicity). The occurrence of this new object in the dynamics leads to more complicated situations where non minimal terraces exist, even though all platforms are linearly stable. Indeed, let $q_0 > q_1 >q_2$ be three linearly stable states, such that there exists a two platforms semi-minimal terrace with zero speeds. We also assume that for some $\alpha_1 \in (q_1 (0),q_0 (0))$ and $\alpha_2 \in (q_2 (0), q_1 (0))$, the critical waves $U_{0,\alpha_1}$ and $U_{0,\alpha_2}$ (taken steeper than any entire solution lying between respectively $q_1$ and $q_0$, and $q_2$ and $q_1$) are respectively increasing and decreasing in time entire solutions.
Then, we know by the arguments developed in this paper that the solution $u$ associated with Heaviside type initial datum $q_2 (x) + H(-x) (q_0 - q_2)(x)$ converges locally uniformly to a stationary solution $U$, which is steeper than any other entire solution. We now claim that it is a stationary wave connecting $q_2$ to $q_0$. If not, then either $U \geq q_1$ or $U \leq q_1$ (as before, $U$ should be a part of a minimal terrace, whose platforms are included in the decomposition $(q_0, q_1, q_2)$). Assume the former occurs. As in Section \[sec:existence\], one can define for any large integer $n$ the smallest time $\tau_n$ such that $u$ reaches the value $ \alpha_2$ at the point $nL$ and time $\tau_n$, and then check by a steepness argument that $u$ converges locally uniformly around $(\tau_n, nL)$ to the decreasing in time function $U_{0,\alpha_2}$. However, by definition of $\tau_n$, one has that $\partial_t u (\tau_n,nL) \geq 0$, a contradiction. A similar argument can be performed when $U \leq q_1$, and as announced, $U$ is a stationary wave connecting $q_2$ to $q_0$. We conclude that the minimal terrace only has one platform, while our initial terrace had two.\ In our last example above, the speed of the upper critical wave was larger than the speed of the lower critical wave, in some sense to be made more rigorous below. Therefore, our initial terrace was not appropriate to describe propagation dynamics, although it matched our definition of a propagating terrace. This means that, whenever stationary waves appear, we need to distinguish between different situations. Let $q_1 < q_0$ be two periodic stationary solutions that are connected by some pulsating wave, and denote by $U_{x_0,\alpha}$ the critical waves between $q_1$ and $q_0$ as defined by Theorem \[critical\]. We will distinguish the following cases: 1.
There exist $x_0$ and $\alpha \in (q_1 (x_0),q_0 (x_0))$ such that the solution $U_{x_0,\alpha}$ is a pulsating traveling wave with speed $c \neq 0$ (it then does not depend on $x_0$ and $\alpha$, up to time shifts). We say that $q_1$ and $q_0$ are connected with critical speed $c$. Otherwise, there exists a stationary wave connecting $q_1$ to $q_0$, and one of the following holds: 1. For any $x_0$ and $\alpha \in (q_1 (x_0),q_0 (x_0))$, the solution $U_{x_0,\alpha}$ is a stationary wave. We say that $q_1$ and $q_0$ are connected with critical speed $0$. 2. There exist $x_0$ and $\alpha \in (q_1 (x_0),q_0 (x_0))$ such that the solution $U_{x_0,\alpha}$ is monotonically increasing in time and, for any $x'$ and $\alpha ' \in (q_1 (x'),q_0 (x'))$, the solution $U_{x',\alpha '}$ is never decreasing in time. We say that $q_1$ and $q_0$ are connected with critical speed $0^+$. 3. There exist $x_0$ and $\alpha \in (q_1 (x_0),q_0 (x_0))$ such that the solution $U_{x_0,\alpha}$ is monotonically decreasing in time and, for any $x'$ and $\alpha ' \in (q_1 (x'),q_0 (x'))$, the solution $U_{x',\alpha '}$ is never increasing in time. We say that $q_1$ and $q_0$ are connected with critical speed $0_-$. 4. There exist $x_0$ and $\alpha \in (q_1 (x_0),q_0 (x_0))$ such that the solution $U_{x_0,\alpha}$ is monotonically increasing in time, and another $x_1$ and $\beta \in (q_1 (x_1),q_0 (x_1))$ such that the solution $U_{x_1,\beta}$ is decreasing in time. We say that $q_1$ and $q_0$ are connected with critical speed $0^+_-$. We point out Theorem 1.7 in [@DHZ2] for a situation where two steady states are connected with critical speed $0^+_-$. Other cases may be constructed in a similar fashion. The discussion above leads us to order the zero speeds in the following way: for all $c >0$, $$- c < 0 < c \ \ \mbox { and } -c < 0_- < 0^+_- < 0^+ < c.$$ Note that the set of admissible speeds is no longer fully ordered.
We then formulate the theorem below: \[th:zero\_speed\_thm\] Assume that $q_0 >q_1 > q_2$ are three stationary solutions of the equation, and that there exist $\delta >0$ and $g$ an $L$-periodic function such that $\mu_g \leq 0$ and $$\partial_u f (x,u) \leq g \; \mbox{ for all } x \in {\mathbb{R}}, \ u \in \left[ q_i (x)-\delta, q_i (x)+\delta \right] \mbox{ and } i=0,1,2.$$ 1. If $q_1 < q_0$ are connected with critical speed $c \neq 0^+_-$, then there does not exist a non minimal terrace connecting $q_1$ to $q_0$. 2. Assume that $q_1< q_0$ and $q_2 < q_1$ are connected with critical speeds respectively $c_1$ and $c_2$, in the sense defined above. Then the minimal terrace has only one platform if and only if $c_1 > c_2$ or $c_1 =c_2 = 0^+_-$. We only give a brief sketch of the proof, which follows the ideas exposed in the discussion above.\ First, one may extend Theorem \[th:lem\_speeds\] to get that the speed of a critical wave has to be the slowest in a slightly stronger sense: if a critical wave is stationary or increasing in time, then any other monotonic in time entire solution that it intersects close enough to $q_1$ has to be either identically equal, or increasing in time. It follows that, if there exist both a critical stationary wave $U$ and a non minimal terrace $\mathcal{T}$ connecting $q_1$ to $q_0$, then $q_1$ and $q_0$ are connected with speed $c\leq 0^+_-$. Indeed, from our Theorem \[th:lem\_speeds\], one can easily check that any other propagating terrace connecting $q_1$ to $q_0$ has zero speeds only (this could also be proved using the convergence to critical waves from Heaviside type initial data). Then, the highest component of the non minimal terrace $\mathcal{T}$ provides some stationary wave $U_1$ connecting some $q'$ to $q_0$. By the extended Theorem \[th:lem\_speeds\] described above, any critical wave intersecting $U_1$ close enough to $q_0$ has to be decreasing in time, hence $q_1$ and $q_0$ are connected with speed $c \leq 0_-^+$.
One can get similarly, by looking close to $q_1$, that $c \geq 0^+_-$, thus $c=0^+_-$. This proves statement $(i)$ of Theorem \[th:zero\_speed\_thm\]. Assume now that $q_1< q_0$ and $q_2 < q_1$ are connected with critical speeds respectively $c_1$ and $c_2$. On one hand, if there exists a critical traveling wave connecting $q_2$ to $q_0$ with speed $c \neq 0$, then we already know that $c_1 > c > c_2$ by Theorem \[th:lem\_speeds\]. On the other hand, if there exists some critical stationary wave connecting $q_2$ to $q_0$, then it intersects, arbitrarily close to $q_0$, some critical wave $U_1$ connecting $q_1$ to $q_0$. By the extended Theorem \[th:lem\_speeds\], $U_1$ is increasing in time, hence $c_1 \geq 0_-^+$. Similarly, one gets that $c_2 \leq 0_-^+$. Thus, there can be a one platform minimal terrace with zero speed only if $c_2 \leq 0_-^+ \leq c_1$. Conversely, if $c_1 > c_2$ or $c_1 =c_2 = 0_-^+$, one can proceed as in the example given earlier in this section, looking at the large time behavior from some Heaviside type initial datum, to get that the minimal terrace has only one platform. A few examples {#sec:exemples} ============== In this work, we have displayed general results in order to describe propagating terraces, and to offer a wide and natural extension of the classical notion of traveling waves. In this section, we illustrate how those results can be applied to several examples, ranging from the standard bistable case to some multistable nonlinearities. Bistable case ------------- We first consider $f$ of the bistable type (see Figure \[fig:bistable\] for the typical homogeneous case). More precisely, 0 and $p$ are asymptotically stable, respectively from above and below, with respect to , and any other stationary solution $p_1$ of the equation is unstable. Moreover, by unstable we mean that any solution of the equation starting from either below or above $p_1$ diverges from $p_1$ (and thus, it converges to either 0 or $p$ as $t \to +\infty$).
We also make the technical assumption that for any such unstable solution $p_1$, any traveling wave connecting $p_1$ to $p$ has to be strictly faster than any traveling wave connecting $0$ to $p_1$. This assumption is satisfied, for instance, if any such $p_1$ is linearly unstable. ![Bistable nonlinearity[]{data-label="fig:bistable"}](bistable.eps){width=".6\textwidth"} We give the following result: \[th:bistable\] If $f$ is bistable in the sense above, then there exists a traveling wave connecting 0 to $p$. If moreover Assumption \[assumption-speed2\] holds, then there exists a unique admissible speed $c \in {\mathbb{R}}$ for pulsating traveling waves. When $c \neq 0$, then the profile of the pulsating traveling wave is unique up to time shifts. In [@DGM], we only proved the existence part under the additional assumption that propagation occurs to $p$ for some compactly supported initial datum. The full existence result was obtained in [@FZ11], which also dealt with more general discrete and continuous monotone semiflows that we do not consider here. In a similar spirit, consisting in looking at the bistable equation as a combination of two monostable parts with opposite speeds, we give an alternative proof which easily follows from our main results. Similar uniqueness results on bistable pulsating waves also recently appeared in the literature; we refer to [@BH12; @DHZ].\ Let us first prove that the spatially periodic and bistable (in the sense above) equation admits a pulsating traveling wave. Note first that an intermediate unstable solution of the equation necessarily exists [@Matano84]. Let $p_1$ be such a solution. Then $f$ is of the monostable type both between $p_1$ and $p$, and between 0 and $p_1$, which gives us a decomposition by [@Wein02], hence a minimal terrace by our Theorem \[th:exists\]. Moreover, this terrace is made of a single traveling wave connecting 0 to $p$.
Indeed, if not, it must be made of two pulsating traveling waves connecting respectively $0$ to $p_1$ and $p_1$ to $p$. However, as all traveling waves between 0 and $p_1$ have strictly slower speed than traveling waves between $p_1$ and $p$ (this is our bistable assumption; we also point out that the same hypothesis was made in [@FZ11]), this contradicts our definition of a terrace, namely the fact that speeds must be ordered. Assume now that Assumption \[assumption-speed2\] holds. This in particular includes the case when 0 and $p$ are linearly stable, or when $f$ is nonincreasing in some neighborhoods of both 0 and $p$. Then, thanks to Theorem \[th:unique\] and unless it has zero speed, the traveling wave we just constructed is also the only one connecting 0 to $p$. In other words, the bistable pulsating traveling wave is unique. When the traveling wave is stationary, repeating the same argument we may only infer that all terraces are made of a single stationary traveling wave: this means that there exists a unique admissible speed for bistable pulsating traveling waves, although there may be different profiles. \ Let us point out that the existence part of the above theorem is in fact a particular case of Proposition \[proposition\]. “Monostable + Bistable” and tristable cases ------------------------------------------- We now want to look at some two-platform cases. We assume that $f$ is of the bistable type between $p_1$ and $p$ for some positive and periodic stationary solution $p_1 < p$, and either bistable or monostable between 0 and $p_1$, as shown in Figure \[fig:tristable\]. We also assume that Assumption \[assumption-speed1\] holds. ![(left) Monostable + bistable nonlinearity;\ (right) Tristable nonlinearity[]{data-label="fig:tristable"}](mono_bi.eps){width="90.00000%"} Denote by $U_1$ and $U_2$ the unique pulsating waves with minimal speeds $c_1 \neq 0$ and $c_2 \neq 0$ of, respectively, the upper and lower parts.
As before, the existence of a decomposition immediately gives us the existence of a minimal terrace, whose platforms are included in the set $\{ 0, p_1, p \}$. One can check that the minimal terrace has only one platform if and only if $c_1 > c_2$. Indeed, from Theorem \[th:lem\_speeds\], the fronts $U_1$ and $U_2$ are also the steepest traveling waves on their respective intervals. Hence, if $c_1 \leq c_2$, they form a semi-minimal terrace which is also the unique minimal terrace by Theorem \[th:unique\_semi\]. On the other hand, we have just shown, in the bistable case, that there exists no traveling wave connecting $p_1$ to $p$ with some speed $c \neq c_1$. Therefore, if $c_1 > c_2$, there cannot exist a two-platform minimal terrace. In this case, the minimal terrace has only one platform and its single wave moves with some speed $c \in (c_2, c_1 ) \subset {\mathbb{R}}^+$ (from either Theorem \[th:lem\_speeds\] or \[th:CV\]). It is also the unique semi-minimal terrace by Theorem \[th:unique\_semi\]. Moreover, if the lower part is bistable, it follows from Theorem \[th:unique\] that there is no non-minimal terrace. On the other hand, if the lower part is monostable, then there is a continuum of admissible speeds $[c_2,+\infty)$ for traveling waves connecting 0 to $p_1$ and, in particular, there always exist non-minimal two-platform terraces connecting 0 to $p$. Note that in this particular case, choosing initial data with slower decay as $x \rightarrow +\infty$, one may construct solutions of the Cauchy problem converging to non-minimal terraces as $t \to +\infty$. Quadristable case ----------------- We conclude with a three-platform example. Let us assume that $f$ is quadristable, that is, there exist exactly four stable equilibria $p> p_1 > p_2>0$, and all of them are linearly stable, as in Figure \[fig:quadristable\].
![Quadristable nonlinearity[]{data-label="fig:quadristable"}](quadri.eps){width="90.00000%"} From the bistable case, we know that there exist three traveling waves $U_1$, $U_2$ and $U_3$ connecting respectively $p_1$ to $p$, $p_2$ to $p_1$ and 0 to $p_2$, which we now assume to move with nonzero speeds. If $c_1 \leq c_2 \leq c_3$, then they form a terrace, which is unique according to Theorem \[th:unique\]. If $c_1 > c_2$, then we have just shown that there exists a traveling wave connecting $p_2$ to $p$ with some speed $c$, so that we are back to the two-platform case. If $c_3 > c$, then the unique (minimal) terrace has two platforms, while if $c_3<c$, then the unique terrace only has one platform. The difficulty is that we do not know a priori what the speed $c$ is, only that $c \in (c_2,c_1)$ thanks to Theorem \[th:lem\_speeds\]. Thus, knowing the speeds $c_1$, $c_2$ and $c_3$ is not enough to conclude in general. The case $c_3 < c_2$ can be treated similarly.\ The argument described above sketches how, by iterating, one can consider more and more complex nonlinearities (even more degenerate ones, for instance of the ignition type) and extract terraces from any decomposition using our main theorems. [99]{} S.B. Angenent, The zero set of a solution of a parabolic equation, J. Reine Angew. Math. 390 (1988), 79-96. D. G. Aronson, H. F. Weinberger, Nonlinear diffusion in population genetics, combustion and nerve propagation, Partial Diff. Eq. and Related Topics 446 (1975), 5-49. D. G. Aronson, H. F. Weinberger, Multidimensional nonlinear diffusion arising in population genetics, Adv. in Math. 30 (1978), 33-76. M. Bages, P. Martinez, J-M. Roquejoffre, How traveling waves attract the solutions of KPP-type equations, Trans. Amer. Math. Soc. 364 (2012), no. 10, 5414-5468. H. Berestycki, F. Hamel, Front propagation in periodic excitable media, Comm. Pure Appl. Math. 55 (2002), 949-1032. H. Berestycki, F.
Hamel, Generalized transition waves and their properties, Comm. Pure Appl. Math. 65 (2012), 592-648. M. Bramson, The convergence of solutions of the Kolmogorov nonlinear diffusion equation to travelling waves, Mem. Amer. Math. Soc. 44 (1983), no. 285. W. Ding, F. Hamel, X-Q. Zhao, Bistable pulsating fronts for reaction-diffusion equations in a periodic habitat, Indiana Univ. Math. J. 66 (2017), no. 4, 1189-1265. W. Ding, F. Hamel, X-Q. Zhao, Transition fronts for periodic bistable reaction-diffusion equations, Calc. Var. Partial Differential Equations 54 (2015), no. 3, 2517-2551. Y. Du, H. Matano, Convergence and sharp thresholds for propagation in nonlinear diffusion problems, J. Eur. Math. Soc. 12 (2010), 279-312. A. Ducrot, T. Giletti, H. Matano, Existence and convergence to a propagating terrace in one-dimensional reaction-diffusion equations, Trans. Amer. Math. Soc. 366 (2014), no. 10, 5541-5566. J. Fang, X.-Q. Zhao, Bistable traveling waves for monotone semiflows with applications, J. Eur. Math. Soc. 17 (2015), no. 9, 2243-2288. P.C. Fife, J. McLeod, The approach of solutions of nonlinear diffusion equations to traveling front solutions, Arch. Rational Mech. Anal. 65 (1977), 335-361. T. Giletti, Convergence to pulsating traveling waves with minimal speed in some KPP heterogeneous problems, Calc. Var. Partial Differential Equations 51 (2014), no. 1-2, 265-289. F. Hamel, Qualitative properties of monostable pulsating fronts: exponential decay and monotonicity, J. Math. Pures Appl. 89 (2008), 355-399. F. Hamel, J. Nolen, J-M. Roquejoffre, L. Ryzhik, The logarithmic delay of KPP fronts in a periodic medium, J. Eur. Math. Soc. 18 (2016), no. 3, 465-505. F. Hamel, L. Roques, Uniqueness and stability properties of monostable pulsating fronts, J. Eur. Math. Soc. 13 (2011), 345-390. A.N. Kolmogorov, I.G. Petrovsky, N.S. Piskunov, A study of the equation of diffusion with increase in the quantity of matter, and its application to a biological problem, Bjul. Moskovskogo Gos.
Univ. 1 (1937), 1-26. K. Lau, On the nonlinear diffusion equation of Kolmogorov, Petrovsky, and Piscounov, J. Diff. Eq. 59 (1985), 44-70. H. Matano, Convergence of solutions of one-dimensional semilinear parabolic equations, J. Math. Kyoto Univ. 18 (1978), 221-227. H. Matano, Asymptotic behavior and stability of solutions of semilinear diffusion equations, Publ. RIMS, Kyoto Univ. 15 (1979), 401-454. H. Matano, Existence of nontrivial unstable sets for equilibriums of strongly order-preserving systems, J. Fac. Sci. Univ. Tokyo Sect. IA Math. 30 (1984), 645-673. G. Nadin, The effect of the Schwarz rearrangement on the periodic principal eigenvalue of a nonsymmetric operator, SIAM J. Math. Anal. 41 (2010), 2388-2406. G. Nadin, Critical travelling waves for general heterogeneous one-dimensional reaction-diffusion equations, Ann. Inst. H. Poincaré Anal. Non Linéaire 32 (2015), no. 4, 841-873. P. Polacik, Planar propagating terraces and the asymptotic one-dimensional symmetry of solutions of semilinear parabolic equations, SIAM J. Math. Anal. 49 (2017), no. 5, 3716-3740. P. Polacik, Propagating terraces and the dynamics of front-like solutions of reaction-diffusion equations on $\mathbb{R}$, Mem. Amer. Math. Soc., to appear. E. Risler, Global convergence toward traveling fronts in nonlinear parabolic systems with a gradient structure, Ann. Inst. H. Poincaré Anal. Non Linéaire 25 (2008), no. 2, 381-424. E. Risler, Global behaviour of bistable solutions for gradient systems in one unbounded spatial dimension, preprint. D.H. Sattinger, On the stability of waves of nonlinear parabolic systems, Advances in Math. 22 (1976), 312-355. K. Uchiyama, The behavior of solutions of some non-linear diffusion equations for large time, J. Math. Kyoto Univ. 18 (1978), 453-508. H. Weinberger, On spreading speed and travelling waves for growth and migration models in a periodic habitat, J. Math. Biol. 45 (2002), 511-548.
[^1]: This work was initiated when the first author was visiting the University of Tokyo with the support of the Japanese Society for the Promotion of Science. The first author also acknowledges support from the NONLOCAL project (ANR-14-CE25-0013) funded by the French National Research Agency (ANR).
--- abstract: 'The thermodynamic quantities, such as the local temperature, heat capacity and off-shell free energy, and the stability of a black hole involving a global monopole, with and without the $f(R)$ gravity modification, are examined. We compare the two classes of results to show the influence of this generalization of general relativity. It is found that the $f(R)$ theory modifies the thermodynamic properties of black holes, but the shapes of the curves of the thermodynamic quantities as functions of the horizon are similar to the results within the frame of general relativity. In both cases, when the temperature is higher than the corresponding critical temperature, there exist a small black hole, which will decay, and a large stable black hole.' author: - | Jingyun Man Hongbo Cheng[^1]\ Department of Physics, East China University of Science and Technology,\ Shanghai 200237, China\ The Shanghai Key Laboratory of Astrophysics, Shanghai 200234, China title: '**The thermodynamic quantities of a black hole with an $f(R)$ global monopole**' --- =16.5truecm =24truecm =-1truecm =-2truecm PACS number(s): 04.70Bw, 14.80.Hv\ Keywords: Black hole, Global monopole, $f(R)$ gravity **I.Introduction** Recently much attention has been paid to the thermodynamics of various kinds of black holes. More than thirty years ago, Bekenstein pointed out that the entropy of a black hole is proportional to its surface area \[1-3\]. Hawking also discussed particle creation around a black hole to show that the black hole emits thermal radiation with a temperature determined by its surface gravity \[4\]. The issues about phase transitions of black holes in the frame of semiclassical gravity were listed in Ref. \[5\]. Further, the thermodynamic properties of modified Schwarzschild black holes have been investigated \[6, 7\]. The thermodynamic behaviours, including the phase transition, of Born-Infeld-anti-de Sitter black holes were explored in various ways \[8, 9\].
The phase transition of the quantum-corrected Schwarzschild black hole was discussed, which fosters the research on the quantum-mechanical aspects of thermodynamic behaviours \[10\]. During the vacuum phase transitions in the early Universe, topological defects such as domain walls, cosmic strings and monopoles were generated from the breakdown of local or global gauge symmetries \[11, 12\]. Among these topological defects, a global monopole is a spherically symmetric defect arising in the phase transition of a system composed of a self-coupling triplet of scalar fields whose original global O(3) symmetry is spontaneously broken to U(1). It has been shown that the metric outside a monopole has a deficit solid angle \[13\]. We studied the strong gravitational lensing for a massive source with a global monopole and found that the deficit angle associated with the monopole affects the lensing properties \[14\]. H. A. Buchdahl proposed a modified gravity theory, named $f(R)$ gravity, which has been used to explain the accelerated expansion of the Universe without adding unknown forms of dark energy or dark matter \[15-18\]. The metric outside a gravitational object involving a global monopole in the context of the $f(R)$ gravity theory has been studied \[19\]. Further, the classical motion of a massive test particle around the gravitational source with an $f(R)$ global monopole has been probed \[20\]. We have also examined the gravitational lensing for the same object in the strong field limit \[21\]. Here we plan to investigate the thermodynamics of a static and spherically symmetric black hole swallowing a global monopole or an $f(R)$ global monopole. This provides a new direction for understanding the $f(R)$ theory. We wish to find the influences of the modified gravity on the thermal properties of black holes.
First of all we introduce a black hole containing a global monopole in the context of the $f(R)$ gravity theory; the black hole metric will reduce to one of the metrics of a black hole with a deficit solid angle. We show the dependence of the horizon on the mass and on the parameters describing the monopole and the modification from the $f(R)$ theory. We exhibit the thermodynamic characteristics due to the parameters of the black hole. The thermodynamic stability will also be checked. We discuss the results in the end. **II.The thermodynamics of black holes involving an $f(R)$ global monopole** We adopt the static and spherically symmetric line element, $$ds^{2}=A(r)dt^{2}-B(r)dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi^{2}).$$ In the $f(R)$ gravity theory, the action is, $$I=\frac{1}{2\kappa}\int d^{4}x\sqrt{-g}f(R)+I_{m}$$ where $f(R)$ is an analytical function of the Ricci scalar $R$, $\kappa=8\pi G$, $G$ is the Newton constant, $g$ is the determinant of the metric and $I_{m}$ is the action of the matter fields, which can be written as, $$I_{m}=\int d^{4}x\sqrt{-g}{\cal L}.$$ Here the Lagrangian for the global monopole is \[13\], $${\cal L}=\frac{1}{2}(\partial_{\mu}\phi^{a})(\partial^{\mu}\phi^{a}) -\frac{1}{4}\lambda(\phi^{a}\phi^{a}-\eta^{2})^{2}$$ where $\lambda$ and $\eta$ are parameters. The ansatz for the triplet of field configurations describing a monopole is $\phi^{a}=\eta h(r)\frac{x^{a}}{r}$ with $x^{a}x^{a}=r^{2}$, where $a=1, 2, 3$, and $h(r)$ is a dimensionless function to be determined by the equation of motion. This model has a global O(3) symmetry, which is spontaneously broken to U(1). The field equation reads, $$F(R)R_{\mu\nu}-\frac{1}{2}f(R)g_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}F(R) +g_{\mu\nu}\Box F(R)=\kappa T_{\mu\nu}$$ with $F(R)=\frac{df(R)}{dR}$, and $T_{\mu\nu}$ is the minimally coupled energy-momentum tensor.
The field equation (5) was solved under the weak field approximation, which assumes the components of the metric tensor to be of the form $A(r)=1+a(r)$ and $B(r)=1+b(r)$ with $|a(r)|$ and $|b(r)|$ smaller than unity \[19\]. Here the modified theory of gravity corresponds to a small correction to general relativity, $F(R(r))=1+\psi(r)$ with $\psi(r)\ll1$. It is clear that $F(R)=1$ is equivalent to conventional general relativity. Further, the modification can be taken as the simplest analytical function of the radial coordinate, $\psi(r)=\psi_{0}r$. In this case the factor $\psi_{0}$ reflects the deviation from standard general relativity. The external metric of the black hole with a global monopole is finally found to be \[19, 20\], $$A=B^{-1}=1-8\pi G\eta^{2}-\frac{2GM}{r}-\psi_{0}r$$ where $M$ is the mass parameter. It should be pointed out that the parameter $\eta$ is of the order $10^{16}GeV$ for a typical grand unified theory, which means $8\pi G\eta^{2}\approx 10^{-5}$. If we choose $\psi_{0}=0$, excluding the modification from the $f(R)$ theory, the metric (6) reduces to the result of M. Barriola and A. Vilenkin \[13\], $$ds^{2}=(1-8\pi G\eta^{2}-\frac{2GM}{r})dt^{2} -\frac{dr^{2}}{1-8\pi G\eta^{2}-\frac{2GM}{r}} -r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi^{2}).$$ The function $A(r)$ is plotted in Figure 1 for a comparison of the two metrics, for a global monopole and for an $f(R)$ monopole black hole. According to the behavior of the $f(R)$ curve, it is clear that the black hole has two horizons: the inner one, which is taken to be the event horizon $r_{H}$, and the outer one. The event horizons of the two metrics appear to coincide in Figure 1 because the parameter $8\pi G\eta^{2}$ is very small, but the analytic expressions of $r_{H}$ are quite distinct.
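As an illustration (not part of the original text), the two horizons can be located numerically from the metric function of Eq. (6), using the parameter values of Figure 1 ($8\pi G\eta^{2}=10^{-5}$, $\psi_{0}=0.02$, $GM=1$):

```python
from math import sqrt

# Parameters used in Figure 1 of the text.
EPS = 1e-5     # 8*pi*G*eta^2
PSI0 = 0.02    # f(R) modification factor
GM = 1.0       # mass parameter

def A(r):
    """Metric function of Eq. (6): A = 1 - 8*pi*G*eta^2 - 2GM/r - psi0*r."""
    return 1.0 - EPS - 2.0 * GM / r - PSI0 * r

def bisect(f, a, b, tol=1e-12):
    """Simple bisection root finder; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# A(r) < 0 for small r, > 0 between the horizons, < 0 beyond the outer one.
r_inner = bisect(A, 0.1, 10.0)    # event horizon r_H, approx. 2.087
r_outer = bisect(A, 10.0, 100.0)  # outer horizon, approx. 47.91

# Closed form of Eq. (8) for the inner root.
disc = sqrt((1.0 - EPS) ** 2 - 8.0 * PSI0 * GM)
r_H = ((1.0 - EPS) - disc) / (2.0 * PSI0)
print(r_inner, r_outer, r_H)
```

In the limit $\psi_{0}\to 0$ the inner root tends to $2GM/(1-8\pi G\eta^{2})\approx 2.00002$, the horizon of metric (7).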
Solving the equation $A(r_{H})=0$, the event horizon of metric (6) is located at, $$r_{H}=\frac{(1-8\pi G\eta^{2})-\sqrt{(1-8\pi G\eta^{2})^{2} -8\psi_{0}GM}}{2\psi_{0}}.$$ This also gives the relation between the mass parameter $GM$ and the event horizon $r_{H}$, $$GM=\frac{1}{2}r_{H}(1-8\pi G\eta^{2}-\psi_{0}r_{H})$$ for the $f(R)$ monopole metric. It is seen from Figure 2 that there is a maximum $GM_{0}=\frac{(1-8\pi G\eta^{2})^{2}}{8\psi_{0}}$ at $r_{H0}=\frac{1-8\pi G\eta^{2}}{2\psi_{0}}$, where the inner and outer horizons meet. The Hawking temperature can be obtained, $$\begin{aligned} T_{H}^{M}=T_{H}^{M}(\eta, \psi_{0})\nonumber\hspace{1cm}\\ =\frac{1}{4\pi}[-g^{tt}g^{rr}g'_{tt}]|_{r=r_{H}}\nonumber\hspace{1mm}\\ =\frac{1}{4\pi}(\frac{1-8\pi G\eta^{2}}{r_{H}}-2\psi_{0})\end{aligned}$$ where the prime stands for the derivative with respect to the radial coordinate $r$. The local temperature is given by \[22\], $$\begin{aligned} T_{loc}^{M}=\frac{T_{H}^{M}}{\sqrt{A(r)}}\nonumber\hspace{10cm}\\ =\frac{1}{4\pi}(\frac{1-8\pi G\eta^{2}}{r_{H}}-2\psi_{0}) \sqrt{\frac{r}{\psi_{0}r_{H}^{2}-(1-8\pi G\eta^{2})r_{H} +(1-8\pi G\eta^{2})r-\psi_{0}r^{2}}}.\end{aligned}$$ Having omitted the influence from the modified gravity, we obtain the local temperature as, $$T_{loc}=\frac{1}{4\pi r_{H}}\sqrt{\frac{(1-8\pi G\eta^{2})r} {r-r_{H}}}.$$ The local temperatures for a global monopole and an $f(R)$ global monopole are shown respectively in Figure 3. There is a minimal local temperature $T_{c}$, which means that if the local temperature is below $T_{c}$, no black hole exists. When the temperature is high enough, there exist two black holes in each case, one small and one large.
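As a numerical sketch (the parameter values are those used in Figure 3; the grid scan is our own addition), one can locate the minimal local temperatures of Eqs. (11) and (12) for an observer at $r=10$:

```python
from math import pi, sqrt

EPS = 1e-5    # 8*pi*G*eta^2
PSI0 = 0.02   # f(R) modification factor
R = 10.0      # position of the local observer

def T_loc_fR(rH):
    """Local temperature of Eq. (11) for the f(R) monopole black hole."""
    pref = ((1.0 - EPS) / rH - 2.0 * PSI0) / (4.0 * pi)
    denom = PSI0 * rH**2 - (1.0 - EPS) * rH + (1.0 - EPS) * R - PSI0 * R**2
    return pref * sqrt(R / denom)

def T_loc(rH):
    """Local temperature of Eq. (12) for the pure global monopole black hole."""
    return sqrt((1.0 - EPS) * R / (R - rH)) / (4.0 * pi * rH)

# Scan the horizon radius and pick out the minimal (critical) temperatures.
grid = [0.5 + 0.0001 * k for k in range(94000)]  # r_H from 0.5 to ~9.9
Tc_fR = min(T_loc_fR(rH) for rH in grid)
Tc = min(T_loc(rH) for rH in grid)
print(round(Tc_fR, 5), round(Tc, 5))  # approx. 0.01836 and 0.02067
```

Both minima reproduce the critical temperatures quoted below Eq. (14) in the text, and the $f(R)$ minimum lies below the pure-monopole one.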
From the calculation of the extrema of the local temperature, $(\frac{\partial T_{loc}}{\partial r_{H}})_{r}=0$, the minimal local temperature can be obtained as, $$T_{c}^{M}=\frac{\sqrt{r}}{\pi}(\frac{\psi_{0}}{(1-8\pi G\eta^{2})^{\frac{2}{3}} -(1-8\pi G\eta^{2}-2\psi_{0} r)^{\frac{2}{3}}})^{\frac{3}{2}}.$$ The modifying factor $\psi_{0}$ from $f(R)$ makes the minimum of the local temperature lower than the one for the global monopole black hole, $$T_{c}=\frac{3\sqrt{3}}{8\pi r}\sqrt{1-8\pi G\eta^{2}}.$$ For $8\pi G\eta^{2}=10^{-5}$, $r=10$ and $\psi_{0}=0.02$, one finds $T_{c}^{M}=0.01836$ and $T_{c}=0.02067$. Figure 4 shows that the critical temperature is a decreasing function of the modifying factor $\psi_{0}$, which denotes the influence from $f(R)$ gravity here. According to Bekenstein’s opinion, the entropy is proportional to the area of the event horizon and is given by \[1-3\], $$\begin{aligned} S=\frac{A_{0}}{4}\nonumber\\ =\pi r_{H}^{2}\end{aligned}$$ so that $dS=2\pi r_{H}dr_{H}$. From the first law of thermodynamics, $dE_{loc}=T_{loc}dS$, the thermodynamic local energy can be derived as, $$\begin{aligned} E_{loc}^{M}=E_{0}+\int_{S_{0}}^{S}T_{loc}^{M}dS\nonumber\hspace{7cm}\\ =r\sqrt{(1-8\pi G\eta^{2})-\psi_{0}r} -\sqrt{r(r-r_{H})}\sqrt{(1-8\pi G\eta^{2})-\psi_{0}(r+r_{H})}\end{aligned}$$ Here $S_{0}$ corresponds to $M=0$; for simplicity we set $E_{0}=0$.
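As a consistency check of ours (with the same parameter values as above), the closed form of Eq. (16) can be compared against a direct numerical integration of the first law, $E_{loc}^{M}=\int T_{loc}^{M}\,dS$ with $dS=2\pi r_{H}dr_{H}$:

```python
from math import pi, sqrt

EPS = 1e-5    # 8*pi*G*eta^2
PSI0 = 0.02   # f(R) modification factor
R = 10.0      # position of the local observer

def T_loc_fR(rh):
    """Local temperature of Eq. (11)."""
    pref = ((1.0 - EPS) / rh - 2.0 * PSI0) / (4.0 * pi)
    denom = PSI0 * rh**2 - (1.0 - EPS) * rh + (1.0 - EPS) * R - PSI0 * R**2
    return pref * sqrt(R / denom)

def E_closed(rH):
    """Local energy of Eq. (16) with E_0 = 0."""
    a = 1.0 - EPS
    return (R * sqrt(a - PSI0 * R)
            - sqrt(R * (R - rH)) * sqrt(a - PSI0 * (R + rH)))

def E_numeric(rH, n=100000):
    """Midpoint-rule integral of T_loc * 2*pi*r_H dr_H from 0 to rH."""
    h = rH / n
    return sum(T_loc_fR((k + 0.5) * h) * 2.0 * pi * ((k + 0.5) * h) * h
               for k in range(n))

rH = 5.0
print(E_closed(rH), E_numeric(rH))  # the two values agree, approx. 3.028
```

The agreement confirms that Eq. (16) is the antiderivative of $2\pi r_{H}T_{loc}^{M}$ in $r_{H}$.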
The thermodynamic local energy without the $f(R)$ modification becomes, $$E_{loc}=\sqrt{(1-8\pi G\eta^{2})r}(\sqrt{r}-\sqrt{r-r_{H}}).$$ For the purpose of checking the stability of the black holes, we should discuss their heat capacity, $$\begin{aligned} C^{M}=(\frac{\partial E_{loc}^{M}}{\partial T_{loc}^{M}})_{r}\nonumber\hspace{9cm}\\ =2\pi(r-r_{H})[(1-8\pi G\eta^{2})-2\psi_{0}r_{H}] [(1-8\pi G\eta^{2})-\psi_{0}(r+r_{H})]\nonumber\hspace{1cm}\\ \times[(1-8\pi G\eta^{2})(\frac{r^{2}}{r_{H}^{2}}-3)\psi_{0} +(1-8\pi G\eta^{2})^{2}\frac{3r_{H}-2r}{2r_{H}^{2}} +2\psi_{0}^{2}r_{H}]^{-1}.\end{aligned}$$ Similarly, choosing the modifying factor $\psi_{0}=0$, the heat capacity of the black hole becomes, $$\begin{aligned} C=(\frac{\partial E_{loc}}{\partial T_{loc}})_{r}\nonumber\hspace{0.5cm}\\ =4\pi\frac{r_{H}^{2}(r-r_{H})}{3r_{H}-2r}.\end{aligned}$$ According to Eq. (18) and Eq. (19), we compare the heat capacities of these two kinds of black holes in Figure 5. The shapes of the curves of the heat capacities are similar, but can be distinguished explicitly. The expression of the heat capacity for a Schwarzschild black hole with a global monopole is exactly the same as the one for the original Schwarzschild black hole. In Figure 5, the $f(R)$ curve is shifted from the conventional one. In each case a relatively small horizon leads to a negative heat capacity, while a larger horizon gives a positive one, meaning that the larger black holes are stable. It should be emphasized that only huge black holes can survive for a long time within the frame of the $f(R)$ theory. When $r_{H}>\frac{1-8\pi G\eta^{2}-[(1-8\pi G\eta^{2})(2\psi_{0} r-(1-8\pi G\eta^{2}))^{2}]^{3/2}}{2\psi_{0}}$, the $f(R)$ heat capacity is positive and a large black hole is stable. A small black hole appears unstable for $0<r_{H}<\frac{1-8\pi G\eta^{2}-[(1-8\pi G\eta^{2})(2\psi_{0} r-(1-8\pi G\eta^{2}))^{2}]^{3/2}}{2\psi_{0}}$ since the heat capacity is negative.
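To make the stability statement concrete (a numerical check of ours, using Eq. (19) with $r=10$), the sign of the heat capacity in the $\psi_{0}=0$ case flips at $r_{H}=2r/3$, and the closed form agrees with a finite-difference evaluation of $(\partial E_{loc}/\partial T_{loc})_{r}$ built from Eqs. (12) and (17):

```python
from math import pi, sqrt

EPS = 1e-5   # 8*pi*G*eta^2
R = 10.0     # position of the local observer

def T_loc(rH):
    """Local temperature of Eq. (12), psi_0 = 0."""
    return sqrt((1.0 - EPS) * R / (R - rH)) / (4.0 * pi * rH)

def E_loc(rH):
    """Local energy of Eq. (17), psi_0 = 0."""
    return sqrt((1.0 - EPS) * R) * (sqrt(R) - sqrt(R - rH))

def C_closed(rH):
    """Heat capacity of Eq. (19)."""
    return 4.0 * pi * rH**2 * (R - rH) / (3.0 * rH - 2.0 * R)

def C_numeric(rH, h=1e-6):
    """Central-difference (dE/dr_H)/(dT/dr_H) at fixed r."""
    dE = E_loc(rH + h) - E_loc(rH - h)
    dT = T_loc(rH + h) - T_loc(rH - h)
    return dE / dT

for rH in (6.0, 7.0):  # below and above 2r/3 = 6.67
    print(rH, C_closed(rH), C_numeric(rH))
# C < 0 (unstable) at r_H = 6, C > 0 (stable) at r_H = 7
```

Note that the factor $1-8\pi G\eta^{2}$ cancels in the ratio, which is why Eq. (19) coincides with the Schwarzschild result.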
A large Schwarzschild black hole with a monopole is stable when $r_{H}>\frac{2r}{3}$, while it is unstable for $0<r_{H}<\frac{2r}{3}$. In order to explore the phase transition among the black holes, we should derive their off-shell free energy. The off-shell free energy can be defined as, $$F_{off}^{M}=E_{loc}^{M}-TS$$ where $E_{loc}^{M}$ is the thermodynamic local energy from Eq. (16), $S$ is the entropy of the black hole from Eq. (15) and $T$ is an arbitrary temperature. The off-shell free energy of black holes containing a global monopole governed by the $f(R)$ gravity theory can be calculated as, $$F_{off}^{M}=r\sqrt{(1-8\pi G\eta^{2})-\psi_{0}r} -\sqrt{r(r-r_{H})}\sqrt{(1-8\pi G\eta^{2})-\psi_{0}(r+r_{H})} -\pi r_{H}^{2}T.$$ If we do not consider the deviation from the $f(R)$ theory, Eq. (21) becomes the expression of the off-shell free energy for the black holes with only a deficit solid angle, $$F_{off}=\sqrt{(1-8\pi G\eta^{2})r}(\sqrt{r}-\sqrt{r-r_{H}}) -\pi r_{H}^{2}T.$$ In Figure 6 the behaviour of the off-shell free energy of the black hole involving a global monopole proposed by Barriola and Vilenkin \[13\] is shown as a function of the horizon for several temperatures. Considering the extrema of the off-shell free energy, $(\frac{\partial F_{off}}{\partial r_{H}})_{r}=0$ or $(\frac{\partial F^{M}_{off}}{\partial r_{H}})_{r}=0$, we recover the critical temperatures which have already been mentioned above. When the temperature is lower than the critical value, no black hole will appear. For temperatures above the critical one the large black holes are stable but the smaller ones are unstable. The dependence of the free energy of black holes with an $f(R)$ global monopole on the horizon is plotted in Figure 7 for several values of the temperature around the critical one. The shapes of the off-shell free energy are similar to those in Figure 6.
When the temperature is lower than the critical value, there is no real root of the equation $(\frac{\partial F_{off}}{\partial r_{H}})_{r}=0$ or $(\frac{\partial F^{M}_{off}}{\partial r_{H}})_{r}=0$, and therefore no black hole will appear. There are two equal roots $r_{H1}=r_{H2}=\frac{1-8\pi G\eta^{2}-[(1-8\pi G\eta^{2})(2\psi_{0} r-(1-8\pi G\eta^{2}))^{2}]^{3/2}}{2\psi_{0}}$ at the critical temperature. When $T>T_{c}$, two physically meaningful roots exist. An unstable small black hole appears at the event horizon $r_{H}=r_{H1}$, and a stable large black hole appears at $r_{H}=r_{H2}$. **III.Discussion** We have investigated the thermodynamic quantities of the black holes with a global monopole in the context of the $f(R)$ gravity theory. We have also obtained the thermodynamic quantities excluding the modifications from the $f(R)$ theory. We compare the two classes of results to show how the generalized gravity corrects the original thermodynamic quantities, such as the local temperature, heat capacity and off-shell free energy. It should be pointed out that the $f(R)$-modifications of these quantities are manifest. It is interesting that the shapes of these quantities for black holes controlled by the two different gravity theories are similar, although the expressions of the quantities are quite different from each other. In both cases the small black holes are unstable and the large ones are stable when the temperature is higher than the critical temperature, and no black hole can exist when the temperature is sufficiently low. The critical temperature depends on the modifying factor from the $f(R)$ theory. Here we open a new window to explore the $f(R)$ theory revising the Einstein gravity. In this paper we pay particular attention to the thermodynamic behaviours of a black hole with an $f(R)$ global monopole, as the stability of the black hole depends on them. There are some methods proposed for probing a gravitational source which has a solid deficit angle due to an $f(R)$ global monopole.
But one necessary condition is that the black hole has to be stable enough to be observed. From the calculation of the heat capacity, the black hole with an $f(R)$ global monopole can be stable only when the event horizon is larger than $\frac{1-8\pi G\eta^{2}-[(1-8\pi G\eta^{2})(2\psi_{0} r-(1-8\pi G\eta^{2}))^{2}]^{3/2}}{2\psi_{0}}$. This is also the condition for a large and more stable black hole given by the extrema of the off-shell free energy. Moreover, a stable black hole also requires the local temperature to be higher than the critical temperature $T_{c}$. **Acknowledgements** This work is supported by NSFC No. 10875043 and is partly supported by the Shanghai Research Foundation No. 07dz22020. [99]{} J. D. Bekenstein, Lett. Nuovo Cim. 4(1972)737 J. D. Bekenstein, Phys. Rev. D7(1973)2333 J. D. Bekenstein, Phys. Rev. D9(1974)3292 S. W. Hawking, Commun. Math. Phys. 43(1975)199 G. J. Stephens, B. L. Hu, Int. J. Theor. Phys. 40(2001)2183 W. Kim, E. J. Son, M. Yoon, JHEP 0804(2008)042 R. G. Cai, L. M. Cao, N. Ohta, JHEP 1004(2010)082 Y. S. Myung, Y. Kim, Y. Park, Phys. Rev. D78(2008)084002 A. Lala, D. Roychowdhury, Phys. Rev. D86(2012)084027 W. Kim, Y. Kim, arXiv: 1207.5318 T. W. B. Kibble, J. Phys. A9(1976)1387 A. Vilenkin, Phys. Rep. 121(1985)263 M. Barriola, A. Vilenkin, Phys. Rev. Lett. 63(1989)341 H. Cheng, J. Man, Class. Quantum Grav. 28(2011)015001 H. A. Buchdahl, Non-linear Lagrangians and cosmological theory, MNRAS 150(1970)1 S. Nojiri, S. D. Odintsov, Phys. Rev. D68(2003)123512 S. M. Carroll, V. Duvvuri, M. Trodden, M. S. Turner, Phys. Rev. D70(2004)043528 S. Fay, R. Tavakol, S. Tsujikawa, Phys. Rev. D74(2007)063509 T. R. P. Carames, E. R. B. de Mello, M. E. X. Guimaraes, Int. J. Mod. Phys. A (2011) T. R. P. Carames, E. R. B. de Mello, M. E. X. Guimaraes, arXiv: 1111.1856 J. Man, H. Cheng, arXiv: 1205.4857 R. C. Tolman, Phys. Rev.
35(1930)904 ![The figure shows the function $A(r)$ of the $f(R)$ monopole black hole (solid line) and the Schwarzschild black hole with a global monopole (dotted line). Here $8\pi G\eta^{2}\approx10^{-5}$, $\psi_{0}=0.02$ and $GM=1$.](figure1){width="15cm"} ![The figure shows the mass parameter $GM$ for the $f(R)$ monopole black hole (solid line) and the global monopole black hole (dashed line). Here $8\pi G\eta^{2}\approx10^{-5}$, $\psi_{0}=0.02$.](figure2){width="15cm"} ![The solid and dotted curves show the dependence of the local temperatures on the horizon with $8\pi G\eta^{2}\approx10^{-5}$, $r=10$ and $\psi_{0}=0.02$ for the Schwarzschild black hole with an $f(R)$ global monopole or a global monopole respectively.](figure3){width="15cm"} ![The dependence of the critical temperature for $8\pi G\eta^{2}\approx10^{-5}$ and $r=10$ on the modifying factor $\psi_{0}$ from $f(R)$ gravity.](figure4){width="15cm"} ![The solid and dotted curves correspond to the dependence of the heat capacities on the horizon with $8\pi G\eta^{2}\approx10^{-5}$, $r=10$ and $\psi_{0}=0.02$ for the Schwarzschild black hole with an $f(R)$ global monopole or a global monopole respectively.](figure5){width="15cm"} ![The dotted, solid and dashed curves correspond to the dependence of the off-shell free energy of the Schwarzschild black hole with a global monopole on the horizon with $8\pi G\eta^{2}\approx10^{-5}$ and $r=10$ for $T=0.018, 0.021, 0.03$ respectively.](figure6){width="15cm"} ![The solid, dotted and dashed curves correspond to the dependence of the off-shell free energy of the Schwarzschild black hole with an $f(R)$ global monopole on the horizon with $8\pi G\eta^{2}\approx10^{-5}$, $r=10$ and $\psi_{0}=0.02$ for $T=0.018, 0.021, 0.03$ respectively.](figure7){width="15cm"} [^1]: E-mail address: hbcheng@ecust.edu.cn
--- abstract: 'We present a spectral sequence connecting the continuous and ‘locally continuous’ group cohomologies for topological groups. As an application it is shown that for contractible topological groups these cohomology concepts coincide.' author: - Martin Fuchssteiner bibliography: - 'ASpectralSequenceConnectingContinuousWithLocallyContinuousGroupCohomology.bib' date: 'May 01, 2011' title: A Spectral Sequence Connecting Continuous With Locally Continuous Group Cohomology --- Introduction {#introduction .unnumbered} ============ There exist various cohomology concepts for topological groups $G$ and topological coefficient groups $V$ which take the topologies of the group and of the coefficients into account. One is obtained by restricting oneself to the complex $C_c^* (G;V)$ of continuous group cochains, whose cohomology is called the *continuous group cohomology* $H_c (G;V)$. For abstract groups $G$ and $G$-modules $V$ the first cohomology group $H^1 (G;V)$ classifies crossed morphisms modulo principal derivations, the second cohomology group $H^2 (G;V)$ classifies equivalence classes of group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ and the third cohomology group $H^3 (G;V)$ classifies equivalence classes of crossed modules with kernel $V$ and cokernel $G$ (cf. [@WeHA Theorem 6.4.5, Theorem 6.6.3 and Theorem 6.6.13]). Analogous considerations show that for topological groups $G$ and $G$-modules $V$ the first cohomology group $H_c^1 (G;V)$ classifies continuous crossed morphisms modulo principal derivations, the second cohomology group $H_{c}^2 (G;V)$ classifies equivalence classes of topological group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ which admit a global section (i.e. $\hat{G} \twoheadrightarrow G$ is a trivial $V$-principal bundle) and the third cohomology group $H_{c}^3 (G;V)$ classifies equivalence classes of topologically split crossed modules.
The continuous group cohomology has the drawback that even for the compact Hausdorff group $G=\mathbb{R}/\mathbb{Z}$ the short exact sequence $$0 \rightarrow \mathbb{Z} \hookrightarrow \mathbb{R} \twoheadrightarrow \mathbb{R}/\mathbb{Z} \rightarrow 0$$ of coefficients does not induce a long exact sequence of cohomology groups. (The group $H_{c}^1 (G;\mathbb{R})$ is trivial because the projection $\mathbb{R} \twoheadrightarrow \mathbb{R}/\mathbb{Z}$ does not admit global sections, $H_{c}^n (G;\mathbb{Z})=0$ because all continuous group cochains on $G$ are constant, whereas the group $H_{c}^1 (G;G)$ of all continuous endomorphisms of $G$ is non-trivial.) This drawback is relieved by a second, more general cohomology concept, which is obtained by considering the complex $C_{cg}^* (G;V)$ of group cochains which are continuous on some identity neighbourhood in $G$. By abuse of language some people call the corresponding cohomology groups $H_{cg} (G;V)$ the ‘locally continuous’ group cohomology. The first cohomology group $H_{cg}^1 (G;V)$ classifies continuous crossed morphisms modulo principal derivations, the second cohomology group $H_{cg}^2 (G;V)$ classifies equivalence classes of topological group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ which admit local sections (i.e. $\hat{G} \twoheadrightarrow G$ is a locally trivial $V$-principal bundle) and the third cohomology group $H_{cg}^3 (G;V)$ classifies equivalence classes of crossed modules in which all homomorphisms admit local sections. The inclusion $C_c^* (G;V) \hookrightarrow C_{cg}^* (G;V)$ of cochain complexes induces a morphism $H_c^* (G;V) \rightarrow H_{cg}^* (G;V)$ of cohomology groups, which is used to compare the two cohomology concepts. As the above example shows, these cohomology concepts do not even coincide for connected compact Hausdorff groups and real coefficients. 
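To make the failure of exactness explicit, one can spell out the relevant segment of the would-be long exact sequence; the following sketch only uses the groups computed in the parenthetical remark above (with trivial $G$-action, so that $H_c^1$ is the group of continuous homomorphisms).

```latex
\begin{align*}
&\text{If the coefficient sequence induced a long exact sequence, then the segment}\\
&\qquad H_c^1(G;\mathbb{R}) \longrightarrow H_c^1(G;\mathbb{R}/\mathbb{Z})
   \longrightarrow H_c^2(G;\mathbb{Z})\\
&\text{would be exact for } G=\mathbb{R}/\mathbb{Z}. \text{ Since }
   H_c^1(G;\mathbb{R}) = \operatorname{Hom}_c(G,\mathbb{R}) = 0
   \text{ and } H_c^2(G;\mathbb{Z}) = 0,\\
&\text{exactness would force }
   H_c^1(G;\mathbb{R}/\mathbb{Z}) = \operatorname{Hom}_c(G,G) = 0,
   \text{ contradicting } \operatorname{id}_G \in \operatorname{Hom}_c(G,G).
\end{align*}
```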
In the following we will show that the contractibility of a topological group $G$ forces the two cohomologies to coincide (cf. Corollary \[gcontriso\]): For contractible groups $G$ the inclusion $C_c^* (G;V) \hookrightarrow C_{cg}^* (G;V)$ induces an isomorphism in cohomology. This is proved by constructing a row-exact double complex $A_{cg,eq} (G;V)$ whose rows and columns can be augmented by the complexes $C_{cg}^* (G;V)$ and $C_{c}^* (G;V)$ respectively. The contractibility of $G$ will be shown to force the columns of this double complex to be exact as well, which in turn is shown to imply that the inclusion $C_c^* (G;V) \hookrightarrow C_{cg}^* (G;V)$ induces an isomorphism in cohomology. In fact we will be considering the more general setting of transformation groups $(G,X)$ and $G$-equivariant cochains on $X$ and prove these results in this more general setting. Similar results for $k$-groups and smooth transformation groups will also be obtained. Basic Concepts ============== In this section we recall the definitions of various cochain complexes and the interpretation of some of their cohomology groups. For topological spaces $X$ and abelian topological groups $V$ one can consider variations of the exact *standard complex* $A^* (X;V)=\hom_{\mathbf{Set}}(X^{*+1};V)$ of abelian groups. For every topological space $X$ and abelian topological group $V$ the subcomplex $A_c^* (X;V):= C (X^{*+1};V)$ of the standard complex is called the *continuous standard complex*. For transformation groups $(G,X)$ and $G$-modules $V$ the group $G$ acts on the spaces $X^{n+1}$ via the diagonal action and the groups $A^n (X;V)$ can be endowed with a $G$-action via $$\label{defgact} G \times A^n (X;V) \rightarrow A^n (X;V), \quad [g.f] (\vec{x})=g.[ f (g^{-1} .\vec{x})] \, .$$ The $G$-fixed points of this action are the $G$-equivariant cochains. 
Because the differential of the standard complex intertwines the $G$-action, the equivariant cochains form a subcomplex $A^* (X;V)^G$ of the standard complex and the continuous equivariant cochains form a subcomplex $A_c^* (X;V)^G$ of the continuous standard complex. These complexes are not exact in general. For any group $G$ which acts on itself by left translation and any $G$-module $V$ the complex $A^* (G;V)^G$ is the complex of (homogeneous) group cochains; for topological groups $G$ and $G$-modules $V$ the complex $A_c^* (G;V)^G$ is the complex of continuous (homogeneous) group cochains. The cohomology $H_{eq} (X;V)$ of the complex $A^* (X;V)^G$ is called the equivariant cohomology of $X$ (with values in $V$). The cohomology $H_{eq,c} (X;V)$ of the subcomplex $A_c^* (X;V)^G$ is called the equivariant continuous cohomology of $X$ (with values in $V$). For any group $G$ which acts on itself by left translation and $G$-module $V$ the cohomology $H_{eq} (G;V)$ is the group cohomology of $G$ with values in $V$; for topological groups $G$ and $G$-modules $V$ the cohomology $H_{eq,c} (G;V)$ is the continuous group cohomology of $G$ with values in $V$. For transformation groups $(G,X)$ and $G$-modules $V$ there exists a $G$-invariant complex $A_{cg}^* (X;V)$ between $A_c^* (X;V)$ and $A^* (X;V)$ which we are going to define now. For each open covering $\mathfrak{U}$ of $X$ and each $n \in \mathbb{N}$ one can define an open neighbourhood $\mathfrak{U} [n]$ of the diagonal in $X^{n+1}$ via $$\mathfrak{U} [n] := \bigcup_{U \in \mathfrak{U}} U^{n+1} \, .$$ These neighbourhoods of the diagonals in $X^{*+1}$ form a simplicial subspace of $X^{*+1}$ which allows us to consider the subcomplex of $A^* (X;V)$ formed by the groups $$A_{cr}^n (X,\mathfrak{U};V) := \left\{ f \in A^n (X;V) \mid \, f_{\mid \mathfrak{U} [n]} \in C (\mathfrak{U}[n];V) \right\}$$ of cochains whose restrictions to the subspaces $\mathfrak{U}[n]$ of $X^{n+1}$ are continuous. 
The cohomology of the cochain complex $A_{cr}^* (X,\mathfrak{U};V)$ is denoted by $H_{cr} (X,\mathfrak{U};V)$. If the covering $\mathfrak{U}$ of $X$ is $G$-invariant, then the simplicial space $\mathfrak{U}[*]$ is a simplicial $G$-subspace of the simplicial $G$-space $X^{*+1}$. If $G=X$ is a topological group which acts on itself by left translation and $U$ is an open identity neighbourhood, then $\mathfrak{U}_U :=\{ g.U \mid g \in G \}$ is a $G$-invariant open covering of $G$ and $\mathfrak{U}_U[*]$ is an open simplicial $G$-subspace of $G^{*+1}$. For $G$-invariant coverings $\mathfrak{U}$ of $X$ the cohomology of the subcomplex $A_{cr}^* (X,\mathfrak{U};V)^G$ of $G$-equivariant cochains is denoted by $H_{cr,eq} (X,\mathfrak{U};V)$. If $G=X$ is a topological group which acts on itself by left translation and $U$ is an open identity neighbourhood, then the complex $A_{cr}^* (X,\mathfrak{U}_U;V)^G$ is the complex of homogeneous group cochains whose restrictions to the subspaces $\mathfrak{U}_U [*]$ are continuous. (These are sometimes called $\mathfrak{U}$-continuous cochains.) For directed systems $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ one can also consider the colimit complex $\operatorname{colim}_i A_{cr}^* (X,\mathfrak{U}_i ;V)$. In particular, for the directed system of all open coverings of $X$ one observes that the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods; hence one obtains the complex $$A_{cg}^* (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is an open cover of $X$}} A_{cr}^* (X,\mathfrak{U};V)$$ of global cochains whose germs at the diagonal are continuous. This is a subcomplex of the standard complex $A^* (X;V)$ which is invariant under the $G$-action (Eq. \[defgact\]) and thus a subcomplex of $G$-modules. 
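For the covering $\mathfrak{U}_U$ the diagonal neighbourhoods can be described explicitly; in degree one, for instance, a pair lies in $\mathfrak{U}_U[1]$ precisely when both entries belong to a common translate of $U$:

```latex
\begin{align*}
(x,y) \in \mathfrak{U}_U[1]
  &\iff x,y \in g.U \ \text{for some } g \in G
   \iff x = g u,\ y = g v \ \text{with } u,v \in U\\
  &\iff x^{-1}y \in U^{-1}U \, ,
\end{align*}
```

so that $\mathfrak{U}_U[1] = \{ (x,y) \in G \times G \mid x^{-1}y \in U^{-1}U \}$; cochains which are continuous on $\mathfrak{U}_U[1]$ are thus exactly those continuous near the diagonal in this uniform sense.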
The $G$-equivariant cochains with continuous germ form a subcomplex $A_{cg}^* (X;V)^G$ thereof, whose cohomology is denoted by $H_{cg,eq} (X;V)$. The latter subcomplex can also be obtained by taking the colimit over the $G$-invariant open coverings of $X$ only: \[natinclofeqccisiso\] The natural morphism of cochain complexes $$A_{cg,eq}^* (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is a $G$-invariant open cover of $X$}} A_{cr}^* (X,\mathfrak{U};V)^G \rightarrow A_{cg}^* (X;V)^G$$ is a natural isomorphism. We show that this morphism is surjective and injective. Every equivalence class in $A_{cg}^n (X;V)^G$ can be represented by a cochain $f \in A_{cr}^n (X,\mathfrak{U};V)^G$, where $\mathfrak{U}$ is an open cover of $X$. The cochain $f$ is continuous on $\mathfrak{U}[n]$ by definition. Its equivariance implies that it is also continuous on $G . \mathfrak{U}[n]=( G. \mathfrak{U})[n]$, hence an element of $A_{cr}^n (X, G. \mathfrak{U};V)^G$. The equivalence class $[f] \in A_{cg,eq}^n (X;V)$ is mapped onto $[f] \in A_{cg}^n (X;V)^G$. This proves surjectivity. Every equivalence class in $A_{cg,eq}^n (X;V)$ can be represented by an equivariant $n$-cochain $f$ in $A_{cr}^n (X, \mathfrak{U};V)^G$, where $\mathfrak{U}$ is a $G$-invariant open cover of $X$. If the image of the class $[f] \in A_{cg}^n (X;V)^G$ is trivial, then the cochain $f$ itself is trivial and so is its class $[f] \in A_{cg,eq}^n (X;V)$. This proves injectivity. The cohomology $H_{cg,eq} (X;V)$ is the cohomology of the complex of equivariant cochains which are continuous on some $G$-invariant neighbourhood of the diagonal. If $G=X$ is a topological group which acts on itself by left translation, then the complex $A_{cg}^* (G;V)^G$ is the complex of homogeneous group cochains whose germs at the diagonal are continuous. (By abuse of language these are sometimes called ‘locally continuous’ group cochains.) 
The Spectral Sequence {#sectss} ===================== Let $(G,X)$ be a transformation group, $V$ a $G$-module and $\mathfrak{U}$ an open covering of $X$. We will show (in Section \[seccontanduccc\]) that the inclusion $A_c^* (X;V) \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)$ induces an isomorphism in cohomology provided the space $X$ is contractible. For this purpose we consider the abelian groups $$\label{defrcu} A_{cr}^{p,q} ( X,\mathfrak{U} ; V ) := \left\{ f: X^{p+1} \times X^{q+1} \rightarrow V \mid f_{\mid X^{p+1} \times \mathfrak{U}[q]} \; \text{is continuous} \right\} \, .$$ The abelian groups $A_{cr}^{p,q} ( X,\mathfrak{U} ; V )$ form a first quadrant double complex whose horizontal and vertical differentials are given by $$\begin{aligned} d_{h}^{p,q} : A_{cr}^{p,q} \to A_{cr}^{p+1,q}, & \quad d_{h}^{p,q}(f^{p,q})(\vec{x},\vec{x}') =\sum_{i=0}^{p+1}(-1)^{i}f^{p,q}(x_{0},...,\widehat{x_{i}},...,x_{p+1},\vec{x}')\\ d_{v}^{p,q} : A_{cr}^{p,q}\to A_{cr}^{p,q+1}, &\quad d_{v}^{p,q}(f^{p,q})(\vec{x},\vec{x}') = (-1)^p \sum_{i=0}^{q+1}(-1)^{i}f^{p,q}(\vec{x},x_{0}',...,\widehat{x_{i}}',...,x_{q+1}') \, .\end{aligned}$$ The double complex $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$ can be filtered column-wise to obtain a spectral sequence $E_{cr,*}^{*,*} (X,\mathfrak{U};V)$ (cf. [@Mcl Theorem 2.15]). Since the double complex is a first quadrant double complex, the spectral sequence $E_{cr,*}^{*,*} (X,\mathfrak{U};V)$ converges to the cohomology of the total complex of $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$. 
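The sign $(-1)^p$ in $d_v$ makes the two differentials anticommute, so that $D = d_h + d_v$ is indeed a differential on the total complex; for $f \in A_{cr}^{p,q}(X,\mathfrak{U};V)$ one finds

```latex
\begin{align*}
d_h^{p,q+1} d_v^{p,q} (f)
  &= (-1)^{p}\sum_{i=0}^{p+1}\sum_{j=0}^{q+1} (-1)^{i+j}\,
     f(x_0,\ldots,\widehat{x_i},\ldots,x_{p+1},\,
       x_0',\ldots,\widehat{x_j'},\ldots,x_{q+1}') \, ,\\
d_v^{p+1,q} d_h^{p,q} (f)
  &= (-1)^{p+1}\sum_{j=0}^{q+1}\sum_{i=0}^{p+1} (-1)^{i+j}\,
     f(x_0,\ldots,\widehat{x_i},\ldots,x_{p+1},\,
       x_0',\ldots,\widehat{x_j'},\ldots,x_{q+1}') \, ,
\end{align*}
```

hence $d_h d_v + d_v d_h = 0$ and, together with $d_h^2 = d_v^2 = 0$, the total differential satisfies $D^2 = 0$.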
The rows of the double complex $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$ can be augmented by the complex $A_{cr}^* (X,\mathfrak{U} ;V)$ for the covering $\mathfrak{U}$ and the columns can be augmented by the exact complex $A_c^* (X;V)$ of continuous cochains: $$\vcenter{ \xymatrix{ \vdots & \vdots & \vdots & \vdots \\ A_{cr}^2 (X, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{cr}^{0,2} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{1,2} ( X, \mathfrak{U}; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{2,2} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{cr}^1 (X, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{cr}^{0,1} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{1,1} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{2,1} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{cr}^0 (X, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{cr}^{0,0} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{1,0} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{2,0} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ & A_c^0 ( X ; V) \ar[r]^{d_{h}}\ar[u] & A_c^1 ( X ; V) \ar[r]^{d_{h}}\ar[u] & A_c^2 ( X ; V) \ar[r]^{d_{h}}\ar[u] & \cdots }}$$ We denote the total complex of the double complex $A_{cr}^{*,*} ( X, \mathfrak{U} ; V)$ by ${\mathrm{Tot}}A_{cr}^{*,*} ( X, \mathfrak{U} ; V)$. The augmentations of the rows and columns of this double complex induce morphisms $i^* : A_{cr}^* ( X, \mathfrak{U} ; V) \rightarrow {\mathrm{Tot}}A_{cr}^{*,*} ( X, \mathfrak{U} ; V)$ and $j^*: A_c^* ( X ; V) \rightarrow {\mathrm{Tot}}A_{cr}^{*,*} ( X,\mathfrak{U} ; V)$ of cochain complexes respectively. \[columnsexact\] The morphism $i^*: A_{cr}^* ( X,\mathfrak{U} ; V) \rightarrow {\mathrm{Tot}}A_{cr}^{*,*} (X,\mathfrak{U} ; V)$ induces an isomorphism in cohomology. 
On each augmented row $A_{cr}^q ( X, \mathfrak{U} ; V) \hookrightarrow A_{cr}^{*,q} ( X, \mathfrak{U} ; V)$ one can define a contraction $h^{*,q}$ via $$\label{defrowcontr} h^{p,q} : A_{cr}^{p,q} ( X, \mathfrak{U} ; V) \rightarrow A_{cr}^{p-1,q} ( X, \mathfrak{U} ; V) , \quad h^{p,q} (f) (\vec{x},\vec{x}')= f ( x_0, \ldots, x_{p-1}, x_0 ', \vec{x}' ) \, .$$ Therefore the augmented rows are exact and the augmentation $i^*$ induces an isomorphism in cohomology. Note that for non-trivial $\mathfrak{U}$ this construction does not work for the column complexes, because the cochains so constructed would not fulfil the continuity condition in Eq. \[defrcu\]. For $G$-invariant open coverings $\mathfrak{U}$ of $X$ one can consider the sub double complex $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )^G$ of $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$, whose rows are augmented by the cochain complex $A_{cr}^* (X,\mathfrak{U} ;V)^G$ for the covering $\mathfrak{U}$ and whose columns can be augmented by the complex $A_c^* (X;V)^G$ of continuous equivariant cochains (which is not exact in general). \[columnsexacteq\] For $G$-invariant coverings $\mathfrak{U}$ of $X$ the morphism $i_{eq}^* := {i^*}^G$ induces an isomorphism in cohomology. The contraction $h^{*,q}$ of the augmented rows $A_{cr}^q ( X, \mathfrak{U} ; V) \hookrightarrow A_{cr}^{*,q} ( X, \mathfrak{U} ; V)$ defined in Eq. \[defrowcontr\] is $G$-equivariant and thus restricts to a row contraction of the augmented sub-row $A_{cr}^q ( X, \mathfrak{U} ; V)^G \hookrightarrow A_{cr}^{*,q} ( X, \mathfrak{U} ; V)^G$. So the morphism $H (i_{eq}) : H_{cr,eq} (X,\mathfrak{U};V) \rightarrow H ( {\mathrm{Tot}}A_{cr}^{*,*} ( X, \mathfrak{U} ; V)^G )$ is invertible. 
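That $h^{*,q}$ is a contracting homotopy (up to sign) can be checked directly; with the conventions of Eq. \[defrowcontr\] one finds for $f \in A_{cr}^{p,q}(X,\mathfrak{U};V)$

```latex
\begin{align*}
h^{p+1,q}\bigl(d_h^{p,q} f\bigr)(\vec{x},\vec{x}')
  &= \sum_{i=0}^{p}(-1)^i
     f(x_0,\ldots,\widehat{x_i},\ldots,x_p,\,x_0',\vec{x}')
   + (-1)^{p+1} f(\vec{x},\vec{x}') \, ,\\
d_h^{p-1,q}\bigl(h^{p,q} f\bigr)(\vec{x},\vec{x}')
  &= \sum_{i=0}^{p}(-1)^i
     f(x_0,\ldots,\widehat{x_i},\ldots,x_p,\,x_0',\vec{x}') \, ,
\end{align*}
```

so $h\, d_h - d_h\, h = (-1)^{p+1}\operatorname{id}$ on $A_{cr}^{p,q}$; the rescaled maps $\tilde{h}^{p,q} := (-1)^p h^{p,q}$ then satisfy $d_h \tilde{h} + \tilde{h}\, d_h = \operatorname{id}$, which proves the exactness of the augmented rows.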
For the composition $H (i_{eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cr,eq}(X,\mathfrak{U};V)$ we observe: \[contiscohtocr\] The image $j^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in ${\mathrm{Tot}}A_{cr}^{*,*} (X,\mathfrak{U};V)^G$ is cohomologous to the image $i_{eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{cr}^n (X,\mathfrak{U};V)^G$ in ${\mathrm{Tot}}A_{cr}^{*,*} (X,\mathfrak{U};V)^G$. The proof is a variation of the proof of [@F10 Proposition 14.3.8]: Let $f:X^{n+1}\to V$ be a continuous equivariant $n$-cocycle on $X$ and (for $p+q=n-1$) define equivariant cochains $\psi^{p,q} : X^{p+1}\times X^{q+1} \cong X^{n+1} \to V$ in $A_{cr}^{p,q} (X,\mathfrak{U};V)^G$ via $\psi^{p,q} ( \vec{x},\vec{x}')= (-1)^p f (\vec{x},\vec{x}')$. Using the cocycle identity for $f$, the vertical coboundary of the cochain $\psi^{p,q-1}$ is given by $$\begin{aligned} [d_v \psi^{p,q-1}] (\vec{x},x_0',\ldots,x_{q}') & = & \sum_{i=0}^{q} (-1)^i f (\vec{x},x_0',\ldots,\hat{x}_i',\ldots,x_q') \\ & = & - (-1)^{p-1} \sum_{i=0}^{p} (-1)^{i} f ( x_0,\ldots,\hat{x}_i, \ldots, x_p,\vec{x}') \\ & = & - [d_h \psi^{p-1,q}](x_0,\ldots,x_p,\vec{x}') \, , \end{aligned}$$ so the components of the total coboundary of $\sum_{p+q=n-1} \psi^{p,q}$ in the inner bidegrees cancel, while the components in the extreme bidegrees $(0,n)$ and $(n,0)$ are $i_{eq}^n (f)$ and $-j^n (f)$ respectively. Thus the coboundary of the cochain $-\sum_{p+q=n-1} \psi^{p,q}$ in the total complex is the cochain $j^n (f) - i_{eq}^n (f)$ and the cocycles $j^n (f)$ and $i_{eq}^n (f)$ are cohomologous in ${\mathrm{Tot}}A_{cr}^{*,*} (X,\mathfrak{U};V)^G$. The composition $H (i_{eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cr,eq}(X,\mathfrak{U};V)$ is induced by the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$. If the morphism $j^*_{eq}:={j^*}^G : A_c^* (X;V)^G \rightarrow {\mathrm{Tot}}A_{cr}^{*,*}(X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. 
For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ one can also consider the corresponding augmented colimit double complexes. In particular, for the directed system of all open coverings of $X$ one obtains the double complex $$A_{cg}^{*,*} (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is an open cover of $X$}} A_{cr}^{*,*} (X,\mathfrak{U};V)$$ whose rows and columns are augmented by the colimit complex $A_{cg}^* (X;V)$ and by the complex $A_c^* (X;V)$ respectively. For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ the morphism $\operatorname{colim}_i i^*: \operatorname{colim}_i A_{cr}^* ( X,\mathfrak{U}_i ; V) \rightarrow {\mathrm{Tot}}\operatorname{colim}_i A_{cr}^{*,*} (X,\mathfrak{U}_i ; V)$ induces an isomorphism in cohomology. The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \[columnsexact\]). As a consequence the colimit morphism $i_{cg}^* : A_{cg}^* ( X; V) \rightarrow {\mathrm{Tot}}A_{cg}^{*,*} (X; V)$ induces an isomorphism in cohomology. The colimit double complex $A_{cg}^{*,*} (X;V)$ is a double complex of $G$-modules and the $G$-equivariant cochains form a sub double complex $A_{cg}^{*,*} (X;V)^G$, whose rows and columns are augmented by the colimit complex $A_{cg,eq}^* (X;V)$ and by the complex $A_c^* (X;V)^G$ respectively. For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of $G$-invariant open coverings of $X$ the morphism $\operatorname{colim}_i i_{eq}^*: \operatorname{colim}_i A_{cr}^* ( X,\mathfrak{U}_i ; V)^G \rightarrow {\mathrm{Tot}}\operatorname{colim}_i A_{cr}^{*,*} (X,\mathfrak{U}_i ; V)^G$ induces an isomorphism in cohomology. The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \[columnsexacteq\]). 
Moreover, since the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, we observe: \[natinclofeqdcisiso\] The natural morphism of double complexes $$A_{cg,eq}^{*,*} (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is a $G$-invariant open cover of $X$}} A_{cr}^{*,*} (X,\mathfrak{U};V)^G \rightarrow A_{cg}^{*,*} (X;V)^G$$ is a natural isomorphism. The proof is analogous to that of Proposition \[natinclofeqccisiso\]. As a consequence the colimit morphism $i_{cg,eq}^* : A_{cg,eq}^* ( X; V) \rightarrow {\mathrm{Tot}}A_{cg}^{*,*} (X; V)^G$ induces an isomorphism in cohomology, and the morphism $H (i_{cg,eq})$ is invertible. For the composition $H(i_{cg,eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cg,eq}(X;V)$ we observe: \[contiscohtocreq\] The image $j^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in ${\mathrm{Tot}}A_{cg}^{*,*} (X;V)^G$ is cohomologous to the image $i_{cg,eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{cg,eq}^n (X;V)$ in ${\mathrm{Tot}}A_{cg}^{*,*} (X;V)^G$. The proof is analogous to that of Proposition \[contiscohtocr\]. The composition $H (i_{cg,eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cg,eq}(X;V)$ is induced by the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$. If the morphism $j^*_{eq}:={j^*}^G : A_c^* (X;V)^G \rightarrow {\mathrm{Tot}}A_{cg}^{*,*}(X;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg,eq}^* (X;V)$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. 
Continuous and $\mathfrak{U}$-Continuous Cochains {#seccontanduccc} ================================================== In this section we consider transformation groups $(G,X)$ and $G$-modules $V$ for which we show that the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ of the complex of continuous equivariant cochains into the complex of equivariant $\mathfrak{U}$-continuous cochains induces an isomorphism $H_{c,eq} (X;V) \cong H_{cr,eq} (X,\mathfrak{U};V)$ provided the topological space $X$ is contractible. The proof relies on the row exactness of the double complexes $A_c^{*,*} (X,\mathfrak{U};V)^G$ and $A_{cr}^{*,*} ( X,\mathfrak{U};V)^G$. At first we reduce the problem to the non-equivariant case: \[noneqextheneqex\] If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*}(X,\mathfrak{U};V)$ are exact, then the augmented sub column complexes $A_c^p (X;V)^G \hookrightarrow A_{cr}^{p,*}(X,\mathfrak{U};V)^G$ of equivariant cochains are exact as well. Assume that the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*}(X,\mathfrak{U};V)$ are exact. Then each equivariant vertical cocycle $f_{eq}^{p,q} \in A_{cr}^{p,q} (X,\mathfrak{U};V)^G$ is the vertical coboundary $d_v f^{p,q-1}$ of a cochain $f^{p,q-1} \in A_{cr}^{p,q-1} (X,\mathfrak{U};V)$ (which is not necessarily equivariant). Define an equivariant cochain $f_{eq}^{p,q-1}$ of bidegree $(p,q-1)$ via $$f_{eq}^{p,q-1} (\vec{x},\vec{x}'):= x_0 . f^{p,q-1} (x_0^{-1} . \vec{x}, x_0^{-1} . \vec{x}') \, .$$ This equivariant cochain is continuous on $X^{p+1} \times \mathfrak{U}[q-1]$ because $f^{p,q-1}$ is continuous on $X^{p+1} \times \mathfrak{U}[q-1]$. We assert that the vertical coboundary $d_v f_{eq}^{p,q-1}$ of $f_{eq}^{p,q-1}$ is the equivariant vertical cocycle $f_{eq}^{p,q}$. Indeed, since the differential $d_v$ is equivariant, the vertical coboundary of $f_{eq}^{p,q-1}$ computes to $$d_v f_{eq}^{p,q-1} (\vec{x},\vec{x}') = x_0 . 
\left[ d_v f^{p,q-1} (x_0^{-1} . \vec{x} , x_0^{-1} . \vec{x}') \right]= f_{eq}^{p,q} (\vec{x} , \vec{x}') \, .$$ Thus every equivariant vertical cocycle $f_{eq}^{p,q}$ in $A_{cr}^{*,*} (X,\mathfrak{U};V)^G$ is the vertical coboundary of an equivariant cochain $f_{eq}^{p,q-1}$ of bidegree $(p,q-1)$. \[augexthenjeqindiso\] If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{cr}^{*,*}(X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. \[augextheninclindiso\] If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. To achieve the announced result it remains to show that for contractible spaces $X$ the colimit augmented columns $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ are exact. 
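The equivariance of the auxiliary cochain $f_{eq}^{p,q-1}$ used in the proof above can be verified directly from the action (Eq. \[defgact\]); note that, as in the proof itself, the first coordinate $x_0$ is treated as a group element, which presupposes the translation action of $G = X$ on itself:

```latex
\begin{align*}
[g.f_{eq}^{p,q-1}](\vec{x},\vec{x}')
  &= g.\bigl[ f_{eq}^{p,q-1}(g^{-1}.\vec{x},\, g^{-1}.\vec{x}') \bigr]\\
  &= g.\Bigl( (g^{-1}x_0).\bigl[ f^{p,q-1}\bigl((g^{-1}x_0)^{-1}g^{-1}.\vec{x},\,
     (g^{-1}x_0)^{-1}g^{-1}.\vec{x}'\bigr) \bigr] \Bigr)\\
  &= x_0.\bigl[ f^{p,q-1}(x_0^{-1}.\vec{x},\, x_0^{-1}.\vec{x}') \bigr]
   = f_{eq}^{p,q-1}(\vec{x},\vec{x}') \, ,
\end{align*}
```

since $(g^{-1}x_0)^{-1}g^{-1} = x_0^{-1}$; hence $g.f_{eq}^{p,q-1} = f_{eq}^{p,q-1}$ for all $g \in G$.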
For this purpose we first consider the cochain complex associated to the cosimplicial abelian group $A^{p,*} (X;V) := \left\{ f : X^{p+1} \times X^{*+1} \rightarrow V \mid \, \forall \vec{x}' \in X^{*+1} : f (-,\vec{x}') \in C (X^{p+1},V) \right\}$ of global cochains, its subcomplex $A_{cr}^{p,*} (X,\mathfrak{U};V)$ and the cochain complexes associated to the cosimplicial abelian groups $$\begin{aligned} A^{p,*} (X,\mathfrak{U};V) & := & \{ f : X^{p+1} \times \mathfrak{U}[*] \rightarrow V \mid \, \forall \vec{x}' \in \mathfrak{U}[*] : f (-,\vec{x}') \in C (X^{p+1},V) \} \quad \text{and} \\ A_c^{p,*} (X,\mathfrak{U};V) & := & C ( X^{p+1} \times \mathfrak{U}[*] , V) \, .\end{aligned}$$ Restriction of global to local cochains induces morphisms of cochain complexes $\operatorname{Res}^{p,*} : A^{p,*} (X;V) \twoheadrightarrow A^{p,*} (X,\mathfrak{U};V)$ and $\operatorname{Res}_{cr}^{p,*} : A_{cr}^{p,*} (X,\mathfrak{U};V) \twoheadrightarrow A_c^{p,*} (X,\mathfrak{U};V)$ intertwining the inclusions of the subcomplexes $A_{cr}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X;V)$ and $A_c^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X, \mathfrak{U};V)$, so one obtains the following commutative diagram $$\label{morphexseq} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\operatorname{Res}_{cr}^{p,*} ) & \longrightarrow & A_{cr}^{p,*} (X,\mathfrak{U};V) & \longrightarrow & A_c^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\operatorname{Res}^{p,*} ) & \longrightarrow & A^{p,*} (X;V) & \longrightarrow & A^{p,*} (X , \mathfrak{U};V) & \longrightarrow 0 \end{array}$$ of cochain complexes whose rows are exact. The kernel $\ker (\operatorname{Res}^{p,q} )$ is the subspace of those $(p,q)$-cochains which are trivial on $X^{p+1} \times \mathfrak{U} [q]$. Since these $(p,q)$-cochains are continuous on $X^{p+1} \times \mathfrak{U}[q]$ we find that both kernels coincide. 
We abbreviate the complex $\ker (\operatorname{Res}^{p,*} ) = \ker (\operatorname{Res}_{cr}^{p,*} )$ by $K^{p,*}$ and denote the cohomology groups of the complex $A_{cr}^{p,*} (X,\mathfrak{U};V)$ by $H_{cr}^{p,*} (X,\mathfrak{U};V)$, the cohomology groups of the complex $A_c^{p,*} (X,\mathfrak{U};V)$ of continuous cochains by $H_c^{p,*} (X,\mathfrak{U};V)$ and the cohomology groups of the complex $A^{p,*} (X,\mathfrak{U};V)$ by $H^{p,*} (X,\mathfrak{U};V)$. The cochain complexes $A^{p,*} (X;V)$ are exact. For any point $* \in X$ the homomorphisms $h^{p,q} : A^{p,q} (X;V) \rightarrow A^{p,q-1} (X;V)$ given by $h^{p,q} (f) (\vec{x},\vec{x}'):=f (\vec{x},*,\vec{x}')$ form a contraction of the complex $A^{p,*} (X;V)$. The morphism of short exact sequences of cochain complexes in Diagram \[morphexseq\] gives rise to a morphism of long exact cohomology sequences, in which the cohomology of the complex $A^{p,*} (X;V)$ is trivial: $$\label{diaglecs} \xymatrix@R-10pt@C-4pt{ \ar[r] & H^q (K^{p,*} ) \ar[r] \ar@{=}[d] & H_{cr}^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H_c^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H^{q+1} (K^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K^{p,*} ) \ar[r]& 0 \ar[r] & H^{p,q} (X,\mathfrak{U};V) \ar[r]^\cong & H^{q+1} (K^{p,*} ) \ar[r] & {} }$$ The augmented complex $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*} (X,\mathfrak{U};V)$ is exact if and only if the inclusion $A_c^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology. This is an immediate consequence of Diagram \[diaglecs\]. If the inclusion $A_c^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the inclusions $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{cr}^{*,*}(X,\mathfrak{U};V)^G$ and $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ also induce isomorphisms in cohomology. 
This follows from the preceding Lemma and Corollaries \[augexthenjeqindiso\] and \[augextheninclindiso\]. The passage to the colimit over all open coverings of $X$ yields the corresponding results for the complexes of cochains with continuous germs: \[noneqextheneqexcg\] If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*}(X;V)$ are exact, then the augmented sub column complexes $A_c^p (X;V)^G \hookrightarrow A_{cg}^{p,*}(X;V)^G$ of equivariant cochains are exact as well. The proof is similar to that of Proposition \[noneqextheneqex\]. \[augexthenjeqindisocg\] If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ are exact, then the inclusion $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{cg}^{*,*}(X;V)^G$ induces an isomorphism in cohomology. \[augextheninclindisocg\] If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ are exact, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology. \[remonlyginvcov\] Alternatively to taking the colimit over all open coverings $\mathfrak{U}$ of $X$ one may consider $G$-invariant open coverings only to obtain the same results. (This was shown in Proposition \[natinclofeqccisiso\] and Lemma \[natinclofeqdcisiso\].) If $G=X$ is a topological group which acts on itself by left translation and the augmented columns $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V) := \operatorname{colim}A_{cr}^{p,*} (X,\mathfrak{U}_U;V)$ (where $U$ runs over all open identity neighbourhoods in $G$) are exact, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology. The complex $A^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the complex $A^* (\mathfrak{U}; C (X^{p+1},V))$. 
The colimit $A_{AS}^* (X; C (X^{p+1} ,V)):=\operatorname{colim}A^* (\mathfrak{U}; C (X^{p+1} ,V))$, where $\mathfrak{U}$ runs over all open coverings of $X$, is the complex of Alexander-Spanier cochains on $X$. Therefore the colimit complex $\operatorname{colim}A^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the cochain complex $A_{AS}^* (X;C (X^{p+1} ,V))$. A similar observation can be made for the cochain complexes $A_c^{p,*} (X,\mathfrak{U};V)$ if the exponential law $C (X^{p+1} \times \mathfrak{U}[q],V) \cong C (X^{p+1},C (\mathfrak{U}[q],V))$ holds for a cofinal set of open coverings $\mathfrak{U}$ of $X$. Passing to the colimit in Diagram \[morphexseq\] yields the morphism $$\label{morphexseqcg} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\operatorname{Res}_{cg}^{p,*} ) & \longrightarrow & A_{cg}^{p,*} (X;V) & \longrightarrow & \operatorname{colim}A_c^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\operatorname{Res}^{p,*} ) & \longrightarrow & A^{p,*} (X;V) & \longrightarrow & A_{AS}^* (X; C (X^{p+1},V)) & \longrightarrow 0 \end{array}$$ of short exact sequences of cochain complexes. The kernel $\ker (\operatorname{Res}^{p,q})$ is the subspace of those $(p,q)$-cochains which are trivial on $X^{p+1} \times \mathfrak{U} [q]$ for some open covering $\mathfrak{U}$ of $X$. Since these $(p,q)$-cochains are continuous on $X^{p+1} \times \mathfrak{U}[q]$ we find that both kernels coincide. We abbreviate the complex $\ker (\operatorname{Res}^{p,*} ) = \ker (\operatorname{Res}_{cg}^{p,*} )$ by $K_{cg}^{p,*}$ and denote the cohomology groups of the complex $A_{cg}^{p,*} (X;V)$ by $H_{cg}^{p,*} (X;V)$. 
The morphism of short exact sequences of cochain complexes in Diagram \[morphexseqcg\] gives rise to a morphism of long exact cohomology sequences: $$\label{diaglecscg} \xymatrix@C-11pt@R-10pt{ \ar[r] & H^q (K_{cg}^{p,*} ) \ar[r] \ar@{=}[d] & H_{cg}^{p,q} (X;V) \ar[r] \ar[d] & H^q (\operatorname{colim}A_c^{p,*} (X,\mathfrak{U};V)) \ar[r] \ar[d] & H^{q+1} (K_{cg}^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_{cg}^{p,*} ) \ar[r]& 0 \ar[r] & H_{AS}^q (X; C (X^{p+1},V)) \ar[r]^\cong & H^{q+1} (K_{cg}^{p,*} ) \ar[r] & {} }$$ The augmented complex $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ is exact if and only if the inclusion $\operatorname{colim}A_c^{p,*} (X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ of cochain complexes induces an isomorphism in cohomology. This is an immediate consequence of Diagram \[diaglecscg\]. \[inclcolimacpsindisothenjeqiso\] If the inclusion $\operatorname{colim}A_c^{p,*} (X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ induces an isomorphism in cohomology, then $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{cg}^{*,*}(X;V)^G$ and $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ also induce an isomorphism in cohomology. This follows from the preceding Lemma and Corollaries \[augexthenjeqindisocg\] and \[augextheninclindisocg\]. As observed before (cf. Remark \[remonlyginvcov\]) one may restrict oneself to the directed system of $G$-invariant open coverings only to achieve the same result. Thus we observe: If $G=X$ is a locally contractible topological group which acts on itself by left translation and the inclusion $\operatorname{colim}A_c^{p,*} (X,\mathfrak{U}_U;V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ (where $U$ runs over all open identity neighbourhoods in $G$) induces an isomorphism in cohomology, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology as well. 
It has been shown in [@vE62b] that the cohomology of the colimit cochain complex $\operatorname{colim}A^* ( \mathfrak{U} ;C(X^{p+1},V))$ is the Alexander-Spanier cohomology of $X$. \[xcontrthenacpsistriv\] If the topological space $X$ is contractible, then the cohomology of the complex $\operatorname{colim}A_c^{p,*} (X,\mathfrak{U};V)$ is trivial. The reasoning is analogous to that for the Alexander-Spanier presheaf. The proof [@F10 Theorem 2.5.2] carries over almost verbatim. For contractible $X$ the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology. If the topological space $X$ is contractible, then the Alexander-Spanier cohomology $H_{AS} (X;C^{p+1}(X,V))$ is trivial and the cohomology of the cochain complex $\operatorname{colim}A_c^{p,*} (X,\mathfrak{U};V)$ is trivial by Lemma \[xcontrthenacpsistriv\]. By Proposition \[inclcolimacpsindisothenjeqiso\] the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ then induces an isomorphism in cohomology. \[gcontriso\] For contractible topological groups $G$ the continuous group cohomology is isomorphic to the cohomology of homogeneous group cochains with continuous germ at the diagonal. Working in the Category of $k$-spaces {#secwinktop} ===================================== In this section we consider transformation groups $(G,X)$ in the category ${\mathbf{kTop}}$ of $k$-spaces and $G$-modules $V$ in ${\mathbf{kTop}}$. Working only in the category ${\mathbf{kTop}}$ we construct a spectral sequence analogously to that in Section \[sectss\] and derive results analogous to those obtained there. For every $k$-space $X$ and abelian $k$-group $V$ the subcomplex $A_{kc}^* (X;V):= C ( {\mathrm{k}}X^{*+1};V)$ of the standard complex is called the *continuous standard complex in ${\mathbf{kTop}}$*.
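For orientation, the differential on the standard complex (and hence on its subcomplex $A_{kc}^* (X;V)$) is the usual homogeneous one; a minimal sketch, with the sign convention assumed to be the one fixed in Section \[sectss\]:

```latex
% Homogeneous differential on the standard complex A^*(X;V);
% it restricts to A_{kc}^*(X;V) because omitting one coordinate
% is a continuous map kX^{n+2} -> kX^{n+1}.
(d f)(x_0,\dots,x_{n+1})
  \;=\; \sum_{i=0}^{n+1} (-1)^i \, f(x_0,\dots,\widehat{x_i},\dots,x_{n+1})
```

Since $d$ commutes with the $G$-action of Eq. \[defgact\], the equivariant cochains form a subcomplex, as used throughout this section.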
For open coverings $\mathfrak{U}$ of a $k$-space $X$ we also consider the subcomplex of $A^* (X;V)$ formed by the groups $$A_{kcr}^n (X,\mathfrak{U};V) := \left\{ f \in A^n (X;V) \mid \, f_{\mid {\mathrm{k}}\mathfrak{U} [n]} \in C ( {\mathrm{k}}\mathfrak{U}[n];V) \right\}$$ of cochains whose restrictions to the open subspaces ${\mathrm{k}}\mathfrak{U}[n]$ of ${\mathrm{k}}X^{n+1}$ are continuous. The cohomology of the cochain complex $A_{kcr}^* (X,\mathfrak{U};V)$ is denoted by $H_{kcr} (X,\mathfrak{U};V)$. If the covering $\mathfrak{U}$ of $X$ is $G$-invariant, then ${\mathrm{k}}\mathfrak{U}[*]$ is a simplicial $G$-subspace of the simplicial $G$-space ${\mathrm{k}}X^{*+1}$. If $G=X$ is a $k$-group which acts on itself by left translation and $U$ an open identity neighbourhood, then $\mathfrak{U}_U :=\{ g.U \mid g \in G \}$ is a $G$-invariant open covering of $G$ and ${\mathrm{k}}\mathfrak{U}_U[*]$ is a simplicial $G$-subspace of ${\mathrm{k}}G^{*+1}$. For $G$-invariant coverings $\mathfrak{U}$ of $X$ the cohomology of the subcomplex $A_{kcr}^* (X,\mathfrak{U};V)^G$ of $G$-equivariant cochains is denoted by $H_{kcr,eq} (X,\mathfrak{U};V)$. If $G=X$ is a $k$-group which acts on itself by left translation and $U$ an open identity neighbourhood, then the complex $A_{kcr}^* (X,\mathfrak{U}_U;V)^G$ is the complex of homogeneous group cochains whose restrictions to the subspaces ${\mathrm{k}}\mathfrak{U}_U [*]$ are continuous. (These are sometimes called $\mathfrak{U}$-continuous cochains.) For directed systems $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ one can also consider the colimit complex $\operatorname{colim}_i A_{kcr}^* (X,\mathfrak{U}_i ;V)$.
In particular, if the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, one obtains the complex $$A_{kcg}^* (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is an open cover of } X} A_{kcr}^* (X,\mathfrak{U};V)$$ of global cochains whose germs at the diagonal are continuous. This happens for all $k$-spaces $X$ for which the finite products $X^{n+1}$ in ${\mathbf{Top}}$ are already compactly generated, e.g. metrisable spaces, locally compact spaces or Hausdorff $k_\omega$-spaces. The complex $A_{kcg}^* (X;V)$ is then a subcomplex of the standard complex $A^* (X;V)$ which is invariant under the $G$-action (Eq. \[defgact\]) and thus a subcomplex of $G$-modules. The $G$-equivariant cochains with continuous germ form a subcomplex $A_{kcg}^* (X;V)^G$ thereof, whose cohomology is denoted by $H_{kcg,eq} (X;V)$. The latter subcomplex can also be obtained by taking the colimit over all $G$-invariant open coverings of $X$ only: \[natinclofeqccisisok\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then the natural morphism of cochain complexes $$A_{kcg,eq}^* (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is a $G$-invariant open cover of } X} A_{kcr}^* (X,\mathfrak{U};V)^G \rightarrow A_{kcg}^* (X;V)^G$$ is a natural isomorphism. The proof is analogous to that of Proposition \[natinclofeqccisiso\]. If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then the cohomology $H_{kcg,eq} (X;V)$ is the cohomology of the complex of equivariant cochains which are continuous on some $G$-invariant neighbourhood of the diagonal.
If $G=X$ is a metrisable or locally compact topological group or a real or complex Kac-Moody group which acts on itself by left translation, then the complex $A_{kcg}^* (G;V)^G$ is the complex of homogeneous group cochains whose germs at the diagonal are continuous. (By abuse of language these are sometimes called 'locally continuous' group cochains.) Analogously to the procedure in Section \[sectss\] we can construct a spectral sequence relating $A_{kcr}^* (X,\mathfrak{U};V)$ and $A_{kc}^* (X;V)$. For this purpose we consider the abelian groups $$\label{defrcuk} A_{kcr}^{p,q} ( X,\mathfrak{U} ; V ) := \left\{ f: X^{p+1} \times X^{q+1} \rightarrow V \mid f_{\mid {\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U}[q]} \; \text{is continuous} \right\} \, .$$ The abelian groups $A_{kcr}^{p,q} ( X,\mathfrak{U} ; V )$ form a first quadrant double complex whose vertical and horizontal differentials are given by the same formulas as for the double complex $A_{cr}^{p,q} ( X,\mathfrak{U} ; V )$ introduced in Section \[sectss\]. Analogously to the latter double complex, the rows of the double complex $A_{kcr}^{*,*} ( X,\mathfrak{U} ; V )$ can be augmented by the complex $A_{kcr}^* (X,\mathfrak{U} ;V)$ for the covering $\mathfrak{U}$ and the columns can be augmented by the exact complex $A_{kc}^* (X;V)$ of continuous cochains. We denote the total complex of the double complex $A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)$ by ${\mathrm{Tot}}A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)$. The augmentations of the rows and columns induce morphisms $i_k^* : A_{kcr}^* ( X, \mathfrak{U} ; V) \rightarrow {\mathrm{Tot}}A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)$ and $j_k^*: A_{kc}^* ( X ; V) \rightarrow {\mathrm{Tot}}A_{kcr}^{*,*} ( X,\mathfrak{U} ; V)$ of cochain complexes respectively. \[columnsexactk\] The morphism $i_k^*: A_{kcr}^* ( X,\mathfrak{U} ; V) \rightarrow {\mathrm{Tot}}A_{kcr}^{*,*} (X,\mathfrak{U} ; V)$ induces an isomorphism in cohomology.
The proof of Lemma \[columnsexact\] also works in the category ${\mathbf{kTop}}$ of $k$-spaces. For $G$-invariant open coverings $\mathfrak{U}$ of $X$ one can consider the sub double complex $A_{kcr}^{*,*} ( X,\mathfrak{U} ; V )^G$ of $A_{kcr}^{*,*} ( X,\mathfrak{U} ; V )$ whose rows are augmented by the cochain complex $A_{kcr}^* (X,\mathfrak{U} ;V)^G$ for the covering $\mathfrak{U}$ and whose columns are augmented by the complex $A_{kc}^* (X;V)^G$ of continuous equivariant cochains (which is not exact in general). \[columnsexacteqk\] For $G$-invariant coverings $\mathfrak{U}$ of $X$ the morphism $i_{k,eq}^* := {i_k^*}^G$ induces an isomorphism in cohomology. The proof is analogous to that of Lemma \[columnsexacteq\]. So the morphism $H (i_{k,eq}) : H_{kcr,eq} (X,\mathfrak{U};V) \rightarrow H ( {\mathrm{Tot}}A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)^G )$ is invertible. For the composition $H (i_{k, eq})^{-1} H(j_{k,eq}):H_{kc,eq}(X;V)\rightarrow H_{kcr,eq}(X,\mathfrak{U};V)$ we observe: \[contiscohtocrk\] The image $j_k^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in ${\mathrm{Tot}}A_{kcr}^{*,*} (X,\mathfrak{U};V)^G$ is cohomologous to the image $i_{k,eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{kcr}^n (X,\mathfrak{U};V)^G$ in ${\mathrm{Tot}}A_{kcr}^{*,*} (X,\mathfrak{U};V)^G$. The proof is analogous to that of Proposition \[contiscohtocr\]. The map $H (i_{k,eq})^{-1} H(j_{k,eq}) :H_{kc,eq}(X;V) \rightarrow H_{kcr,eq}(X,\mathfrak{U};V)$ is induced by the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$. If the morphism $j^*_{k, eq}:={j_k^*}^G : A_{kc}^* (X;V)^G \rightarrow {\mathrm{Tot}}A_{kcr}^{*,*}(X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively.
For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ the morphism $\operatorname{colim}_i i_k^*: \operatorname{colim}_i A_{kcr}^* ( X,\mathfrak{U}_i ; V) \rightarrow {\mathrm{Tot}}\operatorname{colim}_i A_{kcr}^{*,*} (X,\mathfrak{U}_i ; V)$ induces an isomorphism in cohomology. The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \[columnsexactk\]). For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of $G$-invariant open coverings of $X$ the morphism $\operatorname{colim}_i i_{k, eq}^*: \operatorname{colim}_i A_{kcr}^* ( X,\mathfrak{U}_i ; V)^G \rightarrow {\mathrm{Tot}}\operatorname{colim}_i A_{kcr}^{*,*} (X,\mathfrak{U}_i ; V)^G$ induces an isomorphism in cohomology. The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \[columnsexacteqk\]). If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then one obtains the double complex $$A_{kcg}^{*,*} (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is an open cover of } X} A_{kcr}^{*,*} (X,\mathfrak{U};V)$$ whose rows and columns are augmented by the complexes $A_{kcg}^* (X;V)$ and $A_{kc}^* (X;V)$ respectively. In this case the colimit morphism $i_{kcg}^* : A_{kcg}^* ( X; V) \rightarrow {\mathrm{Tot}}A_{kcg}^{*,*} (X; V)$ induces an isomorphism in cohomology. Furthermore the colimit double complex $A_{kcg}^{*,*} (X;V)$ then is a double complex of $G$-modules and the $G$-equivariant cochains form a sub double complex $A_{kcg}^{*,*} (X;V)^G$, whose rows and columns are augmented by the colimit complex $A_{kcg,eq}^* (X;V)$ and by the complex $A_{kc}^* (X;V)^G$ respectively.
In addition we observe: \[natinclofeqdcisisok\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then the natural morphism of double complexes $$A_{kcg,eq}^{*,*} (X;V):= \operatorname{colim}_{\mathfrak{U} \text{ is a $G$-invariant open cover of } X} A_{kcr}^{*,*} (X,\mathfrak{U};V)^G \rightarrow A_{kcg}^{*,*} (X;V)^G$$ is a natural isomorphism. The proof is analogous to that of Proposition \[natinclofeqccisisok\]. As a consequence the colimit morphism $i_{kcg,eq}^* : A_{kcg,eq}^* ( X; V) \rightarrow {\mathrm{Tot}}A_{kcg}^{*,*} (X; V)^G$ then induces an isomorphism in cohomology, and the morphism $H (i_{kcg,eq})$ is invertible. For the composition $H(i_{kcg,eq})^{-1} H(j_{k,eq}):H_{kc,eq}(X;V)\rightarrow H_{kcg,eq}(X;V)$ we observe: \[contiscohtocreqk\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then the image $j_k^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in ${\mathrm{Tot}}A_{kcg}^{*,*} (X;V)^G$ is cohomologous to the image $i_{kcg,eq}^n (f)$ of the equivariant cocycle $f\in A_{kcg,eq}^n (X;V)$ in ${\mathrm{Tot}}A_{kcg}^{*,*} (X;V)^G$. The proof is analogous to that of Proposition \[contiscohtocr\]. If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then the composition $H (i_{kcg,eq})^{-1} H(j_{k, eq}):H_{kc,eq}(X;V)\rightarrow H_{kcg,eq}(X;V)$ is induced by the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$.
If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the morphism $j^*_{k, eq}:={j_k^*}^G : A_{kc}^* (X;V)^G \rightarrow {\mathrm{Tot}}A_{kcg}^{*,*}(X;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg,eq}^* (X;V)$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. Continuous and $\mathfrak{U}$-Continuous Cochains on $k$-spaces {#seccontanduccck} ================================================================ In this section we consider transformation $k$-groups $(G,X)$ and $G$-modules $V$ in ${\mathbf{kTop}}$ for which we show that the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ of the complex of continuous equivariant cochains into the complex of equivariant $\mathfrak{U}$-continuous cochains induces an isomorphism $H_{kc,eq} (X;V) \cong H_{kcr,eq} (X,\mathfrak{U};V)$ provided the $k$-space $X$ is contractible. The procedure is similar to that for topological spaces. At first we reduce the problem to the non-equivariant case: \[noneqextheneqexk\] If the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*}(X,\mathfrak{U};V)$ are exact, then the augmented sub column complexes $A_{kc}^p (X;V)^G \hookrightarrow A_{kcr}^{p,*}(X,\mathfrak{U};V)^G$ of equivariant cochains are exact as well. The proof is analogous to that of Proposition \[noneqextheneqex\]. \[augexthenjeqindisok\] If the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $j_{k, eq}^* : A_{kc}^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{kcr}^{*,*}(X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology.
\[augextheninclindisok\] If the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. To achieve the announced result it remains to show that for contractible $k$-spaces $X$ the colimit augmented columns $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ are exact. For this purpose we first consider the cochain complex associated to the cosimplicial abelian group $$A_k^{p,*} (X;V) := \left\{ f : X^{p+1} \times X^{*+1} \rightarrow V \mid \, \forall \vec{x}' \in X^{*+1} :f (-,\vec{x}') \in C ( {\mathrm{k}}X^{p+1},V) \right\}$$ of global cochains, its subcomplex $A_{kcr}^{p,*} (X,\mathfrak{U};V)$ and the cochain complexes associated to the cosimplicial abelian groups $$\begin{aligned} A_k^{p,*} (X,\mathfrak{U};V) & := & \{ f : X^{p+1} \times \mathfrak{U}[*] \rightarrow V \mid \, \forall \vec{x}' \in \mathfrak{U}[*] : f (-,\vec{x}') \in C ({\mathrm{k}}X^{p+1},V) \} \quad \text{and} \\ A_{kc}^{p,*} (X,\mathfrak{U};V) & := & C ( {\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U}[*] , V) \, .\end{aligned}$$ Restriction of global to local cochains induces morphisms of cochain complexes $\operatorname{Res}^{p,*}_k : A_k^{p,*} (X;V) \twoheadrightarrow A_k^{p,*} (X,\mathfrak{U};V)$ and $\operatorname{Res}_{kcr}^{p,*} : A_{kcr}^{p,*} (X,\mathfrak{U};V) \twoheadrightarrow A_{kc}^{p,*} (X,\mathfrak{U};V)$ intertwining the inclusions of the subcomplexes $A_{kcr}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X;V)$ and $A_{kc}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X,\mathfrak{U};V)$, so one obtains the following commutative diagram $$\label{morphexseqk} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\operatorname{Res}_{kcr}^{p,*} ) & \longrightarrow & A_{kcr}^{p,*} (X,\mathfrak{U};V) & \longrightarrow & A_{kc}^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & &
\downarrow & \\ 0 \longrightarrow & \ker (\operatorname{Res}^{p,*}_k ) & \longrightarrow & A_k^{p,*} (X;V) & \longrightarrow & A_k^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \end{array}$$ of cochain complexes whose rows are exact. The kernel $\ker (\operatorname{Res}^{p,q}_k )$ is the subspace of those $(p,q)$-cochains which are trivial on ${\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U} [q]$. Since these $(p,q)$-cochains are continuous on ${\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U}[q]$, we find that both kernels coincide. We abbreviate the complex $\ker (\operatorname{Res}^{p,*}_k ) = \ker (\operatorname{Res}_{kcr}^{p,*} )$ by $K_k^{p,*}$ and denote the cohomology groups of the complex $A_{kcr}^{p,*} (X,\mathfrak{U};V)$ by $H_{kcr}^{p,*} (X,\mathfrak{U};V)$, the cohomology groups of the complex $A_{kc}^{p,*} (X,\mathfrak{U};V)$ of continuous cochains by $H_{kc}^{p,*} (X,\mathfrak{U};V)$ and the cohomology groups of the complex $A_k^{p,*} (X,\mathfrak{U};V)$ by $H_k^{p,*} (X,\mathfrak{U};V)$. The cochain complexes $A_k^{p,*} (X;V)$ are exact. For any point $* \in X$ the homomorphisms $h^{p,q} : A_k^{p,q} (X;V) \rightarrow A_k^{p,q-1} (X;V)$ given by $h^{p,q} (f) (\vec{x},\vec{x}'):=f (\vec{x},*,\vec{x}')$ form a contraction of the complex $A_k^{p,*} (X;V)$.
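That the maps $h^{p,q}$ indeed form a contraction can be checked directly; a sketch of the computation, with the homogeneous differential taken in the primed variables:

```latex
% Writing d for the differential of A_k^{p,*}(X;V), the i=0 term of
% h(df) reproduces f, and the remaining terms cancel against d(hf):
(h^{p,q+1} d + d\, h^{p,q})(f)(\vec{x};x'_0,\dots,x'_q)
 = f(\vec{x};x'_0,\dots,x'_q)
 + \sum_{j=0}^{q} (-1)^{j+1} f(\vec{x};*,x'_0,\dots,\widehat{x'_j},\dots,x'_q)
 + \sum_{j=0}^{q} (-1)^{j}   f(\vec{x};*,x'_0,\dots,\widehat{x'_j},\dots,x'_q)
 = f(\vec{x};x'_0,\dots,x'_q)
```

So $h d + d h = \mathrm{id}$ and the cohomology of $A_k^{p,*} (X;V)$ vanishes, as claimed.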
The morphism of short exact sequences of cochain complexes in Diagram \[morphexseqk\] gives rise to a morphism of long exact cohomology sequences, in which the cohomology of the complex $A_k^{p,*} (X;V)$ is trivial: $$\label{diaglecsk} \xymatrix@R-10pt@C-4pt{ \ar[r] & H^q (K_k^{p,*} ) \ar[r] \ar@{=}[d] & H_{kcr}^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H_{kc}^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H^{q+1} (K_k^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_k^{p,*} ) \ar[r]& 0 \ar[r] & H_k^{p,q} (X,\mathfrak{U};V) \ar[r]^\cong & H^{q+1} (K_k^{p,*} ) \ar[r] & {} }$$ The augmented complex $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*} (X,\mathfrak{U};V)$ is exact if and only if the inclusion $A_{kc}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology. This is an immediate consequence of Diagram \[diaglecsk\]. If the inclusion $A_{kc}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the inclusions $A_{kc}^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{kcr}^{*,*}(X,\mathfrak{U};V)^G$ and $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ also induce an isomorphism in cohomology. This follows from the preceding Lemma and Corollaries \[augexthenjeqindisok\] and \[augextheninclindisok\].
For $k$-spaces $X$ for which the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, the passage to the colimit over all open coverings of $X$ yields the corresponding results for the complexes of cochains with continuous germs: \[noneqextheneqexcgk\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*}(X;V)$ are exact, then the augmented sub column complexes $A_{kc}^p (X;V)^G \hookrightarrow A_{kcg}^{p,*}(X;V)^G$ of equivariant cochains are exact as well. The proof is similar to that of Proposition \[noneqextheneqex\]. \[augexthenjeqindisocgk\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ are exact, then the inclusion $j_{k, eq}^* : A_{kc}^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{kcg}^{*,*}(X;V)^G$ induces an isomorphism in cohomology. \[augextheninclindisocgk\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ are exact, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology. \[remonlyginvcovk\] Alternatively to taking the colimit over all open coverings $\mathfrak{U}$ of $X$ one may consider $G$-invariant open coverings only to obtain the same results.
(This was shown in Proposition \[natinclofeqccisisok\] and Lemma \[natinclofeqdcisisok\].) If $G=X$ is a metrisable, locally compact or Hausdorff $k_\omega$ topological group which acts on itself by left translation and the augmented columns $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V) := \operatorname{colim}A_{kcr}^{p,*} (X,\mathfrak{U}_U;V)$ (where $U$ runs over all open identity neighbourhoods in $G$) are exact, then $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology. The complex $A_k^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the complex $A^* (\mathfrak{U}; C ({\mathrm{k}}X^{p+1},V))$. If the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, then the colimit $A_{AS}^* (X; C (X^{p+1} ,V)):=\operatorname{colim}A^* (\mathfrak{U}; C (X^{p+1} ,V))$, where $\mathfrak{U}$ runs over all open coverings of $X$, is the complex of Alexander-Spanier cochains on $X$ (with values in $C (X^{p+1} ,V)$). In this case the colimit complex $\operatorname{colim}A_k^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the cochain complex $A_{AS}^* (X;C ({\mathrm{k}}X^{p+1} ,V))$. A similar observation can be made for the cochain complex $A_{kc}^{p,*} (X,\mathfrak{U};V)$ because the exponential law $C ( {\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U}[q],V) \cong C ({\mathrm{k}}X^{p+1}, C ({\mathrm{k}}\mathfrak{U}[q],V))$ holds in ${\mathbf{kTop}}$.
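The exponential law invoked here is the cartesian closedness of ${\mathbf{kTop}}$; spelled out, it is the currying bijection (a sketch; continuity of the evaluation map in ${\mathbf{kTop}}$ is what makes the inverse exist):

```latex
% Currying in kTop: a map continuous on the k-ified product
% corresponds to a continuous map into the mapping space,
% which carries the k-ified compact-open topology.
C({\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U}[q],\,V)
  \;\xrightarrow{\;\cong\;}\;
C\bigl({\mathrm{k}}X^{p+1},\, C({\mathrm{k}}\mathfrak{U}[q],V)\bigr),
\qquad
f \longmapsto \bigl( \vec{x} \mapsto f(\vec{x},-) \bigr)
```

This is the point where working in ${\mathbf{kTop}}$ pays off: in ${\mathbf{Top}}$ the analogous law only holds under extra hypotheses on the spaces involved.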
Passing to the colimit in Diagram \[morphexseqk\] yields the morphism $$\label{morphexseqcgk} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\operatorname{Res}_{kcg}^{p,*} ) & \longrightarrow & A_{kcg}^{p,*} (X;V) & \longrightarrow & \operatorname{colim}A_{kc}^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\operatorname{Res}_k^{p,*} ) & \longrightarrow & A_k^{p,*} (X;V) & \longrightarrow & A_{AS}^* (X; C^{p+1} (X,V)) & \longrightarrow 0 \end{array}$$ of short exact sequences of cochain complexes. The kernel $\ker (\operatorname{Res}^{p,q}_k)$ is the subspace of those $(p,q)$-cochains which are trivial on ${\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U} [q]$ for some open covering $\mathfrak{U}$ of $X$. Since these $(p,q)$-cochains are continuous on ${\mathrm{k}}X^{p+1} \times_k {\mathrm{k}}\mathfrak{U}[q]$, we find that both kernels coincide. We abbreviate the complex $\ker (\operatorname{Res}^{p,*}_k ) = \ker (\operatorname{Res}_{kcg}^{p,*} )$ by $K_{kcg}^{p,*}$ and denote the cohomology groups of the complex $A_{kcg}^{p,*} (X;V)$ by $H_{kcg}^{p,*} (X;V)$. The morphism of short exact sequences of cochain complexes in Diagram \[morphexseqcgk\] gives rise to a morphism of long exact cohomology sequences: $$\label{diaglecscgk} \xymatrix@C-11pt@R-10pt{ \ar[r] & H^q (K_{kcg}^{p,*} ) \ar[r] \ar@{=}[d] & H_{kcg}^{p,q} (X;V) \ar[r] \ar[d] & H^q (\operatorname{colim}A_{kc}^{p,*} (X,\mathfrak{U};V)) \ar[r] \ar[d] & H^{q+1} (K_{kcg}^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_{kcg}^{p,*} ) \ar[r]& 0 \ar[r] & H_{AS}^q (X; C^{p+1} (X,V)) \ar[r]^\cong & H^{q+1} (K_{kcg}^{p,*} ) \ar[r] & {} }$$ The augmented complex $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ is exact if and only if the inclusion $\operatorname{colim}A_{kc}^{p,*} (X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C^{p+1}(X,V))$ of cochain complexes induces an isomorphism in cohomology.
This is an immediate consequence of Diagram \[diaglecscgk\]. \[inclcolimacpsindisothenjeqisok\] If the open diagonal neighbourhoods ${\mathrm{k}}\mathfrak{U}[n]$ in ${\mathrm{k}}X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the inclusion $\operatorname{colim}A_{kc}^{p,*}(X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ induces an isomorphism in cohomology, then $j_{k, eq}^* : A_{kc}^* (X;V)^G \hookrightarrow {\mathrm{Tot}}A_{kcg}^{*,*}(X;V)^G$ and $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ also induce an isomorphism in cohomology. This follows from the preceding Lemma and Corollaries \[augexthenjeqindisocgk\] and \[augextheninclindisocgk\]. As observed before (cf. Remark \[remonlyginvcovk\]) one may restrict oneself to the directed system of $G$-invariant open coverings only to achieve the same result. Thus we observe: If $G=X$ is a locally contractible topological group which is metrisable, locally compact or Hausdorff $k_\omega$, acts on itself by left translation and the inclusion $\operatorname{colim}A_{kc}^{p,*} (X,\mathfrak{U}_U;V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ (where $U$ runs over all open identity neighbourhoods in $G$) induces an isomorphism in cohomology, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology as well. It has been shown in [@vE62b] that the cohomology of the colimit cochain complex $\operatorname{colim}A^* ( \mathfrak{U} ;C({\mathrm{k}}X^{p+1},V))$ is the Alexander-Spanier cohomology of $X$ with coefficients $C({\mathrm{k}}X^{p+1},V)$. \[xcontrthenacpsistrivk\] If the $k$-space $X$ is contractible, then the cohomology of the complex $\operatorname{colim}A_{kc}^{p,*} (X,\mathfrak{U};V)$ is trivial. The reasoning is analogous to that for the Alexander-Spanier presheaf. The proof [@F10 Theorem 2.5.2] carries over almost verbatim.
For contractible $X$ the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology. If the $k$-space $X$ is contractible, then the Alexander-Spanier cohomology of $X$ is trivial and the cohomology of the cochain complex $\operatorname{colim}A_{kc}^{p,*} (X,\mathfrak{U};V)$ is trivial by Lemma \[xcontrthenacpsistrivk\]. By Proposition \[inclcolimacpsindisothenjeqisok\] the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ then induces an isomorphism in cohomology. For metrisable, locally compact or Hausdorff $k_\omega$ topological groups $G$ which are contractible, the continuous group cohomology $H_{kc,eq} (G;V)$ is isomorphic to the cohomology $H_{kcg,eq} (G;V)$ of homogeneous group cochains with continuous germ at the diagonal. Complexes of Smooth Cochains ============================ In this section we introduce the sub (double) complexes for smooth transformation groups $(G,M)$ and smooth $G$-modules $V$, where $V$ is an abelian Lie group. (We use the general infinite dimensional calculus introduced in [@BGN04].) Let $(G,M)$ be a smooth transformation group, $V$ be a smooth $G$-module and $\mathfrak{U}$ be an open covering of $M$. For every manifold $M$ and abelian Lie group $V$ the subcomplex $A_s^* (M;V):= C^\infty (M^{*+1};V)$ of the standard complex is called the *smooth standard complex*. The cohomology $H_{eq,s} (M;V)$ of the subcomplex $A_s^* (M;V)^G$ is called the equivariant smooth cohomology of $M$ (with values in $V$). For any Lie group $G$ which acts on itself by left translation and smooth $G$-module $V$ the complex $A_s^* (G;V)^G$ is the complex of smooth (homogeneous) group cochains; its cohomology $H_{eq,s} (G;V)$ is the smooth group cohomology of $G$ with values in $V$.
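The passage between the homogeneous equivariant cochains used here and the perhaps more familiar inhomogeneous group cochains is the standard one; a sketch:

```latex
% An equivariant homogeneous cochain f and the corresponding
% inhomogeneous cochain F determine each other via
F(g_1,\dots,g_n) := f(e,\,g_1,\,g_1g_2,\dots,g_1\cdots g_n),
\qquad
f(g_0,\dots,g_n) = g_0.F\bigl(g_0^{-1}g_1,\;g_1^{-1}g_2,\dots,g_{n-1}^{-1}g_n\bigr)
```

Since both translations are smooth whenever $f$ (equivalently $F$) is, this identifies $A_s^* (G;V)^G$ with the complex of smooth inhomogeneous group cochains.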
For Lie groups $G$ and smooth $G$-modules $V$ the first cohomology group $H_{eq,s}^1 (G;V)$ classifies smooth crossed morphisms modulo principal derivations, the second cohomology group $H_{eq,s}^2 (G;V)$ classifies equivalence classes of Lie group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ which admit a smooth global section (i.e. $\hat{G} \twoheadrightarrow G$ is a trivial smooth $V$-principal bundle) and the third cohomology group $H_{eq,s}^3 (G;V)$ classifies equivalence classes of smoothly split crossed modules. For each open covering $\mathfrak{U}$ of $M$ one can consider the subcomplex of $A^* (M;V)$ formed by the groups $$A_{sr}^n (M,\mathfrak{U};V) := \left\{ f \in A^n (M;V) \mid \, f_{\mid \mathfrak{U} [n]} \in C^\infty (\mathfrak{U}[n];V) \right\}$$ of cochains whose restrictions to the subspaces $\mathfrak{U}[n]$ of $M^{n+1}$ are smooth. The cohomology of the cochain complex $A_{sr}^* (M,\mathfrak{U};V)$ is denoted by $H_{sr} (M,\mathfrak{U};V)$. If the covering $\mathfrak{U}$ of $M$ is $G$-invariant, then $\mathfrak{U}[*]$ is a simplicial $G$-subspace of the simplicial $G$-space $M^{*+1}$. For $G$-invariant coverings $\mathfrak{U}$ of $M$ the cohomology of the subcomplex $A_{sr}^* (M,\mathfrak{U};V)^G$ of $G$-equivariant cochains is denoted by $H_{sr,eq} (M,\mathfrak{U};V)$. If $G=M$ is a Lie group which acts on itself by left translation and $U$ an open identity neighbourhood, then the complex $A_{sr}^* (M,\mathfrak{U}_U;V)^G$ is the complex of homogeneous group cochains whose restrictions to the subspaces $\mathfrak{U}_U [*]$ are smooth. (These are sometimes called $\mathfrak{U}$-smooth cochains.) For directed systems $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $M$ one can also consider the colimit complex $\operatorname{colim}_i A_{sr}^* (M,\mathfrak{U}_i ;V)$.
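In inhomogeneous terms the degree two statement is the familiar factor set description: a smooth global section $s$ of $\hat{G} \twoheadrightarrow G$ with $s(e)=e$ yields a smooth map $f \colon G^2 \to V$ via $s(g)s(h) = f(g,h)\,s(gh)$, and associativity in $\hat{G}$ translates (in additive notation for $V$) into the cocycle identity:

```latex
% The 2-cocycle condition for a factor set f: G x G -> V:
g.f(h,k) \;-\; f(gh,k) \;+\; f(g,hk) \;-\; f(g,h) \;=\; 0
```

Changing the section alters $f$ by a coboundary, which is why $H_{eq,s}^2 (G;V)$ classifies such extensions up to equivalence.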
In particular for the directed system of all open coverings of $M$ one observes that the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $M^{n+1}$ for open coverings $\mathfrak{U}$ of $M$ are cofinal in the directed set of all open diagonal neighbourhoods, hence one obtains the complex $$A_{sg}^* (M;V):= \operatorname{colim}_{\mathfrak{U} \text{ is an open cover of } M} A_{sr}^* (M,\mathfrak{U};V)$$ of global cochains whose germs at the diagonal are smooth. This is a subcomplex of the standard complex $A^* (M;V)$ which is invariant under the $G$-action (Eq. \[defgact\]) and thus a subcomplex of $G$-modules. The $G$-equivariant cochains with smooth germ form a subcomplex $A_{sg}^* (M;V)^G$ thereof, whose cohomology is denoted by $H_{sg,eq} (M;V)$. The latter subcomplex can also be obtained by taking the colimit over all $G$-invariant open coverings of $M$ only: \[natinclofeqccisisosmooth\] The natural morphism of cochain complexes $$A_{sg,eq}^* (M;V):= \operatorname{colim}_{\mathfrak{U} \text{ is a $G$-invariant open cover of } M} A_{sr}^* (M,\mathfrak{U};V)^G \rightarrow A_{sg}^* (M;V)^G$$ is a natural isomorphism. The proof is analogous to that of Proposition \[natinclofeqccisiso\]. The cohomology $H_{sg,eq} (M;V)$ is the cohomology of the complex of equivariant cochains which are smooth on some $G$-invariant neighbourhood of the diagonal. If $G=M$ is a Lie group which acts on itself by left translation, then the complex $A_{sg}^* (G;V)^G$ is the complex of homogeneous group cochains whose germs at the diagonal are smooth. (By abuse of language these are sometimes called 'locally smooth' group cochains.) We will show (in Section \[secsmoothanduscc\]) that the inclusion $A_s^* (M;V) \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)$ induces an isomorphism in cohomology provided the manifold $M$ is smoothly contractible.
For this purpose we consider the abelian groups $$\label{defrsu} A_{sr}^{p,q} ( M,\mathfrak{U} ; V ) := \left\{ f: M^{p+1} \times M^{q+1} \rightarrow V \mid f_{\mid M^{p+1} \times \mathfrak{U}[q]} \; \text{is smooth} \right\} \, .$$ The abelian groups $A_{sr}^{p,q} ( M,\mathfrak{U} ; V )$ form a first quadrant sub double complex of the double complex $A_{cr}^{p,q} ( M,\mathfrak{U} ; V )$. The rows of the double complex $A_{sr}^{*,*} ( M,\mathfrak{U} ; V )$ can be augmented by the complex $A_{sr}^* (M,\mathfrak{U} ;V)$ for the covering $\mathfrak{U}$ and the columns can be augmented by the complex $A_s^* (M;V)$ of smooth cochains: $$\vcenter{ \xymatrix{ \vdots & \vdots & \vdots & \vdots \\ A_{sr}^2 (M, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{sr}^{0,2} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{1,2} ( M, \mathfrak{U}; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{2,2} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{sr}^1 (M, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{sr}^{0,1} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{1,1} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{2,1} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{sr}^0 (M, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{sr}^{0,0} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{1,0} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{2,0} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ & A_s^0 ( M ; V) \ar[r]^{d_{h}}\ar[u] & A_s^1 ( M ; V) \ar[r]^{d_{h}}\ar[u] & A_s^2 ( M ; V) \ar[r]^{d_{h}}\ar[u] & \cdots }}$$ We denote the total complex of the double complex $A_{sr}^{*,*} ( M, \mathfrak{U} ; V)$ by ${\mathrm{Tot}}A_{sr}^{*,*} ( M, \mathfrak{U} ; V)$. 
The augmentations of the rows and columns of this double complex induce morphisms $i^* : A_{sr}^* ( M, \mathfrak{U} ; V) \rightarrow {\mathrm{Tot}}A_{sr}^{*,*} ( M, \mathfrak{U} ; V)$ and $j^*: A_s^* ( M ; V) \rightarrow {\mathrm{Tot}}A_{sr}^{*,*} ( M,\mathfrak{U} ; V)$ of cochain complexes respectively. \[columnsexactsmooth\] The morphism $i^*: A_{sr}^* ( M,\mathfrak{U} ; V) \rightarrow {\mathrm{Tot}}A_{sr}^{*,*} (M,\mathfrak{U} ; V)$ induces an isomorphism in cohomology. The row contraction given in the proof of Lemma \[columnsexact\] restricts to a contraction of the augmented sub row complex $A_{sr}^* (M,\mathfrak{U} ; V) \hookrightarrow A_{sr}^{*,*} (M,\mathfrak{U} ; V)$. Note that this construction does not work for the column complexes. For $G$-invariant open coverings $\mathfrak{U}$ of $M$ one can consider the sub double complex $A_{sr}^{*,*} ( M,\mathfrak{U} ; V )^G$ of $A_{sr}^{*,*} ( M,\mathfrak{U} ; V )$ whose rows are augmented by the cochain complex $A_{sr}^* (M,\mathfrak{U} ;V)^G$ for the covering $\mathfrak{U}$ and the columns can be augmented by the complex $A_s^* (M;V)^G$ of smooth equivariant cochains (which is not exact in general). \[columnsexacteqsmooth\] For $G$-invariant coverings $\mathfrak{U}$ of $M$ the morphism $i_{eq}^* := {i^*}^G$ induces an isomorphism in cohomology. The contraction $h_{*,q}$ of the augmented rows $A_{cr}^q ( M, \mathfrak{U} ; V) \hookrightarrow A_{cr}^{*,q} ( M, \mathfrak{U} ; V)$ defined in Eq. \[defrowcontr\] is $G$-equivariant and thus restricts to a row contraction of the augmented sub-row $A_{sr}^q ( M, \mathfrak{U} ; V)^G \hookrightarrow A_{sr}^{*,q} ( M, \mathfrak{U} ; V)^G$. So the morphism $H (i_{eq}) : H_{cr,eq} (M,\mathfrak{U};V) \rightarrow H ( {\mathrm{Tot}}A_{sr}^{*,*} ( M, \mathfrak{U} ; V)^G )$ is invertible. 
For the composition $H (i_{eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{cr,eq}(M,\mathfrak{U};V)$ we observe: \[contiscohtocrsmooth\] The image $j^n (f)$ of a smooth equivariant $n$-cocycle $f$ on $M$ in ${\mathrm{Tot}}A_{sr}^{*,*} (M,\mathfrak{U};V)^G$ is cohomologous to the image $i_{eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{sr}^n (M,\mathfrak{U};V)^G$ in ${\mathrm{Tot}}A_{sr}^{*,*} (M,\mathfrak{U};V)^G$. The proof is analogous to that of Proposition \[contiscohtocr\]. The composition $H (i_{eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{cr,eq}(M,\mathfrak{U};V)$ is induced by the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$. If the morphism $j^*_{eq}:={j^*}^G : A_s^* (M;V)^G \rightarrow {\mathrm{Tot}}A_{sr}^{*,*}(M,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $M$ one can also consider the corresponding augmented colimit double complexes. In particular for the directed system of all open coverings of $M$ one obtains the double complex $$A_{sg}^{*,*} (M;V):= \operatorname{colim}_{\mathfrak{U} \text{ is an open cover of } M} A_{sr}^{*,*} (M,\mathfrak{U};V)$$ whose rows and columns are augmented by the colimit complex $A_{sg}^* (M;V)$ and by the complex $A_s^* (M;V)$ respectively. For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $M$ the morphism $\operatorname{colim}_i i^*: \operatorname{colim}_i A_{sr}^* ( M,\mathfrak{U}_i ; V) \rightarrow {\mathrm{Tot}}\operatorname{colim}_i A_{sr}^{*,*} (M,\mathfrak{U}_i ; V)$ induces an isomorphism in cohomology. The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \[columnsexactsmooth\]). 
As a consequence the colimit morphism $i_{sg}^* : A_{sg}^* ( M; V) \rightarrow {\mathrm{Tot}}A_{sg}^{*,*} (M; V)$ induces an isomorphism in cohomology. The colimit double complex $A_{sg}^{*,*} (M;V)$ is a double complex of $G$-modules and the $G$-equivariant cochains form a sub double complex $A_{sg}^{*,*} (M;V)^G$, whose rows and columns are augmented by the colimit complex $A_{cg,eq}^* (M;V)$ and by the complex $A_s^* (M;V)^G$ respectively. For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of $G$-invariant open coverings of $M$ the morphism $\operatorname{colim}_i i_{eq}^*: \operatorname{colim}_i A_{sr}^* ( M,\mathfrak{U}_i ; V)^G \rightarrow {\mathrm{Tot}}\operatorname{colim}_i A_{sr}^{*,*} (M,\mathfrak{U}_i ; V)^G$ induces an isomorphism in cohomology. The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \[columnsexacteqsmooth\]). Moreover, since the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $M^{n+1}$ for open coverings $\mathfrak{U}$ of $M$ are cofinal in the directed set of all open diagonal neighbourhoods, we observe: \[natinclofeqdcisisosmooth\] The natural morphism of double complexes $$A_{cg,eq}^{*,*} (M;V):= \operatorname{colim}_{\mathfrak{U} \text{ is a $G$-invariant open cover of } M} A_{sr}^{*,*} (M,\mathfrak{U};V)^G \rightarrow A_{sg}^{*,*} (M;V)^G$$ is a natural isomorphism. The proof is analogous to that of Proposition \[natinclofeqccisiso\]. As a consequence the colimit morphism $i_{cg,eq}^* : A_{cg,eq}^* ( M; V) \rightarrow {\mathrm{Tot}}A_{sg}^{*,*} (M; V)^G$ induces an isomorphism in cohomology, and the morphism $H (i_{cg,eq})$ is invertible. 
For the composition $H(i_{cg,eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{cg,eq}(M;V)$ we observe: \[smoothiscohtosreq\] The image $j^n (f)$ of a smooth equivariant $n$-cocycle $f$ on $M$ in ${\mathrm{Tot}}A_{sg}^{*,*} (M;V)^G$ is cohomologous to the image $i_{cg,eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{cg,eq}^n (M;V)$ in ${\mathrm{Tot}}A_{sg}^{*,*} (M;V)^G$. The proof is analogous to that of Proposition \[contiscohtocr\]. The composition $H (i_{cg,eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{cg,eq}(M;V)$ is induced by the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$. If the morphism $j^*_{eq}:={j^*}^G : A_s^* (M;V)^G \rightarrow {\mathrm{Tot}}A_{sg}^{*,*}(M;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. Smooth and $\mathfrak{U}$-Smooth Cochains {#secsmoothanduscc} ========================================= In this Section we derive results for smooth transformation groups $(G,M)$ and smooth $G$-modules $V$, which are analogous to those concerning continuous cochains. Let $(G,M)$ be a smooth transformation group, $V$ be a smooth $G$-module and $\mathfrak{U}$ be an open covering of $M$. \[noneqextheneqexsmooth\] If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*}(M,\mathfrak{U};V)$ are exact, then the augmented sub column complexes $A_s^p (M;V)^G \hookrightarrow A_{sr}^{p,*}(M,\mathfrak{U};V)^G$ of equivariant cochains are exact as well. The proof is analogous to that of Proposition \[noneqextheneqex\]. \[augexthenjeqindisosmooth\] If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*} (M,\mathfrak{U};V)$ are exact, then the inclusion $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow {\mathrm{Tot}}A_{sr}^{*,*}(M,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. 
\[augextheninclindisosmooth\] If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*} (M,\mathfrak{U};V)$ are exact, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. It remains to show that for smoothly contractible manifolds $M$ the colimit augmented columns $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ are exact. For this purpose we first consider the cochain complex associated to the cosimplicial abelian group $A^{p,*} (M;V) := \left\{ f : M^{p+1} \times M^{*+1} \rightarrow V \mid \, \forall \vec{m}' \in M^{*+1} : f (-,\vec{m}') \in C^\infty (M^{p+1},V) \right\}$ of global cochains, its subcomplex $A_{sr}^{p,*} (M,\mathfrak{U};V)$ and the cochain complexes associated to the cosimplicial abelian groups $$\begin{aligned} A^{p,*} (M,\mathfrak{U};V) & := & \{ f : M^{p+1} \times \mathfrak{U}[*] \rightarrow V \mid \, \forall \vec{m}' \in \mathfrak{U}[*] : f (-,\vec{m}') \in C^\infty (M^{p+1},V) \} \quad \text{and} \\ A_s^{p,*} (M,\mathfrak{U};V) & := & C^\infty ( M^{p+1} \times \mathfrak{U}[*] , V) \, .\end{aligned}$$ Restriction of global to local cochains induces morphisms of cochain complexes $\operatorname{Res}^{p,*} : A^{p,*} (M;V) \twoheadrightarrow A^{p,*} (M,\mathfrak{U};V)$ and $\operatorname{Res}_{sr}^{p,*} : A_{sr}^{p,*} (M,\mathfrak{U};V) \twoheadrightarrow A_s^{p,*} (M,\mathfrak{U};V)$ intertwining the inclusions of the subcomplexes $A_{sr}^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M;V)$ and $A_s^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M, \mathfrak{U};V)$, so one obtains the following commutative diagram $$\label{morphexseqsmooth} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\operatorname{Res}_{sr}^{p,*} ) & \longrightarrow & A_{sr}^{p,*} (M,\mathfrak{U};V) & \longrightarrow & A_s^{p,*} (M,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\operatorname{Res}^{p,*} ) & \longrightarrow & 
A^{p,*} (M;V) & \longrightarrow & A^{p,*} (M , \mathfrak{U};V) & \longrightarrow 0 \end{array}$$ of cochain complexes whose rows are exact. The kernel $\ker (\operatorname{Res}^{p,q} )$ is the subspace of those $(p,q)$-cochains which are trivial on $M^{p+1} \times \mathfrak{U} [q]$. Since these $(p,q)$-cochains are in particular smooth on $M^{p+1} \times \mathfrak{U}[q]$, we find that both kernels coincide. We abbreviate the complex $\ker (\operatorname{Res}^{p,*} ) = \ker (\operatorname{Res}_{sr}^{p,*} )$ by $K^{p,*}$ and denote the cohomology groups of the complex $A_{sr}^{p,*} (M,\mathfrak{U};V)$ by $H_{sr}^{p,*} (M,\mathfrak{U};V)$, the cohomology groups of the complex $A_s^{p,*} (M,\mathfrak{U};V)$ of smooth cochains by $H_s^{p,*} (M,\mathfrak{U};V)$ and the cohomology groups of the complex $A^{p,*} (M,\mathfrak{U};V)$ by $H^{p,*} (M,\mathfrak{U};V)$. The cochain complexes $A^{p,*} (M;V)$ are exact. For any point $* \in M$ the homomorphisms $h^{p,q} : A^{p,q} (M;V) \rightarrow A^{p,q-1} (M;V)$ given by $h^{p,q} (f) (\vec{x},\vec{x}'):=f (\vec{x},*,\vec{x}')$ form a contraction of the complex $A^{p,*} (M;V)$. The morphism of short exact sequences of cochain complexes in Diagram \[morphexseqsmooth\] gives rise to a morphism of long exact cohomology sequences, in which the cohomology of the complex $A^{p,*} (M;V)$ is trivial: $$\label{diaglecssmooth} \xymatrix@R-10pt@C-4pt{ \ar[r] & H^q (K^{p,*} ) \ar[r] \ar@{=}[d] & H_{sr}^{p,q} (M,\mathfrak{U};V) \ar[r] \ar[d] & H_s^{p,q} (M,\mathfrak{U};V) \ar[r] \ar[d] & H^{q+1} (K^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K^{p,*} ) \ar[r]& 0 \ar[r] & H^{p,q} (M,\mathfrak{U};V) \ar[r]^\cong & H^{q+1} (K^{p,*} ) \ar[r] & {} }$$ If the inclusion $A_s^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the augmented complex $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*} (M,\mathfrak{U};V)$ is exact. 
This is an immediate consequence of Diagram \[diaglecssmooth\]. If the inclusion $A_s^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the inclusions $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow {\mathrm{Tot}}A_{sr}^{*,*}(M,\mathfrak{U};V)^G$ and $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$ also induce an isomorphism in cohomology. This follows from the preceding Lemma and Corollaries \[augexthenjeqindisosmooth\] and \[augextheninclindisosmooth\]. The passage to the colimit over all open coverings of $M$ yields the corresponding results for the complexes of cochains with continuous germs: \[noneqextheneqexsg\] If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*}(M;V)$ are exact, then the augmented sub column complexes $A_s^p (M;V)^G \hookrightarrow A_{sg}^{p,*}(M;V)^G$ of equivariant cochains are exact as well. The proof is similar to that of Proposition \[noneqextheneqex\]. \[augexthenjeqindisosg\] If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ are exact, then the inclusion $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow {\mathrm{Tot}}A_{sg}^{*,*}(M;V)^G$ induces an isomorphism in cohomology. \[augextheninclindisosg\] If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ are exact, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces an isomorphism in cohomology. \[remonlyginvcovsmooth\] Alternatively to taking the colimit over all open coverings $\mathfrak{U}$ of $M$ one may consider $G$-invariant open coverings only to obtain the same results. (This was shown in Proposition \[natinclofeqccisisosmooth\] and Lemma \[natinclofeqdcisisosmooth\].) 
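The exactness of the complexes $A^{p,*} (M;V)$ used above rests on the explicit contraction $h^{p,q} (f) (\vec{x},\vec{x}') := f (\vec{x},*,\vec{x}')$; as a sanity check (in the notation of this section, with $d$ the simplicial differential in the second argument), the contraction identity can be verified term by term:

```latex
\begin{aligned}
(h^{p,q+1} \circ d)(f)(\vec{x}; x'_0,\dots,x'_q)
  &= (df)(\vec{x}; *, x'_0,\dots,x'_q) \\
  &= f(\vec{x}; x'_0,\dots,x'_q)
   + \sum_{i=0}^{q} (-1)^{i+1}\, f(\vec{x}; *, x'_0,\dots,\widehat{x'_i},\dots,x'_q), \\
(d \circ h^{p,q})(f)(\vec{x}; x'_0,\dots,x'_q)
  &= \sum_{i=0}^{q} (-1)^{i}\, f(\vec{x}; *, x'_0,\dots,\widehat{x'_i},\dots,x'_q),
\end{aligned}
```

so that $h \circ d + d \circ h = \operatorname{id}$ and every cocycle in $A^{p,*} (M;V)$ is a coboundary.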
If $G=M$ is a Lie group which acts on itself by left translation and the augmented columns $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V) := \operatorname{colim}A_{sr}^{p,*} (M,\mathfrak{U}_U;V)$ (where $U$ runs over all open identity neighbourhoods in $G$) are exact, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces an isomorphism in cohomology. The complex $A^{p,*} (M,\mathfrak{U};V)$ is isomorphic to the complex $A^* (\mathfrak{U}; C (M^{p+1},V))$. The colimit $A_{AS}^* (M; C (M^{p+1} ,V)):=\operatorname{colim}A^* (\mathfrak{U}; C (M^{p+1} ,V))$, where $\mathfrak{U}$ runs over all open coverings of $M$, is the complex of Alexander-Spanier cochains on $M$. Therefore the colimit complex $\operatorname{colim}A^{p,*} (M,\mathfrak{U};V)$ is isomorphic to the cochain complex $A_{AS}^* (M;C (M^{p+1} ,V))$. A similar observation can be made for the cochain complex $A_s^{p,*} (M,\mathfrak{U};V)$ if the exponential law $C (M^{p+1} \times \mathfrak{U}[q],V) \cong C (M^{p+1},C (\mathfrak{U}[q],V))$ holds for a cofinal set of open coverings $\mathfrak{U}$ of $M$. Passing to the colimit in Diagram \[morphexseqsmooth\] yields the morphism $$\label{morphexseqsg} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\operatorname{Res}_{sg}^{p,*} ) & \longrightarrow & A_{sg}^{p,*} (M;V) & \longrightarrow & \operatorname{colim}A_s^{p,*} (M,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\operatorname{Res}^{p,*} ) & \longrightarrow & A^{p,*} (M;V) & \longrightarrow & A_{AS}^* (M; C (M^{p+1},V)) & \longrightarrow 0 \end{array}$$ of short exact sequences of cochain complexes. The kernel $\ker (\operatorname{Res}^{p,q})$ is the subspace of those $(p,q)$-cochains which are trivial on $M^{p+1} \times \mathfrak{U} [q]$ for some open covering $\mathfrak{U}$ of $M$. Since these $(p,q)$-cochains are in particular smooth on $M^{p+1} \times \mathfrak{U}[q]$, we find that both kernels coincide. 
We abbreviate the complex $\ker (\operatorname{Res}^{p,*} ) = \ker (\operatorname{Res}_{sg}^{p,*} )$ by $K_{sg}^{p,*}$ and denote the cohomology groups of the complex $A_{sg}^{p,*} (M;V)$ by $H_{sg}^{p,*} (M;V)$. The morphism of short exact sequences of cochain complexes in Diagram \[morphexseqsg\] gives rise to a morphism of long exact cohomology sequences: $$\label{diaglecssg} \xymatrix@C-11pt@R-10pt{ \ar[r] & H^q (K_{sg}^{p,*} ) \ar[r] \ar@{=}[d] & H_{sg}^{p,q} (M;V) \ar[r] \ar[d] & H^q (\operatorname{colim}A_s^{p,*} (M,\mathfrak{U};V)) \ar[r] \ar[d] & H^{q+1} (K_{sg}^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_{sg}^{p,*} ) \ar[r]& 0 \ar[r] & H_{AS}^q (M; C (M^{p+1},V)) \ar[r]^\cong & H^{q+1} (K_{sg}^{p,*} ) \ar[r] & {} }$$ If the inclusion $\operatorname{colim}A_s^{p,*} (M,\mathfrak{U};V)\hookrightarrow A_{AS}^* (M;C(M^{p+1},V))$ of cochain complexes induces an isomorphism in cohomology, then the augmented complex $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ is exact. This is an immediate consequence of Diagram \[diaglecssg\]. \[inclcolimacpsindisothenjeqisosmooth\] If the inclusion $\operatorname{colim}A_s^{p,*} (M,\mathfrak{U};V)\hookrightarrow A_{AS}^* (M;C(M^{p+1},V))$ induces an isomorphism in cohomology, then $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow {\mathrm{Tot}}A_{sg}^{*,*}(M;V)^G$ and $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ also induce an isomorphism in cohomology. This follows from the preceding Lemma and Corollaries \[augexthenjeqindisosg\] and \[augextheninclindisosg\]. As observed before (cf. Remark \[remonlyginvcovsmooth\]) one may restrict oneself to the directed system of $G$-invariant open coverings only to achieve the same result. 
Thus we observe: If $G=M$ is a Lie group which acts on itself by left translation and the inclusion $\operatorname{colim}A_s^{p,*} (M,\mathfrak{U}_U;V)\hookrightarrow A_{AS}^* (M;C(M^{p+1},V))$ (where $U$ runs over all open identity neighbourhoods in $G$) induces an isomorphism in cohomology, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces an isomorphism in cohomology as well. It has been shown in [@vE62b] that the cohomology of the colimit cochain complex $\operatorname{colim}A^* ( \mathfrak{U} ;C(M^{p+1},V))$ is the Alexander-Spanier cohomology of $M$. \[xsmoothlycontrthenacpsistriv\] If the manifold $M$ is smoothly contractible, then the cohomology of the complex $\operatorname{colim}A_s^{p,*} (M,\mathfrak{U};V)$ is trivial. The reasoning is analogous to that for the Alexander-Spanier presheaf. The proof [@F10 Theorem 2.5.2] carries over almost verbatim. If the manifold $M$ is smoothly contractible, then the Alexander-Spanier cohomology $H_{AS} (M;C(M^{p+1},V))$ is trivial and the cohomology of the cochain complex $\operatorname{colim}A_s^{p,*} (M,\mathfrak{U};V)$ is trivial by Lemma \[xsmoothlycontrthenacpsistriv\]. By Proposition \[inclcolimacpsindisothenjeqisosmooth\] the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ then induces an isomorphism in cohomology. For smoothly contractible Lie groups $G$ the smooth group cohomology is isomorphic to the cohomology of homogeneous group cochains with smooth germ at the diagonal.
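For orientation, the passage between the homogeneous group cochains appearing in this statement and the inhomogeneous cochains underlying the groups $H_{eq,s}^n (G;V)$ is the standard dictionary (recalled here for convenience, not part of the argument): an equivariant homogeneous $n$-cochain $F$ with $F(g g_0,\dots,g g_n) = g \cdot F(g_0,\dots,g_n)$ corresponds to the inhomogeneous cochain

```latex
f(g_1,\dots,g_n) := F\bigl(e,\; g_1,\; g_1 g_2,\; \dots,\; g_1 g_2 \cdots g_n\bigr),
\qquad
F(g_0,\dots,g_n) = g_0 \cdot f\bigl(g_0^{-1}g_1,\; g_1^{-1}g_2,\; \dots,\; g_{n-1}^{-1}g_n\bigr).
```

Under this correspondence a smooth germ of $F$ at the diagonal of $G^{n+1}$ translates into smoothness of $f$ on an identity neighbourhood of $G^n$, which is the form in which 'locally smooth' group cocycles usually appear.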
--- abstract: 'The $\Omega\Omega$ system in the $^1S_0$ channel (the most strange dibaryon) is studied on the basis of (2+1)-flavor lattice QCD simulations with a large volume (8.1 fm)$^3$ and nearly physical pion mass $m_{\pi}\simeq 146$ MeV at a lattice spacing $a\simeq 0.0846$ fm. We show that lattice QCD data analysis by the HAL QCD method leads to the scattering length $a_0 = 4.6 (6)(^{+1.2}_{-0.5}) {\rm fm}$, the effective range $r_{\rm eff} = 1.27 (3)(^{+0.06}_{-0.03}) {\rm fm}$ and the binding energy $B_{\Omega \Omega} = 1.6 (6) (^{+0.7}_{-0.6}) {\rm MeV}$. These results indicate that the $\Omega\Omega$ system has an overall attraction and is located near the unitary regime. Such a system can best be searched for experimentally via the pair-momentum correlation in relativistic heavy-ion collisions.' author: - | Shinya Gongyo$^{1}$, Kenji Sasaki$^{1,2}$, Sinya Aoki$^{1,2,3}$, Takumi Doi$^{1,4}$, Tetsuo Hatsuda$^{4,1}$, Yoichi Ikeda$^{1,5}$, Takashi Inoue$^{1,6}$, Takumi Iritani$^{1}$, Noriyoshi Ishii$^{1,5}$, Takaya Miyamoto$^{1,2}$, Hidekatsu Nemura$^{1,5}$\ (HAL QCD Collaboration)\ title: Most Strange Dibaryon from Lattice QCD --- [***Introduction:***]{}   A dibaryon is defined as a baryon number $B=2$ system (equivalently a 6-quark system) in quantum chromodynamics [@Mulders:1980vx; @Oka:1988yq; @Gal:2015rev]. So far, only one stable dibaryon, the deuteron, has been observed: it is a loosely bound system of the proton and the neutron in the spin-triplet and isospin-singlet channel. In recent years, there is renewed experimental interest in dibaryons due to exclusive measurements in hadron reactions [@Clement:2016vnl] as well as the direct measurement in relativistic heavy-ion collisions [@Cho:2017dcy]. Also from the theoretical side, (2+1)-flavor lattice QCD simulations of the 6-quark system in a large box with nearly physical quark masses became possible recently [@Doi:2017cfx]. 
The main aim of this Letter is to report the first result and physical implication of $\Omega\Omega$, the strangeness $S = -6$ dibaryon (“most strange dibaryon"), in full QCD simulations with the lattice volume (8.1 fm)$^3$ and the pion mass $m_{\pi} \simeq 146$ MeV at a lattice spacing $a\simeq 0.0846$ fm [@Ishikawa:2015rho]. Before entering the detailed discussion of our study, we first explain why such an exotic channel ($S=-6$) is of interest in QCD. Let us consider octet ${\bf 8}$ and decuplet ${\bf 10}$ baryons in the flavor SU(3) classification. All the members of ${\bf 8}$ are stable under strong decay. This is why the forces between octet baryons in ${\bf 8} \otimes {\bf 8}$ are most relevant in the physics of hypernuclei and of neutron stars. Also, the elusive $H$-dibaryon (a combination of $\Lambda\Lambda$, $N\Xi$ and $\Sigma \Sigma$) is in this representation [@Jaffe1977; @Inoue:2010es; @Beane:2010hg] and does not suffer from the Pauli exclusion principle in the flavor-SU(3) limit. On the other hand, only $\Omega$ in ${\bf 10}$ is stable under strong decay. Therefore, in the ${\bf 8} \otimes {\bf 10}$ representation, the most promising candidate for a stable dibaryon is $N\Omega$ [@Goldman:1987ma]. The Pauli exclusion principle does not operate in this case either, so there is a possibility of a bound state in the S-wave and total-spin 2 channel [@Etminan:2014tya]. Such a system is indeed studied by the two-particle momentum correlation in high-energy heavy-ion collisions both theoretically and experimentally [@Morita:2016auo]. In the decuplet-decuplet channel, we have $$\begin{aligned} {\rm {\bf 10} \otimes {\bf 10} = ({\bf 28} \oplus {\bf 27})_{{sym.}} \oplus ({\bf 35} \oplus {\bf 10}^* )_{{anti\mathchar`-sym.}} ,} \nonumber \end{aligned}$$ where “sym.” and “anti-sym.” stand for the flavor symmetry under the exchange of two baryons. 
The only possible state stable under strong decay is the $\Omega\Omega$ system in the symmetric ${\bf 28}$ representation. Again, the quark Pauli principle does not operate in this channel [@Oka:2000wj]. Note that the celebrated ABC resonance ($\Delta\Delta$ in the spin-3 and isospin-0 channel) [@Dyson:1964xwa; @Clement:2016vnl] belongs to the anti-symmetric ${\bf 10}^*$ representation, while $\Delta\Delta$ in the spin-0 and isospin-3 channel is in the same multiplet as $\Omega\Omega$. The $\Omega\Omega$ interaction at low energies has been investigated so far by using phenomenological quark models or by using lattice QCD simulations with heavy quark masses. Very recently, the chiral effective field theory has also been applied to the scattering of the $\Omega$ baryons [@Haidenbauer:2017sws]. In some quark models, strong attraction is reported [@zhang:1997; @zhang:2000], while other models show weak repulsion [@wang:1992; @wang:1997]. A (2+1)-flavor lattice QCD study with $m_{\pi}\simeq 390$ MeV by using the finite volume method shows weak repulsion [@Buchoff:2012prd], while a study with $m_\pi \simeq 700$ MeV by using the HAL QCD method shows moderate attraction [@Yamada:2015cra]. In such a controversial situation, it is most important to carry out first-principles lattice QCD simulations in a large volume with the pion mass close to the physical point. [***HAL QCD method:***]{}   In the HAL QCD method [@Ishii:2006ec; @Aoki:2010ptp; @Ishii:2012plb; @Aoki:2012tk], the observables such as the binding energy and phase shifts are obtained from the equal-time Nambu-Bethe-Salpeter (NBS) wave function $\psi({\mbox{\boldmath $r$}})$ and associated two-baryon irreducible kernel $U({\mbox{\boldmath $r$}},{\mbox{\boldmath $r$}}')$. 
The traditional finite volume method with plateau fitting [@luscher:1991] is challenging in practice for $B=2$ systems in large volumes because of the difficulty in isolating each scattering state [@Lepage:1990; @Iritani:2016jie; @Iritani:2017rlk; @Aoki:2017byw]. On the other hand, the time-dependent HAL QCD method [@Ishii:2012plb] can avoid such a problem since all the elastic scattering states are dictated by the same kernel $U({\mbox{\boldmath $r$}},{\mbox{\boldmath $r$}}')$ and there is no need to identify each scattering state in a finite box. The equal-time NBS wave function $\psi({\mbox{\boldmath $r$}})$ has the property that its asymptotic behavior at large distance reproduces the phase shifts, which can be proven from the unitarity of the $S$-matrix in quantum field theories [@Ishii:2006ec]. Moreover, it is related to the following [*reduced*]{} four-point (4-pt) function, $$\begin{aligned} R({\mbox{\boldmath $r$}}, t>0) &=& \langle 0 \vert \Omega({\mbox{\boldmath $r$}},t) \Omega({\mbox{\boldmath $0$}},t) {\overline {\cal J}}(0) \vert 0 \rangle /e^{-2m_{_\Omega} t } \nonumber \\ &=& \sum_n a_n \psi_n({\mbox{\boldmath $r$}}) e^{- (\delta W_n) t} + O(e^{- (\Delta E^*) t}). \label{eq:4pt} \end{aligned}$$ Here $ {\overline {\cal J}}(0)$ is a source operator creating the $(B,S)=(2,-6)$ system at Euclidean time 0, and $a_n$ is the matrix element defined by $\langle n \vert {\overline {\cal J}}(0) \vert 0 \rangle$ with $ | n \rangle $ representing the elastic states in a finite volume. The energy is given by $\delta W_n = 2\sqrt{m_{\Omega}^{2}+{\mbox{\boldmath $k$}}_n^{2}} - 2m_{\Omega} $ with the $\Omega$ baryon mass $m_\Omega$ and the relative momentum ${\mbox{\boldmath $k$}}_n$. 
The typical excitation energy of a [*single*]{} $\Omega$ baryon is denoted by $\Delta E^*$, so that the last term in Eq.(\[eq:4pt\]) is exponentially suppressed as long as $t \gg (\Delta E^*)^{-1} \sim \Lambda_{\rm QCD}^{-1}$ [@Ishii:2012plb] with $\Lambda_{\rm QCD} \sim 200 - 300$ MeV being the QCD scale parameter. A local interpolating operator for the $\Omega$ baryon has a general form $$\begin{aligned} \Omega(x) &\equiv& \varepsilon^{abc} (s_{a}^{T}(x)C\gamma_{k}s_{b}(x))s_{c,\alpha}(x), \label{eq:Omega}\end{aligned}$$ with $a$, $b$ and $c$ being color indices, $\gamma_{k}$ being a Dirac matrix, $\alpha$ being the spinor index, and $C\equiv\gamma_{4}\gamma_{2}$ being the charge-conjugation matrix. An appropriate spin projection of this operator is necessary to single out a particular spin state, as mentioned later. The reduced 4-pt function $R$ has been shown to satisfy the following master equation [@Ishii:2012plb], $$\begin{aligned} \left(\frac{\nabla^{2}}{m_{_\Omega}}-\frac{\partial}{\partial t}+\frac{1}{4m_{_\Omega}}\frac{\partial^{2}}{\partial t^{2}}\right)R({\mbox{\boldmath $r$}}, t) = \int d{\mbox{\boldmath $r$}}' U({\mbox{\boldmath $r$}},{{\mbox{\boldmath $r$}}}')R({{\mbox{\boldmath $r$}}'}, t), \nonumber \\ \label{eq:tdep} \end{aligned}$$ which is valid as long as $t \gg (\Delta E^*)^{-1}$. We emphasize that we do not need to isolate each scattering state with the energy $\delta W_n$, so that only moderate values of $t$, up to 1.5-2 fm, are sufficient for a reliable extraction of the kernel $U$. This is a crucial difference from the finite volume method, which requires $t \gg (\delta W_1)^{-1} \sim 10 $ fm (for the lattice volume as large as 8 fm) to identify each $\delta W_n$. (See a recent summary [@Aoki:2017byw] and references therein.) 
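The role of Eq.(\[eq:tdep\]) can be illustrated with a few lines of Python (a minimal sketch with synthetic single-state mock data, not lattice output): for $R({\mbox{\boldmath $r$}},t)=\psi(r)\,e^{-\delta W t}$ with a known S-wave profile, the left-hand-side combination divided by $R$ must return an $r$-independent constant that is known analytically, which provides a quick check of the finite-difference machinery.

```python
import numpy as np

HBARC = 197.327                 # MeV*fm
m_omega = 1712.0 / HBARC        # Omega mass in fm^-1 (value quoted in the text)

# Synthetic single-state mock data (hypothetical, for illustration only):
# R(r, t) = psi(r) exp(-dW t) with psi(r) = exp(-kappa r)/r, for which
# nabla^2 psi = kappa^2 psi (r > 0), so the combination divided by R must
# equal the constant kappa^2/m + dW + dW^2/(4m).
kappa, dW = 1.0, 0.01           # fm^-1 (made-up numbers)
r = np.arange(0.1, 3.0, 0.02)   # radial grid in fm
t0, dt = 1.4, 0.02              # central time slice and spacing in fm

def R(t):
    return np.exp(-kappa * r) / r * np.exp(-dW * t)

def leading_order_V(Rm, R0, Rp, r, dt, m):
    """(nabla^2/m - d/dt + (1/4m) d^2/dt^2) R / R with central differences;
    for S waves nabla^2 psi = (1/r) d^2(r psi)/dr^2."""
    dr = r[1] - r[0]
    u = r * R0
    lap = np.zeros_like(R0)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr**2 / r[1:-1]
    dR = (Rp - Rm) / (2.0 * dt)
    d2R = (Rp - 2.0 * R0 + Rm) / dt**2
    return (lap / m - dR + d2R / (4.0 * m)) / R0

V = leading_order_V(R(t0 - dt), R(t0), R(t0 + dt), r, dt, m_omega)
V_expected = kappa**2 / m_omega + dW + dW**2 / (4.0 * m_omega)   # fm^-1
```

On interior grid points `V` agrees with `V_expected` to finite-difference accuracy; with actual lattice data the same estimator returns the $r$-dependent leading-order potential.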
At low energies, one can make the derivative expansion with respect to the non-locality of the kernel [@Aoki:2010ptp; @Murano:2011nz], $U({\mbox{\boldmath $r$}},{\mbox{\boldmath $r$}}')=V_0(r)\delta({\mbox{\boldmath $r$}}-{\mbox{\boldmath $r$}}')+\sum_{n=1}V_{2n}(r)\nabla^{2n}\delta({\mbox{\boldmath $r$}}-{\mbox{\boldmath $r$}}')$. Then, we introduce an “effective” leading-order potential $V (r)$, $$\begin{aligned} V (r) = R^{-1}({\mbox{\boldmath $r$}}, t) \left(\frac{\nabla^{2}}{m_{_\Omega}}-\frac{\partial}{\partial t}+\frac{1}{4m_{_\Omega}}\frac{\partial^{2}}{\partial t^{2}}\right)R({\mbox{\boldmath $r$}}, t), \label{eq:Potential}\end{aligned}$$ which provides a good leading-order approximation of $U({\mbox{\boldmath $r$}},{\mbox{\boldmath $r$}}')$ to obtain physical observables at low energies, as long as its $t$-dependence is sufficiently small. [***Interpolating operator:***]{}   The interpolating operator $\Omega_{s_z}(x)$ for the $\Omega$ baryon with spin-$\frac{3}{2}$ and the $z$-component $s_z=\pm \frac{3}{2}, \pm \frac{1}{2}$ can be readily constructed by the appropriate spin projection of the upper two components of Eq.(\[eq:Omega\]) as shown in [@Yamada:2015cra]. The asymptotic $\Omega\Omega$ system can now be characterized by $^{2s+1}L_J$ with the total spin $(s)$, the orbital angular momentum $(L)$ and the total angular momentum $(J)$. We consider $L=0$, where the maximum attraction is expected at low energies. Then, Fermi statistics requires $s$ to be even (either $s=0$ or $2$). Here we consider an $s=0$ system with the interpolating operator $$\begin{aligned} \label{eq:OO_00} [\Omega\Omega]_{0} = \frac{1}{2} \left( \Omega_{\frac{3}{2}}\Omega_{-\frac{3}{2}} -\Omega_{\frac{1}{2}}\Omega_{-\frac{1}{2}} +\Omega_{-\frac{1}{2}}\Omega_{\frac{1}{2}} -\Omega_{-\frac{3}{2}}\Omega_{\frac{3}{2}} \right) . \nonumber \\\end{aligned}$$ For ${\overline {\cal J}}(0)$, we use the wall-type quark source with the $s=0$ projection given above. 
With this source, the total momentum of the system is automatically zero. Also, it has good overlap with the scattering states where $|{{\mbox{\boldmath $r$}}}|$ in Eq.(\[eq:tdep\]) is larger than the typical baryon size. To extract the $L=0$ and $s=0$ state at $t$ on the lattice, we employ Eq.(\[eq:OO\_00\]) for the sink operator together with the projection to the $A_1$ representation of the cubic group, $$P^{(A_1)} R({{\mbox{\boldmath $r$}}}, t) = \frac{1}{24}\sum_{i=1}^{24} R({\cal R}_{i}[{{\mbox{\boldmath $r$}}}],t),$$ where ${\cal R}_{i}$ is an element of the cubic group acting on the relative distance ${\mbox{\boldmath $r$}}$. Note here that $R({{\mbox{\boldmath $r$}}}, t)$ and $U({{\mbox{\boldmath $r$}}},{{\mbox{\boldmath $r$}}'})$ depend on the choice of interpolating operators, while observables calculated from these quantities are independent of the choice thanks to the Nishijima-Zimmermann-Haag theorem [@Aoki:2010ptp]. [***Lattice Setup:***]{}   By using the 11PFlops supercomputer K at RIKEN Center for Computational Science, $(2+1)$-flavor gauge configurations on the $96^4$ lattice are generated with the Iwasaki gauge action at $\beta = 1.82$ and the nonperturbatively ${\cal O}(a)$-improved Wilson quark action with stout smearing. The lattice spacing is $a \simeq 0.0846$ fm ($a^{-1} \simeq 2.333$ GeV) [@Ishikawa:2015rho] and the pion mass, the kaon mass and the nucleon mass are $m_\pi \simeq 146$ MeV, $m_K \simeq 525$ MeV and $m_N \simeq 964$ MeV, respectively. (These masses are higher than the physical values by about 8%, 6% and 3%, respectively, due to slightly larger quark masses at the simulation point.) The lattice size, $La \simeq 8.1$ fm, is sufficiently large to accommodate two baryons in a box. We employ the wall quark source with Coulomb gauge fixing, and periodic (Dirichlet) boundary conditions are used for the spatial (temporal) directions. Forward and backward propagations are averaged to reduce the statistical fluctuations. 
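The $A_1$ projection above amounts to a 24-element group average; the following Python sketch (the test field below is a made-up radially symmetric function, not lattice data) builds the 24 proper cubic rotations as signed permutation matrices of determinant $+1$ and averages over them on a periodic $L^3$ grid:

```python
import numpy as np
from itertools import permutations, product

def cubic_rotations():
    """The 24 elements of the proper cubic group O: signed 3x3 permutation
    matrices with determinant +1."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            M = np.zeros((3, 3), dtype=int)
            for row, (col, s) in enumerate(zip(perm, signs)):
                M[row, col] = s
            if round(np.linalg.det(M)) == 1:
                mats.append(M)
    return mats

ROTS = cubic_rotations()

def project_A1(R, L):
    """A1 (trivial-representation) projection of a field R[x,y,z] on a
    periodic L^3 lattice: average R over the 24 cubic rotations of r."""
    x = np.indices((L, L, L)).reshape(3, -1)
    c = (x + L // 2) % L - L // 2            # centre the coordinates
    acc = np.zeros(L**3)
    for M in ROTS:
        rx = (M @ c) % L                     # rotate, wrap back to [0, L)
        acc += R[rx[0], rx[1], rx[2]]
    return (acc / len(ROTS)).reshape(L, L, L)

# Demo: a rotationally invariant field must be reproduced unchanged.
L = 6
cg = np.indices((L, L, L))
cg = (cg + L // 2) % L - L // 2
R_demo = np.exp(-0.3 * np.sum(cg**2, axis=0))   # hypothetical R(r,t) at fixed t
PR = project_A1(R_demo, L)
```

Because $x \mapsto Mx \bmod L$ is a genuine group action on the periodic lattice, the projection is idempotent, and a radially symmetric field is left invariant.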
We pick one configuration every five trajectories, and make use of the rotation symmetry and the translational invariance for the source position to increase the statistics. The total statistics in this Letter amounts to 400 configurations $\times$ 2 (forward/backward) $\times$ 4 rotations $\times$ 48 source positions. The quark propagators are obtained by the domain-decomposed solver [@Boku:2012zi; @Terai:2013; @Nakamura:2011my; @Osaki:2010vj] and the correlation functions are calculated using the unified contraction algorithm [@Doi:2012xd]. The $\Omega$-baryon mass extracted from the effective mass $m_{\rm eff}(t) \equiv \ln [G(t)/G(t+a)]$, with $G(t)$ being the baryonic two-point function, is $m_{_\Omega} = 1712 \pm 1 $ MeV (from the plateau in $t/a=17-22$) and $m_{_\Omega}= 1713 \pm 1 $ MeV (from $t/a =18-25$), with statistical errors. These numbers are about 2% higher than the physical value, 1672 MeV. We take the former number in the following analysis. [***Numerical results:***]{}    The $^1S_0$ potential $V(r)$ obtained from Eq.(\[eq:Potential\]) with the lattice measurement of $R({\mbox{\boldmath $r$}}, t)$ is shown in Fig. \[fig:Potential\] for $t/a=16, 17,$ and 18. Here the Laplacian and the time derivatives in Eq.(\[eq:Potential\]) are approximated by central (symmetric) differences. The statistical errors for $V(r)$ at each $r$ are estimated by the jackknife method with a bin size of 40 configurations. A comparison with a bin size of 20 configurations shows that the bin size dependence is small. The particular region $t/a=17 \pm 1$ in Fig. \[fig:Potential\] is chosen to suppress contamination from excited states in the single $\Omega$ propagator at smaller $t$ and simultaneously to avoid large statistical errors at larger $t$. We observe that the potentials at $t/a= 16, 17,$ and 18 are nearly identical within statistical errors, as expected from the time-dependent HAL QCD method [@Ishii:2012plb].
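The effective-mass extraction above can be illustrated on a mock two-point function (the masses and amplitudes below are invented for illustration; a plateau emerges once the excited-state contamination dies out):

```python
import numpy as np

def effective_mass(G):
    """m_eff(t) = ln[G(t)/G(t+a)] in lattice units, from a two-point function G(t/a)."""
    G = np.asarray(G, dtype=float)
    return np.log(G[:-1] / G[1:])

# Mock correlator: ground state at m = 0.7 (lattice units) plus a decaying
# excited-state contribution, as in the plateau search described in the text.
t = np.arange(32)
G = np.exp(-0.7 * t) + 0.4 * np.exp(-1.5 * t)
meff = effective_mass(G)
```

At small `t` the contamination pushes `meff` above 0.7; by `t ~ 25` it has decayed away and the plateau value equals the ground-state mass.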
The $\Omega\Omega$ potential $V(r)$ has qualitative features similar to the central potential of the nucleon-nucleon ($NN$) interaction, i.e. the short-range repulsion and the intermediate-range attraction [@Doi:2017cfx]. There are, however, two quantitative differences: (i) the short-range repulsion is much weaker in the $\Omega\Omega$ case, possibly due to the absence of the quark Pauli exclusion effect, and (ii) the attractive part is much shorter-ranged due to the absence of pion exchanges. ![The $\Omega \Omega$ potential $V(r)$ in the $^1S_0$ channel at Euclidean time $t/a=16,17,$ and 18.[]{data-label="fig:Potential"}](POT.pdf) For the purpose of converting the potential to physical observables such as the scattering phase shifts and the binding energy, we fit $V(r)$ in Fig. \[fig:Potential\] in the range $r =0-6\ {\rm fm}$ by three Gaussians, $V_{\rm fit}(r) = \sum_{j=1,2,3} c_j \exp(-(r/d_j)^2)$. For example, the uncorrelated fit in the case of $t/a=17$ gives the following parameters: $(c_1, c_2, c_3)= (914(52), 305(44), -112(13))$ in MeV and $(d_1, d_2, d_3)=(0.143(5), 0.305(29), 0.949(58))$ in fm with $\chi^2/{\rm d.o.f.} \sim 1.3$. Another functional form, such as two Gaussians + (Yukawa function)$^2$, provides an equally good fit, and the results are not affected within errors. The finite volume effect on the potential is expected to be small due to the large lattice size. The naive order estimate of the finite-$a$ effect for the physical observables is also small ($(\Lambda a)^2 \le 1$ %) thanks to the non-perturbative $O(a)$ improvement, but an explicit confirmation would be desirable in the future. The $\Omega \Omega$ scattering phase shifts $\delta(k)$ in the $^1S_0$ channel obtained from $V_{\rm fit}(r)$ are shown in Fig.\[fig:Phase-shift\] for $t/a=16, 17,$ and $18$ as a function of the kinetic energy in the center of mass frame, $E_{_{\rm CM}}=2 \sqrt{k^2+m_{_\Omega}^2 } - 2m_{_\Omega}$.
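The multi-Gaussian parametrization can be sketched as follows; to keep the example dependency-free it fixes the widths $d_j$ and solves only for the strengths $c_j$ by linear least squares, whereas the fit in the text varies all six parameters (e.g. with a nonlinear fitter). The numbers are the $t/a=17$ central values quoted above:

```python
import numpy as np

def fit_coeffs(r, V, widths):
    """Linear least squares for the strengths c_j in V(r) = sum_j c_j exp(-(r/d_j)^2),
    at *fixed* widths d_j (a linearized sketch of the full six-parameter fit)."""
    A = np.stack([np.exp(-(r / d) ** 2) for d in widths], axis=1)
    c, *_ = np.linalg.lstsq(A, V, rcond=None)
    return c

# Self-consistency check with the t/a = 17 central values (MeV, fm)
widths = (0.143, 0.305, 0.949)
c_true = np.array([914.0, 305.0, -112.0])
r = np.linspace(0.05, 6.0, 120)
V = sum(c * np.exp(-(r / d) ** 2) for c, d in zip(c_true, widths))
c_fit = fit_coeffs(r, V, widths)
```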
The error bands reflect the statistical uncertainty of the potential in Fig. \[fig:Potential\]. All three cases show that $\delta(0)$ starts from 180$^{\circ}$, which indicates the existence of a bound $\Omega\Omega$ system. ![The $\Omega \Omega$ phase shift $\delta(k)$ in the $^1S_0$ channel for $t/a=16, 17$ and 18 as a function of the center of mass kinetic energy $E_{_{\rm CM}}=2 \sqrt{k^2+m_{_\Omega}^2 } - 2m_{_\Omega}$. []{data-label="fig:Phase-shift"}](PS.pdf) The scattering length $a_0$ and the effective range $r_{\rm eff}$ in the $^1S_0$ channel are extracted from $\delta(k)$ through the effective range expansion (ERE), $k \cot \delta(k) = - \frac{1}{a_0} + \frac{1}{2} r_{\rm eff} k^2 + \cdots$, with the sign convention of nuclear and atomic physics: $$\begin{aligned} a_0^{(\Omega \Omega)} &=& 4.6 (6)(^{+1.2}_{-0.5}) \ \ \ {\rm fm}, \label{eq:a0} \\ r_{\rm eff}^{(\Omega \Omega)} &=& 1.27 (3)(^{+0.06}_{-0.03}) \ \ {\rm fm}. \label{eq:r0}\end{aligned}$$ The central values and the statistical errors in the first parentheses are extracted from $\delta(k)$ at $t/a=17$, and the systematic errors in the second parentheses are estimated from the results at $t/a=16$ and 18. The origin of this systematic error is the truncation of the higher derivatives of the non-local potential as well as the contaminations from inelastic states. To get a feel for the magnitude of these values, we recapitulate here the experimental values of $a_0$ and $r_{\rm eff}$ in the $NN$ systems: $(a_0, r_{\rm eff})_{\rm spin\mbox{-}triplet}=(5.4112(15) {\rm fm}, 1.7436(19) {\rm fm})$ and $(a_0, r_{\rm eff})_{\rm spin\mbox{-}singlet}=(-23.7148(43) {\rm fm}, 2.750(18) {\rm fm})$ [@Hackenburg:2006qd]. There exists no symmetry reason that the scattering parameters in the $NN$ systems and those in the $\Omega\Omega$ system should be similar.
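The ERE step can be made concrete with the central values quoted above (a numerical sketch over an illustrative low-momentum window; $k\cot\delta$ is linear in $k^2$ at this order, so a straight-line fit recovers $a_0$ and $r_{\rm eff}$, and the resulting $\delta(k)$ indeed approaches 180$^\circ$ as $k\to 0$ for a bound system):

```python
import numpy as np

hbarc = 197.327                              # MeV*fm
a0, reff = 4.6, 1.27                         # fm, central values of Eqs. (a0), (r0)
k = np.linspace(5.0, 60.0, 40) / hbarc       # momenta in fm^-1 (illustrative window)
kcot = -1.0 / a0 + 0.5 * reff * k**2         # ERE, nuclear/atomic sign convention
delta = np.degrees(np.arctan2(k, kcot))      # phase shift in degrees, in (0, 180)

# Recover (a0, reff) from a linear fit of k*cot(delta) in k^2
slope, intercept = np.polyfit(k**2, kcot, 1)
a0_fit, reff_fit = -1.0 / intercept, 2.0 * slope
```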
Nevertheless, it is remarkable that they are all close to the unitary region where $r_{\rm eff}/a_0$ is substantially smaller than 1, as shown in Fig.\[fig:Scattering\]. ![The dimensionless ratio of the effective range $r_{\rm eff}$ and the scattering length $a_0$ as a function of $r_{\rm eff}$ for the $\Omega\Omega$ system in the $^1S_0$ channel as well as for the spin-triplet $NN$ system (the deuteron channel) and for the spin-singlet $NN$ system (the neutron-neutron channel). The error bar for $\Omega\Omega$ is the quadrature of the statistical and systematic errors in Eqs. (\[eq:a0\]) and (\[eq:r0\]).[]{data-label="fig:Scattering"}](RoverC.pdf) ![Bound state energy of the $\Omega\Omega$ system and the root-mean-square distance between $\Omega$s obtained from the potential. Filled diamond (triangle) corresponds to the result at $t/a=17$ without (with) the Coulomb repulsion. The statistical errors are shown by the solid lines, while the systematic errors estimated from the difference between the data at $t/a=17$ and those at $t/a=16, 18$ are shown by the dashed lines. []{data-label="fig:Binding"}](BEdash.pdf) Shown in Fig.\[fig:Binding\] are the bound-state energy, i.e. the negative of the binding energy, $-B_{\Omega\Omega}$, and the root-mean-square distance ($\sqrt{\langle r^2 \rangle}$) of the $\Omega\Omega$ bound state obtained from the potential. The blue diamond is taken from the data at $t/a=17$ without the Coulomb repulsion.
The blue solid and dashed lines are the statistical error at $t/a=17$ and the systematic error estimated from the data at $t/a=17\pm 1$, respectively: $$\begin{aligned} B_{\Omega \Omega}^{\rm (QCD)} = 1.6 (6) (^{+0.7}_{-0.6}) \ \ {\rm MeV}.\end{aligned}$$ As an alternative estimate, the truncation error of the derivative expansion on the binding energy is evaluated perturbatively, and is found to be less than $20\%$ even if the magnitude of the dimensionless next-to-leading-order potential is of the same order as that of the effective leading-order potential. It is an important future subject to determine higher-order potentials explicitly by using the method of multiple quark sources [@Iritani:2017wvu]. The binding energy is consistent with the value obtained from the general formula for loosely bound states [@Naidon:2016dpf] with (\[eq:a0\]) and (\[eq:r0\]): $B_{\Omega\Omega}= \frac{1}{m_{\Omega} r_{\rm eff}^2} \left( 1-\sqrt{1-\frac{2r_{\rm eff}}{a_0}} \right)^2 \simeq 1.5$ MeV. Associated with this small binding energy, $\sqrt{\langle r^2 \rangle}$ is as large as 3-4 fm, which is consistent with the expectation, $\sqrt{\langle r^2 \rangle} \sim a_0$, for loosely bound states. The Coulomb repulsion can be evaluated by adding $\alpha/r$ with $\alpha = e^2/4\pi$ to the potential obtained from lattice QCD, i.e. $V^{(\mathrm{QCD}+ \mathrm{Coulomb})} \equiv V^{(\mathrm{QCD})} + \alpha/r$. This reduces the above binding energy by a factor of two, $B_{\Omega \Omega}^{\rm (QCD+Coulomb)} = 0.7 (5) (5) \ {\rm MeV}$, as shown in Fig.\[fig:Binding\] by the red triangle. It is in order here to remark that there are three energy scales in the present problem: $2m_{_\Omega} \simeq $ 3400 MeV $\gg$ $ \vert V(r\simeq 0.5 {\rm fm})\vert \sim$ 50 MeV $\gg$ $B_{\Omega\Omega} \sim 1$ MeV. Since only the relative difference between the interacting and non-interacting two-$\Omega$ systems matters, the absolute magnitude of the uncertainty of $2m_{_\Omega}$ is not reflected directly in $V(r)$.
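The loosely-bound-state formula can be checked numerically with the central values of Eqs.(\[eq:a0\]) and (\[eq:r0\]) and $\hbar c = 197.327$ MeV fm restored; it reproduces the $\simeq 1.5$ MeV value quoted in the text:

```python
import numpy as np

hbarc = 197.327            # MeV*fm
m_omega = 1712.0           # MeV, the lattice Omega mass used in the text
a0, reff = 4.6, 1.27       # fm, central values of Eqs. (a0) and (r0)

# B = (hbarc^2 / (m_Omega * reff^2)) * (1 - sqrt(1 - 2*reff/a0))^2
# (the 1/m_Omega, rather than 1/(2*mu), reflects the reduced mass mu = m_Omega/2)
B = hbarc**2 / (m_omega * reff**2) * (1.0 - np.sqrt(1.0 - 2.0 * reff / a0)) ** 2
```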
This is why we could extract $V(r)$ rather accurately, as shown in Fig.\[fig:Potential\], despite the large scale difference between $2m_{_\Omega}$ and $V(r)$. Then the small binding energy $B$ as well as the large scattering length $a_0$ are a natural consequence of the large cancellation between the long-range attraction and the short-range repulsion of $V(r)$, a situation common in nuclear and atomic physics. Although $V(r)$ is not a direct observable, it provides an important intermediate step to link the QCD scale (GeV) to the nuclear physics scale (MeV), since it is difficult to measure the binding energy directly from lattice QCD using the finite volume method for large lattice volumes and physical quark masses (see the critical review [@Aoki:2017byw]). Finally, let us estimate the effect of the slightly heavy quark masses in our simulation. First of all, the attractive part of the $\Omega\Omega$ potential would be slightly longer-ranged at the physical point due to the kaon mass, $m_K^{(\rm present)}\simeq 525$ MeV $\rightarrow$ $m_K^{(\rm phys.)} \simeq 495$ MeV. On the other hand, the effect of the $\Omega$ mass, $m_{_\Omega}^{(\rm present)}\simeq 1712$ MeV $\rightarrow$ $m_{_\Omega}^{(\rm phys.)} \simeq 1672$ MeV, would lead to less binding due to the larger kinetic energy. Therefore, a conservative estimate is obtained by keeping the same $V(r)$ in Fig.\[fig:Potential\] and adopting $m_{_\Omega}^{(\rm phys.)}$ to calculate the phase shifts and the binding energy. This results in $(a_0, r_{\rm eff})= (4.9(8) {\rm fm}, 1.29(3){\rm fm})$ and $(B_{\Omega \Omega}^{\rm (QCD)},B_{\Omega \Omega}^{\rm (QCD+Coulomb)} )=(1.3(5) {\rm MeV}, 0.5(5) {\rm MeV})$ for the potential at $t/a=17$. These numbers are well within errors of the results of the present simulation shown in Figs. \[fig:Scattering\] and \[fig:Binding\].
[***Summary and Discussions:***]{}    In this Letter, we presented a first realistic calculation of the most strange dibaryon, $\Omega\Omega$, in the $^1S_0$ channel on the basis of (2+1)-flavor lattice QCD simulations with a large volume and nearly physical quark masses. The scattering length, effective range and binding energy obtained by the HAL QCD method strongly indicate that the $\Omega\Omega$ system in the $^1S_0$ channel has an overall attraction and is located in the vicinity of the unitary regime. From the phenomenological point of view, such a system can best be searched for by measuring the pair-momentum correlation $C(Q)$, with $Q$ being the relative momentum between two baryons produced in relativistic heavy-ion collisions [@Cho:2017dcy]. Experimentally, each $\Omega$ can be identified through a successive weak decay, $\Omega^- \rightarrow \Lambda + K^- \rightarrow p+\pi^- + K^-$. Note that a large scattering length (not the existence of a bound state) is the important element for $C(Q)$ to have a characteristic enhancement at small relative momentum $Q$. Moreover, the effect of the Coulomb interaction can be effectively eliminated by taking a ratio of $C(Q)$ between small and large collision systems [@Morita:2016auo]. Work is currently underway to increase the statistics of the lattice data with the same lattice setup. These results, together with a detailed examination of the spectrum in a finite lattice volume and of the effective range expansion, will be reported elsewhere. We thank members of the PACS Collaboration for the gauge configuration generation. The lattice QCD calculations have been performed on the K computer at RIKEN, AICS (hp120281, hp130023, hp140209, hp150223, hp150262, hp160211, hp170230), the HOKUSAI FX100 computer at RIKEN, Wako (G15023, G16030, G17002) and HA-PACS at the University of Tsukuba (14a-20, 15a-30). We thank ILDG/JLDG [@conf:ildg/jldg], which serves as an essential infrastructure in this study.
We thank the authors of the cuLGT code [@Schrock:2012fj] used for the gauge fixing. This study is supported in part by Grant-in-Aid for Scientific Research on Innovative Areas (No.2004:20105001, 20105003) and for Scientific Research (Nos. 25800170, 26400281), SPIRE (Strategic Program for Innovative REsearch), MEXT Grant-in-Aid for Scientific Research (JP15K17667, JP16H03978, JP16K05340), “Priority Issue on Post-K computer” (Elucidation of the Fundamental Laws and Evolution of the Universe), and by the Joint Institute for Computational Fundamental Science (JICFuS). S.G. is supported by the Special Postdoctoral Researchers Program of RIKEN. T.D. and T.H. are partially supported by the RIKEN iTHES Project and iTHEMS Program. T.H. is grateful to the Aspen Center for Physics, supported in part by NSF Grants PHY1066292 and PHY1607611. The authors thank C. M. Ko for drawing our attention to the $\Omega\Omega$ system, K. Yazaki for fruitful discussions on the short range part of baryon-baryon interactions, and Y. Namekawa for his careful reading of the manuscript. [99]{} P. J. Mulders, A. T. M. Aerts and J. J. de Swart, Phys. Rev. D [**21**]{}, 2653 (1980). M. Oka, Phys. Rev. D [**38**]{}, 298 (1988). A. Gal, Acta Phys. Polon. B [**47**]{}, 471 (2016) \[arXiv:1511.06605 \[nucl-th\]\]. H. Clement, Prog. Part. Nucl. Phys.  [**93**]{}, 195 (2017) \[arXiv:1610.05591 \[nucl-ex\]\]. S. Cho [*et al.*]{} \[ExHIC Collaboration\], Prog. Part. Nucl. Phys.  [**95**]{}, 279 (2017) \[arXiv:1702.00486 \[nucl-th\]\]. T. Doi [*et al.*]{}, PoS LATTICE [**2016**]{}, 110 (2017) \[arXiv:1702.01600 \[hep-lat\]\]. H. Nemura [*et al.*]{}, PoS LATTICE [**2016**]{}, 101 (2017) \[arXiv:1702.00734 \[hep-lat\]\]. K. Sasaki [*et al.*]{}, PoS LATTICE [**2016**]{}, 116 (2017) \[arXiv:1702.06241 \[hep-lat\]\]. N. Ishii [*et al.*]{}, PoS LATTICE [**2016**]{}, 127 (2017) \[arXiv:1702.03495 \[hep-lat\]\]. K.-I. Ishikawa [*et al.*]{} \[PACS Coll.\], PoS LATTICE [**2015**]{}, 075 (2016) \[arXiv:1511.09222 \[hep-lat\]\].
R.L. Jaffe, Phys. Rev. Lett. [**38**]{}, 195 (1977). T. Inoue [*et al.*]{} \[HAL QCD Collaboration\], Phys. Rev. Lett.  [**106**]{}, 162002 (2011) \[arXiv:1012.5928 \[hep-lat\]\]. S. R. Beane [*et al.*]{} \[NPLQCD Collaboration\], Phys. Rev. Lett.  [**106**]{}, 162001 (2011) \[arXiv:1012.3812 \[hep-lat\]\]. T. Goldman, K. Maltman, G. J. Stephenson, Jr., K. E. Schmidt and F. Wang, Phys. Rev. Lett.  [**59**]{}, 627 (1987). F. Etminan [*et al.*]{} \[HAL QCD Collaboration\], Nucl. Phys. A [**928**]{}, 89 (2014) \[arXiv:1403.7284 \[hep-lat\]\]. K. Morita, A. Ohnishi, F. Etminan and T. Hatsuda, Phys. Rev. C [**94**]{}, 031901 (2016) \[arXiv:1605.06765 \[hep-ph\]\]. M. Oka, K. Shimizu and K. Yazaki, Prog. Theor. Phys. Suppl.  [**137**]{}, 1 (2000). F. Dyson and N. H. Xuong, Phys. Rev. Lett.  [**13**]{}, 815 (1964). J. Haidenbauer, S. Petschauer, N. Kaiser, U. G. Meißner and W. Weise, Eur. Phys. J. C [**77**]{}, no. 11, 760 (2017) \[arXiv:1708.08071 \[nucl-th\]\]. Z.Y. Zhang, Y.W. Yu, P.N. Shen, L.R. Dai, A. Faessler, and U. Straub, Nucl. Phys. A [**625**]{}, 59 (1997). Z.Y. Zhang, Y.W. Yu, C.R. Ching, T.H. Ho, and Z.-D. Lu, Phys. Rev. C [**61**]{}, 065204 (2000). F. Wang, J. L. Ping, G. H. Wu, L. J. Teng, and T. Goldman, Phys. Rev. C [**51**]{}, 3411 (1995). F. Wang, G. H. Wu, L. J. Teng, and T. Goldman, Phys. Rev. Lett. [**69**]{}, 2901 (1992). M. I. Buchoff, T. C. Luu and J. Wasem, Phys. Rev. D [**85**]{}, 094511 (2012) \[arXiv:1201.3596 \[hep-lat\]\]. M. Yamada [*et al.*]{} \[HAL QCD Collaboration\], PTEP [**2015**]{}, 071B01 (2015) \[arXiv:1503.03189 \[hep-lat\]\]. N. Ishii, S. Aoki and T. Hatsuda, Phys. Rev. Lett. [**99**]{}, 022001 (2007). S. Aoki, T. Hatsuda and N. Ishii, Prog. Theor. Phys. [**123**]{}, 89 (2010) \[arXiv:0909.5585 \[hep-lat\]\]. N. Ishii [*et al.*]{} \[HAL QCD Collaboration\], Phys. Lett. B [**712**]{}, 437 (2012) \[arXiv:1203.3642 \[hep-lat\]\]. S.
Aoki [*et al.*]{} \[HAL QCD Collaboration\], PTEP [**2012**]{}, 01A105 (2012) \[arXiv:1206.5088 \[hep-lat\]\]. M. Lüscher, Nucl. Phys. B[**354**]{}, 531 (1991). G. P. Lepage, in From [*Actions to Answers: Proceedings of the TASI 1989*]{}, edited by T. Degrand and D. Toussaint (World Scientific, Singapore, 1990). T. Iritani [*et al.*]{}, JHEP [**1610**]{}, 101 (2016) \[arXiv:1607.06371 \[hep-lat\]\]. T. Iritani [*et al.*]{}, Phys. Rev. D [**96**]{}, no. 3, 034521 (2017) \[arXiv:1703.07210 \[hep-lat\]\]. S. Aoki, T. Doi and T. Iritani, arXiv:1707.08800 \[hep-lat\]. K. Murano, N. Ishii, S. Aoki and T. Hatsuda, Prog. Theor. Phys.  [**125**]{}, 1225 (2011) \[arXiv:1103.0619 \[hep-lat\]\]. T. Iritani \[HALQCD Collaboration\], arXiv:1710.06147 \[hep-lat\]. T. Boku [*et al.*]{}, PoS LATTICE [**2012**]{}, 188 (2012) \[arXiv:1210.7398 \[hep-lat\]\]. M. Terai, K. I. Ishikawa, Y. Sugisaki, K. Minami, F. Shoji, Y. Nakamura, Y. Kuramashi, M. Yokokawa, “Performance Tuning of a Lattice QCD code on a node of the K computer,” IPSJ Transactions on Advanced Computing Systems, Vol.6 No.3 43-57 (Sep. 2013) (in Japanese). Y. Nakamura, K.-I. Ishikawa, Y. Kuramashi, T. Sakurai and H. Tadano, Comput. Phys. Commun.  [**183**]{}, 34 (2012) doi:10.1016/j.cpc.2011.08.010 \[arXiv:1104.0737 \[hep-lat\]\]. Y. Osaki and K. I. Ishikawa, PoS LATTICE [**2010**]{}, 036 (2010) \[arXiv:1011.3318 \[hep-lat\]\]. T. Doi and M. G. Endres, Comput. Phys. Commun.  [**184**]{}, 117 (2013) \[arXiv:1205.0585 \[hep-lat\]\]. R. W. Hackenburg, Phys. Rev. C [**73**]{}, 044002 (2006). P. Naidon and S. Endo, Rept. Prog. Phys.  [**80**]{}, 056001 (2017) `"http://www.lqcd.org/ildg"`, `"http://www.jldg.org"` M. Schröck and H. Vogt, Comput. Phys. Commun.  [**184**]{} (2013) 1907 \[arXiv:1212.5221 \[hep-lat\]\].
--- abstract: 'In many practical transfer learning scenarios, the feature distribution is different across the source and target domains (i.e. non-$i.i.d.$). Maximum mean discrepancy (MMD), as a domain discrepancy metric, has achieved promising performance in unsupervised domain adaptation (DA). We argue that MMD-based DA methods ignore the data locality structure, which, to some extent, would cause the negative transfer effect. The locality plays an important role in minimizing the nonlinear local domain discrepancy underlying the marginal distributions. For better exploiting the domain locality, a novel intermediate domain generation learning method based on a local generative discrepancy metric (LGDM), called Manifold Criterion guided Transfer Learning (MCTL), is proposed in this paper. The merits of the proposed MCTL are four-fold: 1) the concept of manifold criterion (MC) is first proposed as a measure validating the distribution matching across domains, and domain adaptation is achieved if the MC is satisfied; 2) the proposed MC can well guide the generation of the intermediate domain sharing a similar distribution with the target domain, by minimizing the local domain discrepancy; 3) a global generative discrepancy metric (GGDM) is presented, such that both the global and local discrepancies can be effectively reduced; 4) a simplified version of MCTL called MCTL-S is presented under a perfect domain generation assumption for a more generic learning scenario. Experiments on a number of benchmark visual transfer tasks demonstrate the superiority of the proposed manifold criterion guided generative transfer method in comparison with other state-of-the-art methods. The source code is available at https://github.com/wangshanshanCQU/MCTL.'
author: - 'Lei Zhang,  Shanshan Wang, Guang-Bin Huang,  Wangmeng Zuo,  Jian Yang,  David Zhang, [^1]' bibliography: - 'egbib.bib' title: Manifold Criterion Guided Transfer Learning via Intermediate Domain Generation --- Transfer Learning, domain adaptation, manifold criterion, discrepancy metric, domain generation. Introduction ============ Machine learning models rely heavily on the assumption that the data used for training and test are drawn from the same or similar distribution, i.e. independent identical distribution ($i.i.d.$). However, in the real world, this assumption can rarely be guaranteed. Hence, in visual recognition tasks, a classifier or model usually does not work well because of the data bias between the distributions of the training and test data [@Nguyen2015DASH],[@csurka2017domain],[@hoffman2012discovering],[@pan2010survey],[@Duan2012Domain],[@li2017domain],[@gopalan2014unsupervised]. The domain discrepancy constitutes a major obstacle in training predictive models across domains. For example, an object recognition model trained on labeled images may not generalize well on test images under various variations in pose, occlusion, or illumination. In machine learning, this problem is known as domain mismatch. Failing to model such a distribution shift may cause significant performance degradation. Also, models trained with only a limited number of labeled patterns are usually not robust for pattern recognition tasks. Furthermore, manual labeling of sufficient training data for diverse application domains may be prohibitive. However, by leveraging labeled data drawn from another sufficiently labeled source domain that describes content related to the target domain, establishing an effective model is possible. Therefore, the challenging objective is how to achieve knowledge transfer across domains such that the distribution mismatch is reduced.
Underlying techniques for addressing this challenge, such as domain adaptation[@Duan2010Visual][@kulis2011you], which aims to learn domain-invariant models across the source and target domains, have been investigated. Domain adaptation (DA)[@saenko2010adapting],[@pan2011domain],[@Duan2009Domain], as one kind of transfer learning (TL) perspective, addresses the problem that data come from two related but different domains[@baktashmotlagh2013unsupervised],[@ben2007analysis]. Domain adaptation establishes knowledge transfer from the labeled source domain to the unlabeled target domain by exploring domain-invariant structures that bridge different domains with substantial distribution discrepancy. In terms of the accessibility of target data labels in transfer learning, domain adaptation methods can be divided into three categories: supervised[@duan2012learning],[@pan2008transfer], semi-supervised[@li2014learning],[@yao2015semi],[@Duan2012Domain] and unsupervised[@long2014transfer],[@baktashmotlagh2014domain],[@long2016unsupervised]. In this paper, we focus on unsupervised transfer learning, where the target data labels are unavailable in the transfer model learning phase. The unsupervised setting is more challenging due to the common data scarcity problem. In unsupervised transfer learning[@ganin2014unsupervised], Maximum Mean Discrepancy (MMD)[@gretton2007kernel] is widely used and has achieved promising performance. MMD, which aims at minimizing the domain distribution discrepancy, is generally exploited to reduce the difference of conditional distributions and marginal distributions across domains by utilizing the unlabeled domain data in a Reproducing Kernel Hilbert Space (RKHS). Also, in the framework of deep transfer learning[@hu2015deep], MMD-based adaptation layers are further integrated in deep neural networks to improve the transferability between the source and target domains[@long2015learning].
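The MMD referred to above admits a compact empirical estimator; below is a minimal sketch with a Gaussian RBF kernel (the biased V-statistic form; the sample sizes, bandwidth and the synthetic Gaussian "domains" are illustrative, not the paper's setup):

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased empirical estimate of MMD^2 between samples X (n,d) and Y (m,d),
    using the Gaussian RBF kernel k(a,b) = exp(-|a-b|^2 / (2 sigma^2))."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 2))        # "source" sample
Y_same = rng.normal(0.0, 1.0, (200, 2))   # same marginal distribution
Y_shift = rng.normal(2.0, 1.0, (200, 2))  # shifted marginal distribution
```

`mmd2_rbf(X, Y_same)` is close to zero, while `mmd2_rbf(X, Y_shift)` is clearly larger, which is exactly the discrepancy signal that MMD-based adaptation minimizes.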
MMD actually acts as a discrepancy metric or criterion to evaluate the distribution mismatch across domains and works well in aligning the global distribution. However, it only considers the domain discrepancy and generally ignores the intrinsic data structure of the target domain, e.g. the local structure illustrated in Fig.\[fig1\](b). It is known that geometric structure is indispensable for domain distance minimization, as it can well exploit the internal local structure of the target data. Particularly, in unsupervised learning, the local structure of the target data often plays a more important role than the global structure. This originates from the manifold assumption that locally similar data share similar labels. Motivated by the manifold assumption, a novel manifold criterion (MC) is proposed in our work; it is related to, yet distinct from, conventional manifold algorithms in that the MC acts as a generative transfer criterion for unsupervised domain adaptation. Intuitively, we hold the assumption that if a new target domain can be automatically generated by using the source domain data, the domain transfer issue can be naturally addressed. To this end, a criterion that measures the generative effect can be explored. In this paper, considering the locality property of the target data, we wish the generated target data to hold a local structure similar to that of the true target domain data. Naturally, motivated by the manifold assumption[@hoffman2014continuous], an objective generative transfer metric, the manifold criterion (MC), is proposed. Suppose that two samples $x_{i}$ and $x_{j}$ in the target domain are close to each other; if the target sample $x_{i}^g$ generated from the source data is also close to $x_{j}$, we recognize that the generated intermediate domain data shares a similar distribution with the target domain. This is the basic idea of our generative transfer learning in this paper. But how to construct the generative target domain?
From the perspective of manifold learning, we expect that the new target data is generated by using a locality structure preservation metric. This idea can be interpreted under the commonly investigated case of independent identical distribution ($i.i.d.$), in which the affinity structure in a high-dimensional space can still be preserved in some projected low-dimensional subspace (i.e. manifold structure embedding). In general, the internal intrinsic structure can remain unchanged by using graph Laplacian regularization[@Yang2017EPR], which reflects the affinity of the raw data. Specifically, with the proposed manifold criterion, a $\bf{M}$anifold $\bf{C}$riterion guided $\bf{T}$ransfer $\bf{L}$earning (MCTL) is proposed, which aims to pursue a latent common subspace via a projection matrix $\bm{\mathcal {P}}$ for the source and target domains. In the common subspace, a generative transfer matrix $\bm{\mathcal {Z}}$ is solved by leveraging the source domain data and the MC generative metric, for new generative data that holds a similar marginal distribution with the source data, in an unsupervised manner. The findings and analysis show that the proposed manifold criterion can be used to reduce the local domain discrepancy. Additionally, in the MCTL model, the embedding of a low-rank constraint (LRC) on the transfer matrix ensures that the data from source domains can be well interpreted during generation, which shows an approximately block-diagonal property. With the LRC exploited, the local-structure-based MC can be guaranteed without distortion[@kanamori2009efficient]. ![ Motivation of MCTL. The lines represent the classification boundary of the source domain. The centroid represents the geometric center of all data points.[]{data-label="fig1"}](motivation.pdf){width="1\linewidth"} The idea of our MCTL is described in Fig.\[fig2\].
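The graph Laplacian regularization mentioned above can be sketched as follows (the $k$-NN construction and the choices of $k$ and $\sigma$ are illustrative, not the paper's exact setup); the induced regularizer $f^{\top}Lf=\frac{1}{2}\sum_{ij}W_{ij}(f_i-f_j)^2$ is small precisely when $f$ varies smoothly over the affinity graph, which is how locality is preserved:

```python
import numpy as np

def knn_laplacian(X, k=5, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W of a symmetrized k-NN affinity graph,
    with heat-kernel weights W_ij = exp(-|x_i - x_j|^2 / (2 sigma^2))."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)                 # no self-loops
    np.fill_diagonal(d2, np.inf)             # exclude self from neighbour search
    nn = np.argsort(d2, axis=1)[:, :k]
    mask = np.zeros((n, n), dtype=bool)
    mask[np.arange(n)[:, None], nn] = True
    W = np.where(mask | mask.T, W, 0.0)      # keep symmetrized k-NN edges only
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
L = knn_laplacian(X)
f = rng.normal(size=50)
smoothness = f @ L @ f   # = 0.5 * sum_ij W_ij (f_i - f_j)^2 >= 0
```

By construction `L` is symmetric and positive semi-definite, its rows sum to zero, and a constant `f` incurs zero penalty, which is why such a term can be added to a transfer objective without biasing globally flat solutions.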
In summary, the main contribution and novelty of this work are fourfold: - We propose an unsupervised manifold criterion generative transfer learning (MCTL) method, which aims to generate a new intermediate target domain that holds a similar distribution with the true target data by leveraging the source data as basis. The proposed manifold criterion (MC) is modeled by a novel local generative discrepancy metric (LGDM) for local cross-domain discrepancy measure, such that the local transfer can be effectively aligned. - In order to keep the global distribution consistent, a global generative discrepancy metric (GGDM), which offers a “linear” method to compare the high-order statistics of two distributions, is proposed to minimize the discrepancy between the generative target data and the true target data. Therefore, the local and global affinity structures across domains are simultaneously guaranteed. - For improving the correlation between the source data and the generative target data, LRC regularization on the transfer matrix $\bm{\mathcal {Z}}$ is integrated in MCTL, such that the block-diagonal property can be utilized for preventing the domain transfer from distortion and negative transfer. - Under the MCTL framework, for a more generic case, a simplified version of MCTL (i.e. MCTL-S) is proposed, which constrains the generative data to be strictly consistent with the target domain in a simple yet generic manner. Interestingly, with this constraint, the LGDM loss in MCTL-S naturally degenerates into a generic manifold regularization. The remainder of this paper is organized as follows. In Section II, we review the related work in transfer learning. In Section III, we present the preliminary idea of the proposed manifold criterion. In Section IV, the proposed MCTL method and its optimization are formulated. In Section V, the simplified version of MCTL is introduced and preliminarily analyzed. In Section VI, the classification method is described.
In Section VII, the experiments in cross-domain visual recognition are presented. The discussion is presented in Section VIII. Finally, the paper is concluded in Section IX. Related Work ============ Shallow Transfer Learning ------------------------- A number of transfer learning methods have been proposed to tackle heterogeneous domain adaptation problems. Generally, these methods can be divided into the following three categories. **Classifier based approaches**. A generic way is to directly learn a common classifier on auxiliary domain data by leveraging a few labeled target data. Yang et al.[@yang2007cross] proposed an adaptive SVM (A-SVM) to learn a new target classifier $f^T(x)$ by supposing that $f^T(x)= f^S(x) + \Delta f(x)$, where the classifier $f^S(x)$ is trained with the labeled source samples and $\Delta f(x)$ is the perturbation function. Bruzzone et al.[@Bruzzone2010Domain] developed an approach to iteratively learn the SVM classifier by labeling the unlabeled target samples and simultaneously removing some labeled samples in the source domain. Duan et al.[@Duan2010Visual] proposed an adaptive multiple kernel learning (AMKL) method for consumer video event recognition from annotated web videos. They also proposed a domain transfer MKL (DTMKL)[@Duan2012Domain], which learns an SVM classifier and a kernel function simultaneously for classifier adaptation. Zhang et al.[@eda2016zhang] proposed a robust classifier transfer method (EDA), modeled on ELM and manifold regularization, for visual recognition. **Feature augmentation/transformation based approaches**. Li et al.[@li2014hfa] proposed a heterogeneous feature augmentation (HFA) method, which learns a transformed feature space for domain adaptation. Kulis et al.[@kulis2011you] proposed an asymmetric regularized cross-domain transform (ARC-t) method for learning a transformation metric. In [@Hoffman2014Asymmetric], Hoffman et al.
proposed Max-Margin Domain Transforms (MMDT), in which a category-specific transformation is optimized for domain transfer. Gong et al. proposed the Geodesic Flow Kernel (GFK)[@Gong2012Geodesic] method, which integrates an infinite number of linear subspaces on the geodesic path to learn a domain-invariant feature representation. Gopalan et al.[@Gopalan2011Domain] proposed an unsupervised method (SGF) for low-dimensional subspace transfer, in which a group of subspaces along the geodesic between source and target data is sampled, and the source data is projected into these subspaces for discriminative classifier learning. An unsupervised feature transformation approach, Transfer Component Analysis (TCA)[@pan2011domain], was proposed to discover common features having the same marginal distribution, using Maximum Mean Discrepancy (MMD) as a non-parametric discrepancy metric. MMD[@gretton2007kernel],[@gretton2012jmlr],[@iyer2014maximum] is often used in transfer learning. Long et al.[@long2013transfer] proposed a Transfer Sparse Coding (TSC) approach to construct robust sparse representations using empirical MMD as the distance measure. The Transfer Joint Matching (TJM) method proposed by Long et al.[@long2014transfer] tends to learn a non-linear transformation by minimizing the MMD based distribution discrepancy. **Feature representation based approaches**. Different from the methods above, domain adaptation is achieved by representing features across domains. Jhuo et al.[@jhuo2012robust] proposed the RDALR method, in which the source data is reconstructed with the target domain by using low-rank modeling. Similarly, Shao et al. [@shao2014generalized] proposed the LTSL method, which pre-learns a subspace using PCA or LDA and then models a low-rank representation across domains. Zhang et al. 
[@zhang2016lsdt],[@Zhang2016Discriminative] proposed the Latent Sparse Domain Transfer (LSDT) and Discriminative Kernel Transfer Learning (DKTL) methods for visual adaptation, which jointly learn a subspace projection and a sparse reconstruction across domains. Further, Xu et al. [@Xu2015] proposed the DTSL method, which combines low-rank and sparse constraints on the reconstruction matrix. The method proposed in this paper differs from existing shallow transfer learning methods in that it is motivated by a generative transfer idea: domain adaptation is achieved by generating an intermediate domain whose distribution is similar to that of the true target domain. Deep Transfer Learning ---------------------- Deep learning, as a data-driven transfer learning method, has witnessed great achievements in many fields[@tzeng2015simultaneous],[@glorot2011domain],[@oquab2014learning],[@xie2015transfer]. However, solving domain adaptation problems with deep learning technology requires massive labeled training data. For small-size tasks, deep learning may not work well. Therefore, deep transfer learning methods have been studied. Donahue et al.[@donahue2014decaf] proposed a deep transfer method for small-scale object recognition, in which the convolutional network (AlexNet) was trained on ImageNet. Similarly, Razavian et al.[@sharif2014cnn] also proposed to train a network on ImageNet as a high-level feature extractor. Tzeng et al.[@tzeng2015simultaneous] proposed a DDC method which simultaneously achieves knowledge transfer between domains and tasks by using a CNN. Long et al.[@long2015learning] proposed a deep adaptation network (DAN) method that imposes an MMD loss on the high-level features across domains. Additionally, Long et al.[@long2016unsupervised] also proposed a residual transfer network (RTN) which tends to learn a residual classifier based on the softmax loss. 
Oquab et al.[@oquab2014learning] proposed a CNN architecture for mid-level feature transfer, trained on a large annotated image set. Additionally, Hu et al.[@hu2015deep] proposed a non-CNN based deep transfer metric learning (DTML) method that learns a set of hierarchical nonlinear transformations for cross-domain visual recognition. Recently, GAN inspired adversarial domain adaptation has been preliminarily studied. Tzeng et al. proposed the ADDA method [@Tzeng2017Adversarial] for adversarial domain adaptation, in which a CNN is used for adversarial discriminative feature learning; it achieves state-of-the-art performance. In this work, although the proposed MCTL method is a shallow transfer learning paradigm, its competitive capability compared with these deep transfer learning methods has been validated on pre-extracted deep features. Differences Between MCTL and Other Reconstruction Transfer Methodologies ------------------------------------------------------------------------ The proposed MCTL is partly related to reconstruction transfer methods, such as DTSL[@Xu2015], LSDT[@zhang2016lsdt] and LTSL[@shao2014generalized], but essentially different from them. These methods aim to learn a common subspace in which a feature reconstruction matrix between domains is learned for adaptation, with sparse reconstruction and low-rank based constraints considered, respectively. Different from reconstruction transfer, the proposed MCTL is a generative transfer learning paradigm, partly inspired by the idea of GAN[@Goodfellow2014Generative] and by manifold learning. The differences and relations are as follows. **Reconstruction Transfer**. As the name implies, a reconstruction matrix is expected for domain correspondence. In LTSL, the subspace projection $\bf{W}$ is pre-learned by off-the-shelf methods such as PCA, LDA, etc. 
The projected source data $\bf{WX_S}$ is then used to reconstruct the projected target data $\bf{{WX}_T}$ under a low-rank constraint. The subspace may be suboptimal, leading to a possible local optimum of $\bf{\mathcal{Z}}$. Further, the LSDT method was proposed to realize domain adaptation by exploiting cross-domain sparse reconstruction in a latent subspace, simultaneously. The DTSL method poses a hybrid regularization of sparsity and low-rank constraints for learning a more robust reconstruction transfer matrix. Reconstruction transfer always expresses the target domain by leveraging the source domain; however, this expression is not accurate due to the limited number of target domain samples available for calculating the reconstruction error loss, and the robustness is therefore decreased. **Generative Transfer**. The proposed MCTL method introduces a generative transfer learning concept, which aims to realize intermediate domain generation by constructing a manifold criterion loss. The motivation is that the domain adaptation problem can be solved by generating a similar domain that shares the same distribution with the true target domain. The essential differences of our work from reconstruction lie in that: (1) domain adaptation is treated as a domain generation problem, instead of a domain alignment problem; (2) the manifold criterion loss is constructed for generation, instead of the least-square based reconstruction error loss. In addition, the GGDM based global domain discrepancy loss and the LRC regularization are also integrated in MCTL for global distribution discrepancy reduction and domain correlation enhancement, simultaneously. **Similarity and Relationship**. Reconstruction transfer and generative transfer are similar and related in three aspects. (1) Both aim at pursuing a domain more similar to the target data by leveraging the source domain data. 
(2) Both are unsupervised transfer learning, which does not need data label information for domain adaptation. (3) Both have similar model formulations and solvers for obtaining the domain correspondence matrix and transformation. ![ Illustration of the proposed Manifold Criterion Guided Transfer Learning (MCTL). (a) represents the source domain $\bm{\mathcal{X}}_{S}$, which is used to generate an intermediate target domain $\bm{\mathcal{X}}_{GT}$ shown in (b), similar to the true target domain $\bm{\mathcal{X}}_{T}$ shown in (c). The intermediate domain generation is carried out by the learned generative matrix $\bm{\mathcal {Z}}$ based on the manifold criterion (MC) in an unsupervised manner. MC interprets the distribution discrepancy, which implies that if the local discrepancy is minimized, distribution consistency is then achieved. Further, a projection matrix $\bm{\mathcal {P}}$ is learned for domain feature embedding. Notably, $\varphi (.)$ is used as the implicit mapping function of the data, which can be kernelized in implementation via inner products.[]{data-label="fig2"}](MC1.pdf){width="1\linewidth"} Manifold Criterion Preliminary ============================== Manifold learning[@baktashmotlagh2014domain],[@Yang2017EPR], as a typical unsupervised learning method, has been widely used. The manifold hypothesis states that an intrinsic geometric low-dimensional structure is embedded in the high-dimensional feature space, and that data with similar affinity structure share similar labels. Note, however, that the manifold hypothesis holds only for data that are independent and identically distributed ($i.i.d.$). Therefore, we attempt to build a manifold criterion to measure the $i.i.d.$ condition (i.e. domain discrepancy minimization) and to guide transfer learning across domains through an intermediate domain. In this paper, the manifold hypothesis is used in the process of domain generation, as shown in Fig.\[fig2\]. 
Essentially different from manifold learning and regularization, we propose a novel manifold criterion (MC) that is utilized as a generative discrepancy metric. In semi-supervised learning (SSL), manifold regularization is often used, but under the $i.i.d.$ condition. Transfer learning differs from SSL in that the domain data do not satisfy the $i.i.d.$ condition. In this paper, it should be pointed out that if the intermediate domain can be generated via the manifold criterion guided objective function, then the distributions of the generated intermediate domain and the true target domain are recognized to be matched. The idea of the manifold criterion is described in Fig.\[fig2\]. A projection matrix $\bm{\mathcal {P}}$ is first learned for common subspace projection, and then a generative transfer matrix $\bm{\mathcal {Z}}$ is learned for intrinsic structure preservation and for minimizing the distribution discrepancy between the true target data and the target data generated from the source domain data. That is, if the generative data has an affinity structure similar to that of the true target domain, i.e. the manifold criterion is satisfied, we can conclude that the generative data shares a similar distribution with the target domain. Notably, different from reconstruction based domain adaptation methods, in this work we tend to generate an intermediate domain by leveraging the source domain, i.e. generative transfer instead of reconstruction transfer. Moreover, Fig.\[fig1\] implies that MC (local) and MMD (global) can be jointly considered in transfer learning models. Frankly, the idea of this paper is intuitive, simple and easy to follow. The key point lies in how to generate the intermediate domain data such that the generated data complies with the manifold assumption originating from the true target domain data. If the manifold criterion is satisfied (i.e. 
$i.i.d.$ is achieved), then domain adaptation or distribution alignment is completed, which is the principle of MCTL. MCTL: Manifold Criterion Guided Transfer Learning ================================================= Notations --------- In this paper, the source and target domains are denoted by the subscripts “$S$” and “$T$”. The training sets of the source and target domains are defined as $\varphi {\bf{(}}{{\bf{X}}_S}{\bf{)}} \in {{\bf{R}}^{m\times {n_S}}}$ and $\varphi {\bf{(}}{{\bf{X}}_T}{\bf{)}} \in {{\bf{R}}^{m\times {n_T}}}$. $\varphi {\bf{(}}{{\bf{X}}_{{\rm{G}}T}}{\bf{)}} \in {{\bf{R}}^{m\times {n_T}}}$ denotes the generative target domain, where $\varphi$ denotes an implicit but generic transformation, $m$ denotes the dimensionality, and $n_{S}$ and $n_T$ denote the number of samples in the source and target domain, respectively. Let ${\bf{X}} = [{{\bf{X}}_S}, {{\bf{X}}_T}]$; then $\varphi {\bf{(}}{{\bf{X}}}{\bf{)}} \in {{\bf{R}}^{m\times n}}$, where ${n = {n_S}+{n_T}}$. Let ${\bm{\mathcal {P}}} \in {{\bf{R}}^{m\times d}}$$(m\geq d)$ be the basis transformation that maps the raw data space from ${\bf{R}}^m$ to a latent subspace ${\bf{R}}^d$. ${\bm{\mathcal{Z}}} \in {{\bf{R}}^{{n_S}\times{n_T}}}$ represents the generative transfer matrix, $\mathbf{I}$ denotes the identity matrix, and $\parallel\bullet\parallel_{F}$ and $\parallel\bullet\parallel_{2}$ denote the Frobenius norm and the $l_{2}$-norm, respectively. The superscript $\mathbf{T}$ denotes the transpose operator and $\mathbf{Tr(\bullet)}$ denotes the matrix trace operator. In RKHS, the kernel Gram matrix $\bm{\mathcal{K}}$ is defined as $\begin{bmatrix}{\bf{K}}\end{bmatrix}_{i,j}=<\varphi(\mathbf{x}_i), \varphi(\mathbf{x}_j)>_\mathcal{H}=\varphi(\mathbf{x}_i)^H\varphi(\mathbf{x}_j)=k(\mathbf{x}_i,\mathbf{x}_j)$, where $k$ is a kernel function. 
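To make the kernelization concrete, the Gram matrix (and the cross-domain Gram matrices used in the sequel) can be computed in a few lines of NumPy. This is a minimal sketch: the RBF kernel, the toy sizes, and the random data are illustrative assumptions, since the paper leaves $\varphi$ and $k$ generic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gram matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2).

    A: m x nA, B: m x nB (samples are columns, as in the paper)."""
    sq = (np.sum(A**2, axis=0)[:, None]
          + np.sum(B**2, axis=0)[None, :]
          - 2.0 * A.T @ B)
    return np.exp(-gamma * sq)

# hypothetical sizes for illustration
m, n_S, n_T = 4, 6, 5
rng = np.random.default_rng(0)
X_S = rng.standard_normal((m, n_S))
X_T = rng.standard_normal((m, n_T))
X = np.hstack([X_S, X_T])              # m x n, with n = n_S + n_T

K   = rbf_kernel(X, X)                 # n x n
K_S = rbf_kernel(X, X_S)               # n x n_S
K_T = rbf_kernel(X, X_T)               # n x n_T
assert np.allclose(K[:, :n_S], K_S)    # K_S is simply the first n_S columns of K
```

Since the columns of $\bf{X}$ are the stacked source and target samples, the cross-domain Gram matrices are just column slices of the full Gram matrix, which is why only one kernel evaluation is really needed in practice.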
In the following sections, let ${\bf{{K}}}{\rm{ = }}\varphi {{\bf{(X)}}^{\bf{T}}}\varphi {\bf{(X)}}$, ${{\bf{K}}_S} = \varphi {{\bf{(X)}}^{\bf{T}}}\varphi ({{\bf{X}}_S})$ and ${{\bf{K}}_T} = \varphi {{\bf{(X)}}^{\bf{T}}}\varphi ({{\bf{X}}_T})$; it follows that ${\bf{K}} \in {{\bf{R}}^{n \times n}}$, ${{\bf{K}}_S} \in {{\bf{R}}^{n \times {n_S}}}$ and ${{\bf{K}}_T} \in {{\bf{R}}^{n \times {n_T}}}$. Problem Formulation ------------------- The proposed MCTL method is presented in Fig.\[fig2\]; what we expect is the same distribution, under a common subspace, between the $\bf{G}$enerated intermediate $\bf{T}$arget domain ($D_{GT}$) and the true $\bf{T}$arget domain ($D_T$). That is, the intermediate target domain is generated to approximate the distribution of the true target domain, with the proposed manifold criterion used as the domain discrepancy metric. Specifically, two generative discrepancy metrics (LGDM vs. GGDM) are proposed for measuring the domain discrepancy locally and globally. Overall, the model is composed of three terms. The $1^{st}$ term is the MC-based LGDM loss, which measures the local domain discrepancy with the manifold criterion by exploiting the locality of the target data. The $2^{nd}$ term is the GGDM loss, which minimizes the global domain discrepancy of marginal distributions between the generated intermediate target domain and the true target domain. The $3^{rd}$ term is the LRC regularization (low-rank constraint), which is imposed to keep the generalization ability of $\bm{\mathcal {Z}}$. The detailed MCTL method is described in the following. ### MC based Local Generative Discrepancy Metric The MC based local generative discrepancy metric (LGDM) loss is used to enhance the distribution consistency between the source and target domains indirectly, by constraining the generative target data with the manifold criterion. 
For convenience, $\varphi {\bf{(}}x_{GT}^p{\bf{)}}$ denotes a sample in $\varphi ({{\bf{X}}_{GT}})$ and $\varphi {\bf{(}}x_T^q{\bf{)}}$ denotes a sample in $\varphi ({{\bf{X}}_T})$. We claim that distribution consistency between $\varphi ({{\bf{X}}_{GT}})$ and $\varphi ({{\bf{X}}_T})$ is achieved, i.e. domain transfer is done, only if the two sets satisfy the following manifold criterion, which can be formulated as $$\begin{aligned} LGDM({D_{GT}},{D_T}) =&\sum\limits_{p,q}^{{n_T}} {{W_{pq}}} \left\| {\varphi {\bf{(}}x_{GT}^p{\bf{)}} - \varphi {\bf{(}}x_T^q{\bf{)}}} \right\|_2^2\\ {\rm{ = }}&\ Tr(\varphi ({{\bf{X}}_{GT}}){\bf{D}}\varphi {({{\bf{X}}_{GT}})^{\bf{T}}}) \\ &+ Tr(\varphi ({{\bf{X}}_T}){\bf{D}}\varphi {({{\bf{X}}_T})^{\bf{T}}})\\ &- 2Tr(\varphi ({{\bf{X}}_{GT}}){\bf{W}}\varphi {({{\bf{X}}_T})^{\bf{T}}})\\ \end{aligned} \label{fun501}$$ where ${\bf{W}} \in {R^{{n_T} \times {n_T}}}$ is the affinity matrix defined as ${{{W}}_{pq}} = \left\{ {\begin{array}{*{20}{c}} {1,\ if\ {x_{GT}^p} \in {NN_k}({x_T^q})\ or \ {x_T^q} \in {NN_k}({x_{GT}^p})}\\ {0, \ otherwise} \end{array}} \right.$ and ${NN_k}({\bf{x}})$ represents the set of $k$ nearest neighbors of sample ${\bf{x}}$. The matrix ${\bf{D}} \in {R^{{n_T} \times {n_T}}}$ is a diagonal matrix with entries ${{{D}}_{pp}} = \sum\limits_q {{{{W}}_{pq}}} $, $p=1,...,n_T$. Writing the projection as ${\bm{\mathcal {P}}^{\bf{T}}}{\bf{ = }}{{\bf{\Phi }}^{\bf{T}}}\varphi {{\bf{(X)}}^{\bf{T}}}$ (detailed in the GGDM subsection), the projected source and target data can be expressed as ${{\bf{\Phi }}^{\bf{T}}}\varphi {{\bf{(X)}}^{\bf{T}}}\varphi {\bf{(}}{{\bf{X}}_S}{\bf{)}}$ and ${{\bf{\Phi }}^{\bf{T}}}\varphi {{\bf{(X)}}^{\bf{T}}}\varphi {\bf{(}}{{\bf{X}}_T}{\bf{)}}$. By substituting $\varphi {\bf{(}}{{\bf{X}}_{GT}}{\bf{)}}=\varphi {\bf{(}}{{\bf{X}}_S}{\bf{){\bm{\mathcal{Z}}}}}$ and the Gram matrices after projection (i.e. 
${{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}$ and ${{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}$) into Eq. (\[fun501\]), the MC based LGDM loss can be further formulated as $$\begin{aligned} &\mathop {\mathop {\min }\limits_{{\bf{\Phi }},{\bm{\mathcal{Z}}}} \frac{1}{{{{({n_T})}^2}}}}\limits_{} Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}})^{\bf{T}}}) \\ &\hspace*{0.3cm}+\frac{1}{{{{({n_T})}^2}}} Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}})\\ &\hspace*{0.3cm}- \frac{2}{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{W}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}) \end{aligned} \label{fun6}$$ Eq.(\[fun6\]) clearly demonstrates the motivation: to achieve local structure consistency (i.e. manifold consistency) between the generative target data and the true target data. The intrinsic difference between Eq.(\[fun6\]) and manifold embedding or regularization is that we aim to produce the $i.i.d.$ assumption with a manifold criterion, while conventional manifold learning relies on this assumption. ### Global Generative Discrepancy Metric Loss In order to reduce the distribution mismatch between the generative target data and the true target data, a generic MMD-style global generative discrepancy metric (GGDM) is proposed, which minimizes the discrepancy $$\begin{aligned} GGDM({D_{GT}},{D_T}) = \frac{1}{{{n_T}}}\left\| \sum\limits_{i = 1}^{{n_T}} {({\varphi {\bf{(X}}_{GT}^{{i}}{\bf{)}} - } {\varphi {\bf{(X}}_T^{{i}}{\bf{))}}} } \right\|_2^2 \end{aligned}$$ where $D_{GT}$ and $D_T$ denote the distributions of the generated target domain and the true target domain, respectively. However, without a common subspace the model cannot transfer knowledge directly, and it is unclear whether a test sample comes from the source or the target domain. 
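As a concrete illustration, both discrepancy metrics can be evaluated directly from their definitions on raw feature vectors, before any kernelization or projection. This is a minimal NumPy sketch with hypothetical toy data; the $k$-NN affinity follows the “or” rule of the definition of $\bf{W}$, and GGDM is computed as the squared norm of the summed difference, matching the projected form derived in the next part.

```python
import numpy as np

def lgdm(X_GT, X_T, k=2):
    """LGDM: affinity-weighted pairwise discrepancy between the generated
    and the true target samples (columns are samples)."""
    n_T = X_T.shape[1]
    # pairwise squared distances d2[p, q] = ||x_GT^p - x_T^q||^2
    d2 = (np.sum(X_GT**2, axis=0)[:, None]
          + np.sum(X_T**2, axis=0)[None, :]
          - 2.0 * X_GT.T @ X_T)
    W = np.zeros((n_T, n_T))
    for p in range(n_T):                       # x_T^q among the k-NN of x_GT^p
        W[p, np.argsort(d2[p])[:k]] = 1.0
    for q in range(n_T):                       # x_GT^p among the k-NN of x_T^q
        W[np.argsort(d2[:, q])[:k], q] = 1.0
    return np.sum(W * d2), W

def ggdm(X_GT, X_T):
    """GGDM: squared norm of the summed difference, divided by n_T."""
    n_T = X_T.shape[1]
    diff = (X_GT - X_T) @ np.ones(n_T)
    return diff @ diff / n_T

rng = np.random.default_rng(1)
X_T  = rng.standard_normal((3, 5))
X_GT = X_T + 0.01 * rng.standard_normal((3, 5))   # near-perfect generation
loss_local, W = lgdm(X_GT, X_T)
loss_global = ggdm(X_GT, X_T)   # small when X_GT is close to X_T
```

Note that LGDM does not vanish even for $\bf{X}_{GT}=\bf{X}_T$, since it is a locality (graph smoothness) energy rather than a pure distance, whereas GGDM is exactly zero in that case.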
We consider finding a latent common subspace for the source and target domains by using a projection matrix $\bm{\mathcal {P}}$. Therefore, by projecting $\varphi ({{\bf{X}}_{GT}})$ and $\varphi ({{\bf{X}}_T})$ into the subspace, the GGDM loss after projection can be formulated as follows. Considering that $\varphi {\bf{(}}{{\bf{X}}_{GT}}{\bf{)}}=\varphi {\bf{(}}{{\bf{X}}_S}{\bf{){\bm{\mathcal{Z}}}}}$ and substituting it into the equation, there is $$\begin{aligned} GGDM({D_{GT}},{D_T}) &= \frac{1}{{{n_T}}}\left\| {{\bm{\mathcal {P}}^{\bf{T}}}\sum\limits_{i = 1}^{{n_T}} {({\varphi {\bf{(X}}_{GT}^{{i}}{\bf{)}} - } {\varphi {\bf{(X}}_T^{{i}}{\bf{))}}} } \right\|_2^2 \\ &=\frac{1}{{{n_T}}}\left\| {{\bm{\mathcal {P}}^{\bf{T}}}(\varphi {\bf{(}}{{\bf{X}}_S}{\bf{){\bm{\mathcal{Z}}}}} - \varphi {\bf{(}}{{\bf{X}}_T}){\bf{)1}}} \right\|_2^2 \end{aligned}$$ where ${\bf{1}}$ represents an all-one column vector. The projection matrix $\bm{\mathcal {P}}$ is a linear transformation, which can be represented as a linear combination of the training data, i.e. ${\bm{\mathcal {P}}^{\bf{T}}}{\bf{ = }}{{\bf{\Phi }}^{\bf{T}}}\varphi {{\bf{(X)}}^{\bf{T}}}$, where $\bf{\Phi }$ denotes the linear combination coefficient matrix. Then the projected source data can be expressed as ${{\bf{\Phi }}^{\bf{T}}}\varphi {{\bf{(X)}}^{\bf{T}}}\varphi {\bf{(}}{{\bf{X}}_S}{\bf{)}}$ and the projected target data as ${{\bf{\Phi }}^{\bf{T}}}\varphi {{\bf{(X)}}^{\bf{T}}}\varphi {\bf{(}}{{\bf{X}}_T}{\bf{)}}$. With the kernel trick, the inner product of the implicit transformation is represented by a Gram matrix, from the raw space to the RKHS. As described in Section 4.1, letting ${{\bf{K}}_S} = \varphi {{\bf{(X)}}^{\bf{T}}}\varphi ({{\bf{X}}_S})$ and ${{\bf{K}}_T} = \varphi {{\bf{(X)}}^{\bf{T}}}\varphi ({{\bf{X}}_T})$, the source and target domains can be expressed simply as ${{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}$ and ${{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}$, respectively. 
Therefore, the GGDM loss is formulated as $$\begin{aligned} &\mathop {\mathop {\min }\limits_{{\bf{\Phi }},{\bm{\mathcal{Z}}}}} \frac{1}{{{n_T}}}\left\| {{{\bf{\Phi }}^{\bf{T}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}-{{\bf{K}}_T}){\bf{1}}} \right\|_2^2\\ \end{aligned} \label{fun5}$$ ### LRC for Domain Correlation Enhancement In domain transfer, the loss functions are designed to relate the generative target data and the true target data. Significantly, the generative target data plays a critical role in the proposed model. In this work, a general transfer matrix $\bm{\mathcal {Z}}$ is used to bridge the source domain data and the generative data (the intermediate result). Since structural consistency between different domains is our goal, it is natural to consider a low-rank structure of $\bm{\mathcal {Z}}$ as a choice for enhancing the domain correlation. In our MCTL, a low-rank constraint (LRC), which is effective in capturing the global structure of data from different domains, is therefore used. The LRC regularization ensures that data from different domains can be well interlaced during domain generation, which is significant for reducing the disparity of domain distributions. Furthermore, if the projected data lie on the same manifold, each sample in the target domain can be represented by its neighbors in the source domain. This requires the generative transfer matrix $\bm{\mathcal {Z}}$ to be approximately block-diagonal. Therefore, LRC regularization is necessary. Considering the non-convexity of the rank function, whose minimization is NP-hard, the nuclear norm $||{\bm{\mathcal{Z}}}|{|_*}$ is used as a rank approximation in this work. ### Complete Model of MCTL Combining the MC based LGDM loss in Eq.(\[fun6\]), the GGDM loss in Eq.(\[fun5\]), and the LRC regularization, the objective function of our MCTL method is finally formulated as follows. 
$$\begin{aligned} &\mathop {\mathop {\min }\limits_{{\bf{\Phi }},{\bm{\mathcal{Z}}}} }\limits_{} \frac{1}{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}})^{\bf{T}}}) \\ &\hspace*{0.3cm}+\frac{1}{{{{({n_T})}^2}}} Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}) \\ &\hspace*{0.3cm}- \frac{2}{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{W}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}})\\ &\hspace*{0.3cm}+ \tau \frac{1}{{{n_T}}}\left\| {{{\bf{\Phi }}^{\bf{T}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}-{{\bf{K}}_T}){\bf{1}}} \right\|_2^2 \\ &\hspace*{0.3cm} + {\lambda _1}||{\bm{\mathcal{Z}}}|{|_*}\\ &s.t.{{\bf{\Phi }}^{\bf{T}}}{\bf{K\Phi }} = {\bf{I}} \end{aligned} \label{fun7}$$ where $\tau$ and ${\lambda _1}$ are trade-off parameters. The columns of $\bm{\mathcal {P}}$ are required to be orthogonal and normalized to unit norm, to prevent trivial solutions, by enforcing ${\bm{\mathcal {P}}^{\bf{T}}}\bm{\mathcal {P}} = {\bf{I}}$, which can be rewritten as the equality constraint ${{\bf{\Phi }}^{\bf{T}}}{\bf{K\Phi }} = {\bf{I}}$. The model is non-convex with respect to the two variables, but can be solved with a variable alternating strategy, for which the optimization algorithm is formulated below. Optimization ------------ There are two variables $\mathbf\Phi$ and $\bm{\mathcal {Z}}$ in the MCTL model (\[fun7\]); therefore an efficient variable alternating optimization strategy is naturally considered, i.e. one variable is solved while the other one is frozen. First, when $\bm{\mathcal {Z}}$ is fixed, a general eigenvalue decomposition is used to solve for $\mathbf\Phi$. Second, when $\mathbf\Phi$ is fixed, the inexact augmented Lagrangian multiplier (IALM) method and gradient descent are used to solve for $\bm{\mathcal {Z}}$. In the following, the optimization details of the proposed method are presented. 
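The alternating strategy just described can be sketched structurally in NumPy. This is a minimal sketch, not the exact algorithm: the matrix in the $\mathbf\Phi$-step and the gradient in the $\bm{\mathcal{Z}}$-step are simplified placeholders for the expressions derived in the following subsections, the $\bf{K}$-orthogonality constraint on $\mathbf\Phi$ is dropped, and all sizes and step parameters are illustrative.

```python
import numpy as np

def mctl_alternating(K_S, K_T, d=2, n_iters=10, mu=1.0, alpha=1e-3):
    """Structural sketch: Phi by eigendecomposition, J by singular value
    thresholding (nuclear-norm prox), Z by a gradient step, plus the
    multiplier update.  Placeholder objective pieces, not the exact ones."""
    n, n_S = K_S.shape
    n_T = K_T.shape[1]
    Z = np.zeros((n_S, n_T))
    J = np.zeros_like(Z)
    R1 = np.zeros_like(Z)
    for _ in range(n_iters):
        # Phi-step: d smallest eigenvectors of a Z-dependent symmetric matrix
        # (placeholder; the K-orthogonality constraint is ignored here)
        A = K_S @ Z @ Z.T @ K_S.T + K_T @ K_T.T
        _, eigvecs = np.linalg.eigh(A)          # ascending eigenvalue order
        Phi = eigvecs[:, :d]
        # J-step: SVT shrinkage of Z + R1/mu (proximal step of the nuclear norm)
        U, s, Vt = np.linalg.svd(Z + R1 / mu, full_matrices=False)
        J = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Z-step: one gradient step on a simplified smooth part (placeholder)
        grad = K_S.T @ Phi @ Phi.T @ (K_S @ Z - K_T @ np.ones((n_T, n_T)) / n_T)
        Z = Z - alpha * (grad + R1 + mu * (Z - J))
        # multiplier update
        R1 = R1 + mu * (Z - J)
    return Phi, Z

# toy usage with linear kernels (illustrative)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 9))
X_S, X_T = X[:, :5], X[:, 5:]
Phi, Z = mctl_alternating(X.T @ X_S, X.T @ X_T, d=2, n_iters=5)
```

The skeleton is only meant to show how the three updates and the multiplier interlock in one loop; the exact eigenproblem, threshold and gradient are given in the step-wise derivations that follow.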
By introducing an auxiliary variable $\bm{\mathcal {J}}$ and applying the augmented Lagrange function[@Lin2011Linearized], the problem (\[fun7\]) can be rewritten as $$\begin{aligned} &\mathop {\mathop {\min }\limits_{{\bf{\Phi }},{\bm{\mathcal{Z}}},{\bm{\mathcal {J}}}} }\limits_{} \frac{1}{{{{({n_T})}^2}}}(Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}})^{\bf{T}}})\\ &\hspace*{0.1cm}+ Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}) - 2Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{W}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}))\\ &\hspace*{0.1cm}+\frac{\tau }{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{({{\bf{K}}_S}{\bm{\mathcal{Z}}})^{\bf{T}}} - {{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}}\\ &\hspace*{0.1cm}- {{\bf{K}}_T}{\bf{1}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}} + {{\bf{K}}_T}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}}){\bf{\Phi }}) + {\lambda _1}||{\bm{\mathcal {J}}}|{|_*}\\ &\hspace*{0.1cm}+ Tr({\bm{\mathcal {R}}}_{\bf{1}}^{\bf{T}}{\bf{({\bm{\mathcal{Z}}} - \bm{\mathcal {J}})}}) + \frac{\mu }{2}{\rm{(||}}{\bf{{\bm{\mathcal{Z}}} - \bm{\mathcal {J}}}}||_F^2) \end{aligned} \label{fun9}$$ where ${\bf{1}}$ here represents an all-one matrix instead of an all-one vector, as the norm in problem (\[fun7\]) has been unfolded. $\bm{\mathcal {R}}_1$ denotes the Lagrange multiplier and $\mu$ is a penalty parameter. In the following, we present step by step how to optimize the three variables $\mathbf\Phi$, $\bm{\mathcal {J}}$, and $\bm{\mathcal{Z}}$ in problem (\[fun9\]), based on eigenvalue decomposition, IALM and gradient descent. 
### Update $\mathbf\Phi$ With $\bm{\mathcal{Z}}$ and $\bm{\mathcal {J}}$ frozen, $\mathbf\Phi$ can be solved as $$\begin{aligned} &{{\bf{\Phi }}^ * } = {\rm{arg }}\mathop {\mathop {\min }\limits_{\bf{\Phi }} }\limits_{} \frac{1}{{{{({n_T})}^2}}}(Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}})^{\bf{T}}}) \\ &\hspace*{0.1cm}+ Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}) - 2Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{W}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}))\\ &\hspace*{0.1cm} + \frac{\tau }{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}} - {{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}}\\ &\hspace*{0.1cm}- {{\bf{K}}_T}{\bf{1}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}} + {{\bf{K}}_T}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}}){\bf{\Phi }})\\ &s.t.{{\bf{\Phi }}^{\bf{T}}}{\bf{K\Phi }} = {\bf{I}}\\ \end{aligned} \label{fun10}$$ We can derive the solution $\mathbf\Phi_{K}$ of the $K^{th}$ iteration column by column. 
To obtain the $i^{th}$ column vector of $\mathbf\Phi_{K}$, setting the partial derivative of problem (\[fun10\]) with respect to $\mathbf\Phi_{K(:,i)}$ to zero gives $$\begin{aligned} &\frac{1}{{{{({n_T})}^2}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}} + {{\bf{K}}_T}{\bf{D}}{({{\bf{K}}_T})^{\bf{T}}} - {{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{W}}{({{\bf{K}}_T})^{\bf{T}}} \\ &- {{\bf{K}}_T}{\bf{W}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}}){{\bf{\Phi }}_{K(:,i)}} + \frac{\tau }{{{{({n_T})}^2}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}}\\ &- {{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}} - {{\bf{K}}_T}{\bf{1}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}} + {{\bf{K}}_T}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}}){{\bf{\Phi }}_{K(:,i)}}\\ &= - \lambda {\bf{K}}{{\bf{\Phi }}_{K(:,i)}} \end{aligned} \label{fun11}$$ It is clear that $\mathbf\Phi_{K}$ can be obtained by solving an eigen-decomposition problem, where $\mathbf\Phi_{K(:,i)}$ is the eigenvector corresponding to the $i^{th}$ smallest eigenvalue. ### Update ${\mathcal{J}}$ With $\mathbf\Phi$ and $\bm{\mathcal{Z}}$ frozen, the problem is solved with respect to $\bm{\mathcal{J}}$. 
After dropping the terms irrelevant to $\bm{\mathcal{J}}$, $\bm{\mathcal{J}}_{K+1}$ in iteration $K+1$ can be solved as $$\begin{aligned} \bm{\mathcal{J}}_{K+1}=&\min_{\bm{\mathcal{J}}_K}\lambda_1\parallel\bm{\mathcal{J}}_{K}\parallel_* +\mathbf{Tr}(\bm{\mathcal{R}}_{1K}^T(\bm{\mathcal{Z}}_{K}-\bm{\mathcal{J}}_{K}))\\ &+\frac{\mu_K}2\parallel\bm{\mathcal{Z}}_{K}-\bm{\mathcal{J}}_{K}\parallel_F^2 \end{aligned} \label{fun12}$$ which can be further rewritten as $$\begin{aligned} \bm{\mathcal{J}}_{K+1}=\min_{\bm{\mathcal{J}}_K}\lambda_1\parallel\bm{\mathcal{J}}_{K}\parallel_* +\frac{\mu_K}2\parallel\bm{\mathcal{J}}_{K}-(\bm{\mathcal{Z}}_{K}+\frac{\bm{\mathcal{R}}_{1K}}{\mu_K})\parallel_F^2 \end{aligned} \label{fun13}$$ Problem (\[fun13\]) can be efficiently solved using the singular value thresholding (SVT) operator[@Cai2008A], which consists of two major steps. First, singular value decomposition (SVD) is performed on the matrix $\bm{\mathcal{S}} = \bm{\mathcal{Z}}_{K}+\frac{\bm{\mathcal{R}}_{1K}}{\mu_K}$, giving $\bm{\mathcal{S}} = \mathbf{U}_S\mathbf{\Sigma}_S\mathbf{V}_S^T$, where $\mathbf{\Sigma}_S = diag(\{{\sigma_i}\}_{1\le i\le r})$, ${\sigma_i}$ is a singular value and $r$ is the rank. Second, the optimal solution $\bm{\mathcal{J}}_{K+1}$ is obtained by thresholding the singular values as $\bm{\mathcal{J}}_{K+1}=\mathbf{U}_S\mathbf{\Omega}_{({\lambda_1}/{\mu_K})}(\mathbf{\Sigma}_S)\mathbf{V}_S^T$, where $\mathbf{\Omega}_{({\lambda_1}/{\mu_K})}(\mathbf{\Sigma}_S)=diag(\{{\sigma_i}-({\lambda_1}/{\mu_K})\}_+)$ and $\{\bullet\}_+$ denotes the positive part operator. ### Update ${\mathcal{Z}}$ With $\mathbf\Phi$ and $\bm{\mathcal{J}}$ frozen, the problem is solved with respect to $\bm{\mathcal{Z}}$. 
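The SVT step in the $\bm{\mathcal{J}}$-update above can be written and sanity-checked in a few lines of NumPy. The shapes and random data below are illustrative; the threshold $\lambda_1/\mu$ is the proximal step implied by Eq. (\[fun13\]), and the check verifies that the SVT output cannot be improved by small perturbations.

```python
import numpy as np

def svt(S, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_* at S."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
mu, lam1 = 2.0, 1.0                                 # hypothetical parameters
Z = rng.standard_normal((5, 4))
R1 = rng.standard_normal((5, 4))
S = Z + R1 / mu
J = svt(S, lam1 / mu)

# J should minimize lam1*||J||_* + (mu/2)*||J - S||_F^2 (Eq. (13)):
# random perturbations of the minimizer can only increase the objective.
def obj(M):
    return lam1 * np.linalg.norm(M, 'nuc') + 0.5 * mu * np.linalg.norm(M - S)**2

for _ in range(20):
    P = J + 0.01 * rng.standard_normal(J.shape)
    assert obj(P) >= obj(J) - 1e-9
```

Because the smooth part of Eq. (\[fun13\]) is strongly convex, the SVT output is the unique global minimizer, which is what the perturbation check probes numerically.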
By dropping the terms independent of $\bm{\mathcal{Z}}$ in (\[fun9\]), there is $$\begin{aligned} &\min_{\bm{\mathcal{Z}}}\frac{1}{{{{({n_T})}^2}}}(Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}})^{\bf{T}}})\\ &\hspace*{0.5cm}- 2Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{W}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T})^{\bf{T}}}))+ Tr({\bm{\mathcal {R}}}_{\bf{1}}^{\bf{T}}({\bm{\mathcal{Z}}} - {\bm{\mathcal {J}}}))\\ &\hspace*{0.5cm}+ \frac{\mu }{2}{\rm{||}}{\bm{\mathcal{Z}}} - {\bm{\mathcal {J}}}||_F^2 + \frac{\tau }{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}}\\ &\hspace*{0.5cm}- {{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}{({{\bf{K}}_T})^{\bf{T}}} - {{\bf{K}}_T}{\bf{1}}{\bm{\mathcal{Z}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}}){\bf{\Phi }}) \end{aligned} \label{fun14}$$ Problem (\[fun14\]) shows that it is hard to obtain a closed-form solution for $\bm{\mathcal{Z}}$. 
Therefore, a general gradient descent step[@Rosasco2009Iterative] is used, and the solution $\bm{\mathcal{Z}}_{K+1}$ in the $(K+1)^{th}$ iteration is given as $$\begin{aligned} \bm{\mathcal{Z}}_{K+1}=\bm{\mathcal{Z}}_{K}-\alpha\cdot\nabla(\bm{\mathcal{Z}}) \end{aligned} \label{fun15}$$ where $\nabla(\bm{\mathcal{Z}})$ denotes the gradient, which is calculated as $$\begin{aligned} &\nabla ({\bm{\mathcal{Z}}}) = \frac{2}{{{{({n_T})}^2}}}({({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{D}} - {({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\bf{W}})\\ &\hspace*{1.0cm} + {\bm{\mathcal {R}}_1} + \mu {\rm{(}}{\bm{\mathcal{Z}}} - {\bm{\mathcal {J}}})+ \frac{{2\tau }}{{{{({n_T})}^2}}}{({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{1}}\\ &\hspace*{1.0cm} - \frac{{2\tau }}{{{{({n_T})}^2}}}{({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\bf{1}} \end{aligned} \label{fun16}$$ In detail, the iterative optimization procedure of the proposed MCTL is summarized in **Algorithm 1**. **Algorithm 1** The Proposed MCTL ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- **Input:** $\bm{\mathcal{X}}_{S}\in\mathcal{R}^{m\times n_{S}}$, $\bm{\mathcal{X}}_{T}\in\mathcal{R}^{m\times n_{T}}$, $\tau$, $\lambda_1$ **Procedure:** 1\. Set $\bm{\mathcal{X}}=[\bm{\mathcal{X}}_S, \bm{\mathcal{X}}_T]$ and compute $\bm{\mathcal{K}}=\varphi(\bm{\mathcal{X}})^T\varphi(\bm{\mathcal{X}})$, $\bm{\mathcal{K}}_S=\varphi(\bm{\mathcal{X}})^T\varphi(\bm{\mathcal{X}}_S)$, $\bm{\mathcal{K}}_T=\varphi(\bm{\mathcal{X}})^T\varphi(\bm{\mathcal{X}}_T)$ 2\. Initialize: $\bm{\mathcal{J}}$=$\bm{\mathcal{Z}}$=$\bf{0}$ 3\. 
**While** not converged **do** 3.1 **Step 1**: Fix $\bm{\mathcal{J}}$ and $\bm{\mathcal{Z}}$, and update $\mathbf{\Phi}$ by solving the eigenvalue decomposition problem (\[fun11\]). 3.2 **Step 2**: Fix $\mathbf{\Phi}$, and update $\bm{\mathcal{Z}}$ using IALM: 3.2.1. Fix $\bm{\mathcal{Z}}$ and update $\bm{\mathcal{J}}$ by applying the singular value thresholding (SVT) [@Cai2008A] operator to problem (\[fun13\]). 3.2.2. Fix $\bm{\mathcal{J}}$ and update $\bm{\mathcal{Z}}$ by the gradient descent step, i.e. Equation (\[fun15\]). 3.3 Update the multiplier $\bm{\mathcal{R}}_1$: $\bm{\mathcal{R}}_1=\bm{\mathcal{R}}_1+\mu(\bm{\mathcal{Z}}-\bm{\mathcal{J}})$ 3.4 Update the parameter $\mu$: $\mu=\min(\mu \times 1.01, max_{\mu})$ 3.5 Check convergence **end while** **Output:** $\mathbf\Phi$ and $\bm{\mathcal{Z}}$. \[Tab\] MCTL-S: Simplified Version of MCTL ================================== MCTL aims to minimize, via the manifold criterion, the distribution discrepancy between the generated target data and the true target data. In this section, considering the generic manifold embedding and for model simplicity, we derive a simplified version of MCTL (MCTL-S for short), as illustrated in Fig.\[figAMCTL\]. Formulation of MCTL-S --------------------- Following Fig.\[figAMCTL\] (right), suppose an extreme case of *perfect* domain generation, that is, the generated target data is strictly the same as the true target data, i.e.
${\textbf{X}_{GT}}={\textbf{X}_T}$ ($D_{GT}$ coincides with $D_{T}$); then MCTL-S is formulated as $$\begin{aligned} &\mathop {\min }\limits_{{\bf{\Phi }},\bm{\mathcal{Z}}} \frac{2}{{{{({n_T})}^2}}}Tr({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bf{\bm{\mathcal{Z}}L}}{({{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}\bm{\mathcal{Z}})^{\bf{T}}})\\ &\hspace*{0.5cm} + \tau \left\| {\frac{1}{{{n_T}}}{{\bf{\Phi }}^{\bf{T}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}-{{\bf{K}}_T}){\bf{1}}} \right\|_2^2 \\ &\hspace*{0.5cm}+ {\lambda _1}||\bm{\mathcal{Z}}|{|_*} \end{aligned} \label{fun17}$$ where ${\bf{L}}{\rm{ = }}{\bf{D - W}}$ is the conventional Laplacian matrix. The objective function (\[fun17\]) consists of three terms: the MC-based LGDM loss, the GGDM loss, and the LRC regularization. From the MC-S loss term in Equation (\[fun17\]), we observe a generic manifold regularization term with the Laplacian matrix. Hence the MC loss degenerates into a conventional manifold constraint under the assumption ${{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}{\rm{ = }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}\bm{\mathcal{Z}}$, which shows that the MCTL-S model is stricter than the MCTL model. The experimental results in Tables \[tab12\] and \[tab13\] also show that both the strict MCTL-S model and MCTL achieve good performance. This demonstrates that manifold criterion based intermediate domain generation is a very effective scheme for transfer learning. ![Difference between MCTL (left) and MCTL-S (right). In MCTL, there is error between the true target domain $D_T$ and the generative target domain $D_{GT}$. In MCTL-S, $D_{GT}$ is assumed to coincide with the true target domain $D_T$.[]{data-label="figAMCTL"}](MCTLS.pdf){width="1\linewidth"} Optimization of MCTL-S ---------------------- MCTL-S shares the same mechanism as MCTL; therefore, its optimization is almost identical to that of MCTL.
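Since MCTL-S reuses the MCTL machinery, the shared outer IALM loop (alternating $\bm{\mathcal{J}}$- and $\bm{\mathcal{Z}}$-updates, then the multiplier and penalty updates of Algorithm 1) can be sketched generically; the update operators below are toy placeholders, not the paper's actual subproblem solvers:

```python
import numpy as np

# Generic sketch (hypothetical operators) of the outer IALM loop shared by
# MCTL and MCTL-S: alternate the J- and Z-updates, then refresh the
# multiplier R1 and the penalty mu, as in Algorithm 1.
def ialm(update_J, update_Z, shape, mu=1e-2, rho=1.01, mu_max=1e6, iters=50):
    Z = np.zeros(shape); J = np.zeros(shape); R1 = np.zeros(shape)
    for _ in range(iters):
        J = update_J(Z, R1, mu)        # e.g. SVT on the nuclear-norm term
        Z = update_Z(J, R1, mu)        # e.g. a gradient descent step
        R1 = R1 + mu * (Z - J)         # multiplier update
        mu = min(mu * rho, mu_max)     # penalty update
    return Z, J

# toy placeholder operators, just to exercise the loop structure
Z, J = ialm(lambda Z, R, m: Z + R / m,
            lambda J, R, m: J - R / m,
            (3, 3))
assert Z.shape == (3, 3)
```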
With two updating steps for $\mathbf\Phi$ and $\bm{\mathcal {Z}}$, the optimization procedure of the MCTL-S method is as follows. $\bullet$ Update $\mathbf\Phi$. In the MCTL-S model, with $\bm{\mathcal {Z}}$ and $\bm{\mathcal {J}}$ fixed, setting the derivative of the objective function (\[fun17\]) w.r.t. $\mathbf\Phi_{K(:,i)}$ to zero yields $$\begin{aligned} &\frac{2}{{{{({n_T})}^2}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{L}}{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}}){{\bf{\Phi }}_{K(:,i)}} + \frac{\tau }{{{{({n_T})}^2}}}({{\bf{K}}_S}{\bm{\mathcal{Z}}}1{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}}\\ &- {{\bf{K}}_S}{\bm{\mathcal{Z}}}1{({{\bf{K}}_T})^{\bf{T}}} - {{\bf{K}}_T}1{{\bm{\mathcal{Z}}}^{\bf{T}}}{({{\bf{K}}_S})^{\bf{T}}} + {{\bf{K}}_T}1{({{\bf{K}}_T})^{\bf{T}}}){{\bf{\Phi }}_{K(:,i)}}\\ &= - \lambda {\bf{K}}{{\bf{\Phi }}_{K(:,i)}} \end{aligned} \label{fun18}$$ Therefore, $\mathbf\Phi_{K}$ in iteration $K$ can be obtained by solving an eigenvalue decomposition problem, and $\mathbf\Phi_{K(:,i)}$ is the $i^{th}$ eigenvector corresponding to the $i^{th}$ smallest eigenvalue. $\bullet$ Update $\bm{\mathcal{J}}$. The variable $\bm{\mathcal{J}}$ can be effectively updated via the singular value thresholding (SVT) operator [@Cai2008A], similarly to problem (\[fun13\]). $\bullet$ Update $\bm{\mathcal{Z}}$. The variable $\bm{\mathcal {Z}}$ can be updated according to Section 4.3.3 using the gradient descent algorithm.
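The SVT operator invoked in the $\bm{\mathcal{J}}$-update above is the proximal map of the nuclear norm; a minimal sketch (the threshold $t$ and test matrix are hypothetical):

```python
import numpy as np

# Minimal sketch: singular value thresholding (SVT), the proximal operator of
# the nuclear norm, prox_{t||.||_*}(M) = U * shrink(S, t) * V^T.
def svt(M, t):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
J = svt(M, 0.5)
# thresholding can only shrink singular values, never enlarge them
s_M = np.linalg.svd(M, compute_uv=False)
s_J = np.linalg.svd(J, compute_uv=False)
assert s_J.max() <= s_M.max() + 1e-9
```

Shrinking small singular values to zero is what drives $\bm{\mathcal{J}}$ toward a low-rank reconstruction matrix.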
The gradient with respect to $\bm{\mathcal{Z}}$ can be expressed as $$\begin{aligned} &\nabla ({\bm{\mathcal{Z}}}) = \frac{4}{{{{({n_T})}^2}}}{({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}{\bf{L}} + {{\bf{R}}_1} + \mu {\rm{(}}{\bf{{\bm{\mathcal{Z}}} - J}})\\ &\hspace*{0.0cm}+ \frac{{2\tau }}{{{{({n_T})}^2}}}({({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_S}{\bm{\mathcal{Z}}}1 -{({{\bf{K}}_S})^{\bf{T}}}{\bf{\Phi }}{{\bf{\Phi }}^{\bf{T}}}{{\bf{K}}_T}1) \end{aligned} \label{fun19}$$ Classification ============== For classification, the projected source and target data can be represented as ${\bm{\mathcal {X}}_S}'=\mathbf\Phi^{\bf{T}}\varphi(\bm{\mathcal {X}})^{\bf{T}}\varphi(\bm{\mathcal {X}}_S)$ and ${\bm{\mathcal {X}}_T}'=\mathbf\Phi^{\bf{T}}\varphi(\bm{\mathcal {X}})^{\bf{T}}\varphi(\bm{\mathcal {X}}_S){\bm{\mathcal{Z}}}$. Then, existing classifiers (e.g., SVM, the least square method[@kanamori2009least], SRC[@wright2009robust]) can be trained on the domain-aligned and augmented training data $[{\bm{\mathcal {X}}_S}',{\bm{\mathcal {X}}_T}']$ with label $\bm{\mathcal {Y}}=[{\bm{\mathcal {Y}}_S},{\bm{\mathcal {Y}}_T}]$, following the experimental setting of LSDT[@zhang2016lsdt]. Notably, for the COIL-20, MSRC and VOC2007 experiments, in order to follow the same experimental setting as DTSL[@Xu2015], the classifier is trained only on ${\bm{\mathcal {X}}_S}'$ with label ${\bm{\mathcal {Y}}_S}$. Finally, the unlabeled target test data, i.e. ${\bm{\mathcal {X}}_{Tu}}'=\mathbf\Phi^{\bf{T}}\varphi(\bm{\mathcal {X}})^{\bf{T}}\varphi(\bm{\mathcal {X}}_{Tu})$, are classified, and the recognition accuracies are reported and compared.
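As a minimal sketch of this classification stage (features and labels are hypothetical random data; a ridge-regularized least-squares classifier stands in for the least square method cited above):

```python
import numpy as np

# Minimal sketch (hypothetical data): train a least-squares classifier on the
# projected, domain-aligned features [Xs', Xt'] and predict class labels.
rng = np.random.default_rng(3)
Xs = rng.standard_normal((20, 40))   # projected source features, d x n_s
Xt = rng.standard_normal((20, 6))    # projected target training features, d x n_t
ys = rng.integers(0, 3, 40)          # 3 hypothetical classes
yt = rng.integers(0, 3, 6)

Xtr = np.hstack([Xs, Xt])                       # augmented training data
Y = np.eye(3)[np.concatenate([ys, yt])]         # one-hot labels, n x c

# ridge-regularized least squares: W = (X X^T + lam I)^{-1} X Y
lam = 1e-2
W = np.linalg.solve(Xtr @ Xtr.T + lam * np.eye(20), Xtr @ Y)

pred = np.argmax(W.T @ Xtr, axis=0)             # predicted class indices
assert pred.shape == (46,)
```

Unlabeled target test data would be projected the same way and passed through `W.T @ X` for prediction.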
![Some images from 4DA datasets[]{data-label="fig3"}](C_4.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](C_6.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](C_8.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](C_10.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](C_12.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](C_14.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](A_frame_0003.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](A_frame_0004.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](A_frame_0008.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](A_frame_0010.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](A_frame_0012.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](A_frame_0014.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](D_frame_0004.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](D_frame_0006.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](D_frame_0008.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](D_frame_0010.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](D_frame_0012.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA 
datasets[]{data-label="fig3"}](D_frame_0014.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](W_frame_0004.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](W_frame_0006.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](W_frame_0009.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](W_frame_0011.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](W_frame_0012.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} ![Some images from 4DA datasets[]{data-label="fig3"}](W_frame_0014.jpg "fig:"){width="0.3\linewidth" height="1.3cm"} Experiments =========== In this section, experiments on several benchmark datasets[@gaidon2014self] are conducted to evaluate the proposed MCTL method, including (1) cross-domain object recognition[@gong2014learning],[@liu2010hierarchical]: 4DA office data, 4DA-CNN office data, COIL-20 data, and MSRC-VOC 2007 datasets [@long2013transfer]; (2) cross-pose face recognition: Multi-PIE face dataset; (3) cross-domain handwritten digit recognition: USPS, SEMEION and MNIST datasets. Several related transfer learning methods based on feature transformation and reconstruction, such as SGF[@Gopalan2011Domain], GFK[@Gong2012Geodesic], SA[@Fernando2014Unsupervised], LTSL[@shao2014generalized], DTSL[@Xu2015], and LSDT[@zhang2016lsdt], are compared and discussed. Cross-domain Object Recognition ------------------------------- For cross-domain object/image recognition, 5 benchmark datasets are used; sample images from the 4DA office dataset are shown in Fig. \[fig3\], from the COIL-20 object dataset in Fig. \[fig4\], and from the MSRC and VOC 2007 datasets in Fig. \[fig5\].
**Results on 4DA Office dataset (Amazon, DSLR, Webcam[^2] and Caltech 256[^3])**[@Gong2012Geodesic]: Four domains such as Amazon (A), DSLR (D), Webcam (W), and Caltech (C) are included in 4DA dataset, which contains 10 object classes. In our experiment, the configuration is followed in[@Gong2012Geodesic] where 20 samples per class are selected from Amazon, 8 samples per class from DSLR, Webcam and Caltech when they are used as source domains; 3 samples per class are chosen when they are used as target training data, while the rest data in target domains are used for testing. Note that the 800-bin SURF features [@Gong2012Geodesic],[@saenko2010eccv] are extracted. ![ Comparison with deep transfer learning methods[]{data-label="figzhu"}](zhuzhuangtu.pdf){width="1\linewidth"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj1__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj2__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj3__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj4__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj5__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj6__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj7__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj8__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj9__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj10__0.png 
"fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj11__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj12__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj13__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj14__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj15__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj16__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj17__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj18__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj19__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} ![Some examples from COIL-20 dataset[]{data-label="fig4"}](COIL20_obj20__0.png "fig:"){width="0.16\linewidth" height="1.3cm"} [ | c | c | c | c | c | c |c | c |c |c | c |c |]{} 4DA Tasks&[Naive Comb]{}& HFA[@duan2012learning]&ARC-t[@kulis2011you]&MMDT[@Hoffman2014Asymmetric]&SGF[@Gopalan2011Domain]&GFK[@Gong2012Geodesic]&SA[@Fernando2014Unsupervised]& ------ LTSL -PCA ------ [@shao2014generalized]& ------ LTSL -LDA ------ [@shao2014generalized]&LSDT[@zhang2016lsdt]&$\bf{MCTL}$\ $A \to D$&$55.9$&$52.7$&$50.2$&$56.7$&$46.9$&$50.9$&$55.1$&$50.4$&$\bf{59.1}$&$52.9$&$56.1$\ $C \to D$&$55.8$&$51.9$&$50.6$&$56.5$&$50.2$&$55.0$&$56.6$&$49.5$&$\bf{59.6}$&$56.0$&$57.3$\ $W \to D$&$55.1$&$51.7$&$71.3$&$67.0$&$78.6$&$75.0$&$82.3$&$\bf{82.6}$&$\bf{82.6}$&$75.7$&$73.4$\ $A \to 
C$&$32.0$&$31.1$&$37.0$&$36.4$&$37.5$&$39.6$&$38.4$&$41.5$&$39.8$&$42.2$&$\bf{43.0}$\ $W \to C$&$30.4$&$29.4$&$31.9$&$32.2$&$32.9$&$32.8$&$34.1$&$36.7$&$\bf{38.5}$&$36.9$&$37.5$\ $D \to C$&$31.7$&$31.0$&$33.5$&$34.1$&$32.9$&$33.9$&$35.8$&$36.2$&$36.7$&$37.6$&$\bf{37.8}$\ $D \to A$&$45.7$&$45.8$&$42.5$&$46.9$&$44.9$&$46.2$&$45.8$&$45.7$&$\bf{47.4}$&$46.6$&$47.0$\ $W \to A$&$45.6$&$45.9$&$43.4$&$47.7$&$43.0$&$46.2$&$44.8$&$41.9$&$47.8$&$46.6$&$\bf{48.8}$\ $C \to A$&$45.3$&$45.5$&$44.1$&$49.4$&$42.0$&$46.1$&$45.3$&$49.3$&$\bf{50.4}$&$47.7$&$42.8$\ $C \to W$&$60.3$&$60.5$&$55.9$&$\bf{63.8}$&$54.2$&$57.0$&$60.7$&$50.4$&$59.5$&$57.6$&$59.6$\ $D \to W$&$62.1$&$62.1$&$78.3$&$74.1$&$78.6$&$80.2$&$\bf{84.8}$&$81.0$&$78.3$&$83.1$&$82.1$\ $A \to W$&$62.4$&$61.8$&$55.7$&$\bf{64.6}$&$54.2$&$56.9$&$60.3$&$52.3$&$59.5$&$57.2$&$55.7$\ $Average$&$48.5$&$47.4$&$49.5$&$52.5$&$49.7$&$51.6$&$53.7$&$51.5$&$\bf{54.9}$&$53.3$&$54.0$\ \[tab2\] 4DA-CNN Tasks(f7) [SourceOnly]{} [Naive Comb]{} SGF[@Gopalan2011Domain] TCA GFK[@Gong2012Geodesic] LTSL[@shao2014generalized] LSDT[@zhang2016lsdt] $\bf{MCTL}$ ------------------- ---------------- ---------------- ------------------------- ------------- ------------------------ ---------------------------- ---------------------- ------------- $A \to D$ $81.3$ $94.1$ $92.0$ $82.8$ $94.3$ $94.5$ $\bf{96.0}$ $95.9$ $C \to D$ $77.6$ $92.8$ $92.4$ $87.9$ $91.9$ $93.5$ $94.6$ $\bf{94.8}$ $W \to D$ $96.2$ $98.9$ $97.6$ $\bf{99.4}$ $98.5$ $98.8$ $99.3$ $99.3$ $A \to C$ $79.3$ $83.4$ $77.4$ $81.2$ $79.1$ $85.4$ $87.0$ $\bf{87.1}$ $W \to C$ $68.1$ $81.2$ $76.8$ $75.5$ $76.1$ $82.6$ $84.2$ $\bf{84.7}$ $D \to C$ $74.3$ $82.7$ $78.2$ $79.6$ $77.5$ $84.8$ $86.2$ $\bf{86.4}$ $D \to A$ $81.8$ $90.9$ $88.0$ $90.4$ $90.1$ $91.9$ $92.5$ $\bf{92.7}$ $W \to A$ $73.4$ $90.6$ $86.8$ $85.6$ $85.6$ $91.0$ $91.7$ $\bf{92.1}$ $C \to A$ $86.5$ $90.3$ $89.3$ $92.1$ $88.4$ $90.9$ $92.5$ $\bf{92.7}$ $C \to W$ $67.8$ $90.6$ $87.8$ $88.1$ $86.4$ $90.8$ $\bf{93.5}$ $93.1$ $D \to W$ 
$95.1$ $98.0$ $95.7$ $96.9$ $96.5$ $97.8$ $98.3$ $\bf{98.5}$ $A \to W$ $71.6$ $91.1$ $88.1$ $84.4$ $88.6$ $91.5$ $\bf{92.9}$ $92.8$ $Average$ $79.4$ $90.4$ $87.5$ $87.0$ $87.8$ $91.1$ $92.4$ $\bf{92.5}$ \[tab3\] Tasks SVM TSL RDALR[@Chang2013Robust] DTSL[@Xu2015] LTSL[@shao2014generalized] LSDT[@zhang2016lsdt] $\bf{MCTL}$ -------------- -------- -------- ------------------------- --------------- ---------------------------- ---------------------- ------------- $ C1 \to C2$ $82.7$ $80.0$ $80.7$ $84.6$ $75.4$ $81.7$ $\bf{84.8}$ $ C2 \to C1$ $84.0$ $75.6$ $78.8$ $\bf{84.2}$ $72.2$ $81.5$ $83.7$ $Average$ $83.3$ $77.8$ $79.7$ $\bf{84.4}$ $73.8$ $81.6$ $84.3$ \[tab5\] ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](MSRC_103_0366.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](MSRC_104_0404.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](MSRC_106_0678.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](MSRC_108_0893.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](MSRC_164_6500.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](MSRC_165_6531.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](VOC2007_184_8411.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](VOC2007_104_0415.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](VOC2007_108_0858.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](VOC2007_109_0932.JPG "fig:"){width="0.3\linewidth"} ![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](VOC2007_101_0119.JPG "fig:"){width="0.3\linewidth"} 
![Some examples from MSRC and VOC 2007 datasets[]{data-label="fig5"}](VOC2007_166_6674.JPG "fig:"){width="0.3\linewidth"} The recognition accuracies are reported in Table \[tab2\], from which we observe that the proposed MCTL ranks second ($54\%$) on average, slightly inferior to LTSL-LDA ($54.9\%$). The reason may be that the discrimination of LDA helps improve the performance, since LTSL-PCA achieves only $51.5\%$; our MCTL also outperforms the other methods. Notably, the 4DA task is a challenging benchmark that attracts many competitive approaches for evaluation and comparison, so strong baselines have already been established. **Results on 4DA-CNN dataset (Amazon, DSLR, Webcam and Caltech 256)**[@Krizhevsky2012ImageNet],[@saenko2010eccv]: In the 4DA-CNN dataset, the CNN features are extracted by feeding the raw 4DA data (10 object classes) into a convolutional neural network (AlexNet, with 5 convolutional layers and 3 fully connected layers) well trained on ImageNet [@Krizhevsky2012ImageNet]. The features from the $6^{th}$ and $7^{th}$ layers (i.e. DeCAF [@donahue2014decaf]) are explored; the feature dimensionality is 4096. In the experiments, the standard configuration and protocol of [@Gong2012Geodesic] is used, and the features of the $7^{th}$ layer are evaluated. The recognition accuracies using the $7^{th}$-layer outputs for the 12 cross-domain tasks are shown in Table \[tab3\], from which we observe that the proposed method achieves the best average recognition accuracy. The superiority of generative transfer learning is thus demonstrated. Our MCTL outperforms LTSL-LDA here, possibly because the CNN features are already discriminative, so discriminative learning brings little further benefit. The methods compared in Table \[tab3\] are shallow transfer learning methods.
It is interesting to compare with deep transfer learning methods, such as AlexNet[@Krizhevsky2012ImageNet], DDC[@tzeng2015simultaneous], DAN[@long2015learning] and RTN[@long2016unsupervised]. The comparison is described in Fig.\[figzhu\], from which we can observe that our proposed method ranks the second in average performance ($92.5\%$), which is inferior to the residual transfer network (RTN), but still better than other three deep transfer learning models. The comparison shows that the proposed MCTL, as a shallow transfer learning method, has a good competitiveness. Tasks SVM TSL RDALR[@Chang2013Robust] DTSL[@Xu2015] LTSL[@shao2014generalized] LSDT[@zhang2016lsdt] $\bf{MCTL}$ ------------ -------- -------- ------------------------- --------------- ---------------------------- ---------------------- ------------- $ M \to V$ $37.1$ $32.4$ $37.5$ $38.0$ $38.0$ $\bf{47.4}$ $\bf{47.4}$ $ V \to M$ $55.5$ $43.2$ $62.3$ $56.4$ $\bf{67.1}$ $63.9$ $64.8$ $Average$ $46.3$ $37.8$ $49.9$ $47.2$ $52.6$ $55.6$ $\bf{56.1}$ \[tab6\] ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_041_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_050_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_051_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_080_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_130_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_140_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU 
Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_01_190_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_041_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_050_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_051_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_080_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_130_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_140_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![Facial images of one person from CMU Multi-PIE[]{data-label="fig6"}](PIEdata_013_01_02_190_08.png "fig:"){width="0.12\linewidth" height="1.2cm"} ![ Some images from handwritten digits datasets[]{data-label="fig7"}](handwritten.pdf){width="0.8\linewidth"} **Results on COIL-20 dataset[^4]: Columbia Object Image Library[@Rate2011Columbia]**: The COIL-20 dataset contains 20 objects with 1440 gray-scale images (72 multi-pose images per object). Each image is $128\times 128$ with 256 gray levels. In the experiments, following the experimental protocol in [@Xu2015], each image is cropped to $32\times 32$ and the dataset is divided into two subsets C1 and C2, each covering two quadrants. Specifically, the C1 set contains the directions \[$0^\circ$, $85^\circ$\] and \[$180^\circ$, $265^\circ$\] from quadrants 1 and 3, and the C2 set contains the directions \[$90^\circ$, $175^\circ$\] and \[$270^\circ$, $355^\circ$\] from quadrants 2 and 4.
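The quadrant split described above can be sketched directly from the pose angles (COIL-20 poses are taken in $5^\circ$ steps):

```python
# Minimal sketch: splitting the 72 COIL-20 poses (5-degree steps) into the
# C1 (quadrants 1 and 3) and C2 (quadrants 2 and 4) subsets described above.
angles = list(range(0, 360, 5))                              # 72 poses
C1 = [a for a in angles if 0 <= a <= 85 or 180 <= a <= 265]
C2 = [a for a in angles if 90 <= a <= 175 or 270 <= a <= 355]

# the split is balanced and disjoint: 36 poses per subset
assert len(C1) == len(C2) == 36
assert not set(C1) & set(C2)
```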
The two subsets differ in distribution but are semantically related, thus forming a DA problem. By taking C1 and C2 as source and target domain alternately, the cross-domain recognition rates of different methods are shown in Table \[tab5\], from which we see that the proposed MCTL ($84.3\%$) is slightly inferior to DTSL ($84.4\%$), but superior to the other related methods, especially the recent LSDT method ($81.6\%$). **Results on MSRC[^5] and VOC 2007[^6] datasets:[@Xu2015]**: The MSRC dataset contains 4323 images with 18 classes and the VOC 2007 dataset contains 5011 images with 20 concepts. The two datasets share 6 semantic classes: airplane, bicycle, bird, car, cow and sheep. We follow [@long2014transfer] to construct a cross-domain image dataset MSRC vs. VOC ($M\to V$) by selecting 1269 images from MSRC as the source domain and 1530 images from VOC 2007 as the target domain. Then we switch the two datasets: VOC vs. MSRC ($V\to M$). All images are uniformly rescaled to 256 pixels, and 128-dimensional dense SIFT (DSIFT) features are extracted using the VLFeat open-source package. Then $K$-means clustering is used to obtain a 240-dimensional codebook. In this way, the source and target domain data are constructed to share the same label set. The experimental results of different domain adaptation methods are shown in Table \[tab6\], from which we observe that our method is $0.5\%$ higher than the state-of-the-art LSDT method and $3.5\%$ higher than the LTSL method in average cross-domain recognition performance. Cross-pose Face Recognition ---------------------------- It is known that 3D pose change in faces is a nonlinear transfer problem, and general recognition models are very sensitive to pose change. Therefore, pose-based face recognition is challenging to handle. In this section, the popular CMU Multi-PIE face dataset[^7] with 337 subjects is used.
Each subject contains 4 different sessions with 15 poses, 20 illuminations, and 6 expressions. The facial images in Session 1 and Session 2 of one person are shown in Fig. \[fig6\]. In our experiment, we select the first 60 subjects from Session 1 and Session 2. As a result, a smaller session 1 (S1), with 7 images of different poses per class under neutral expression, and a smaller session 2 (S2), similar to S1 but under smile expression, are constructed as domain data. Notably, the raw image pixels are used as features. Specifically, the experimental configurations are set as follows. **S1**: One frontal face ($0^\circ$) per subject is used as source data, one $60^\circ$ posed face is used as the target training data, and the remaining 5 facial images are used as the target test data. **S2**: The experimental configuration is the same as S1. **S1+S2**: The two frontal faces ($0^\circ$) and the two $60^\circ$ posed faces under neutral and smile expression are used as source data and target training data in the two sessions, respectively. The remaining 10 facial images are used as target test data. **S1 $\to$ S2**: S1 is used as source data, the frontal and $60^\circ$ posed faces in S2 are used as the target training data, and the remaining data in S2 are used as test data. With the above settings, the recognition accuracies of different methods are shown in Table \[tab7\]. It is clear that the proposed method performs significantly better, exceeding the other DA methods by $5\%$ in handling such pose-variation-based nonlinear transfer problems. This also demonstrates that the proposed intermediate domain generation based transfer learning better exploits the local generative discrepancy metric (LGDM) and improves the nonlinear local transfer problem. The manifold criterion is thus validated.
Tasks [Naive Comb]{} [A-SVM]{} SGF[@Gopalan2011Domain] GFK[@Gong2012Geodesic] SA[@Fernando2014Unsupervised] [LTSL[@shao2014generalized]]{} LSDT[@zhang2016lsdt] $\bf{MCTL}$ -------------------------------- ---------------- ----------- ------------------------- ------------------------ ------------------------------- -------------------------------- ---------------------- ------------- $S1$ ($0^\circ \to 60^\circ$) $61.0$ $57.0$ $53.7$ $61.0$ $51.3$ $56.0$ $59.7$ $\bf{65.3}$ $S2$ ($0^\circ\to60^\circ$) $62.7$ $62.7$ $55.0$ $58.7$ $62.7$ $62.7$ $63.3$ $\bf{70.0}$ $S1+S2$ ($0^\circ\to60^\circ$) $60.2$ $60.1$ $53.8$ $56.3$ $61.7$ $60.2$ $61.7$ $\bf{68.3}$ $S1\to S2$ $93.6$ $94.3$ $92.5$ $96.7$ $98.3$ $97.2$ $95.8$ $\bf{98.7}$ $Average$ $69.4$ $68.5$ $63.8$ $67.0$ $68.5$ $70.3$ $70.1$ $\bf{75.6}$ \[tab7\] Tasks [Naive Comb]{} [A-SVM]{} SGF[@Gopalan2011Domain] GFK[@Gong2012Geodesic] SA[@Fernando2014Unsupervised] LTSL[@shao2014generalized] LSDT[@zhang2016lsdt] $\bf{MCTL}$ ----------- ---------------- ----------- ------------------------- ------------------------ ------------------------------- ---------------------------- ---------------------- ------------- $M \to U$ $78.8$ $78.3$ $79.2$ $82.6$ $78.8$ $83.2$ $79.3$ $\bf{87.8}$ $S \to U$ $83.6$ $76.8$ $77.5$ $82.7$ $82.5$ $83.6$ $84.7$ $\bf{84.8}$ $M \to S$ $51.9$ $70.5$ $51.6$ $70.5$ $\bf{74.4}$ $72.8$ $69.1$ $74.0$ $U \to S$ $65.3$ $74.5$ $70.9$ $76.7$ $74.6$ $65.3$ $67.4$ $\bf{83.0}$ $U \to M$ $71.7$ $73.2$ $71.1$ $74.9$ $72.9$ $71.7$ $70.5$ $\bf{81.2}$ $S \to M$ $67.6$ $69.3$ $66.9$ $74.5$ $72.9$ $67.6$ $70.0$ $\bf{74.0}$ $Average$ $69.8$ $73.8$ $69.5$ $77.0$ $76.0$ $74.0$ $73.5$ $\bf{80.8}$ \[tab8\] All Transfer Tasks LTSL[@shao2014generalized] LSDT[@zhang2016lsdt] MCTL -------------------- ---------------------------- ---------------------- --------- $Average~($%$)$ $69.45$ $71.08$ $73.88$ : Average performance of all transfer tasks \[tab15\] Cross-domain Handwritten Digits Recognition 
------------------------------------------- Three handwritten digits datasets, MNIST (M)[^8], USPS (U)[^9] and SEMEION (S)[^10], covering the 10 digit classes $0\sim9$, are used for evaluating the proposed MCTL. The MNIST dataset consists of 70,000 instances of $28\times28$, the USPS dataset consists of 9298 examples of $16\times16$, and the SEMEION dataset consists of 2593 images of $16\times16$. The MNIST dataset is cropped to $16\times16$. Several images from the three datasets are shown in Fig. \[fig7\]. Each dataset is used as source and target domain alternately, and 6 cross-domain tasks are explored. Also, 100 samples per class from the source domain and 10 samples per class from the target domain are randomly selected for training. 5 random splits are used, and the average classification accuracies are reported in Table \[tab8\]. From the results, we observe that our MCTL outperforms the other state-of-the-art methods by $3\%$, demonstrating its significant superiority. From the whole set of experiments on 4DA, 4DA-CNN, COIL-20, MSRC and VOC2007, Multi-PIE, and handwritten digits, we can see that the proposed MCTL shows competitive performance. Although MCTL shows only slight improvements on several tasks compared with the state-of-the-art methods, its comprehensive superiority across all datasets is clearly demonstrated in Table \[tab15\], which shows the mean accuracy over all the cross-domain tasks in the datasets. From the results, we observe that our MCTL outperforms the state-of-the-art LTSL and LSDT by about 2.8% in average performance over all the transfer tasks explored in this paper. Discussion ========== Analysis of MCTL-S ------------------ When the condition ${\textbf{X}_{GT}}={\textbf{X}_T}$ is strictly satisfied, i.e. perfect domain generation, our model degenerates into the MCTL-S model, which can be simply formulated as problem (\[fun17\]).
The MC-S loss resembles a generic manifold regularization, built under an ideal condition and focusing on the locality structure. In this case, domain generation relies more on the local manifold, regardless of the global property. Therefore, the performance of MCTL-S, with its ideal and perfect condition, degrades when a global shift of the domain data is encountered. The GGDM loss, which measures the global structure, can be an effective relaxation. The experimental comparisons between MCTL and MCTL-S on the 4DACNN dataset are presented in Table \[tab12\], and the comparisons on the COIL-20 dataset are shown in Table \[tab13\]. From the results, we observe that the proposed MCTL and the strict MCTL-S achieve similar performance. This demonstrates that domain generation is a feasible way for unsupervised domain transfer learning. It is also encouraging for us to use deep generative methods (e.g., GANs) for transfer learning in the future. A potential problem of GANs is that similar high-level semantic information may be generated across domains while the distributions remain inconsistent. Parameter Setting and Ablation Analysis --------------------------------------- In our method, the trade-off coefficients $\tau$ and ${\lambda _1}$ are fixed as 1 in the experiments. The dimension of the common subspace is set as $d=n$. The Gaussian kernel function ${k}(\mathbf{x}_i,\mathbf{x}_j)=\exp(-\parallel\mathbf{x}_i-\mathbf{x}_j\parallel^2/2\sigma^2)$ is used, where $\sigma$ can be tuned for different tasks, e.g. $\sigma=1.2$ for 4DA-CNN and $\sigma=0.8$ for COIL-20. However, the linear kernel function is adopted for the discussion, as it effectively avoids the influence of the kernel parameter. The least square classifier[@kanamori2009least] is used in the DA experiments, except in the COIL-20 experiment, where the SVM classifier is used for its good performance.
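The Gaussian kernel above is straightforward to form as a matrix; a minimal sketch (data and dimensions hypothetical, $\sigma=1.2$ as used for 4DA-CNN):

```python
import numpy as np

# Minimal sketch: Gaussian kernel k(xi, xj) = exp(-||xi - xj||^2 / (2 sigma^2))
# as a full kernel matrix between two sets of column-vector samples.
def gaussian_kernel(X, Y, sigma):
    # X: d x n1, Y: d x n2  ->  n1 x n2 kernel matrix
    sq = (X**2).sum(0)[:, None] + (Y**2).sum(0)[None, :] - 2 * X.T @ Y
    return np.exp(-np.maximum(sq, 0) / (2 * sigma**2))

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 7))          # 7 samples of dimension 5
K = gaussian_kernel(X, X, 1.2)
assert np.allclose(K, K.T)               # symmetric
assert np.allclose(np.diag(K), 1.0)      # unit self-similarity
```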
  4DA-CNN Tasks   MCTL      MCTL-S
  --------------- --------- ---------
  $A \to D$       $95.67$   $95.71$
  $C \to D$       $94.69$   $94.72$
  $W \to D$       $99.25$   $99.29$
  $A \to C$       $87.11$   $87.05$
  $W \to C$       $84.73$   $84.74$
  $D \to C$       $86.37$   $86.34$
  $D \to A$       $92.66$   $92.65$
  $W \to A$       $92.06$   $92.07$
  $C \to A$       $92.68$   $92.06$
  $C \to W$       $93.08$   $93.04$
  $D \to W$       $98.49$   $98.51$
  $A \to W$       $92.79$   $92.83$
  $Average$       $92.47$   $92.47$

  : Recognition accuracy ($\%$) on the 4DA-CNN dataset \[tab12\]

  COIL-20        MCTL      MCTL-S
  -------------- --------- ---------
  $C1 \to C2$    $84.83$   $85.00$
  $C2 \to C1$    $83.67$   $83.67$
  Average        $84.25$   $84.34$

  : Recognition accuracy ($\%$) on COIL-20 \[tab13\]

In the MCTL model, three terms are included: the MMD-based GGDM term, the MC-based LGDM term, and the LRC regularization term. To better interpret the effect of each term, an ablation analysis is conducted by removing one term at a time. Therefore, some extra experiments on the COIL-20 object recognition task (i.e., $C1 \to C2$), the handwritten digits recognition task (i.e., $M \to U$) and the MSRC-VOC 2007 image recognition task (i.e., $V \to M$) are studied for ablation analysis. The experimental results are shown in Table \[tab14\]. We can observe that the LGDM loss plays a more important role than the GGDM loss, with a $2.4\%$ improvement on average. This is reasonable because, in many real cross-domain tasks, global transfer may result in negative transfer, due to the local bias problem of domain discrepancy. This further demonstrates the superiority and validity of the proposed MCTL, because a local discrepancy metric is desirable for transfer learning.

Model Dimensionality and Parameter Analysis
-------------------------------------------

**Dimensionality Analysis**. In the MCTL model, a latent common subspace $\bm{\mathcal {P}}$ is learned. Therefore, the performance variation with different subspace dimensions is studied on the COIL-20 ($C1 \to C2$ and $C2 \to C1$) and CMU Multi-PIE face datasets, including the $S1$, $S2$, and $S1+S2$ tasks.
The performance curve with an increasing number of dimensions $d$ is shown in Fig. \[fig9\] (a) and (b). Generally, the recognition performance improves as the number of dimensions increases.

**Parameter Sensitivity Analysis**. In the MCTL model, there are two trade-off parameters, $\tau$ and ${\lambda _1}$, involved in parameter tuning. To gain insight into their sensitivity, a parameter sensitivity analysis is conducted on the COIL-20 ($C1\to C2$ and $C2\to C1$) tasks by tuning each parameter over $\{0,1,10,100,1000\}$. Fig. \[fig9\] (c) shows the parameter analysis of $\lambda _1$ with ${\tau}=1$ fixed. Fig. \[fig9\] (d) shows the parameter analysis of $\tau$ with ${\lambda _1}=1$ fixed. For tuning the two parameters simultaneously, we have also provided the 3D surfaces on the COIL-20 dataset in Fig. \[fig14\] (a) ($C1\to C2$) and Fig. \[fig14\] (b) ($C2\to C1$). We can see that the model is robust to its parameters, without serious fluctuation.

  Tasks          MCTL          no LGDM   no LRC   no GGDM
  -------------- ------------- --------- -------- -------------
  $C1 \to C2$    $\bf{77.0}$   $73.0$    $76.7$   $76.8$
  $M \to U$      $71.0$        $70.0$    $67.0$   $\bf{73.0}$
  $V \to M$      $70.2$        $70.1$    $70.1$   $\bf{70.3}$
  $Average$      $72.7$        $71.0$    $71.2$   $\bf{73.4}$

  : Results of ablation analysis \[tab14\]

![Dimensionality and parameter sensitivity analysis[]{data-label="fig9"}](jiangwei.pdf){width="1\linewidth"}

  Tasks         SGF[@Gopalan2011Domain]   GFK[@Gong2012Geodesic]   SA[@Fernando2014Unsupervised]   LTSL[@shao2014generalized]   $\bf{MCTL}$
  ------------- ------------------------- ------------------------ ------------------------------- ---------------------------- --------------------
  $S1 \to S2$   $10.9s$ ($92.5\%$)        $1.5s$ ($96.7\%$)        $4.18s$ ($98.3\%$)              $7.21s$ ($97.2\%$)           $7.62s$ ($97.3\%$)
  $M \to U$     $75s$ ($79.2\%$)          $12.2s$ ($82.6\%$)       $30.5s$ ($78.8\%$)              $62.1s$ ($83.2\%$)           $98.8s$ ($87.8\%$)

  : Computational time comparisons (recognition accuracy in parentheses) \[tab11\]

Computational Complexity and Time Analysis
------------------------------------------

In this section,
the computational complexity of Algorithm 1 is presented. The algorithm includes three basic steps: update $\bm{\mathcal {Z}}$, update $\bm{\mathcal{J}}$, and update $\mathbf\Phi$. The computation of $\mathbf\Phi$ involves eigen-decomposition and matrix multiplication, and its complexity is $O(n^3)$. The computation of updating $\bm{\mathcal{J}}$ and $\bm{\mathcal{Z}}$ is $O(n^2)$. Suppose that the number of iterations is $T$; then the total computational complexity of MCTL can be expressed as $O(T n^3)+ O(T n^2)$. It is noteworthy that the complexity of the Gram matrix computation is not included, because the Gram matrix can be computed in advance, outside Algorithm 1. Further, Table \[tab11\] shows the computational time comparisons on the CMU Multi-PIE data ($S1 \to S2$) and the handwritten digits data ($M \to U$). From Table \[tab11\], we observe that the proposed MCTL also has a low computational time. We should note that the proposed method is best used together with deep models for large-scale data, owing to the stronger feature representation capability of deep methods on such data. Notably, all algorithms in the experiments are implemented on a computer with an Intel i5-4460 CPU at 3.20GHz and 16GB of RAM.

Model Visualization and Convergence
-----------------------------------

In this section, the visualization and convergence of MCTL are discussed. Pose alignment is a difficult task. Therefore, for better insight into the MCTL model, feature visualization is explored. We show the visualization on CMU PIE. The first row in Fig. \[fig11\] illustrates the pose transfer process under Session 1 via MCTL, from which we observe that the intermediate-domain data generated from the source data inherits the distribution properties of the target data.
![Visualization of MCTL alignment[]{data-label="fig11"}](visual.pdf){width="0.9\linewidth"}

![Parameter sensitivity analysis[]{data-label="fig14"}](3DC2toC1.pdf){width="1\linewidth"}

Further, the COIL-20 and handwritten digits datasets are also exploited. The second row of Fig. \[fig11\] shows the pose transfer process, and the generated data is a visual compromise between the source and target data. Similarly, the visualization of the generated handwritten digits (intermediate domain), obtained by taking MNIST as the source domain and SEMEION as the target domain, is shown in the third row of Fig. \[fig11\]. The effect of domain generation is clearly shown. Additionally, the convergence of our MCTL method is explored by observing the variation of the objective function. In the experiments, the number of iterations is set to 15, and the variation of the objective function (i.e., $F_{min}$) is described in Fig. \[fig13\] for COIL-20 ($C1\to C2$) and 4DA-CNN ($A \to D$), respectively. It is clear that the objective function decreases to a constant value after several iterations. Also, the convergence of each term in MCTL, namely $F_{MC}$ (the MC-based LGDM loss), $F_{MMD}$ (the GGDM loss), and $F_{\bm{\mathcal {Z}}}$ (the LRC regularization), is presented in Fig. \[fig13\]. We can observe the fast convergence of MCTL after several iterations. Notably, the optimization solver used in this paper may not be the optimal choice, and the performance may be further improved with better solvers.

![Convergence of MCTL algorithm[]{data-label="fig13"}](convergence.pdf){width="1\linewidth"}

Conclusion
==========

In this paper, we propose a new transfer learning perspective based on intermediate domain generation. Specifically, a Manifold Criterion guided Transfer Learning (MCTL) method is introduced. In previous work, MMD is commonly used for global domain discrepancy minimization and achieves good performance in domain adaptation.
However, an open problem remains: MMD neglects the local geometric structure of the domain data. To overcome this bottleneck, motivated by a manifold criterion, we propose MCTL, which aims at generating a new intermediate domain sharing a similar distribution with the true target domain. The manifold criterion (MC) implies that domain adaptation is achieved if MC is satisfied (i.e., minimal domain discrepancy). The rationale behind MC is that if the locality structure is preserved between the generated intermediate domain and the true target domain, then the $i.i.d.$ condition is achieved. Finally, MCTL is established by jointly formulating the MC-based LGDM loss, the GGDM loss, and the LRC regularization. Extensive experiments on benchmark DA datasets demonstrate the superiority of the proposed method over several state-of-the-art DA methods.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors would like to thank the editor and the anonymous reviewers for their valuable comments and suggestions.

[Lei Zhang]{} (M’14-SM’18) received his Ph.D. degree in Circuits and Systems from the College of Communication Engineering, Chongqing University, Chongqing, China, in 2013. He was selected as a Hong Kong Scholar in China in 2013, and worked as a Post-Doctoral Fellow with The Hong Kong Polytechnic University, Hong Kong, from 2013 to 2015. He is currently a Professor/Distinguished Research Fellow with Chongqing University. He has authored more than 70 scientific papers in top journals and conferences, including the IEEE T-NNLS, IEEE T-IP, IEEE T-MM, IEEE T-CYB, IEEE T-IM, IEEE T-SMCA, etc. His current research interests include machine learning, pattern recognition, computer vision and intelligent systems. Dr.
Zhang was a recipient of the Outstanding Reviewer Award of more than 10 journals, the Outstanding Doctoral Dissertation Award of Chongqing, China, in 2015, the Hong Kong Scholar Award in 2014, the Academy Award for Youth Innovation of Chongqing University in 2013, and the New Academic Researcher Award for Doctoral Candidates from the Ministry of Education, China, in 2012.

[Shanshan Wang]{} received the B.E. and M.E. degrees from Chongqing University in 2010 and 2013, respectively. She is currently pursuing the Ph.D. degree at Chongqing University. Her current research interests include machine learning, pattern recognition, and computer vision.

[Guangbin Huang]{} (M’98-SM’04) received the B.Sc. degree in applied mathematics and the M.Eng. degree in computer engineering from Northeastern University, China, in 1991 and 1994, respectively, and the Ph.D. degree in electrical engineering from Nanyang Technological University, Singapore, in 1999. During his undergraduate studies, he also concurrently studied in the Applied Mathematics Department and the Wireless Communication Department of Northeastern University, China. He serves as an Associate Editor of Neurocomputing, Neural Networks, Cognitive Computation, and the IEEE Transactions on Cybernetics. His current research interests include machine learning, computational intelligence, and extreme learning machines. He was a Research Fellow with the Singapore Institute of Manufacturing Technology from 1998 to 2001, where he led/implemented several key industrial projects (e.g., as Chief Designer and Technical Leader of the Singapore Changi Airport Cargo Terminal Upgrading Project). Since 2001, he has been an Assistant Professor and then an Associate Professor with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He received the best paper award of the IEEE Transactions on Neural Networks and Learning Systems in 2013.

[Wangmeng Zuo]{} received the Ph.D.
degree in computer application technology from the Harbin Institute of Technology, Harbin, China, in 2007. From July 2004 to December 2004, from November 2005 to August 2006, and from July 2007 to February 2008, he was a Research Assistant at the Department of Computing, Hong Kong Polytechnic University, Hong Kong. From August 2009 to February 2010, he was a Visiting Professor at Microsoft Research Asia. He is currently an Associate Professor in the School of Computer Science and Technology, Harbin Institute of Technology. Dr. Zuo has published more than 60 papers in top-tier academic journals and conferences. His current research interests include image modeling and blind restoration, discriminative learning, biometrics, and 3D vision. Dr. Zuo is an Associate Editor of the IET Biometrics. He is a senior member of the IEEE.

[Jian Yang]{} received the PhD degree from Nanjing University of Science and Technology (NUST) in 2002, on the subject of pattern recognition and intelligence systems. In 2003, he was a postdoctoral researcher at the University of Zaragoza. From 2004 to 2006, he was a Postdoctoral Fellow at the Biometrics Centre of Hong Kong Polytechnic University. From 2006 to 2007, he was a Postdoctoral Fellow at the Department of Computer Science of the New Jersey Institute of Technology. He is now a Chang-Jiang professor in the School of Computer Science and Technology of NUST. He is the author of more than 100 scientific papers in pattern recognition and computer vision. His journal papers have been cited more than 4000 times in the ISI Web of Science, and 9000 times in Google Scholar. His research interests include pattern recognition, computer vision and machine learning. Currently, he is or has been an associate editor of Pattern Recognition Letters, IEEE Trans. Neural Networks and Learning Systems, and Neurocomputing. He is a Fellow of the IAPR.

[David Zhang]{} (F’09) graduated in Computer Science from Peking University in 1974.
He received his MSc in 1982 and his PhD in 1985, both in Computer Science, from the Harbin Institute of Technology (HIT). From 1986 to 1988 he was a Postdoctoral Fellow at Tsinghua University and then an Associate Professor at the Academia Sinica, Beijing. In 1994 he received his second PhD, in Electrical and Computer Engineering, from the University of Waterloo, Ontario, Canada. He has been a Chair Professor since 2005 at the Hong Kong Polytechnic University, where he is the Founding Director of the Biometrics Research Centre (UGC/CRC), supported by the Hong Kong SAR Government since 1998. He also serves as a Visiting Chair Professor at Tsinghua University, and an Adjunct Professor at Peking University, Shanghai Jiao Tong University, HIT, and the University of Waterloo. He is the Founder and Editor-in-Chief of the International Journal of Image and Graphics (IJIG); Book Editor of the Springer International Series on Biometrics (KISB); Organizer of the International Conference on Biometrics Authentication (ICBA); an Associate Editor of more than ten international journals, including several IEEE Transactions; and the author of more than 10 books, over 300 international journal papers, and 30 patents from the USA/Japan/HK/China. Professor Zhang is a Croucher Senior Research Fellow, a Distinguished Speaker of the IEEE Computer Society, and a Fellow of both the IEEE and the IAPR.

[^1]: This work was supported by the National Science Fund of China under Grants (61771079, 91420201 and 61472187), Chongqing Science and Technology Project (No. cstc2017zdcy-zdzxX0002, cstc2018jcyjAX0250), the 973 Program No. 2014CB349303, and the Program for Changjiang Scholars.
*(Corresponding author: Lei Zhang)* [^2]: <http://www.eecs.berkeley.edu/~mfritz/domainadaptation/> [^3]: <http://www.vision.caltech.edu/Image_Datasets/Caltech256/> [^4]: <http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php> [^5]: <http://research.microsoft.com/en-us/projects/objectclassrecognition> [^6]: <http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2007> [^7]: <http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html> [^8]: <http://yann.lecun.com/exdb/mnist/> [^9]: <http://www-i6.informatik.rwth-aachen.de/~keysers/usps.html> [^10]: <http://archive.ics.uci.edu/ml/datasets/Semeion+Handwritten+Digit>
---
abstract: 'For a lattice $L$, let ${\textup{Princ}(L)}$ denote the ordered set of principal congruences of $L$. In a pioneering paper, G. Grätzer characterized the ordered sets ${\textup{Princ}(L)}$ of finite lattices $L$; here we do the same for countable lattices. He also showed that each bounded ordered set $H$ is isomorphic to ${\textup{Princ}(L)}$ of a bounded lattice $L$. We prove a related statement: if an ordered set $H$ with least element is the union of a chain of principal ideals, then $H$ is isomorphic to ${\textup{Princ}(L)}$ of some lattice $L$.'
address: 'University of Szeged, Bolyai Institute. Szeged, Aradi vértanúk tere 1, HUNGARY 6720'
author:
- Gábor Czédli
date: 'May 7, 2013'
title: The ordered set of principal congruences of a countable lattice
---

[^1]

Introduction {#introsection}
============

Historical background
---------------------

A classical theorem of Dilworth [@dilwcollect] states that each finite distributive lattice is isomorphic to the congruence lattice of a finite lattice. Since this first result, the *congruence lattice representation problem* has attracted many researchers, and dozens of papers belonging to this topic have been written. The story of this problem was milestoned by Huhn [@huhn] and Schmidt [@schmidtidnl], reached its summit in Wehrung [@wehrung] and Růžička [@ruzicka], and was summarized in Grätzer [@grbypict]; see also Czédli [@czgrepres] for some additional, recent references. In [@ggprincl], Grätzer started an analogous new topic of Lattice Theory. Namely, for a lattice $L$, let ${\textup{Princ}(L)}={{\langle {\textup{Princ}(L)},\subseteq\rangle}}$ denote the ordered set of principal congruences of $L$. A congruence is *principal* if it is generated by a pair ${{\langle a,b\rangle}}$ of elements. Ordered sets and lattices with 0 and 1 are called *bounded*. Clearly, if $L$ is a bounded lattice, then ${\textup{Princ}(L)}$ is a bounded ordered set.
The pioneering theorem in Grätzer [@ggprincl] states the converse: each bounded ordered set $P$ is isomorphic to ${\textup{Princ}(L)}$ for an appropriate bounded lattice $L$. Actually, the lattice he constructed is of length 5. He also characterized, up to isomorphism, the finite bounded ordered sets as the ${\textup{Princ}(L)}$ of finite lattices $L$.

Terminology
-----------

Unless otherwise stated, we follow the standard terminology and notation of Lattice Theory; see, for example, Grätzer [@GGLT]. Our terminology for weak perspectivity is the classical one, taken from Grätzer [@grglt]. *Ordered sets* are nonempty sets equipped with *orderings*, that is, with reflexive, transitive, antisymmetric relations. Note that an ordered set is often called a *partially ordered set*, which is a rather long expression, or a *poset*, which is not tolerated by spell-checkers, or an *order*, which has several additional meanings.

Our result
----------

Motivated by Grätzer’s theorem mentioned above, our goal is to prove the following theorem. A set $X$ is *countable* if it is finite or countably infinite, that is, if $|X|\leq \aleph_0$. An ordered set $P$ is *directed* if each two-element subset of $P$ has an upper bound in $P$. Nonempty down-sets of $P$ and subsets ${\mathord\downarrow c}={\{x\in P: x\leq c\}}$ are called *order ideals* and *principal $($order$)$ ideals*, respectively.

\[thmmain\]

\[thmmaina\] An ordered set $P={\langle P;\leq\rangle}$ is isomorphic to ${\textup{Princ}(L)}$ for some *countable* lattice $L$ if and only if $P$ is a countable directed ordered set with zero.

\[thmmainb\] If $P$ is an ordered set with zero and it is the union of a chain of principal ideals, then there exists a lattice $L$ such that $P\cong {\textup{Princ}(L)}$.

An alternative way of formulating the condition in part \[thmmainb\] is to say that $0\in P$ and there is a cofinal chain in $P$.
For a pair ${{\langle a,b\rangle}}\in L^2$ of elements, the least congruence collapsing $a$ and $b$ is denoted by ${\textup{con}(a,b)}$ or ${\textup{con}_{L}(a,b)}$. As was pointed out in Grätzer [@ggprincl], the rule $${\textup{con}(a_i,b_i)}\subseteq {\textup{con}(a_1\wedge b_1\wedge a_2\wedge b_2,a_1\vee b_1\vee a_2\vee b_2)}\, \text{ for }i\in{\{1,2\}} \label{prdirT}$$ implies that ${\textup{Princ}(L)}$ is always a directed ordered set with zero. Therefore, the first part of the theorem will easily be concluded from the second one. To compare part \[thmmainb\] of our theorem with Grätzer’s result, note that a bounded ordered set $P$ is always the union of a (one-element) chain of principal ideals. Of course, no *bounded* lattice $L$ can represent $P$ by $P\cong{\textup{Princ}(L)}$ if $P$ has no greatest element.

Method
------

First of all, we need the key idea, illustrated by Figure \[fig4\], from Grätzer [@ggprincl]. Second, we feel that without the quasi-coloring technique developed in Czédli [@czgrepres], the investigations leading to this paper would not even have begun. As opposed to colorings, the advantage of quasi-colorings is that we have joins (equivalently, the possibility of generation) in their range sets. This allows us to decompose our construction into a sequence of elementary steps. Each step is accompanied by a quasiordering. If several steps, possibly infinitely many, are carried out, then the join of the corresponding quasiorderings gives a satisfactory insight into the construction. Even though only the “coloring versions” of some lemmas are used at the end, it is worth formulating their quasi-coloring versions, since this way the proofs are simpler and the lemmas become more general. Third, the idea of using appropriate auxiliary structures is taken from Czédli [@112gen]. Their role is to accumulate all the assumptions our induction steps will need.
Auxiliary statements and structures
===================================

The rest of the paper is devoted to the proof of Theorem \[thmmain\].

Quasi-colorings and auxiliary structures
----------------------------------------

A *quasiordered set* is a structure ${\langle H;\nu\rangle}$ where $H\neq {\varnothing}$ is a set and $\nu\subseteq H^2$ is a reflexive, transitive relation on $H$. Quasiordered sets are also called preordered sets. Instead of ${{\langle x,y\rangle}}\in \nu$, we usually write $x {\mathrel{\leq_\nu}}y$. Also, we write $x{\mathrel{<_\nu}}y$ and $x{\mathrel{\parallel_\nu}}y$ for the conjunction of $x{\mathrel{\leq_\nu}}y$ and $y\not{\mathrel{\leq_\nu}}x$, and that of ${{\langle x,y\rangle}}\notin\nu$ and ${{\langle y,x\rangle}}\notin \nu$, respectively. If $g\in H$ and $x{\mathrel{\leq_\nu}}g$ for all $x\in H$, then $g$ is a *greatest element* of $H$; *least elements* are defined dually. They are not necessarily unique; if they are, then they are denoted by $1_H$ and $0_H$. If for all $x,y\in H$, there exists a $z\in H$ such that $x{\mathrel{\leq_\nu}}z$ and $y{\mathrel{\leq_\nu}}z$, then ${\langle H;\nu\rangle}$ is a *directed* quasiordered set. Given $H\neq {\varnothing}$, the set of all quasiorderings on $H$ is denoted by ${\textup{Quord}(H)}$. It is a complete lattice with respect to set inclusion. For $X\subseteq H^2$, the least quasiorder on $H$ that includes $X$ is denoted by ${\textup{quo}(X)}$. We write ${\textup{quo}(x,y)}$ instead of ${\textup{quo}({\{{{\langle x,y\rangle}}\}})}$.

![The lattice $N_6$ \[fig1\]](czg-princl1.eps)

Let $L$ be a lattice. For $x,y\in L$, ${{\langle x,y\rangle}}$ is called an *ordered pair* of $L$ if $x\leq y$. The set of ordered pairs of $L$ is denoted by ${{\textup{Pairs}^{\leq}(L)}}$. Note that we shall often use the fact that ${{\textup{Pairs}^{\leq}(S)}}\subseteq {{\textup{Pairs}^{\leq}(L)}}$ holds for sublattices $S$ of $L$; this explains why we work with ordered pairs rather than intervals.
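For a finite carrier $H$, the least quasiorder ${\textup{quo}(X)}$ introduced above is simply the reflexive-transitive closure of $X$. The following computational sketch (an illustration only, on hypothetical data; it plays no role in the construction) computes it by Warshall's algorithm:

```python
def quo(H, X):
    """Least quasiorder on the finite set H containing the pairs in X:
    the reflexive-transitive closure of X (Warshall's algorithm)."""
    rel = {(a, a) for a in H} | set(X)   # reflexive closure first
    for k in H:                          # then transitive closure
        for i in H:
            for j in H:
                if (i, k) in rel and (k, j) in rel:
                    rel.add((i, j))
    return rel

# quo(x, y) abbreviates quo({<x, y>}); here a length-2 example:
H = {0, 'p', 'q', 1}
nu = quo(H, {(0, 'p'), ('p', 1), (0, 'q'), ('q', 1)})
```

Note that the pair $(0,1)$ is forced by transitivity, while $p$ and $q$ stay incomparable, so the result is the ordering of a length-2 ordered set, matching the remark in Example \[exegy\] below.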
Note also that ${{\langle a,b\rangle}}$ is an ordered pair iff $b/a$ is a quotient; however, the concept of ordered pairs fits better with previous work on quasi-colorings. By a *quasi-colored lattice* we mean a structure ${{\mathcal L}}={\langle L;\gamma,H,\nu\rangle}$ where $L$ is a lattice, ${\langle H;\nu\rangle}$ is a quasiordered set, $\gamma\colon {{\textup{Pairs}^{\leq}(L)}}\to H$ is a surjective map, and for all ${{\langle u_1,v_1\rangle}},{{\langle u_2,v_2\rangle}}\in {{\textup{Pairs}^{\leq}(L)}}$,

- if $\gamma({{\langle u_1,v_1\rangle}}){\mathrel{\leq_\nu}}\gamma({{\langle u_2,v_2\rangle}})$, then ${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$;

- if ${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$, then $\gamma({{\langle u_1,v_1\rangle}}){\mathrel{\leq_\nu}}\gamma({{\langle u_2,v_2\rangle}})$.

This concept is taken from Czédli [@czgrepres]. Prior to [@czgrepres], the name “coloring” was used for surjective maps onto antichains satisfying one of these conditions in Grätzer, Lakser, and Schmidt [@grlaksersch], and the other in Grätzer [@grbypict page 39]. However, in [@czgrepres], [@grlaksersch], and [@grbypict], $\gamma({{\langle u,v\rangle}})$ was defined only for covering pairs $u\prec v$. To emphasize that ${\textup{con}(u_1,v_1)}$ and ${\textup{con}(u_2,v_2)}$ belong to the ordered set ${\textup{Princ}(L)}$, we usually write ${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$ rather than ${\textup{con}(u_1,v_1)}\subseteq {\textup{con}(u_2,v_2)}$. It follows easily from the two conditions above and the surjectivity of $\gamma$ that if ${\langle L;\gamma,H,\nu\rangle}$ is a quasi-colored lattice, then ${\langle H;\nu\rangle}$ is a directed quasiordered set with a least element; possibly with many least elements. We say that a quadruple ${\langle a_1,b_1,a_2,b_2\rangle}\in L^4$ is an *$N_6$-quadruple* of $L$ if $${\{b_1\wedge b_2=a_1\wedge a_2,\,\, a_1<b_1,\,\,a_2<b_2,\,\, a_1\vee a_2=b_1\vee b_2\}}$$ is a six-element sublattice, see Figure \[fig1\].
If, in addition, $b_1\wedge b_2=0_L$ and $a_1\vee a_2=1_L$, then we speak of a *spanning $N_6$-quadruple*. An $N_6$-quadruple of $L$ is called a *strong $N_6$-quadruple* if it is a spanning one and, for all $i\in{\{1,2\}}$ and $x\in L$, $$\begin{aligned} 0_L < x \leq b_i &{\mathrel{\Longrightarrow}}x\vee a_{3-i}=1_L, \text{ and} \label{labsa}\\ 1_L>x \geq a_i&{\mathrel{\Longrightarrow}}x\wedge b_{3-i}=0_L\text.\label{labsb}\end{aligned}$$ For a subset $X$ of $L^2$, the least lattice congruence including $X$ is denoted by ${\textup{con}(X)}$. In particular, ${\textup{con}({\{{{\langle a,b\rangle}}\}})}={\textup{con}(a,b)}$. The least and the largest congruences of $L$ are denoted by $\Delta_L$ and ${\nabla_{\kern -2pt L}}$, respectively. Now, we are in the position to define the key concept we need. In the present paper, by an *auxiliary structure* we mean a structure $${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}\label{auxstr}$$ such that the following eight properties hold:

- ${\langle L;\gamma, H,\nu\rangle}$ is a quasi-colored lattice;

- the quasiordered set ${\langle H;\nu\rangle}$ has exactly one least element, $0_H$, and at most one greatest element;

- $\delta $ and ${\varepsilon}$ are $H\to L$ maps such that $\delta (0_H)={\varepsilon}(0_H)$ and, for all $x\in H\setminus{\{0_H\}}$, $\delta (x)\prec{\varepsilon}(x)$; note that we often write $a_x$ and $b_x$ instead of $\delta (x)$ and ${\varepsilon}(x)$, respectively;

- for all $p\in H$, $\gamma({{\langle \delta (p),{\varepsilon}(p)\rangle}})=p$;

- if $p$ and $q$ are distinct elements of $H\setminus{\{0_H\}}$, then ${\langle \delta (p),{\varepsilon}(p), \delta (q),{\varepsilon}(q)\rangle}$ is an $N_6$-quadruple of $L$;

- if $p,q\in H$, $p{\mathrel{\parallel_\nu}}q$, and ${\langle \delta (p),{\varepsilon}(p), \delta (q),{\varepsilon}(q)\rangle}$ is a spanning $N_6$-quadruple, then it is a strong $N_6$-quadruple of $L$;

- if $L$ is a bounded lattice and $|L|>1$, then $$\begin{aligned}
\bigl|\bigl\{x\in L:{} &0_L\prec x\prec 1_L\text{ and, for all elements }y \text{ in }\cr &L\setminus\{0_L,1_L,x\},\,\, x\text{ is a complement of }y\bigr\}\bigr|\geq 3;\end{aligned}$$

- if $1_H\in H$ and $|L|>1$, then ${\textup{con}\bigl( {\{{{\langle \delta (p),{\varepsilon}(p)\rangle}}: p\in H\text{ and } p\neq 1_H \}}\bigr)} \neq {\nabla_{\kern -2pt L}}$.

It follows from the above properties that ${\{\delta(x),{\varepsilon}(x)\}}={\{a_x, b_x\}}$ is disjoint from ${\{0_L,1_L\}}$, provided $|H|\geq 3$ and $x\in H\setminus{\{0_H\}}$. If ${\langle H;\nu\rangle}$ is a quasiordered set, then $\Theta_\nu=\nu\cap\nu^{-1}$ is an equivalence relation, and the definition ${[x]\Theta_\nu}\leq {[y]\Theta_\nu}\iff x{\mathrel{\leq_\nu}}y$ turns the quotient set $H/\Theta_\nu$ into an ordered set ${\langle H/\Theta_\nu;\leq\rangle}$. The importance of our auxiliary structures is first shown by the following lemma.

\[impclM\] If ${{\mathcal L}}$ in \[auxstr\] is an auxiliary structure, then the ordered set ${\textup{Princ}(L)}$ is isomorphic to ${\langle H/\Theta_\nu;\leq\rangle}$. In particular, if $\nu$ is an ordering, then ${\textup{Princ}(L)}$ is isomorphic to the ordered set ${\langle H;\nu\rangle}$.

Clearly, ${\textup{Princ}(L)}={\{{\textup{con}(x,y)}: {{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}\}}$. Consider the map ${\varphi}\colon {\textup{Princ}(L)}\to H/\Theta_\nu$, defined by ${\textup{con}(x,y)}\mapsto {[\gamma({{\langle x,y\rangle}})]\Theta_\nu}$. If ${\textup{con}(x_1,y_1)}={\textup{con}(x_2,y_2)}$, then ${[\gamma({{\langle x_1,y_1\rangle}})]\Theta_\nu} = {[\gamma({{\langle x_2,y_2\rangle}})]\Theta_\nu}$ follows from the definition of a quasi-colored lattice. Hence, ${\varphi}$ is a well-defined map. It is surjective since so is $\gamma$. Finally, it is bijective and an order isomorphism by the two conditions in the definition of a quasi-colored lattice. We say that an auxiliary structure ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ is *countable* if $|L|\leq\aleph_0$ and $|H|\leq\aleph_0$. Next, we give an example.
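Before the example, the quotient construction $\langle H/\Theta_\nu;\leq\rangle$ of Lemma \[impclM\] can likewise be illustrated on a small finite quasiordered set. The sketch below (hypothetical data, finite carriers only; not part of the proof) computes the blocks of $\Theta_\nu=\nu\cap\nu^{-1}$ and the induced ordering:

```python
def quotient_order(H, nu):
    """Blocks of Theta_nu = nu ∩ nu^{-1} and the induced order on H/Theta_nu."""
    theta = {(x, y) for (x, y) in nu if (y, x) in nu}   # nu ∩ nu^{-1}
    blocks = []
    for x in H:
        cls = frozenset(y for y in H if (x, y) in theta)
        if cls not in blocks:
            blocks.append(cls)
    # [x] <= [y] iff x <=_nu y; well defined since nu is transitive,
    # so any representative of a block gives the same comparisons.
    leq = {(B, C) for B in blocks for C in blocks
           if (next(iter(B)), next(iter(C))) in nu}
    return blocks, leq

# A quasiorder on {a, b, c} in which a and b are nu-equivalent and below c:
H = {'a', 'b', 'c'}
nu = {(x, x) for x in H} | {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')}
blocks, leq = quotient_order(H, nu)
```

Here $a$ and $b$ collapse into one block, and the quotient is a two-element chain, which is exactly how a quasiordering is turned into an ordering in the lemma.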
\[exegy\] Let $H$ be a set, finite or infinite, such that $0_H,1_H\in H$ and $|H|\geq 3$. Let us define $\nu={\textup{quo}\bigl(({\{0_H\}}\times H) \cup (H\times {\{1_H\}})\bigr)}$; note that ${\langle H;\nu\rangle}$ is an ordered set (actually, a modular lattice of length 2). Let $L$ be the lattice depicted in Figure \[fig2\], where ${\{h,g,p,q,\dots\}}$ is the set $H\setminus{\{0_H,1_H\}}$. For $x\prec y$, $\gamma({{\langle x,y\rangle}})$ is defined by the labeling of the edges. Note that, in Figure \[fig2\], we often write $0$ and $1$ rather than $0_H$ and $1_H$, because of space considerations. Let $\gamma({{\langle x,x\rangle}})=0_H$ for $x\in L$, and let $\gamma({{\langle x,y\rangle}})=1_H$ for $x<y$ if $x\not\prec y$. Let $\delta (0_H)={\varepsilon}(0_H)=x_0$. For $s\in H\setminus{\{0_H\}}$, we define $\delta (s)=a_s$ and ${\varepsilon}(s)=b_s$. Now, obviously, ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ is an auxiliary structure. If $|H|\leq \aleph_0$, then ${{\mathcal L}}$ is countable.

![The auxiliary structure in Example \[exegy\] \[fig2\]](czg-princl2.eps)

Substructures are defined in the natural way; note that $\nu=\nu'\cap H^2$ will not be required below. Namely:

\[defbeagy\] Let ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ and ${{\mathcal L}}'={\langle L';\gamma', H',\nu',\delta ',{\varepsilon}'\rangle}$ be auxiliary structures. We say that ${{\mathcal L}}$ is a *substructure* of ${{\mathcal L}}'$ if the following hold: $L$ is a sublattice of $L'$, $H\subseteq H'$, $\nu\subseteq\nu'$, and $0_{H'}=0_H$; $\gamma$ is the restriction of $\gamma'$ to ${{\textup{Pairs}^{\leq}(L)}}$, $\delta$ is the restriction of $\delta'$ to $H$, and ${\varepsilon}$ is the restriction of ${\varepsilon}'$ to $H$.
Clearly, if ${{\mathcal L}}$, ${{\mathcal L}}'$, and ${{\mathcal L}}''$ are auxiliary structures such that ${{\mathcal L}}$ is a substructure of ${{\mathcal L}}'$ and ${{\mathcal L}}'$ is a substructure of ${{\mathcal L}}''$, then ${{\mathcal L}}$ is a substructure of ${{\mathcal L}}''$; this fact will be used implicitly. The following lemma indicates how easily but efficiently we can work with auxiliary structures. For an auxiliary structure ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ and an arbitrary (possibly empty) set $K$, we define the following objects. Let ${{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ be the disjoint union $H\cup K\cup{\{1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}}$, and let $0_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}=0_H$. Define ${{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\in{\textup{Quord}({{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}})}$ by $${{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}={\textup{quo}\bigl(\nu \cup({\{0_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}} \}} \times {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}} ) \cup ({{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\times{\{ 1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}}) \bigr)}\text.$$ Consider the lattice ${{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ defined by Figure \[fig3\], where $u,v,\dots$ denote the elements of $K$. The thick dotted lines indicate $\leq$ but not necessarily $\prec$; they are edges only if $L$ is bounded. 
Note that all “new” lattice elements distinct from $0_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$ and $1_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$, that is, all elements of ${{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\setminus(L\cup {\{0_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}, 1_{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}})$, are complements of all “old” elements. Extend $\delta$ and ${\varepsilon}$ to maps ${{\delta^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\colon {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}} \to{{{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$ by letting ${{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(w)=a_w$ and ${{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(w)=b_w$ for $w\in K\cup{\{1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}\}}$. Define ${{\gamma^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\colon {{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}})}}\to{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ by $${{\gamma^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}( {{\langle x,y\rangle}})= \begin{cases} \gamma({{\langle x,y\rangle}}),& \text{if } {{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}(L)}},\cr w, &\text{if } x=a_w,\,\, y=b_w,\text{ and }w\in K,\cr 0_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}},&\text{if }x=y,\cr 1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}},&\text{otherwise.} \end{cases}$$ For space considerations again, the edge label $1$ in Figure \[fig3\] stands for $1_{{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}$.
Finally, let ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}={\langle {{L^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}};{{\gamma^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}, {{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\rangle}$. The straightforward proof of the following lemma will be omitted. \[lupstp\] If ${{\mathcal L}}$ is an auxiliary structure, then so is ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$. Furthermore, ${{\mathcal L}}$ is a substructure of ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$, and if ${{\mathcal L}}$ and $K$ are countable, then so is ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$. Moreover, if $p,q\in {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ are such that ${\{p,q\}}\not\subseteq H$ and $p\parallel_{{{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}}q$, then ${\langle {{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(p), {{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(p),{{\delta ^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(q), {{{\varepsilon}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}(q) \rangle}$ is a strong $N_6$-quadruple. Since new bottom and top elements are added, we say that ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ is obtained from ${{\mathcal L}}$ by a *vertical extension*; this motivates the triangle aiming upwards in its notation.
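As an independent sanity check of the order-theoretic part of the vertical extension, one can verify the incomparability statement of Lemma \[lupstp\] on a toy instance. The sketch below is our own illustration (all names such as `H_up` and `nu_up`, and the labels, are hypothetical): it builds ${{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ for a three-element $H$ and a two-element $K$ and confirms that the new elements are pairwise incomparable and incomparable to the old top.

```python
def quo(base, relation):
    """Least quasiorder on `base` containing `relation`
    (reflexive-transitive closure)."""
    closure = {(x, x) for x in base} | set(relation)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# The old index set H with bottom "0" and top "1H".
H = {"0", "h", "1H"}
nu = quo(H, {("0", x) for x in H} | {(x, "1H") for x in H})

# Vertical extension of the index set: H_up = H + K + {new top "1up"},
# nu_up generated by nu, "0" below everything, everything below "1up".
K = {"u", "v"}
H_up = H | K | {"1up"}                      # disjoint union assumed
nu_up = quo(H_up, nu
            | {("0", x) for x in H_up}
            | {(x, "1up") for x in H_up})

# Old comparabilities survive, and the K-elements are incomparable to
# each other and to the old top 1H -- the situation of Lemma lupstp.
assert nu <= nu_up
for p in K:
    for q in (K | {"1H"}) - {p}:
        assert (p, q) not in nu_up and (q, p) not in nu_up
```

The lattice-theoretic half of the lemma (that such pairs yield strong $N_6$-quadruples) is, of course, not checked by this order-level sketch.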
![The auxiliary structure ${{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ \[fig3\]](czg-princl3.eps) ![Grätzer’s lattice ${L_{\textup{GG}}}$ \[fig4\]](czg-princl4.eps) Horizontal extensions of auxiliary structures ============================================= The key role in Grätzer [@ggprincl] is played by the lattice ${L_{\textup{GG}}}$; see Figure \[fig4\]. We also need this lattice. Assume that $$\begin{aligned} {{\mathcal L}}&={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}\text{ is an auxiliary structure, }p,q\in H\text{, } p{\mathrel{\parallel_\nu}}q,\text{ and} \cr &{\langle a_p,b_p,a_q,b_q\rangle}={\langle \delta (p),{\varepsilon}(p),\delta (q),{\varepsilon}(q) \rangle} \text{ is a}\cr &\text{spanning or, equivalently, a strong }N_6\text{-quadruple}. \end{aligned}\label{asumLpar}$$ The equivalence of “spanning” and “strong” in follows from . We define a structure ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ as follows, and it will take a lot of work to prove that it is an auxiliary structure. We call ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ a *horizontal extension* of ${{\mathcal L}}$; this explains the horizontal triangle in the notation. By changing the sublattice ${\{0_L,a_p,b_p,a_q,b_q,1_L\}}$ into an ${L_{\textup{GG}}}$ as depicted in Figure \[fig4\], that is, by inserting the black-filled elements of Figure \[fig4\] into $L$, we obtain an ordered set denoted by ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$; see later for more precise details. (We will prove that ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice and $L$ is a sublattice of it.) The construction of ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ from ${{\mathcal L}}$ is illustrated in Figure \[fig5\]. Note that there can be many more elements, arranged in a more complicated way, than indicated.
The solid lines represent the covering relation but the dotted lines are not necessarily edges. The new lattice ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is obtained from $L$ by inserting the black-filled elements. Note that while Grätzer [@ggprincl] constructed a lattice of length 5, here even the interval, say, $[b_p, 1_L]$ can be of infinite length. ![Obtaining ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ from ${{\mathcal L}}$ \[fig5\]](czg-princl5.eps) Let ${{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}=H$. In ${\textup{Quord}({{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}$, we define ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\textup{quo}\bigl(\nu\cup{\{{{\langle p,q\rangle}}\}}\bigr)}$. We extend $\gamma$ to ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\colon {{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}\to {{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ by $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})= \begin{cases} \gamma({{\langle x,y\rangle}}),&\text{if }{{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}(L)}},\cr p,&\text{if }{{\langle x,y\rangle}}\in{\{{{\langle d_{pq},e_{pq}\rangle}}, {{\langle f_{pq},g_{pq}\rangle}} \}},\cr q,&\text{if }{{\langle x,y\rangle}}\in{\{{{\langle c_{pq},d_{pq}\rangle}}, {{\langle c_{pq},e_{pq}\rangle}} \}},\cr 0_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}, &\text{if } x=y,\cr 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}, &\text{otherwise.} \end{cases}$$ The definition of ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is also illustrated in Figure \[fig5\], where the edge color $1$ stands for $1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. 
Finally, after letting ${{\delta ^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}=\delta $ and ${{{\varepsilon}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\varepsilon}$, we define $${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\langle {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}};{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}, {{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}},{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}},{{\delta ^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}},{{{\varepsilon}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\rangle}\text.\label{spaldef}$$ \[spalislat\] If ${{\mathcal L}}$ satisfies , then ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice and $L$ is a sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. First, we describe the ordering of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ more precisely; this description is the real definition of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Let $$\begin{aligned} N_6^{pq}&={{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\setminus L={\{c_{pq},d_{pq},e_{pq},f_{pq},g_{pq}\}},\cr B_6^{pq}&={\{0_L, a_p,b_p,a_q,b_q,1_L \}}\text{, and}\cr S_6^{pq}&=\{0_L, a_p,b_p,a_q,b_q,c_{pq},d_{pq},e_{pq},f_{pq},g_{pq}, 1_L\} = N_6^{pq}\cup B_6^{pq}\text{.} \cr \end{aligned}\label{sodiHGrj}$$ Here $S_6^{pq}$ is isomorphic to the lattice ${L_{\textup{GG}}}$, and its “boundary”, $B_6^{pq}$, to $N_6$. The elements of $L$, $N_6^{pq}$, and $B_6^{pq}$ are called *old*, *new*, and *boundary* elements, respectively. 
For $x,y\in {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we define $x\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} y\iff$ $$\begin{cases} x\leq_L y ,&\text{if }x,y\in L\text{, or}\cr x\leq_{S_6^{pq}} y ,&\text{if }x,y\in S_6^{pq}\text{, or}\cr \exists z\in B_6^{pq}: x\leq_L z\text{ and }z\leq_{S_6^{pq}}y,&\text{if }x\in L\setminus S_6^{pq}\text{ and } y\in N_6^{pq}\text{, or}\cr \exists z\in B_6^{pq}: x\leq_{S_6^{pq}}z\text{ and }z\leq_L y,&\text{if }x\in N_6^{pq}\text{ and } y\in L\setminus S_6^{pq}\text. \end{cases} \label{spaorder}$$ Observe that for $u_1,u_3\in B_6^{pq}$ and $u_2\in N_6^{pq}$, the conjunction of $u_1\leq_{S_6^{pq}}u_2$ and $u_2 \leq_{S_6^{pq}} u_3$ implies ${\{0_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}},1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}\}}\cap{\{u_1,u_3\}}\neq{\varnothing}$. Hence, it is straightforward to see that $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ is an ordering and $\leq_L$ is the restriction of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ to $L$. For $x\in N_6^{pq}$, there is a unique least element ${{x^\ast}}$ of $B_6^{pq}$ such that $x\leq_{S_6^{pq}}{{x^\ast}}$ (that is, $x\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}{{x^\ast}}$). If $x\in L$, then we let ${{x^\ast}}=x$. Similarly, for $x\in N_6^{pq}$, there is a unique largest element ${{x_\ast}}$ of $B_6^{pq}$ such that ${{x_\ast}}\leq_{S_6^{pq}} x$. Again, for $x\in L$, we let ${{x_\ast}}=x$. 
With this notation, is clearly equivalent to $$x\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} y\iff \begin{cases} x\leq_L y,&\text{if }x,y\in L\text{, or}\cr x\leq_{S_6^{pq}} y,&\text{if }x,y\in S_6^{pq}\text{, or}\cr x\leq_L {{y_\ast}},&\text{if }x\in L\setminus S_6^{pq}\text{ and } y\in N_6^{pq}\text{, or}\cr {{x^\ast}}\leq_L y,&\text{if }x\in N_6^{pq}\text{ and } y\in L\setminus S_6^{pq}\text. \end{cases}\label{spanorder}$$ Next, for $x\parallel y\in{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we want to show that $x$ and $y$ have a join in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. We can assume that ${\{x,y\}}$ has an upper bound $z$ in $N_6^{pq}$, because otherwise ${{x^\ast}}\vee_L {{y^\ast}}$ would clearly be the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. If $z$ belonged to ${\{c_{pq},d_{pq},e_{pq}\}}$, then the principal ideal ${\mathord\downarrow z}$ (taken in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$) would be a chain, and this would contradict $x\parallel y$. Hence, $z\in {\{f_{pq},g_{pq}\}}$. If both $x$ and $y$ belong to $N_6^{pq}$, then $x\parallel y$ gives ${\{x,y\}}={\{e_{pq},f_{pq}\}}$, $z$ and $1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ are the only upper bounds of ${\{x,y\}}$, and $z$ is the join of $x$ and $y$. Hence, we can assume that $x\in L$. If $y$ also belongs to $L$, then $x\leq{{z_\ast}}$ and $y\leq {{z_\ast}}$ yield $x\vee_L y\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} {{z_\ast}}\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} z$, and $x\vee_L y$ is the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ since $z$ was an arbitrary upper bound of ${\{x,y\}}$ in $N_6^{pq}$. Therefore, we can assume that $x\in L$ and $y\in N_6^{pq}$.
It follows from $b_p\wedge_L b_q=0_L$ that, for each $u\in L$, ${\mathord\uparrow u}\cap B_6^{pq}$ has a smallest element; we denote it by $\widehat u$. For $u\in N_6^{pq}$, we let $\widehat u=u$. Note that, for every $u\in {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, $\widehat u$ is the smallest element of ${\mathord\uparrow u}\cap S_6^{pq}$. The existence of $z$, mentioned above, implies that $\widehat x\in {\{a_p,b_p\}}$. We assert that $\widehat x\vee_{S_6^{pq}} y= \widehat x\vee_{S_6^{pq}}\widehat y$ is the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. (Note that $\widehat x\vee_{S_6^{pq}} y\in {\{f_{pq},g_{pq}\}}$.) We can assume $y\in{\{c_{pq},d_{pq},e_{pq}\}}$ since otherwise $1_L$ is the only upper bound of $y$ in $L$ and $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}y=\widehat x\vee_{S_6^{pq}} y$ is clear. Consider an upper bound $t\in L$ of $x$ and $y$. Since $y\in{\{c_{pq},d_{pq},e_{pq}\}}$, we have $a_q\leq t$ and $x\vee_L a_q\leq t$. From $x\parallel y\in{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ and $\widehat x\in {\{a_p,b_p\}}$, we obtain $0_L<x\leq b_p$. Since ${\langle a_p,b_p,a_q,b_q\rangle}$ is a strong $N_6$-quadruple by , the validity of for ${{\mathcal L}}$ implies $\widehat x\vee_{S_6^{pq}} y\,\leq 1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}=1_L=x\vee_L a_q\leq t$. This shows that $\widehat x\vee_{S_6^{pq}} y$ is the join of $x$ and $y$ in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. The case $x,y\in L$ showed that ${\langle L;\vee\rangle}$ is a subsemilattice of ${\langle {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}};\vee\rangle}$.
For later reference, we summarize the description of join in a concise form as follows; note that $x\parallel y$ is not assumed here: $$x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}y=\begin{cases} {{x^\ast}}\vee_L {{y^\ast}}, &\text{if } {\{x,y\}}\not\subseteq {\mathord\downarrow g_{pq}}\text{ or }{\{x,y\}}\subseteq L , \cr \widehat x\vee_{S_6^{pq}} \widehat y&\text{otherwise, that is, if } {\{x,y\}}\subseteq {\mathord\downarrow g_{pq}}\text{ and }{\{x,y\}}\not\subseteq L\text. \end{cases}\label{joindeR}$$ We have shown that any two elements of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ have a join. Although $S_6^{pq}$ and the construction of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ are not exactly self-dual, by interchanging the roles of ${\{f_{pq},g_{pq}\}}$ and ${\{c_{pq},d_{pq},e_{pq}\}}$, we can easily dualize the argument above. Thus, we conclude that ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice and $L$ a sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. The following lemma is due to Dilworth [@dilworth1950a]; see also Grätzer [@grglt Theorem III.1.2]. \[llajtorja\] If $L$ is a lattice and ${{\langle u_1,v_1\rangle}},{{\langle u_2,v_2\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}$, then the following three conditions are equivalent.
${\textup{con}(u_1,v_1)}\leq {\textup{con}(u_2,v_2)}$; ${{\langle u_1,v_1\rangle}}\in {\textup{con}(u_2,v_2)}$; there exists an $n\in\mathbb N$ and there are $x_{i} \in L$ for $i\in{\{0,\dots,n\}}$ and ${{\langle y_{i j},z_{i j}\rangle}} \in{{\textup{Pairs}^{\leq}(L)}}$ for ${{\langle i,j\rangle}}\in{\{1,\dots,n\}}\times{\{0,\dots,n\}}$ such that the following equalities and inequalities hold: $$\begin{aligned} u_1&=x_{0}\leq x_{1}\leq\dots\leq x_{n-1}\leq x_{n}=v_1\cr y_{i0} &=x_{i-1}\text{, }y_{in}=u_2\text{, }z_{i0}=x_i\text{, and }z_{in}=v_2\text{ for }1\leq i\leq n,\cr y_{i,j-1}&= z_{i,j-1}\wedge y_{ij} \text{ and } z_{i,j-1}\leq z_{ij} \text{ for }j \text{ odd, } i,j\in{\{1,\dots,n\}},\cr z_{i,j-1} & = y_{i,j-1}\vee z_{ij} \text{ and }y_{i,j-1}\geq y_{ij}\text{ for }j \text{ even, } i,j\in{\{1,\dots,n\}}\text. \end{aligned}\label{lajtorjaformula}$$ The situation of Lemma \[llajtorja\] is outlined in Figure \[fig6\]; note that not all elements are depicted, and the elements are not necessarily distinct. The second half of says that, in terms of Grätzer [@grglt], ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is *weakly* up or down perspective into ${{\langle y_{ij},z_{ij}\rangle}}$; up for $j$ odd and down for $j$ even. Besides weak perspectivity, we shall also need a more specific concept; recall that ${{\langle x_1,y_1\rangle}}$ is *perspective* to ${{\langle x_2,y_2\rangle}}$ if there are $i,j\in{\{1,2\}}$ such that $i\neq j$, $x_i=y_i\wedge x_j$, and $y_j=x_j\vee y_i$. ![Illustrating Lemma \[llajtorja\] for $n=4$ \[fig6\]](czg-princl6.eps) For a quasiordered set ${\langle H;\nu\rangle}$ and $p, q_1,\dots,q_n\in H$, we say that $p$ is a *join* of the elements $q_1,\dots,q_n$, in notation, $p=\bigvee_{i=1}^n q_i$, if $q_i{\mathrel{\leq_\nu}}p$ for all $i$ and, for every $r\in H$, the conjunction of $q_i{\mathrel{\leq_\nu}}r$ for $i=1,\dots,n$ implies $p{\mathrel{\leq_\nu}}r$. This concept is used in the next lemma. 
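Returning to Lemma \[llajtorja\] for a moment: for a small finite lattice, the comparability of principal congruences it characterizes can also be computed by brute-force closure, bypassing the ladder construction. The sketch below is entirely our own toy code (all names are hypothetical): it represents $N_5$ by its order relation, generates principal congruences, and confirms the equivalence of the first two conditions of the lemma.

```python
from itertools import product

# N5: 0 < a < b < 1 and 0 < c < 1, with c incomparable to a and b.
leq = {("0","0"),("a","a"),("b","b"),("c","c"),("1","1"),
       ("0","a"),("0","b"),("0","c"),("0","1"),
       ("a","b"),("a","1"),("b","1"),("c","1")}
elems = {"0","a","b","c","1"}

def meet(x, y):
    # greatest common lower bound: the lower bound with the largest down-set
    lower = {z for z in elems if (z,x) in leq and (z,y) in leq}
    return max(lower, key=lambda z: sum((w,z) in leq for w in elems))

def join(x, y):
    # least common upper bound: the upper bound with the smallest down-set
    upper = {z for z in elems if (x,z) in leq and (y,z) in leq}
    return min(upper, key=lambda z: sum((w,z) in leq for w in elems))

def con(u, v):
    """Smallest congruence collapsing <u, v>: close under translations
    by meet and join, symmetry, and transitivity."""
    theta = {(x, x) for x in elems} | {(u, v), (v, u)}
    changed = True
    while changed:
        changed = False
        for (x, y), z in product(list(theta), elems):
            for pair in ((meet(x,z), meet(y,z)), (join(x,z), join(y,z))):
                if pair not in theta:
                    theta |= {pair, pair[::-1]}; changed = True
        for (x, y), (y2, z) in product(list(theta), list(theta)):
            if y == y2 and (x, z) not in theta:
                theta.add((x, z)); changed = True
    return theta

# (i) con(u1,v1) <= con(u2,v2)  iff  (ii) <u1,v1> in con(u2,v2):
for (u1,v1), (u2,v2) in product([("a","b"),("0","c"),("0","1")], repeat=2):
    assert (con(u1,v1) <= con(u2,v2)) == ((u1,v1) in con(u2,v2))
```

The point of Lemma \[llajtorja\] is, of course, condition (iii), which replaces this blind closure by a finite ladder of weak perspectivities; the sketch only illustrates the equivalence (i)$\iff$(ii) that the ladder witnesses.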
Note that even if a join exists, it need not be unique. \[chainlemma\] If ${\langle L; \gamma, H,\nu\rangle}$ is a quasi-colored lattice and ${\{u_0\leq u_1\leq\dots \leq u_n\}}$ is a finite chain in $L$, then $$\gamma({{\langle u_0,u_n\rangle}})=\bigvee_{i=1}^n \gamma({{\langle u_{i-1},u_i\rangle}})\quad\text{ holds in }{\langle H;\nu\rangle}\text.\label{lancjoin}$$ Let $p=\gamma({{\langle u_0,u_n\rangle}})$ and $q_i=\gamma({{\langle u_{i-1},u_i\rangle}})$. Since ${\textup{con}(u_{i-1},u_i)}\leq {\textup{con}(u_0,u_n)}$, yields $q_i{\mathrel{\leq_\nu}}p$ for all $i$. Next, let $r\in H$ be such that $q_i{\mathrel{\leq_\nu}}r$ for all $i$. By the surjectivity of $\gamma$, there exists a ${{\langle v,w\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}$ such that $\gamma({{\langle v,w\rangle}})=r$. It follows by that ${{\langle u_{i-1},u_i\rangle}}\in {\textup{con}(u_{i-1},u_i)}\leq {\textup{con}(v,w)}$. Since ${\textup{con}(v,w)}$ is transitive and collapses the pairs ${{\langle u_{i-1},u_i\rangle}}$, it collapses ${{\langle u_{0},u_n\rangle}}$. Hence, ${\textup{con}(u_{0},u_n)}\leq {\textup{con}(v,w)}$, and implies $p{\mathrel{\leq_\nu}}r$. Now, we are in a position to prove the following lemma. \[mainlemma\] The structure ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, which is defined in with assumption , is an auxiliary structure, and ${{\mathcal L}}$ is a substructure of ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Furthermore, if ${{\mathcal L}}$ is countable, then so is ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since we work both in ${{\mathcal L}}$ and ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, relations, operations and maps are often subscripted by the relevant structure. By Lemma \[spalislat\], ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a lattice.
Obviously, and hold for ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is an extension of $\gamma$, ${{\delta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}= \delta$, ${{{\varepsilon}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}= {\varepsilon}$, and $L$ is a sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we obtain that and hold in ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Let $r_1,r_2\in{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since $\nu$ is transitive, $p{\mathrel{\parallel_\nu}}q$, and ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\textup{quo}\bigl(\nu\cup {\{{{\langle p,q\rangle}}\}}\bigr)}$, we obtain that $$\label{twopssb} {{\langle r_1,r_2\rangle}}\in {{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\iff r_1{\mathrel{\leq_\nu}}p\text{ and }q{\mathrel{\leq_\nu}}r_2\text{, or }r_1{\mathrel{\leq_\nu}}r_2\text.$$ This clearly implies that holds for ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. It follows from that if ${{\langle x,y\rangle}}\in {{\textup{Pairs}^{\leq}(L)}}$ and $\gamma({{\langle x,y\rangle}})=1_{H}$, then we have ${\textup{con}_{L}(x,y)}={\nabla_{\kern -2pt L}}$. 
Combining this with , we obtain easily that for all ${{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$, $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})=1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} {\mathrel{\Longrightarrow}}{\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)} = {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}\text.\label{haegyakok}$$ Let $\Theta$ denote the congruence of $L$ described in . Consider the equivalence relation ${{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ on ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ whose classes (in other words, blocks) are the $\Theta$-classes, ${\{c_{pq},d_{pq},e_{pq}\}}$ and ${\{f_{pq},g_{pq}\}}$. Based on and its dual, a straightforward argument shows that, for all $x,y\in{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, ${{\langle x\wedge y,x\rangle}}\in{{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ iff ${{\langle y,x\vee y\rangle}}\in {{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Clearly, the intersection of ${{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ and the ordering of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is transitive. Hence, we conclude that ${{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ is a congruence on ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Since it is distinct from ${\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$, ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies . Next, we prove the converse of .
Assume that ${{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})\neq 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$; we want to show that ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)}\neq{\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$. Since this is clear if $x=y$, we assume $x\neq y$. First, if $x,y\in L$, then let $r=\gamma({{\langle x,y\rangle}})$. Applying to $\gamma$ and to ${{\mathcal L}}$, we obtain $ {\textup{con}_{L}(x,y)} ={\textup{con}_{L}(\delta (r),{\varepsilon}(r))}$. Hence $\Theta$, which we used in the previous paragraph, collapses ${{\langle x,y\rangle}}$, and ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)}\subseteq{{\Theta^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}\subset {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$. Second, if ${\{x,y\}}\cap L={\varnothing}$, then ${{\langle x,y\rangle}}$ is perspective to ${{\langle a_p,b_p\rangle}}$ or ${{\langle a_q,b_q\rangle}}$, whence $ {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)} \in {\bigl\{ {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_p,b_p)}, {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_q,b_q)} \bigr\}}$ reduces the present case to the previous one. Finally, $|L\cap{\{x,y\}}|=1$ is excluded since then ${{\langle x,y\rangle}}$ would be $1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$-colored.
Now, after verifying the converse of , we have proved that for all ${{\langle x,y\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$, $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x,y\rangle}})=1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} \iff {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(x,y)} = {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}\text.\label{haiirkok}$$ Next, to prove that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , assume that ${{\langle u_1,v_1\rangle}},{{\langle u_2,v_2\rangle}}\in {{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} ){\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}} )$. Let $r_i={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_i,v_i\rangle}} )$, for $i\in{\{1,2\}}$. We have to show ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. By , we can assume that $r_2\neq 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Thus, by , we have $r_1\neq 1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. 
We can also assume that $r_1\neq 0_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ since otherwise ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}={\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,u_1)}=\Delta_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ would clearly imply ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Thus, $r_1,r_2\in H\setminus{\{0_H,1_H\}}$. By the construction of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, ${{\langle u_i,v_i\rangle}}$ is perspective to some ${{\langle u_i',v_i'\rangle}}\in{{\textup{Pairs}^{\leq}(L)}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_i,v_i\rangle}})={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_i',v_i'\rangle}})$, and perspectivity implies ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_i,v_i)}= {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_i',v_i')}$. Therefore, we can assume that ${{\langle u_1,v_1\rangle}}, {{\langle u_2,v_2\rangle}} \in {{\textup{Pairs}^{\leq}(L)}}$, because otherwise we could work with ${{\langle u_1',v_1'\rangle}}$ and ${{\langle u_2',v_2'\rangle}}$. According to , we distinguish two cases. First, assume that $r_1{\mathrel{\leq_\nu}}r_2$. Since ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ extends $\gamma$, we have $\gamma({{\langle u_1,v_1\rangle}} ) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} ) =r_1{\mathrel{\leq_\nu}}r_2={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}} )= \gamma({{\langle u_2,v_2\rangle}} )$.
Applying to $\gamma$, we obtain ${{\langle u_1,v_1\rangle}}\in {\textup{con}_{L}(u_1,v_1)}\leq {\textup{con}_{L}(u_2,v_2)}$. Using Lemma \[llajtorja\], first in $L$ and then, backwards, in ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, we obtain ${{\langle u_1,v_1\rangle}}\in{\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$, which yields ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Second, assume that $r_1{\mathrel{\leq_\nu}}p$ and $q{\mathrel{\leq_\nu}}r_2$. Since ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle a_p,b_p\rangle}})=\gamma({{\langle a_p,b_p\rangle}})=p$ and ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle a_q,b_q\rangle}})=q$ by , the argument of the previous paragraph yields that we have ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_p,b_p)}$ and ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_q,b_q)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Clearly (or applying Lemma \[llajtorja\] within $S_6^{pq})$, we have ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_p,b_p)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(a_q,b_q)}$. Hence, transitivity yields ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}\leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Consequently, ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies . 
Next, to prove that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , assume that ${{\langle u_1,v_1\rangle}}, {{\langle u_2,v_2\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ such that ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)} \leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. Our purpose is to show the inequality ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} ){\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}} )$. By , we can assume ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}\neq {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$, and we can obviously assume $u_1\neq v_1$. That is, ${\{ {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)},\, {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)} \}}\cap {\{\Delta_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}},{\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}\}}={\varnothing}$. A pair ${{\langle w_1,w_2\rangle}}\in{{\textup{Pairs}^{\leq}({{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}})}}$ is called *mixed* if $|{\{i: w_i\in L\}}|=1$. That is, if one of the components is old and the other one is new. It follows from the construction of ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ and that none of ${{\langle u_1,v_1\rangle}}$ and ${{\langle u_2,v_2\rangle}}$ is mixed. 
If ${{\langle u_1,v_1\rangle}}$ is a new pair, that is, if ${\{u_1,v_1\}}\cap L={\varnothing}$, then we can consider an old pair ${{\langle u'_1,v'_1\rangle}}$ such that ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1',v_1'\rangle}} )={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}} )$ and, by perspectivity, ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u'_1,v'_1)} = {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)}$. Hence, we can assume that ${{\langle u_1,v_1\rangle}}$ is an old pair, and similarly for the other pair. That is, we assume that both ${{\langle u_1,v_1\rangle}}$ and ${{\langle u_2,v_2\rangle}}$ belong to ${{\textup{Pairs}^{\leq}( L)}}$. The starting assumption ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_1,v_1)} \leq {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$ means that ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$. This is witnessed by Lemma \[llajtorja\]. Let $x_j, y_{ij}, z_{ij}\in {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ be elements for $i\in{\{1,\dots,n\}}$ and $j\in{\{0,\dots,n\}}$ that satisfy ; see also Figure \[fig6\]. To ease our terminology, the ordered pairs ${{\langle y_{ij},z_{ij}\rangle}}$ will be called *witness pairs* (of the containment ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}$). Since ${\textup{con}_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}(u_2,v_2)}\neq {\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$, none of the witness pairs generates ${\nabla_{\kern -2pt {{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}$.
Thus, by , $$\text{none of the witness pairs is mixed or }1_{{{H^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}\text{-colored.}\label{nonm1col}$$ Take two consecutive witness pairs, ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ and ${{\langle y_{ij},z_{ij}\rangle}}$. Here $i,j\in{\{1,\dots,n\}}$. Our next purpose is to show that $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) {\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})\text. \label{winprs}$$ We assume $y_{i,j-1}<z_{i,j-1}$ since trivially holds if these two elements are equal. Hence, ${y_{ij}}<{z_{ij}}$ also holds. If both ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ and ${{\langle y_{ij},z_{ij}\rangle}}$ are old pairs, that is, if they belong to ${{\textup{Pairs}^{\leq}(L)}}$, then yields $ {\textup{con}_{L}(y_{i,j-1},z_{i,j-1})}\leq {\textup{con}_{L}(y_{ij},z_{ij})}$. From this, we conclude the relation $\gamma({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) {\mathrel{\leq_\nu}}\gamma({{\langle y_{ij},z_{ij}\rangle}})$ by , applied for ${{\mathcal L}}$, and we obtain the validity of for old witness pairs, because ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ extends $\gamma$. 
If both ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ and ${{\langle y_{ij},z_{ij}\rangle}}$ are new pairs, that is, if they belong to ${{\textup{Pairs}^{\leq}(N_6^{pq})}}$, then and allow only two possibilities: ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})$, or ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) =p$ and $ {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})=q$. In both cases, holds. \[old2newcase\] Assume first that $j$ is odd, that is, ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is weakly up-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$. Since $y_{ij}$, being a new element, and $z_{i,j-1}$ are both distinct from $y_{i,j-1}$, $$z_{i,j-1}\parallel y_{ij}\text.\label{osidHn}$$ Since $0_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}\leq y_{i,j-1}<z_{i,j-1} < z_{ij} $, $z_{i,j-1}$ is an old element, and $z_{ij}$ is a new one, $z_{ij}\in{\{f_{pq}, g_{pq}\}}$. Taking $y_{ij}<z_{ij}$ and into account, we obtain $y_{ij}=f_{pq}$ and $z_{ij}=g_{pq}$. Applying the definition of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ for the elements of the old witness pair and using the “weak up-perspectivity relations” from , we have $y_{i,j-1}\leq a_p<f_{pq}$. Similarly, but also taking $z_{i,j-1}\parallel y_{ij}$ into account, we obtain $z_{i,j-1}\leq b_p<g_{pq}$. We claim that ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is up-perspective to ${{\langle a_p,b_p\rangle}}$. We can assume $z_{i,j-1}<b_p$, because otherwise they would be equal, we would have $y_{i,j-1}=z_{i,j-1}\wedge f_{pq}=b_p\wedge f_{pq}=a_p$, and the two pairs would be the same.
Hence, from $a_p\prec b_p$, $z_{i,j-1}<b_p$ and $z_{i,j-1}\parallel y_{ij}=f_{pq}$, we obtain $z_{i,j-1}\parallel a_p$ and $z_{i,j-1}\vee a_p=b_p$. Since $y_{i,j-1}\leq z_{i,j-1}\wedge a_p \leq z_{i,j-1}\wedge y_{ij}=y_{i,j-1}$, the old pair ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is up-perspective to the old pair ${{\langle a_p,b_p\rangle}}$. Hence, ${\textup{con}_{L}(y_{i,j-1},z_{i,j-1})}={\textup{con}_{L}(a_p,b_p)}$. Applying for ${{\mathcal L}}$, we obtain $$\begin{aligned} {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) &=\gamma({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) \overset{{\textup{(C2)}}{}}= \gamma({{\langle a_p,b_p\rangle}}) \overset{{\textup{(A4)}}{}}=p\cr &={{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle f_{pq},f_{pq}\rangle}}) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}}),\end{aligned}$$ which implies if $j$ is odd. Second, let $j$ be even. That is, we assume that ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is weakly down-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$. The dual of the previous argument shows that $y_{ij}=c_{pq}$ and $z_{ij}\in{\{d_{pq},e_{pq}\}}$. However, $z_{ij}=d_{pq}$ or $z_{ij}=e_{pq}$ does not make any difference, and ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) =q= {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle a_q,b_q\rangle}})= {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}})$ settles for $j$ even. Like in Case \[old2newcase\], it suffices to deal with an odd $j$, because an even $j$ could be treated dually.
Since ${{\langle y_{i,j-1},z_{i,j-1}\rangle}}$ is weakly up-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$ and $1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ is the only old element above $f_{pq}$, we obtain $y_{i,j-1}\in{\{c_{pq}, d_{pq},e_{pq}\}}$. We obtain as before. Taking also into account, we obtain that $y_{i,j-1}=c_{pq}$ and $z_{i,j-1}$ is one of $d_{pq}$ and $e_{pq}$. No matter which one, an argument dual to the one used in Case \[old2newcase\] yields $a_q=b_{q}\wedge y_{ij}$ and $b_q\leq z_{ij}$. Hence, ${{\langle a_q,b_q\rangle}}$ is weakly up-perspective into ${{\langle y_{ij},z_{ij}\rangle}}$, and we obtain $${\textup{con}_{L}(a_q,b_q)}\leq {\textup{con}_{L}(y_{ij},z_{ij})} \overset{{\textup{(C2)}}}{{\mathrel{\Longrightarrow}}} q\overset{{\textup{(A4)}}}= \gamma({{\langle a_q,b_q\rangle}} ){\mathrel{\leq_\nu}}\gamma({{\langle y_{ij},z_{ij}\rangle}}),$$ which implies $${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i,j-1},z_{i,j-1}\rangle}}) =q \leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{ij},z_{ij}\rangle}}),$$ and follows again. Now that we have proved , observe that for $j=1,\dots,n$ and transitivity yield ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle x_{i-1},x_{i}\rangle}}) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{i0},z_{i0}\rangle}}) {\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle y_{in},z_{in}\rangle}}) = {{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}})$. 
Hence, Lemma \[chainlemma\] implies ${{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_1,v_1\rangle}}){\mathrel{\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}}}{{\gamma^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}({{\langle u_2,v_2\rangle}})$. Therefore, ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , and holds for ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. Next, to prove that ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies , assume that $r,s\in H$ such that $r\parallel_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} s$ and ${\langle \delta (r),{\varepsilon}(r), \delta (s),{\varepsilon}(s)\rangle}={\langle a_r,b_r,a_s,b_s\rangle}$ is a spanning $N_6$-quadruple. We want to show that it is a strong $N_6$-quadruple of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$. The treatment for is almost the dual of that for , whence we give the details only for . Since the role of $r$ and $s$ is symmetric, it suffices to deal with the case $0<x\leq b_r$; we want to show $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s=1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Since $r\parallel_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} s$ implies $r{\mathrel{\parallel_\nu}}s$, $L$ is a ${\{0,1\}}$-sublattice of ${{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$, and holds for ${{\mathcal L}}$, we obtain $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s=1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ for old elements, that is, for all $x\in L$ such that $0<x\leq b_r$. Hence, we assume that $x$ is a new element, that is, $x\in N_6^{pq}$. 
Since $b_r$ is an old element and $x\leq b_r<b_r\vee_L b_s = 1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$, we obtain $x\notin{\{f_{pq},g_{pq}\}}$. Hence, $x\in {\{c_{pq},d_{pq},e_{pq}\}}$. If we had $r\neq q$, then $x\leq b_r$ and the description of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ would imply $a_q\leq b_r$, which would be a contradiction since holds in ${{\mathcal L}}$. Consequently, $r=q$. Thus, we have $0<x\leq b_q$, and we know from $s\parallel_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} r=q$ and $p\leq_{{{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} q$ that $s\notin{\{p, q, 0_H\}}$ and $s{\mathrel{\parallel_\nu}}q$. We also know $p\neq 0_H$ since $p{\mathrel{\parallel_\nu}}q$. If we had $a_s\in{\mathord\downarrow g_{pq}}$, then the description of $\leq_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$ would yield $a_s\leq b_p$, which would contradict . Hence, $a_s\notin{\mathord\downarrow g_{pq}}$, and gives $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s={{x^\ast}}\vee_L a_s$. Therefore, since the spanning $N_6$-quadruple ${\langle a_q,b_q,a_s,b_s\rangle}={\langle a_r,b_r,a_s,b_s\rangle}$ is strong in ${{\mathcal L}}$ by and $0<x<{{x^\ast}}\leq b_q$, we conclude ${{x^\ast}}\vee_L a_s=1_L$, which implies the desired $x\vee_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}} a_s=1_{{{L^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}}$. Consequently, ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ satisfies . This completes the proof of Lemma \[mainlemma\]. Approaching infinity ==================== For an ordered set $P={\langle P;\leq\rangle}$ and a subset $C$ of $P$, the restriction of the ordering of $P$ to $C$ will be denoted by ${{\leq\rceil_{C}}}$. If each element of $P$ has an upper bound in $C$, then $C$ is a *cofinal subset* of $P$. 
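The notion of a cofinal subset can be illustrated computationally. For a countable chain $D$ of tops, enumerated in arbitrary order, keeping only the strictly increasing "record" elements already yields a well-ordered (order type at most $\omega$) cofinal subset; for uncountable chains one needs Zorn's Lemma, as in the lemma below. The chain of rationals used here is a hypothetical example.

```python
from fractions import Fraction

# Greedy extraction of a well-ordered cofinal subset C from a countable
# chain D: scan an enumeration of D and keep each new "record" element.
# Every d in D is then bounded above by some element of C.

D = [Fraction(1, 2), Fraction(1, 3), Fraction(3, 4),
     Fraction(2, 3), Fraction(7, 8), Fraction(5, 6)]  # a chain in Q

C = []
for d in D:
    if not C or d > C[-1]:   # keep d only if it exceeds all previous records
        C.append(d)

print(C)  # → [Fraction(1, 2), Fraction(3, 4), Fraction(7, 8)]
```

Indeed, when a scanned $d$ is not kept, it is below the current last record, so it has an upper bound in $C$; and $C$ is strictly increasing, hence well-ordered.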
The following lemma belongs to folklore; having no reference at hand, we outline its easy proof. \[cofinalitylemma\] If an ordered set $P={\langle P;\leq\rangle}$ is the union of a chain of principal ideals, then it has a cofinal subset $C$ such that ${\langle C; {{\leq\rceil_{C}}}\rangle}$ is a well-ordered set. The top elements of these principal ideals form a cofinal chain $D$ in $P$. Let $\mathcal H(D)=\{X: X\subseteq D$ and ${\langle X;{{\leq\rceil_{X}}}\rangle}$ is a well-ordered set$\}$. For $X,Y\in \mathcal H(D)$, let $X\sqsubseteq Y$ mean that $X$ is an order ideal of ${\langle Y;{{\leq\rceil_{Y}}}\rangle}$. Since the union of a $\sqsubseteq$-chain in $\mathcal H(D)$ again belongs to $\mathcal H(D)$, Zorn’s Lemma yields a maximal member $C$ in ${\langle \mathcal H(D),\sqsubseteq\rangle}$. Clearly, $C$ is well-ordered. It is also cofinal: if some $d\in D$ had no upper bound in $C$, then, since $D$ is a chain, $d$ would be strictly above every element of $C$, so $C\cup{\{d\}}$ would properly $\sqsubseteq$-extend $C$, contradicting the maximality of $C$; since $D$ is cofinal in $P$, so is $C$. Now, we combine the vertical action of Lemma \[lupstp\] and the horizontal action of Lemma \[mainlemma\] into a single statement. Note that the order ideal $H$ of ${\langle {{H}^{\bullet}},{{\nu}^{\bullet}}\rangle}$ in the following lemma is necessarily a directed ordered set. \[combinlemma\] Assume that ${{\mathcal L}}={\langle L;\gamma, H,\nu,\delta ,{\varepsilon}\rangle}$ is an auxiliary structure such that ${\langle H,\nu\rangle}$ is an order ideal of a bounded ordered set ${\langle {{H}^{\bullet}},{{\nu}^{\bullet}}\rangle}$. $($In particular, $\nu$ is an ordering and $\nu={{{{\nu}^{\bullet}}\rceil_{H}}}$.$)$ Then there exists an auxiliary structure ${{{{\mathcal L}}}^{\bullet}}={\langle {{L}^{\bullet}};{{\gamma}^{\bullet}}, {{H}^{\bullet}},{{\nu}^{\bullet}},{{\delta}^{\bullet}} ,{{{\varepsilon}}^{\bullet}} \rangle}$ such that ${{\mathcal L}} $ is a substructure of ${{{{\mathcal L}}}^{\bullet}}$. Furthermore, if ${{\mathcal L}}$ and ${{H}^{\bullet}}$ are countable, then so is ${{{{\mathcal L}}}^{\bullet}}$. We can assume $H\neq {{H}^{\bullet}}$ since otherwise ${{{{\mathcal L}}}^{\bullet}}={{\mathcal L}}$ would do.
Consider the set $$D={\{{{\langle p,q\rangle}}: 0_{{{H}^{\bullet}}}<_{{{\nu}^{\bullet}}} p <_{{{\nu}^{\bullet}}} q<_{{{\nu}^{\bullet}}} 1_{{{H}^{\bullet}}}\text{ and } p\not<_\nu q \}}\text.\label{ezD}$$ Since every set can be well-ordered, we can also write $D={\{{{\langle p_\iota,q_\iota\rangle}}: \iota<\kappa\}}$, where $\kappa$ is an ordinal number. In ${\textup{Quord}({{H}^{\bullet}})}$, we define $$\nu_\lambda={\textup{quo}\bigl(\nu\cup ({\{0_{{{H}^{\bullet}}}\}}\times {{H}^{\bullet}}) \cup ({{H}^{\bullet}}\times {\{1_{{{H}^{\bullet}}}\}}) \cup {\{{{\langle p_\iota,q_\iota\rangle}}:\iota<\lambda\}}\bigr)} \label{sldiHGk}$$ for $\lambda\leq \kappa$. It is an ordering on ${{H}^{\bullet}}$, because $\nu_\lambda\subseteq {{\nu}^{\bullet}}$ implies that it is antisymmetric. Note that $\nu_\kappa={{\nu}^{\bullet}}$ and $0_{{{H}^{\bullet}}}=0_H$. For each $\lambda \leq \kappa$, we want to define an auxiliary structure ${{\mathcal L}}_\lambda={\langle L_\lambda;\gamma_\lambda, H_\lambda,\nu_\lambda,\delta_{\lambda},{\varepsilon}_{\lambda}\rangle}$ such that, for all $\lambda<\kappa$, the following properties are satisfied: $$\begin{aligned} &\text{${{\mathcal L}}_\mu$ is a substructure of ${{\mathcal L}}_\lambda$ for all $\mu\leq \lambda$;} \label{dirun} \\ &\text{$H_\lambda=H_0$, $0_{L_\lambda}=0_{L_0} $, and $1_{L_\lambda}=1_{L_0}$};\label{djrun} \\ &\begin{aligned} {\langle \delta_\lambda&(p),{\varepsilon}_\lambda(p),\delta_\lambda(q),{\varepsilon}_\lambda(q)\rangle}\text{ is a spanning }N_6\text{-quadruple (equivalently, } \cr &\text{a strong }N_6\text{-quadruple) for all }{{\langle p,q\rangle}}\in D\text{ such that }p\parallel_{\nu_\lambda} q\text. \end{aligned}\label{spnnning}\end{aligned}$$ Modulo the requirement that ${{\mathcal L}}_\lambda$ should be an auxiliary structure, the equivalence mentioned in is a consequence of . We define ${{\mathcal L}}_\lambda$ by (transfinite) induction as follows. We define ${{\mathcal L}}_0$ by a vertical extension.
Let $K={{H}^{\bullet}}\setminus(H\cup{\{1_{{{H}^{\bullet}}}\}} )$, let ${{\langle {{H^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}},{{\nu^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}\rangle}}={{\langle {{H}^{\bullet}},\nu_0\rangle}}$, and let ${{\mathcal L}}_0={{{{\mathcal L}}^{\scriptscriptstyle{\mathord{\pmb{\vartriangle}}}}}}$ be the auxiliary structure that we obtain from ${{\mathcal L}}$ according to Lemma \[lupstp\]. Note that, for all ${{\langle p,q\rangle}}\in D$, ${\{p,q\}}\not\subseteq H$ since $\nu={{{{\nu}^{\bullet}}\rceil_{H}}}$. Hence, by Lemma \[lupstp\], holds for $\lambda=0$. Assume that $\lambda$ is a successor ordinal, that is, $\lambda=\eta+1$, and ${{\mathcal L}}_\eta={\langle L_\eta;\gamma_\eta, H_\eta,\nu_\eta,\delta_{\eta},{\varepsilon}_{\eta}\rangle}$ is already defined and satisfies , , and . Since $p_\eta <_{{{\nu}^{\bullet}}} q_\eta$ and $\nu_\eta\subseteq {{\nu}^{\bullet}}$, we have either $p_\eta <_{\nu_\eta} q_\eta$, or $p_\eta \parallel_{\nu_\eta} q_\eta$. These two possibilities need separate treatments. First, if $p_\eta <_{\nu_\eta} q_\eta$, then $\nu_\lambda=\nu_\eta$ and we let ${{\mathcal L}}_\lambda={{\mathcal L}}_\eta$. Second, let $p_\eta \parallel_{\nu_\eta} q_\eta$. We define ${{\mathcal L}}_\lambda$ from ${{\mathcal L}}_\eta$ by a horizontal extension as follows. With the notation ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}=\nu_\lambda$, we obtain from that ${{\nu^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}={\textup{quo}({\nu_\eta\cup{\{{{\langle p_\eta,q_\eta\rangle}} \}} })}\in{\textup{Quord}({{H}^{\bullet}})}$. Furthermore, the validity of for ${{\mathcal L}}_\eta$ yields that ${{\langle p_\eta,q_\eta\rangle}}$ is a spanning $N_6$-quadruple of ${{\mathcal L}}_\eta$.
Thus, letting ${{\langle p_\eta,q_\eta\rangle}}$ and ${{\mathcal L}}_\eta$ play the role of ${{\langle p,q\rangle}}$ and ${{\mathcal L}}$ in and , respectively, we define ${{{\mathcal L}}_\lambda}$ as the auxiliary structure ${{{{\mathcal L}}^{\scriptscriptstyle{{{\mathord{ \blacktriangleright }}}}}}}$ taken from Lemma \[mainlemma\]. Since $L_\eta$ is a ${\{0,1\}}$-sublattice of $L_\lambda$, spanning $N_6$-quadruples of $L_\eta$ are also spanning in $L_\lambda$. Furthermore, it follows from $\nu_{\lambda}\supseteq \nu_\eta$ that $p\parallel_\lambda q{\mathrel{\Longrightarrow}}p\parallel_\eta q$. Hence, we conclude that is inherited by ${{{\mathcal L}}_\lambda}$ from ${{{\mathcal L}}_\eta}$. Assume that $\lambda$ is a limit ordinal. Let $$\begin{aligned} L_\lambda=\bigcup_{\eta<\lambda} L_\eta,{{\kern 5 pt}} \gamma_\lambda=\bigcup_{\eta<\lambda} \gamma_\eta,{{\kern 5 pt}} H_\lambda={{H}^{\bullet}},{{\kern 5 pt}} \nu_\lambda=\bigcup_{\eta<\lambda} \nu_\eta,{{\kern 5 pt}} \delta_{\lambda }=\bigcup_{\eta<\lambda} \delta_{\eta }, {{\kern 5 pt}} {\varepsilon}_{\lambda}=\bigcup_{\eta<\lambda} {\varepsilon}_{\eta }\text.\end{aligned}$$ We assert that ${{\mathcal L}}_\lambda={\langle L_\lambda;\gamma_\lambda, H_\lambda,\nu_\lambda,\delta_{\lambda},{\varepsilon}_{\lambda}\rangle}$ is an auxiliary structure satisfying , , and . Since all the unions defining ${{\mathcal L}}_\lambda$ are directed unions, $L_\lambda$ is a lattice, and ${\langle H_\lambda;\nu_\lambda\rangle}$ is a quasiordered set. Actually, it is an ordered set since $\nu_\lambda\subseteq {{\nu}^{\bullet}}$. By the same reason, $\gamma_\lambda$, $\delta_{\lambda}$, and ${\varepsilon}_{\lambda}$ are maps. It is straightforward to check that all of ,…, hold for ${{\mathcal L}}_\lambda$; we only do this for , that is, we verify and , and also for . Assume $\gamma_\lambda({{\langle u_1,v_1\rangle}})\leq_{\nu_\lambda} \gamma_\lambda({{\langle u_2,v_2\rangle}})$. 
Since the unions are directed, there exists an $\eta<\lambda$ such that $u_1,v_1,u_2,v_2\in L_\eta$, and we have $\gamma_\eta({{\langle u_1,v_1\rangle}})\leq_{\nu_\eta} \gamma_\eta({{\langle u_2,v_2\rangle}})$. Using that the auxiliary structure ${{\mathcal L}}_\eta$ satisfies , we obtain ${\textup{con}_{L_\eta}(u_1,v_1)} \leq {\textup{con}_{L_\eta}(u_2,v_2)}$, that is, ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{L_\eta}(u_2,v_2)}$. Using Lemma \[llajtorja\], we conclude ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{L_\lambda}(u_2,v_2)}$ in the usual way. This implies ${\textup{con}_{L_\lambda}(u_1,v_1)} \leq {\textup{con}_{L_\lambda}(u_2,v_2)}$. Therefore, ${{\mathcal L}}_\lambda$ satisfies . Similarly, if ${\textup{con}_{L_\lambda}(u_1,v_1)} \leq {\textup{con}_{L_\lambda}(u_2,v_2)}$, then Lemma \[llajtorja\] easily implies the existence of an $\eta<\lambda$ such that ${{\langle u_1,v_1\rangle}} \in {\textup{con}_{L_\eta}(u_2,v_2)}$ and ${\textup{con}_{L_\eta}(u_1,v_1)} \leq {\textup{con}_{L_\eta}(u_2,v_2)}$; for ${{\mathcal L}}_\eta$ yields $\gamma_\eta({{\langle u_1,v_1\rangle}})\leq_{\nu_\eta} \gamma_\eta({{\langle u_2,v_2\rangle}})$; and we conclude $\gamma_\lambda({{\langle u_1,v_1\rangle}})\leq_{\nu_\lambda} \gamma_\lambda({{\langle u_2,v_2\rangle}})$. Hence, ${{\mathcal L}}_\lambda$ satisfies and . Next, for the sake of contradiction, suppose that fails in ${{\mathcal L}}_\lambda$. This implies that ${{\langle 0_{L_\lambda},1_{L_\lambda}\rangle}}$ belongs to $\bigvee{\bigl\{{\textup{con}_{L_\lambda}(a_p,b_p)}: p\in H_\lambda\setminus{\{1_{{{H}^{\bullet}}}\}}\bigr\}}$, where the join is taken in the congruence lattice of $L_\lambda$. Since principal congruences are compact, there exists a finite subset $T\subseteq H_\lambda\setminus{\{1_{{{H}^{\bullet}}}\}}$ such that ${{\langle 0_{L_\lambda},1_{L_\lambda}\rangle}}$ belongs to $\bigvee{\{{\textup{con}_{L_\lambda}(a_p,b_p)}: p\in T\}}$.
Thus, there exists a finite chain $0_{L_\lambda}=c_0<c_1<\dots<c_k=1_{L_\lambda}$ such that, for $i=1,\dots, k$, ${{\langle c_{i-1},c_i\rangle}}\in \bigcup{\{{\textup{con}_{L_\lambda}(a_p,b_p)}: p\in T\}}$. Each of these memberships is witnessed by finitely many “witness” elements according to ; see Lemma \[llajtorja\]. Taking all these memberships into account, there are only finitely many witness elements altogether. Hence, there exists an $\eta<\lambda$ such that $L_\eta$ contains all these elements. Applying Lemma \[llajtorja\] in the converse direction, we obtain that ${{\langle 0_{L_\eta},1_{L_\eta}\rangle}}={{\langle 0_{L_\lambda},1_{L_\lambda}\rangle}}$ belongs to $\bigvee{\{{\textup{con}_{L_\eta}(a_p,b_p)}: p\in T\}}$, which is a contradiction since ${{\mathcal L}}_\eta$ satisfies . Consequently, ${{\mathcal L}}_\lambda$ is an auxiliary structure. Clearly, ${{\mathcal L}}_\lambda$ satisfies and since so do the ${{\mathcal L}}_\eta$ for $\eta<\lambda$. If ${{\langle p,q\rangle}}\in D$ and $p\parallel_\lambda q$, then $p\parallel_\eta q$ for some (actually, for every) $\eta<\lambda$. Hence, the satisfaction of for ${{\mathcal L}}_\lambda$ follows the same way as in the Successor Step since $L_\eta$ is a ${\{0,1\}}$-sublattice of $L_\lambda$. We have seen that ${{\mathcal L}}_\lambda$ is an auxiliary structure for all $\lambda\leq\kappa$. Letting $\lambda$ equal $\kappa$, we obtain the existence part of the lemma. The last sentence of the lemma follows from the construction and basic cardinal arithmetic. We are now in the position to complete the paper. In order to prove part of the theorem, assume that $P={\langle P;\nu_P\rangle}$ is an ordered set with zero and it is the union of a chain of principal ideals. By Lemma \[cofinalitylemma\], there exist an ordinal number $\kappa$ and a cofinal chain $C={\{c_\iota: \iota<\kappa\}}$ in $P$ such that $0_P=c_0$ and, for $\iota,\mu<\kappa$ we have $\iota< \mu \iff c_\iota< c_\mu$.
The cofinality of $C$ means that $P$ is the union of the principal ideals $H_\iota={\mathord\downarrow c_\iota}$, $\iota<\kappa$. We let $H_\kappa=\bigcup_{\iota<\kappa}H_ \iota$ and $\nu_\kappa=\bigcup_{\iota<\kappa}\nu_{H_\iota}$, where $\nu_{H_\iota}$ denotes the restriction ${{\nu_P\rceil_{H_\iota}}}$. Clearly, $P=H_\kappa$ and $\nu_P= \nu_\kappa$, that is, ${\langle P;\nu_P\rangle}={\langle H_\kappa;\nu_\kappa\rangle}$. Note that $H_\kappa$ is not a principal ideal in general since $P$ need not be bounded. For each $\lambda \leq \kappa$, we define an auxiliary structure ${{\mathcal L}}_\lambda={\langle L_\lambda;\gamma_\lambda, H_\lambda,\nu_\lambda,\delta_{\lambda},{\varepsilon}_{\lambda}\rangle}$ such that ${{\mathcal L}}_\mu$ is a substructure of ${{\mathcal L}}_\lambda$ for every $\mu\leq \lambda$; we do this by (transfinite) induction as follows. We start with the one-element lattice $L_0$ and $H_0={\{c_0\}}={\{0_P\}}$, and define ${{\mathcal L}}_0$ in the only possible way. Assume that $\lambda=\eta+1$ is a successor ordinal. We apply Lemma \[combinlemma\] to obtain ${{\mathcal L}}_\lambda$ from ${{\mathcal L}}_\eta$. This is possible since $H_\eta$ is an order ideal of $H_\lambda$. Note that Lemma \[combinlemma\] does not assert the uniqueness of ${{{{\mathcal L}}}^{\bullet}}$, and, in principle, it could be a problem later that ${{\mathcal L}}_\lambda$ is not uniquely defined. However, this is not a real problem since we can easily solve it as follows. Let $\tau_0$ be the smallest *infinite* ordinal number such that ${|P|}\leq |\tau_0|$, let $\tau=2^{\tau_0}$, and let $\pi$ be the smallest ordinal with $|P|=|\pi|$. Note that $|\tau|$ is at least the power of the continuum but $|\pi|$ can be finite. Let $P={\{h_\iota: \iota<\pi\}}$ such that $h_\iota\neq h_\eta$ for $\iota<\eta<\pi$. Also, take a set $T={\{t_\iota: \iota<\tau\}}$ such that $t_\iota\neq t_\eta$ for $\iota<\eta<\tau$.
The point is that, after selecting the well-ordered cofinal chain $C$ above, we can use the well-ordered index sets ${\{\iota: \iota<\pi\}}$ and ${\{\iota: \iota<\tau\}}$ to make every part of our compound construction unique. Namely, when we well-order $D$, defined in , we use the lexicographic ordering of the index set ${\{\iota: \iota<\pi\}}\times {\{\iota: \iota<\pi\}}$. When we define lattices, their base sets will be initial subsets of $T$; a subset $X$ of $T$ is *initial* if, for all $\mu<\iota<\tau$, $\,t_\iota\in X$ implies $t_\mu\in X$. If we have to add new lattice elements, like a new top or $c_{pq}$, etc., then we always add the first one of $T$ that has not been used previously. Cardinal arithmetic shows that $T$ is never exhausted. This way, we have made the definition of ${{\mathcal L}}_\lambda$ unique. Clearly, ${{\mathcal L}}_\iota$ is a substructure of ${{\mathcal L}}_\lambda$ for $\iota< \lambda$; either by Lemma \[combinlemma\] if $\iota=\eta$, or by the induction hypothesis and transitivity if $\iota<\eta$. If $\lambda$ is a limit ordinal, then first we form the union $${{\mathcal L}}'_\lambda={\langle L'_\lambda;\gamma'_\lambda,H'_\lambda,\nu'_\lambda,\delta'_\lambda, {\varepsilon}'_\lambda \rangle}= {\langle \bigcup_{\eta<\lambda}L_\eta;\bigcup_{\eta<\lambda}\gamma_\eta, \bigcup_{\eta<\lambda}H_\eta,\bigcup_{\eta<\lambda}\nu_\eta,\bigcup_{\eta<\lambda}\delta_\eta, \bigcup_{\eta<\lambda}{\varepsilon}_\eta \rangle}\text.$$ Note that $\nu'_{\lambda}={{\nu_P\rceil_{H'_\lambda}}}$. The same way as in the proof of Lemma \[combinlemma\], we obtain that ${{\mathcal L}}'_\lambda$ is an auxiliary structure; the only difference is that now trivially holds in ${{\mathcal L}}'_\lambda$ since $H'_\lambda$ does not have a largest element. To see this, suppose for contradiction that $u$ is the largest element of $H'_\lambda$. Then $u\in H_\eta$ for some $\eta<\lambda$. Since $\lambda$ is a limit ordinal, $\eta+1<\lambda$.
Hence $c_{\eta+1}\leq u\leq c_\eta$, which contradicts $c_\eta<c_{\eta+1}$. Clearly, ${\langle H'_\lambda;\nu'_\lambda\rangle}$ is an order ideal in ${\langle H_\lambda;\nu_\lambda\rangle}$. Thus, applying Lemma \[combinlemma\] to this situation, we obtain an auxiliary structure ${{{{\mathcal L}}}^{\bullet}}$, and we let ${{\mathcal L}}_\lambda= {{{{\mathcal L}}}^{\bullet}}$. Obviously, for all $\eta<\lambda$, ${{\mathcal L}}_\eta$ is a substructure of ${{\mathcal L}}_\lambda$. Now, we have constructed an auxiliary structure ${{\mathcal L}}_\lambda$ for each $\lambda\leq \kappa$. In particular, ${{\mathcal L}}_\kappa={\langle L_\kappa;\gamma_\kappa, H_\kappa,\nu_\kappa,\delta_{\kappa},{\varepsilon}_{\kappa}\rangle} = {\langle L_\kappa;\gamma_\kappa, P,\nu_P,\delta_{\kappa},{\varepsilon}_{\kappa}\rangle} $ is an auxiliary structure. Thus, by Lemma \[impclM\], ${\textup{Princ}(L_\kappa)}\cong{\langle P;\nu_P\rangle}$, which proves part of the theorem. In order to prove part , assume that $L$ is a countable lattice. Obviously, we have $|{\textup{Princ}(L)}|\leq|{{\textup{Pairs}^{\leq}(L)}}|\leq\aleph_0$, and we already mentioned that ${\textup{Princ}(L)}$ is always a directed ordered set with 0, no matter what the size $|L|$ of $L$ is. Conversely, let $P$ be a directed ordered set with 0 such that $|P|\leq\aleph_0$. Then there is an ordinal $\kappa\leq \omega$ (where $\omega$ denotes the least infinite ordinal) such that $P={\{p_i: i<\kappa\}}$. Note that ${\{i:i<\kappa\}}$ is a subset of the set of nonnegative integer numbers. For $i,j<\kappa$, there exists a smallest $k$ such that $p_i\leq p_k$ and $p_j\leq p_k$; we let $p_i\sqcup p_j=p_k$. This defines a binary operation on $P$; it need not be a semilattice operation. Let $q_0=p_0$. For $0<i<\kappa$, let $q_i=q_{i-1}\sqcup p_i$. A trivial induction shows that $q_i$ is an upper bound of ${\{p_0,p_1,\dots,p_i\}}$, for all $i<\kappa$, and $q_{i-1}\leq_P q_i$ for all $0<i<\kappa$. 
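The construction of the $q_i$ above can be illustrated on a small finite directed ordered set. The sketch below takes $P$ to be the divisors of $60$ under divisibility (so $0_P=1$), an illustrative choice not taken from the paper, enumerated as $p_0,p_1,\dots$; the operation $p_i\sqcup p_j$ picks the $p_k$ with the smallest index $k$ such that $p_i\leq p_k$ and $p_j\leq p_k$.

```python
# Chain construction q_0 = p_0, q_i = q_{i-1} "join" p_i in a finite
# directed ordered set: divisors of 60 ordered by divisibility.

P = sorted(n for n in range(1, 61) if 60 % n == 0)  # p_0 = 1 = 0_P

def leq(a, b):
    return b % a == 0                               # divisibility ordering

def sqcup(a, b):
    # smallest-index common upper bound of {a, b} in the enumeration of P
    return next(c for c in P if leq(a, c) and leq(b, c))

q = [P[0]]
for p in P[1:]:
    q.append(sqcup(q[-1], p))

print(q)  # an increasing chain; each q_i bounds p_0, ..., p_i from above
```

By construction, each $q_i$ is an upper bound of $\{p_0,\dots,p_i\}$ and the $q_i$ form a chain, so $P$ is the union of the principal ideals ${\mathord\downarrow q_i}$; this is exactly the situation to which part of the theorem applies.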
Hence, the principal ideals ${\mathord\downarrow q_i}$ form a chain ${\{{\mathord\downarrow q_i}: i<\kappa\}}$, and $P$ is the union of these principal ideals. Therefore, part of the Theorem yields a lattice $L$ such that $P$ is isomorphic to ${\textup{Princ}(L)}$. Since the ${\mathord\downarrow q_i}$ are countable and there are countably many of them, and since all the lemmas we used in the proof of part of the theorem preserve the property “countable”, $L$ is countable. [99]{} Bogart, K. P., Freese, R., Kung, J. P. S. (editors): The Dilworth Theorems. Selected papers of Robert P. Dilworth. Birkhäuser Boston, Inc., Boston, MA, 1990. xxvi+465 pp. ISBN: 0-8176-3434-7 Czédli, G.: (1+1+2)-generated equivalence lattices. J. Algebra [**221**]{}, 439–462 (1999) Czédli, G.: Representing homomorphisms of distributive lattices as restrictions of congruences of rectangular lattices. Algebra Universalis [**67**]{}, 313–345 (2012) Dilworth, R.P.: The structure of relatively complemented lattices. Ann. of Math. (2) [**51**]{}, 348–359 (1950) Grätzer, G.: General Lattice Theory, 2nd edn. Birkhäuser, Basel (1998) Grätzer, G.: The Congruences of a Finite Lattice. A Proof-by-picture Approach. Birkhäuser, Boston (2006) Grätzer, G.: The order of principal congruences of a bounded lattice. [<http://arxiv.org/pdf/1302.4163>]{}; Algebra Universalis, to appear. Grätzer, G.: Lattice Theory: Foundation. Birkhäuser Verlag, Basel (2011) Grätzer, G., Lakser, H., Schmidt, E.T.: Congruence lattices of finite semimodular lattices. Canad. Math. Bull. [**41**]{}, 290–297 (1998) Huhn, A. P.: On the representation of distributive algebraic lattices. III. Acta Sci. Math. (Szeged) [**53**]{}, 11–18 (1989) Růžička, P.: Free trees and the optimal bound in Wehrung’s theorem. Fund. Math. [**198**]{}, 217–228 (2008) Schmidt, E.T.: The ideal lattice of a distributive lattice with 0 is the congruence lattice of a lattice. Acta Sci. Math.
(Szeged) [**43**]{}, 153–168 (1981) Wehrung, F.: A solution to Dilworth’s congruence lattice problem. Adv. Math. [**216**]{}, 610–625 (2007) [^1]: This research was supported by the European Union and co-funded by the European Social Fund under the project “Telemedicine-focused research activities on the field of Mathematics, Informatics and Medical sciences” of project number “TÁMOP-4.2.2.A-11/1/KONV-2012-0073”, and by NFSR of Hungary (OTKA), grant number K83219
--- abstract: 'We study the regime of anticipated synchronization in unidirectionally coupled model neurons subject to a common external aperiodic forcing that makes their behavior unpredictable. We show numerically and by implementation in analog hardware electronic circuits that, under appropriate coupling conditions, the pulses fired by the slave neuron anticipate (i.e. predict) the pulses fired by the master neuron. This anticipated synchronization occurs even when the common external forcing is white noise.' author: - 'M. Ciszak$^1$, O. Calvo$^1$, C. Masoller$^{1,3}$, Claudio R. Mirasso$^1$ and Raúl Toral$^{1,2}$' title: Anticipating the response of excitable systems driven by random forcing --- Synchronization of nonlinear systems is a fascinating subject that has been extensively studied on a large variety of physical and biological systems[@PRK01]. While synchronization of oscillators goes back to the work by Huygens, the last decade has witnessed an increased interest in the topic of synchronization of chaotic systems [@B02]. Recently, Voss [@VOSS] has discovered a new scheme of synchronization, called “anticipated synchronization”. Voss has shown that by using appropriate delay lines it is possible to synchronize two unidirectionally coupled systems in such a way that the [*slave*]{} system, ${\bf y}(t)$, predicts the behavior of the [*master*]{} system, ${\bf x}(t)$. Two coupling schemes were considered: [*complete replacement*]{}, $$\label{scheme1} \begin{array}{rcl} \dot {\bf x}(t)& =& -\alpha {\bf x}(t)+{\bf f}({\bf x}(t-\tau))\\ \dot {\bf y}(t) & = & -\alpha {\bf y}(t)+{\bf f}({\bf x}(t)) \end{array}$$ and [*delay coupling*]{}, $$\label{scheme2} \begin{array}{rcl} \dot {\bf x}(t)& =& {\bf f}({\bf x}(t))\\ \dot {\bf y}(t) & = & {\bf f}({\bf y}(t))+{\bf K}[{\bf x}(t)-{\bf y}(t-\tau)]. 
\end{array}$$ ${\bf f}({\bf x})$ is a function which defines the [*autonomous*]{} dynamical system under consideration, ${\bf K}$ is the coupling strength and $\tau$ is a delay time. It is easy to see that in both schemes the manifold ${\bf y}(t)={\bf x}(t+\tau)$ is a solution of the equations, and Voss has shown that in both schemes this solution can be structurally stable. This is more remarkable when the dynamics of the master system $\bf x$ is “intrinsically unpredictable”, as is the case for a chaotic system. While in the scheme of complete replacement the anticipation time $\tau$ can be arbitrarily large, the delay coupling scheme requires some constraints on $\tau$ and $\bf K$ for the synchronization solution to be stable [@VOSS]. Despite this fact, the delay coupling scheme is more interesting since the anticipation time $\tau$ can be varied without altering the dynamics of the master system $\bf x$. In this Letter we study numerically and experimentally the regime of anticipated synchronization in excitable [*non-autonomous*]{} systems. In our case the intrinsic unpredictability of the behavior of the dynamical system $\bf x$ does not arise from a chaotic dynamics, but rather from the existence of an external forcing with some element of randomness. We consider the coupled systems $$\begin{array}{rcl} \dot {\bf x}(t)& =& {\bf f}({\bf x}(t))+{\bf I}(t)\\ \dot {\bf y}(t) & = & {\bf f}({\bf y}(t))+{\bf I}(t)+{\bf K}[{\bf x}(t)-{\bf y}(t-\tau)] \label{eq3}, \end{array}$$ where ${\bf I}(t)$ represents a common external forcing. Notice that ${\bf y}(t)={\bf x}(t+\tau)$ is no longer an exact solution of the equations \[except in the particular case of a periodic forcing ${\bf I}(t+\tau)={\bf I}(t)$\]. We show that under appropriate coupling conditions there can be a very good correlation between ${\bf y}(t)$ and ${\bf x}(t+\tau)$ which, in practice, allows the prediction of the future behavior of ${\bf x}(t)$ with a high degree of accuracy.
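As a quick numerical illustration of the complete-replacement scheme (\[scheme1\]), the sketch below Euler-integrates the equations with an assumed scalar nonlinearity $f(u)=\sin(2u)$ and illustrative values of $\alpha$, $\tau$ and the step size (none of these come from the Letter). It only demonstrates that the slave converges onto the anticipating manifold.

```python
import numpy as np

# Euler integration of the complete-replacement scheme:
#   x'(t) = -alpha*x(t) + f(x(t - tau))
#   y'(t) = -alpha*y(t) + f(x(t))
# f, alpha, tau and the initial data are illustrative assumptions.
alpha, dt, tau = 1.0, 0.01, 1.0
d = int(tau / dt)                 # delay expressed in integration steps
n = 6000
f = lambda u: np.sin(2.0 * u)

x = np.zeros(n)
y = np.zeros(n)
x[:d + 1] = 0.5                   # constant initial history for the master
y[0] = -0.3                       # slave starts far from the manifold
for t in range(d, n - 1):
    x[t + 1] = x[t] + dt * (-alpha * x[t] + f(x[t - d]))
for t in range(n - 1):
    y[t + 1] = y[t] + dt * (-alpha * y[t] + f(x[t]))

# After a transient, y(t) reproduces x(t + tau): anticipation by tau.
err = np.abs(y[4000:n - d] - x[4000 + d:n]).max()
```

For $\alpha>0$ the deviation from the manifold contracts by a factor $(1-\alpha\,dt)$ per step, so `err` is tiny even though the slave started far from ${\bf y}(t)={\bf x}(t+\tau)$.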
Of course, this result is more remarkable when the external forcing is a random signal. Specifically, we have considered models of sensory neurons. Sensory neurons transform external stimuli such as pressure, temperature, electric pulses, etc., into trains of action potentials, usually referred to as ’spikes’ or ’firings’. Their behavior is typical of excitable systems: if the forcing is above a certain threshold, the neuron fires a pulse, and after the firing, the recovery process produces an absolute refractory time during which a second firing cannot occur. In general, sensory neurons work in a noisy environment. As a consequence, the time intervals between spikes contain a significant random component, and random spikes often occur even in the absence of stimuli. The topics of synchronous oscillations and noise have received much attention (see, e.g., [@LBM91]), since it has been suggested that synchronous firing activity of sensory neurons might be a part of higher brain functions and a method for integrating distributed information into a global picture [@GM98]. Here we find that the interplay of coupling, delayed feedback, and common noise can lead to anticipated synchronization. We illustrate this effect in the well known FitzHugh-Nagumo and Hodgkin-Huxley neuron models. By coupling two such systems in a unidirectional configuration as in the scheme (\[eq3\]), we find that when both systems are subjected to the same external random forcing, the slave system fires the same train of spikes as the master system, but a certain time earlier, i.e., the slave predicts the response of the master. First we show results based on the FitzHugh-Nagumo model. It consists of two variables ${\bf x}=(x_1,x_2)$. The fast variable, $x_1$, is associated with the activator, and the slow recovery variable, $x_2$, is associated with the inhibitor.
The equations for the master $(x_1, x_2)$ and the slave ${\bf y}=(y_1, y_2)$ systems, under unidirectional coupling are, respectively (see the schematic diagram shown in Fig. \[fig1\]): $$\label{eq1} \begin{array}{rcl} \dot{x_1} & = & -x_1(x_1-a)(x_1-1)-x_2 + I(t)\\ \dot{x_2}& = & \epsilon (x_1-b x_2) \end{array}$$ and $$\label{eq2} \begin{array}{rcl} \dot{y_1} & = & -y_1(y_1-a)(y_1-1)-y_2 + I(t)+ \\ & & + K[x_1(t)-y_1(t-\tau)]\\ \dot{y_2}& = & \epsilon (y_1-b y_2) \end{array}$$ where $a$, $b$, and $\epsilon$ are constants, $K$ is the coupling strength and $\tau$ is a delay time (associated with an inhibitory feedback loop in the slave neuron). Note that only the fast variables of the two systems are coupled. When the common external forcing, $I(t)$, is constant in time, the anticipated synchronization manifold: $x_1(t+\tau)=y_1(t)$, $x_2(t+\tau)=y_2(t)$ is an exact solution of Eqs. (\[eq1\]) and (\[eq2\]). If the external forcing is above threshold and for appropriate values of $K$ and $\tau$, the master system fires pulses periodically and the coupling induces a constant time shift $\tau$ between master and slave spikes. We have considered different types of random external forcing $I(t)$. The first one corresponds to a random process whose amplitude remains constant for a time $T$ and then it switches to a new random value chosen uniformly in \[$I_{0}-D$,$I_{0}+D$\], where $D$ is the noise intensity. We chose $I_{0}$ very close to (but below) the firing threshold of the excitable system. It would appear at first thought that with this type of external forcing the behavior of the master system can be easily predicted. However, there are two main factors that make the system response unpredictable: if the effect of the perturbation is not strong enough the system does not fire a pulse; besides, the system has a refractory time (after firing a pulse) during which another firing is not possible.
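A minimal numerical sketch of Eqs. (\[eq1\])-(\[eq2\]) with the piecewise-constant random forcing described above is given below. All parameter values ($a$, $b$, $\epsilon$, $K$, $\tau$, $I_0$, $D$, $T$) are illustrative assumptions, not the values used in the Letter, and the simple Euler scheme is only meant to show the structure of the coupled integration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Master-slave FitzHugh-Nagumo pair driven by a common piecewise-constant
# random forcing. All numerical values below are assumed, for illustration.
a, b, eps = 0.139, 2.54, 0.01
K, tau, dt = 0.15, 2.0, 0.01
d = int(tau / dt)                  # feedback delay in steps
n = 100_000

I0, D, T = 0.03, 0.05, 5.0         # forcing: random steps of duration T around I0
steps = int(T / dt)
levels = I0 + D * rng.uniform(-1.0, 1.0, n // steps + 1)
I = np.repeat(levels, steps)[:n]   # same realization drives master and slave

x = np.zeros((n, 2))               # master (x1, x2)
y = np.zeros((n, 2))               # slave  (y1, y2)
cubic = lambda u: -u * (u - a) * (u - 1.0)
for t in range(d, n - 1):
    x[t + 1, 0] = x[t, 0] + dt * (cubic(x[t, 0]) - x[t, 1] + I[t])
    x[t + 1, 1] = x[t, 1] + dt * eps * (x[t, 0] - b * x[t, 1])
    y[t + 1, 0] = y[t, 0] + dt * (cubic(y[t, 0]) - y[t, 1] + I[t]
                                  + K * (x[t, 0] - y[t - d, 0]))
    y[t + 1, 1] = y[t, 1] + dt * eps * (y[t, 0] - b * y[t, 1])
```

Only the fast variables are coupled, through the term `K * (x[t, 0] - y[t - d, 0])`, mirroring the delayed inhibitory feedback loop in the slave neuron.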
Figure \[fig2\] shows that anticipation occurs with this type of random external forcing for an appropriate value of the coupling strength $K$: after an initial transient time the two systems synchronize such that the slave system anticipates the firings of the master system by a time interval $\tau$. The firings in the master and the slave systems start at about the same time, and the anticipation phenomenon grows during the rising of the pulse. When the master system noisily evolves near the stable point, the anticipation vanishes. In other words, anticipation is a local process, during firings. The same qualitative results are found with other types of external forcing such as colored or even white noise. Figures \[fig3\](a-b) display the spikes of the master and slave systems when $I(t)$ is Gaussian white noise. Sometimes the slave system makes an error in anticipating the master firings. While the slave system always fires a pulse when the master system fires a pulse, it also might fire an “extra” pulse, which has no corresponding pulse in the train of pulses fired by the master. Notice that in Fig. \[fig3\](a) an error at about $t=1900$ occurs. Not surprisingly, we find that the longer the anticipation time $\tau$, the larger the number of errors. However, for a given anticipation time, the number of errors can be reduced considerably if a “cascade” of an adequate number of slave neurons is considered. A detailed study of the number of errors and its dependence on the type of external forcing will be reported elsewhere. Next we show simulations based on a more realistic model, namely the model of electro-receptors proposed by Braun et al. [@B98]. This model is a modification of the Hodgkin-Huxley neuron model: $C_M \dot x = -i_{Na}-i_{K}-i_{sd}-i_{sr}-i_l+I(t)$, where $x$ is the potential voltage across the membrane and $C_M$ is the capacitance; $i_{Na}$ and $i_K$ are the fast sodium and potassium currents, $i_{sd}$ and $i_{sr}$ are additional slow currents, $i_l$ is a passive leak current.
For details and functional dependence of the currents on the voltage $x$ and other factors (such as temperature) see [@B98]. We extend the model to account for two unidirectionally coupled neurons, with a delayed feedback loop in the slave neuron, and subject to a common external forcing $I(t)$, in the same way as in the FitzHugh-Nagumo model, i.e., the equations for the master, $x$, and for the slave, $y$, neurons are: $$\label{hh} \begin{array}{rcl} C_M \dot x & = & -i_{Na}^x-i_{K}^x -i_{sd}^x-i_{sr}^x-i_l^x+I(t) \\ C_M \dot y & = & -i_{Na}^y-i_{K}^y -i_{sd}^y-i_{sr}^y-i_l^y+I(t) \nonumber \\ & + & K[x(t)-y(t-\tau)]~~. \end{array}$$ Figures \[fig3\](c-d) display the results when the common external forcing $I(t)$ is Gaussian white noise. We chose parameters such that in the absence of forcing there are no spikes (subthreshold, noise-activated firing regime). The behavior observed is qualitatively the same as in the FitzHugh-Nagumo model (the slave neuron anticipates the firings of the master neuron), which indicates that the anticipation phenomenon is general and model independent. Remarkably, in this model the anticipation time can be larger than the pulse duration. It is worth mentioning that anticipated synchronization is also observed in this model for parameters such that there is spontaneous (regular or irregular) spike activity (suprathreshold firing regime). To assess the robustness of the anticipated synchronization observed in the numerical simulations, we have implemented the FitzHugh-Nagumo model in analog hardware and constructed two coupled electronic neurons (a simplified version of the circuit is shown in Fig. \[fig4\]). The electronic neurons were built using operational amplifiers and the cubic non-linearity described by $x (x-a) (x-1)$ was implemented using analog multipliers (AD633) in a circuit not shown for simplicity. The resistor $R_C$ controls the strength of the unidirectional coupling between the master and the slave neurons.
The resistor $R_D$ ($R_D=R_C$ in our case) controls the strength of the delayed feedback into the slave neuron. The coupling and the delayed feedback have opposite signs: while the master signal was obtained at point B of Fig. \[fig4\], where the voltage is $-V_m$, the slave signal that goes into the delay line was obtained at point C, where the voltage is $+V_s$. The different signs are due to the inverters that are located between points A and B and between points C and D. The threshold on both neurons was controlled by a potentiometer represented by its equivalent circuit: offset and $R_0$. The analog delay line for the delayed feedback in the slave neuron was built using bucket brigade circuits (MN3004). A function generator with white noise output capabilities (HP33120A) was used to excite both electronic neurons. The signals were acquired using LabView and a National Instruments DAQ 6025E data acquisition board. Similar electronic neurons have been implemented in [@abarbanel], where it was shown that their behavior is very similar to that of biological neurons: when interfaced to biological neurons, hybrid circuits, with the electronic neurons taking the place of missing or damaged biological neurons, could function normally. Our coupled electronic neurons behave very similarly to the numerical simulations. For an appropriate value of the coupling resistance $R_C$, we observe that, after a transient, the master and slave electronic neurons synchronize in such a way that the slave neuron anticipates the firings of the master neuron by a time interval approximately equal to the delay time $\tau$ of the feedback mechanism. Figure \[fig5\] (a) shows a typical spike train, and Fig. \[fig5\](b) displays in detail a single spike [@movies]. We observe that, as in the numerical simulations, the firings of the master and the slave neurons start at about the same time: anticipation begins during the rising of the peak and it vanishes when the neurons are in the unexcited state.
Without coupling and feedback ($R_C=R_D=0$) the neurons fire pulses which are, in general, desynchronized (due to the small mismatch between the circuits). To summarize, we have studied the regime of anticipated synchronization in coupled systems exhibiting neuronal-type excitable behavior, when they are driven by common external aperiodic forcing. We have shown that under appropriate conditions, the slave system can anticipate the random spikes of the master system. This is despite the fact that the anticipated synchronization manifold is not a solution of the equations. We have simulated numerically the coupled neurons with the FitzHugh-Nagumo model and a modified Hodgkin-Huxley model, and we have considered different types of random forcing. The FitzHugh-Nagumo model was also implemented in analog hardware, showing that the anticipation phenomenon is very general and robust. Our results show that non-linearity, noise and delayed feedback might conspire to produce new, interesting and unexpected phenomena, and we hope that our findings will stimulate the search for anticipated synchronization in biological systems. The work is supported by MCyT (Spain) and FEDER, projects BFM2001-0341-C02-01 and BMF2000-1108. C. Masoller acknowledges partial support from the UIB. A. Pikovsky, M. Rosenblum and J. Kurths, [*Synchronization: a universal concept in nonlinear sciences*]{} (Cambridge University Press, New York, 2001). S. Boccaletti, J. Kurths, G. Osipov, D.L. Valladares and C.S. Zhou, Phys. Rep. [**366**]{}, 1 (2002). H. U. Voss, Phys. Rev. E [**61**]{}, 5115 (2000); Phys. Rev. E [**64**]{}, 039904(E) (2001); Phys. Rev. Lett. [**87**]{}, 014102 (2001). A. Longtin, A. Bulsara, and F. Moss, Phys. Rev. Lett. [**67**]{}, 656 (1991); J. J. Collins, Carson C. Chow, and Thomas T. Imhoff, Phys. Rev. E [**52**]{}, R3321 (1995); Nature [**376**]{}, 236 (1995); D. R. Chialvo, A. Longtin, and J. Muller-Gerking, Phys. Rev. E [**55**]{}, 1798 (1997). L. Glass and M. C.
Mackey, [*From clocks to chaos: the rhythms of life*]{} (Princeton University Press, Princeton, NJ, 1988). H. A. Braun, M. Huber, M. Dewald, K. Schäfer and K. Voigt, Int. J. of Bifurcation Chaos Appl. Sci. Eng. [**8**]{}, 881 (1998); U. Feudel, A. Neuman, X. Pei, W. Wojtenek, H. Braun, M. Huber and F. Moss, Chaos [**10**]{}, 231 (2000); W. Braun, B. Eckhardt, H. A. Braun and M. Huber, Phys. Rev. E [**62**]{}, 6352 (2000). R. C. Elson, A. I. Selverston, R. Huerta, N. F. Rulkov, M. I. Rabinovich and H. D. I. Abarbanel, Phys. Rev. Lett. [**81**]{}, 5692 (1998). Time traces of experimental spike trains as well as a detailed description of the electronic circuit can be found in the web page: http://www.imedea.uib.es/$\sim$claudio.
--- abstract: 'The GRAAL experiment could constrain the variations of the speed of light. The anisotropy of the speed of light may imply that the spacetime is anisotropic. Finsler geometry is a reasonable candidate to deal with the spacetime anisotropy. In this paper, the Lorentz invariance violation (LIV) of the photon sector is investigated in the locally Minkowski spacetime. The locally Minkowski spacetime is a class of flat Finsler spacetime and refers to a metric with an anisotropic departure from the Minkowski one. The LIV matrices used to fit the experimental data are represented in terms of these metric deviations. The GRAAL experiment constrains the spacetime anisotropy to be less than $10^{-14}$. In addition, we find that the simplest Finslerian photon sector could be viewed as a geometric representation of the photon sector in the minimal standard model extension (SME).' author: - 'Zhe Chang$^{1,2}$[^1]' - 'Sai Wang$^{1}$[^2][^3] [^4]' title: Constraints on spacetime anisotropy and Lorentz violation from the GRAAL experiment --- Introduction ============ In the standard model, the speed of light $c$ is an isotropic constant in the vacuum. This principle is one cornerstone of Einstein’s special relativity (SR). Various observations have been proposed to test its validity, see, for example, Ref. [@Data; @tables; @for; @Lorentz; @and; @CPT; @violation] (and references therein). The pioneering and most famous one is the so-called Michelson-Morley experiment [@Michelson-Morley1887]. However, the Michelson-Morley experiment and its descendants involve the two-way propagation of light. Thus, they refer to the average speed of light [@REVIEW; @OF; @ONE-WAY; @AND; @TWO-WAY; @EXPERIMENTS]. Recently, a one-way experiment was performed by the GRAAL facility in the European Synchrotron Radiation Facility (ESRF) [@MPLA2005; @NuovoCimentoB2007; @Gurzadyan2010; @PRL2010]. It involves the Compton scattering of high-energy electrons on photons.
This is the first kinematical non-threshold approach to test the Lorentz symmetry in collision physics [@PRL2010]. The one-way experiment is sensitive to the first-order variation of the speed of light [@REVIEW; @OF; @ONE-WAY; @AND; @TWO-WAY; @EXPERIMENTS]. In the GRAAL setup, the high-energy electrons collide head-on with the monochromatic laser photons. The observable quantity is the minimum energy of scattered electrons. The minimum energy is called the Compton-edge (CE) energy. It is related to the CE position of the scattered electrons on the detector. Thus, the GRAAL experiment is also called the Compton-edge experiment. The CE energy of the scattered electrons would not change when the speed of light is isotropic, as postulated in the SR. Otherwise, it would vary with the spatial directions due to the Earth’s spin (precisely, the sidereal rotation). Therefore, the azimuthal variation of the CE energy could reveal the anisotropy of the speed of light. The anisotropy of the speed of light implies the breaking of the Lorentz invariance. The Lorentz invariance violation (LIV) may stem from the quantum-gravity (QG) effects at a very high energy scale. String theory is the most promising fundamental theory to study the QG. It could induce observable low-energy new physics, such as the spontaneous Lorentz invariance violation (sLIV) [@SLIV]. There is an effective field theory to account for the sLIV effects, namely the standard model extension (SME) [@SME01; @SME02]. In the SME, possible terms concerning the LIV are added into the Lagrangian of the standard model (SM) by hand. Recently, the minimal SME was found to be related to Finsler geometry [@SME; @Bogoslovsky01; @SME; @Bogoslovsky02; @Kostelecky_Finsler]. In addition, there exist other LIV models related to Finsler geometry (see review in Ref. [@SME; @and; @Finsler]). Thus, the anisotropic speed of light may imply an anisotropic Finsler structure of the spacetime.
The LIV could emerge in the Finsler spacetime naturally. Irrespective of the string-motivated sLIV, Bogoslovsky [*et al.*]{} [@A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @I; @A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @II; @A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @Appendix] proposed a flat Finsler spacetime, the metric of which is invariant under the eight-parametric inhomogeneous transformation group of the event coordinates. This eight-parametric group is known as the $DISIM_{b}(2)$ group in the general very special relativity (VSR) [@VSR; @VSR; @in; @Finsler]. It refers to the partially broken spatial isotropy. There is also a family of flat Finsler event spacetimes with entirely broken spatial isotropy [@Geometrical; @Models; @of; @the; @Locally; @Anisotropic; @Space-Time]. This refers to the seven-parametric inhomogeneous transformation group of the relativistic symmetry. In the two families of the flat Finsler event spacetimes, the LIV occurs while the relativistic symmetry (i.e., the principle of special relativity) is preserved. This is different from the SME, where the LIV is accompanied by the violation of the relativistic symmetry. The above discussions also apply to the LIV model of the electromagnetic field, as will be discussed below. It cannot be excluded that not only the Lorentz symmetry, but also, for example, the $DISIM_{b}(2)$ symmetry, would be broken either partially or entirely at the Planck scale. Thus, it is interesting to investigate this possibility in a generic flat Finsler spacetime. Finsler geometry [@Book; @by; @Rund; @Book; @by; @Bao; @Book; @by; @Shen] gets rid of the quadratic restriction on the spacetime line element. The propagation of a particle in Finsler spacetime may be governed by a modified dispersion relation (MDR).
The Finsler spacetime is intrinsically anisotropic since it admits fewer Killing vectors than the Riemann one [@Finsler; @isometry; @by; @Wang; @Finsler; @isometry; @LiCM]. The four-dimensional Finsler structure admits no more than eight symmetries, fewer than the Riemann structure does [@A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @I; @A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @II; @A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @Appendix]. Recently, we proposed a LIV model of the electromagnetic field in the locally Minkowski spacetime [@Electromagnetic; @field; @model; @in; @the; @flat; @Finsler; @spacetime]. The locally Minkowski spacetime [@Book; @by; @Bao] is a class of flat Finsler spacetime. It is a straightforward generalization of the Minkowski spacetime. The Lagrangian was presented explicitly for the electromagnetic field in such a spacetime. Formally, it could be related to the photon sector of the SME [@SME; @and; @Finsler; @Electromagnetic; @field; @model; @in; @the; @flat; @Finsler; @spacetime]. The LIV effects could be viewed as the influence of an anisotropic medium on the electromagnetic field. It is noteworthy that this LIV model also does not violate the principle of special relativity. In this paper, we first briefly review the GRAAL experiment as well as its constraints on the anisotropy of the speed of light and on certain LIV parameters. In the locally Minkowski spacetime, the LIV matrices used to fit the experimental data are investigated for the photon sector. They could be represented in terms of the metric deviation of the locally Minkowski spacetime from the Minkowski one. We demonstrate the relationship and the differences between these LIV matrices and those in the SME. In addition, the Finslerian analysis of the GRAAL experiment could reveal the deeper relationship between both models.
The rest of the paper is arranged as follows. In section 2, the GRAAL experiment as well as its constraint on the LIV is reviewed in the phenomenological framework. In section 3, we investigate the LIV of the photon sector in the locally Minkowski spacetime. The LIV matrices of the photon sector are obtained and compared with those of the SME. The GRAAL experiment then constrains the level of the spacetime anisotropy. Conclusions and remarks are listed in section 4. The GRAAL experiment and the anisotropy of the speed of light ============================================================= Phenomenologically, we describe the GRAAL experiment as well as its constraints on the anisotropy of the speed of light and the LIV. More details can be found in Ref. [@MPLA2005; @PRL2010; @Theoretical; @diagnosis; @on; @light; @speed; @anisotropy; @by; @Zhou; @and; @Ma2010]. In the GRAAL experiment, the $6.03~\rm{GeV}$ electrons collide head-on with a beam of monochromatic laser photons. The 4-momenta of the incoming electrons and photons are given by $p^{\mu}=(E(p),p\hat{p})$ and $\lambda^{\mu}=(\omega,-\lambda\hat{p})$, respectively. The MDR of a photon is given by $$\label{Phenomenological MDR of Photons} \omega=(1+\delta(-\lambda\hat{p}))\lambda\ ,$$ where $\delta(-\lambda\hat{p})$ depends on the direction $-\hat{p}$ when the space is anisotropic. It is related to a modified refractive index $n(-\lambda\hat{p})=1-\delta(-\lambda\hat{p})$ at first order. The dispersion relation of electrons remains unchanged, $E(p)=\sqrt{p^{2}+m_{e}^{2}}$. This amounts to a choice of frame such that one measures the speed of light relative to the electrons [@PRL2010]. The CE energy of electrons could be obtained when the scattered photons follow the direction $\hat{p}$. In general, the net 4-momentum of a scattering process is assumed to be conserved even though the Lorentz symmetry is broken.
For the CE scattering process, the 4-momentum conservation implies $$\begin{aligned} \label{4-momentum conservation equation} E(p)+\omega &=& E(p')+\omega'\ ,\\ p\hat{p}-\lambda\hat{p} &=& p'\hat{p}+\lambda'_{CE}\hat{p}\ ,\end{aligned}$$ where the primes denote the 4-momenta of the outgoing electrons and photons. At first order, we obtain the CE energy of photons, $$\label{CE energy of photons} \lambda'_{CE}=\lambda_{CE}\left(1+\frac{2\gamma^{2}}{1+{4\gamma\lambda}/{m_{e}}}\delta(\lambda'\hat{p})\right)\ ,$$ where $\gamma={E(p)}/{m_{e}}$ denotes the Lorentz factor of the electrons and the CE energy is $\lambda_{CE}={4\gamma^{2}\lambda}/{(1+{4\gamma\lambda}/{m_{e}})}$ in the SR. Thus, the CE energy $\lambda'_{CE}$ would vary azimuthally due to the sidereal rotation of the Earth when the speed of light is anisotropic in space. In the minimal SME, the Lorentz-breaking Lagrangian of the pure photon sector is given by [@SME02] $$\label{Lagrangian in SME} \mathcal{L}_{photon}=-\frac{1}{4}\eta^{\mu\rho}\eta^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma} +\mathcal{L}^{CPT-even}_{photon}+\mathcal{L}^{CPT-odd}_{photon}\ ,$$ where $$\begin{aligned} \label{CPT-even LIV Lagrangian at first order in SME} \mathcal{L}^{CPT-even}_{photon}&=&-\frac{1}{4}(k_{F})^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}\ ,\\ \label{CPT-odd LIV Lagrangian at first order in SME} \mathcal{L}^{CPT-odd}_{photon}&=&\frac{1}{2}(k_{AF})_{\alpha}\epsilon^{\alpha\beta\mu\nu}A_{\beta}F_{\mu\nu}\ .\end{aligned}$$ The coefficient $k_{F}$ has the symmetries of the Riemann tensor and is doubly traceless. The LIV parameters commonly used to fit the experimental data belong to a decomposition of the $(k_{F})^{\mu\nu\rho\sigma}$ coefficient: $(\tilde{\kappa}_{e+})^{jk}$, $(\tilde{\kappa}_{e-})^{jk}$, $(\tilde{\kappa}_{o+})^{jk}$, $(\tilde{\kappa}_{o-})^{jk}$ and $\tilde{\kappa}_{tr}$.
Here we discard the $k_{AF}$ term since this term vanishes in the Finslerian photon sector, as will be discussed in the following section. These LIV matrices are given as follows [@Signals; @for; @Lorentz; @violation; @in; @electrodynamics] $$\begin{aligned} \label{decompose of k} (\tilde{\kappa}_{e+})^{jk}&=&-(k_{F})^{0j0k}+\frac{1}{4}\epsilon^{jpq}\epsilon^{krs}(k_{F})^{pqrs}\ ,\nonumber\\ (\tilde{\kappa}_{e-})^{jk}&=&-(k_{F})^{0j0k}-\frac{1}{4}\epsilon^{jpq}\epsilon^{krs}(k_{F})^{pqrs}+\frac{2}{3}(k_{F})^{0l0l}\delta^{jk}\ ,\nonumber\\ (\tilde{\kappa}_{o+})^{jk}&=&-\frac{1}{2}\epsilon^{jpq}(k_{F})^{0kpq}+\frac{1}{2}\epsilon^{kpq}(k_{F})^{0jpq}\ ,\nonumber\\ (\tilde{\kappa}_{o-})^{jk}&=&\frac{1}{2}\epsilon^{jpq}(k_{F})^{0kpq}+\frac{1}{2}\epsilon^{kpq}(k_{F})^{0jpq}\ ,\nonumber\\ \tilde{\kappa}_{tr}&=&-\frac{2}{3}(k_{F})^{0j0j}\ .\end{aligned}$$ The first four matrices are traceless $3\times3$ matrices while the last one is a single coefficient. The third one is antisymmetric while the others are symmetric. Their components can be seen in the left two columns of Table (\[decomposition\]).
[ccc]{} Symbol & SME definitions & Finslerian model\ $(\tilde{\kappa}_{e+})^{12}$ & $-(k_{F})^{0102}+(k_{F})^{2331}$ & $0$\ $(\tilde{\kappa}_{e+})^{13}$ & $-(k_{F})^{0103}+(k_{F})^{2312}$ & $0$\ $(\tilde{\kappa}_{e+})^{23}$ & $-(k_{F})^{0203}+(k_{F})^{3112}$ & $0$\ $(\tilde{\kappa}_{e+})^{11}-(\tilde{\kappa}_{e+})^{22}$ &    $-(k_{F})^{0101}+(k_{F})^{2323}+(k_{F})^{0202}-(k_{F})^{3131}$ & $0$\ $(\tilde{\kappa}_{e+})^{33}$ & $-(k_{F})^{0303}+(k_{F})^{1212}$ & $\frac{1}{2}(h^{00}-h^{11}-h^{22}-h^{33})$\ \ $(\tilde{\kappa}_{e-})^{12}$ & $-(k_{F})^{0102}-(k_{F})^{2331}+\frac{2}{3}(k_{F})^{0l0l}$ & $-h^{12}-\tilde{\kappa}_{tr}$\ $(\tilde{\kappa}_{e-})^{13}$ & $-(k_{F})^{0103}-(k_{F})^{2312}+\frac{2}{3}(k_{F})^{0l0l}$ & $-h^{13}-\tilde{\kappa}_{tr}$\ $(\tilde{\kappa}_{e-})^{23}$ & $-(k_{F})^{0203}-(k_{F})^{3112}+\frac{2}{3}(k_{F})^{0l0l}$ & $-h^{23}-\tilde{\kappa}_{tr}$\ $(\tilde{\kappa}_{e-})^{11}-(\tilde{\kappa}_{e-})^{22}$ &    $-(k_{F})^{0101}-(k_{F})^{2323}+(k_{F})^{0202}+(k_{F})^{3131}$ & $h^{22}-h^{11}$\ $(\tilde{\kappa}_{e-})^{33}$ & $-(k_{F})^{0303}-(k_{F})^{1212}+\frac{2}{3}(k_{F})^{0l0l}$ & $\frac{1}{2}(h^{00}+h^{11}+h^{22}-h^{33})-\tilde{\kappa}_{tr}$\ \ $(\tilde{\kappa}_{o+})^{12}$ & $(k_{F})^{0131}-(k_{F})^{0223}$ & $-h^{03}$\ $(\tilde{\kappa}_{o+})^{31}$ & $(k_{F})^{0323}-(k_{F})^{0112}$ & $-h^{02}$\ $(\tilde{\kappa}_{o+})^{23}$ & $(k_{F})^{0212}-(k_{F})^{0331}$ & $-h^{01}$\ \ $(\tilde{\kappa}_{o-})^{12}$ & $(k_{F})^{0131}+(k_{F})^{0223}$ & $0$\ $(\tilde{\kappa}_{o-})^{13}$ & $(k_{F})^{0112}+(k_{F})^{0323}$ & $0$\ $(\tilde{\kappa}_{o-})^{23}$ & $(k_{F})^{0212}+(k_{F})^{0331}$ & $0$\ $(\tilde{\kappa}_{o-})^{11}-(\tilde{\kappa}_{o-})^{22}$ &    $2(k_{F})^{0123}-2(k_{F})^{0231}$ & $0$\ $(\tilde{\kappa}_{o-})^{33}$ & $2(k_{F})^{0312}$ & $0$\ \ $\tilde{\kappa}_{tr}$ & $-\frac{2}{3}(k_{F})^{0l0l}$ & $\frac{2}{3}h^{00}+\frac{1}{3}(h^{00}-h^{11}-h^{22}-h^{33})$\ The GRAAL experiment has been used to constrain the LIV matrix $\tilde{\kappa}_{o+}$ [@PRL2010]. 
In the minimal SME, the parameter $\delta$ in Eq. (\[Phenomenological MDR of Photons\]) is given by $$\label{delta in SME} \delta(-\lambda\hat{p})=\overrightarrow{\kappa}\cdot\hat{p}\ ,$$ where $\overrightarrow{\kappa}$ denotes $((\tilde{\kappa}_{o+})^{23},(\tilde{\kappa}_{o+})^{31},(\tilde{\kappa}_{o+})^{12})$. The GRAAL setup is given as $\hat{p^{i}}(t)\simeq(0.9~\rm{cos}(\Omega t),~0.9~\rm{sin}(\Omega t),~0.4)$, $p=6.03~\rm{GeV}$, $\lambda=3.5~\rm{eV}$, and $\Omega\simeq 2\pi/(23~\rm{h}~56~\rm{min})$ about the spin axis of the Earth. By substituting (\[delta in SME\]) into (\[CE energy of photons\]), we obtain the CE energy of photons as $$\label{CE in SME} \lambda'_{CE}=\tilde{\lambda}_{CE}+0.9\frac{2\gamma^{2}\lambda_{CE}}{1+4\gamma\lambda/m_{e}} \sqrt{\left((\tilde{\kappa}_{o+})^{23}\right)^{2}+\left((\tilde{\kappa}_{o+})^{31}\right)^{2}}\rm{sin}{\Omega t}\ .$$ Here the time-independent $(\tilde{\kappa}_{o+})^{12}$ term is absorbed into $\tilde{\lambda}_{CE}$ and an irrelevant phase is disregarded. The above equation (\[CE in SME\]) reveals that the anisotropy of the speed of light is characterized by $(\tilde{\kappa}_{o+})^{23}$ and $(\tilde{\kappa}_{o+})^{31}$ in the GRAAL experiment. The GRAAL experiment found no evidence for the sidereal variation of the CE energy. This provides an upper bound [@PRL2010] $$\label{deltalambdaCE} \frac{\Delta\lambda_{CE}}{\lambda_{CE}}<2.5\times10^{-6}~~\rm{(95\%~C.L.)}\ .$$ Thus, the LIV parameter acquires an upper limit $$\sqrt{\left((\tilde{\kappa}_{o+})^{23}\right)^{2}+\left((\tilde{\kappa}_{o+})^{31}\right)^{2}}<1.6\times10^{-14}~~\rm{(95\%~C.L.)}\ .$$ In addition, the GRAAL experiment could constrain the LIV matrix of photons in the standard model supplement (SMS) [@Theoretical; @diagnosis; @on; @light; @speed; @anisotropy; @by; @Zhou; @and; @Ma2010]. In the following section, we will show that it could also constrain the LIV matrices in the Finslerian photon sector.
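To make these formulas concrete, the back-of-envelope sketch below plugs the GRAAL numbers into the SR Compton-edge energy and into the amplification factor multiplying $\overrightarrow{\kappa}$ in Eq. (\[CE in SME\]), then inverts the $2.5\times10^{-6}$ bound. This naive inversion gives the order of magnitude of the quoted $\tilde{\kappa}_{o+}$ limit; the factor-of-order-one difference from the published $1.6\times10^{-14}$ comes from the detailed fit and confidence-level treatment.

```python
# GRAAL numbers, all energies in eV.
m_e = 0.511e6        # electron mass
E = 6.03e9           # beam electron energy
lam = 3.5            # laser photon energy

gamma = E / m_e                                 # Lorentz factor of the beam electrons
x = 4.0 * gamma * lam / m_e
lam_CE = 4.0 * gamma**2 * lam / (1.0 + x)       # SR Compton-edge energy, ~1.47 GeV
amplification = 0.9 * 2.0 * gamma**2 / (1.0 + x)  # factor multiplying kappa in the CE modulation

# Invert the 2.5e-6 bound on the relative CE-energy modulation into a kappa bound.
kappa_bound = 2.5e-6 / amplification            # order 1e-14
```

The huge amplification factor ($\sim 2\times10^{8}$) is what lets a part-per-million measurement of the Compton edge probe anisotropies at the $10^{-14}$ level.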
The photon sector and the spacetime anisotropy
==============================================

Finsler geometry is a reasonable candidate to investigate the spacetime anisotropy. The Finsler spacetime stems from the integral of the form $s=\int_{a}^{b}F(x,y)d\tau$, where $y=dx/d\tau$ denotes the 4-momentum of a particle and $x$ a location. The integrand $F(x,y)$ is positively homogeneous of degree one in $y$. The Finsler metric is defined as $g_{\mu\nu}(x,y)=\frac{\partial}{\partial y^{\mu}}\frac{\partial}{\partial y^{\nu}}\left(\frac{1}{2}F^{2}\right)$ [@Book; @by; @Bao]. The locally Minkowski spacetime has a Finsler metric of the form $g_{\mu\nu}(x,y)=g_{\mu\nu}(y)$. It is a class of flat Finsler spacetime. Its metric depends on the 4-momentum $y$ only. This is different from the Minkowski metric $\eta_{\mu\nu}$, which is independent of both $x$ and $y$. Thus, the locally Minkowski metric may acquire certain corrections from new physics at high-energy scales. For detailed discussions on Finsler geometry, see Refs. [@Book; @by; @Rund; @Book; @by; @Bao; @Book; @by; @Shen]. The Finsler spacetime is intrinsically anisotropic. It could possess more complicated P and T properties than the Riemann spacetime. Its metric deviation from the Riemann metric may be even, odd or hybrid under CPT [@SME; @and; @Finsler]. For instance, the deviation of the locally Minkowskian Randers metric is hybrid under CPT [@SME; @and; @Finsler]. The Randers metric deviation is given by $g^{\mu\nu}-\eta^{\mu\nu}=b^{\mu}b^{\nu}+\frac{\beta}{\alpha}\left(\eta^{\mu\nu}-\frac{y^{\mu}y^{\nu}}{\alpha^{2}}\right) +\frac{1}{\alpha}\left(b^{\mu}y^{\nu}+b^{\nu}y^{\mu}\right)$, where $\alpha=\sqrt{\eta_{\mu\nu}y^{\mu}y^{\nu}}$ and $\beta=b_{\mu}y^{\mu}$. Under CPT, $\beta$ changes its sign while $\alpha$ does not. Thus, the first term on the right-hand side of the Randers metric deviation remains unchanged while the last two terms change their signs under the CPT transformation.
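The parity statement for the Randers example can be checked symbolically. In the following sketch (our illustration, with $y\to -y$ standing in for the action of CPT on the 4-momentum), $\beta$ is verified to be odd while $\alpha$ is even:

```python
import sympy as sp

# Symbolic check that, under y -> -y (the CPT action on the 4-momentum),
# beta = b_mu y^mu is odd while alpha = sqrt(eta_{mu nu} y^mu y^nu) is even.
y = sp.symbols('y0:4')
b = sp.symbols('b0:4')
eta = sp.diag(1, -1, -1, -1)

alpha = sp.sqrt(sum(eta[i, i] * y[i]**2 for i in range(4)))
beta = sum(b[i] * y[i] for i in range(4))

flip = {y[i]: -y[i] for i in range(4)}   # simultaneous substitution y -> -y
assert sp.simplify(beta.subs(flip) + beta) == 0    # beta  -> -beta (odd)
assert sp.simplify(alpha.subs(flip) - alpha) == 0  # alpha -> alpha (even)
```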
This example reveals that the asymmetry of Finsler spacetime leads to possible LIV and CPT violation (CPTV). In the recent work [@Electromagnetic; @field; @model; @in; @the; @flat; @Finsler; @spacetime], a LIV model of the electromagnetic field was proposed in the locally Minkowski spacetime. The LIV effects originate from the replacement of the Minkowski metric with the locally Minkowski metric in the Lagrangian of the electromagnetic field. The Lorentz-breaking Lagrangian of the photon sector is given by $$\label{Lagrangian} \mathcal{L}=-\frac{1}{4}g^{\mu\rho}g^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma}= -\frac{1}{8}(g^{\mu\rho}g^{\nu\sigma}-g^{\nu\rho}g^{\mu\sigma})F_{\mu\nu}F_{\rho\sigma}\ ,$$ where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ denotes the electromagnetic field strength and $g^{\mu\nu}$ denotes the locally Minkowski metric. In the last step, we have antisymmetrized the spacetime indices $\mu\nu$ and $\rho\sigma$. To study the LIV phenomenology, we recast the above Lagrangian of the photon sector in the framework of effective field theory. First, we expand the locally Minkowski metric to first order, $$\label{metric expansion} g^{\mu\nu}=\eta^{\mu\nu}+h^{\mu\nu}+\mathcal{O}(h^{2})\ ,$$ where $h^{\mu\nu}$ denotes the leading-order departure of the locally Minkowski metric from the Minkowski metric. The metric deviation $h^{\mu\nu}$ characterizes all the possible LIV and CPTV of the photon sector. At first order in the LIV, the Lagrangian (\[Lagrangian\]) can be rewritten as $$\label{Lagrangian at first order} \mathcal{L}=-\frac{1}{4}\eta^{\mu\rho}\eta^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma}+\mathcal{L}_{LIV}\ ,$$ where $$\begin{aligned} \label{LIV Lagrangian} \mathcal{L}_{LIV}=&-&\frac{1}{8}(\eta^{\mu\rho}h^{\nu\sigma}-\eta^{\nu\rho}h^{\mu\sigma}-\eta^{\mu\sigma}h^{\nu\rho} +\eta^{\nu\sigma}h^{\mu\rho})\nonumber\\ &\times& F_{\mu\nu}F_{\rho\sigma}\ .\end{aligned}$$ The first term on the right-hand side of Eq.
(\[Lagrangian at first order\]) denotes the Lorentz invariant part of the Lagrangian, while the second term (namely Eq. (\[LIV Lagrangian\])) contains all the possible LIV effects in the photon sector. The results (\[Lagrangian at first order\]) and (\[LIV Lagrangian\]) can be compared with those of the SME. We obtain two “formal” relations $$\begin{aligned} \label{relation 1} (k_{F})^{\mu\nu\rho\sigma}&=&\frac{1}{2}\left(\eta^{\mu\rho}h^{\nu\sigma}-\eta^{\nu\rho}h^{\mu\sigma}-\eta^{\mu\sigma}h^{\nu\rho} +\eta^{\nu\sigma}h^{\mu\rho}\right)\ ,\\ \label{relation 2} (k_{AF})_{\alpha}&=&0\ .\end{aligned}$$ Note that these are formal relations between the LIV parameters of the SME photon sector and those of the Finslerian one. One reason is that $h^{\mu\nu}$ could have complicated CPT properties, which the right-hand side of Eq. (\[relation 1\]) then inherits. As mentioned above, for instance, the Randers metric deviation $h$ consists of CPT-even and CPT-odd parts. It is CPT-hybrid, and the corresponding $k_{F}$-components become CPT-hybrid. However, the coefficient $k_{F}$ is CPT-even in the SME photon sector. The other reason involves the number of independent components of $k_{F}$: there are nineteen for the SME photon sector, but just ten for the Finslerian one. The non-vanishing components of $k_{F}$ are listed in Table (\[k\]) for the Finslerian photon sector.
  Components         Finsler LIV parameters
  ------------------ -------------------------------
  $(k_{F})^{0101}$   $\frac{1}{2}(h^{11}-h^{00})$
  $(k_{F})^{0102}$   $\frac{1}{2}h^{12}$
  $(k_{F})^{0103}$   $\frac{1}{2}h^{13}$
  $(k_{F})^{0112}$   $\frac{1}{2}h^{02}$
  $(k_{F})^{0113}$   $\frac{1}{2}h^{03}$
  $(k_{F})^{0202}$   $\frac{1}{2}(h^{22}-h^{00})$
  $(k_{F})^{0203}$   $\frac{1}{2}h^{23}$
  $(k_{F})^{0212}$   $-\frac{1}{2}h^{01}$
  $(k_{F})^{0223}$   $\frac{1}{2}h^{03}$
  $(k_{F})^{0303}$   $\frac{1}{2}(h^{33}-h^{00})$
  $(k_{F})^{0313}$   $-\frac{1}{2}h^{01}$
  $(k_{F})^{0323}$   $-\frac{1}{2}h^{02}$
  $(k_{F})^{1212}$   $-\frac{1}{2}(h^{11}+h^{22})$
  $(k_{F})^{1213}$   $-\frac{1}{2}h^{23}$
  $(k_{F})^{1223}$   $\frac{1}{2}h^{13}$
  $(k_{F})^{1313}$   $-\frac{1}{2}(h^{11}+h^{33})$
  $(k_{F})^{1323}$   $-\frac{1}{2}h^{12}$
  $(k_{F})^{2323}$   $-\frac{1}{2}(h^{22}+h^{33})$

  : The formal relation (\[relation 1\]) between the coefficient $k_{F}$ and the Finsler metric deviation $h$. The components of $k_{F}$ listed are non-vanishing (their symmetrized and anti-symmetrized counterparts are omitted here). We note that only ten of them are independent for the Finslerian photon sector, in contrast to the nineteen independent components of the SME photon sector.[]{data-label="k"}

According to Eq. (\[decompose of k\]) and Eq. (\[relation 1\]), the LIV matrices $\tilde{\kappa}$ used to fit the experimental data can be represented in terms of the Finslerian metric deviation $h$. The results are listed in the third column of Table (\[decomposition\]). In this way, the LIV observables are shown to be related to the anisotropy of the locally Minkowski spacetime. Once again, we find that only ten LIV parameters remain independent for the Finslerian photon sector. In particular, we find the representations $(\tilde{\kappa}_{o+})^{23}=-h^{01}$ and $(\tilde{\kappa}_{o+})^{31}=-h^{02}$, which are closely related to the GRAAL experiment, as discussed below.
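The entries of Table (\[k\]) follow directly from the formal relation (\[relation 1\]). The following sympy sketch (our own spot check) builds $k_{F}$ from a symmetric deviation $h^{\mu\nu}$ and confirms a few representative entries:

```python
import sympy as sp

# Spot-check Table [k]: build k_F from relation (relation 1) with a
# symmetric metric deviation h^{mu nu} and compare selected components.
eta = sp.diag(1, -1, -1, -1)
h = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'h{min(i, j)}{max(i, j)}'))  # symmetric h

def kF(m, n, r, s):
    """(k_F)^{mnrs} = (1/2)(eta^{mr} h^{ns} - eta^{nr} h^{ms}
                            - eta^{ms} h^{nr} + eta^{ns} h^{mr})."""
    return sp.Rational(1, 2) * (eta[m, r] * h[n, s] - eta[n, r] * h[m, s]
                                - eta[m, s] * h[n, r] + eta[n, s] * h[m, r])

h00, h11, h01, h12 = sp.symbols('h00 h11 h01 h12')
assert sp.simplify(kF(0, 1, 0, 1) - sp.Rational(1, 2) * (h11 - h00)) == 0
assert sp.simplify(kF(0, 2, 1, 2) + sp.Rational(1, 2) * h01) == 0  # (k_F)^{0212} = -h01/2
assert sp.simplify(kF(0, 1, 0, 2) - sp.Rational(1, 2) * h12) == 0  # (k_F)^{0102} = h12/2
```

Since the ten independent entries of the symmetric $h^{\mu\nu}$ determine every component, the counting of ten independent $k_{F}$ components is immediate.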
The 4-momentum of a particle is conserved in the locally Minkowski spacetime. This result can be deduced from the isometric transformations [@Finsler; @isometry; @LiCM; @no; @Cherenkov; @in; @Finsler; @FSR]. To first order, the MDR of a photon is given by [@Electromagnetic; @field; @model; @in; @the; @flat; @Finsler; @spacetime] $$\label{MDR of photon} g_{\mu\nu}\lambda^{\mu}\lambda^{\nu}=\omega^{2}-\lambda^{2}+2h_{0i}\omega\lambda^{i}=0\ ,$$ where $\omega^{2}=g_{00}\lambda^{0}\lambda^{0}$ and $\lambda^{2}=-g_{ij}\lambda^{i}\lambda^{j}$. It can be rewritten as $$\label{MDR} \omega\simeq(1-h_{0i}(\lambda^{j})\hat{\lambda^{i}})\lambda\ ,$$ where $\hat{\lambda^{i}}$ denotes a unit 3-vector along the direction of the photon. Here we have represented $h$ as a function of the 3-momentum $\lambda^{j}$ of the photon. In the following, we consider only the simplest case, in which $h_{0i}(\lambda^{j})$ is a function of $\lambda$ alone, namely the magnitude of the 3-momentum $\lambda^{j}$. The metric deviation $h_{0i}(\lambda)$ then becomes constant once the GRAAL setup is given. According to Eq. (\[Phenomenological MDR of Photons\]) and Eq. (\[CE energy of photons\]), Eq. (\[MDR\]) implies that the CE energy of the outgoing photons is $$\label{final CE of photons} \lambda'_{CE}=\tilde{\lambda}_{CE}+0.9\frac{2\gamma^{2}\lambda_{CE}}{1+4\gamma\lambda/m_{e}}\sqrt{(h_{01})^{2}+(h_{02})^{2}}\rm{sin}{\Omega t}\ ,$$ where one time-independent term is absorbed into $\tilde{\lambda}_{CE}$ and an irrelevant phase is disregarded. This reveals that the anisotropy of the speed of light is characterized by the anisotropic parameters $h_{01}$ and $h_{02}$ for the GRAAL experiment in the Finslerian electromagnetic model. Recall that the parameters $h_{01}$ and $h_{02}$ are related to the LIV parameters $(\tilde{\kappa}_{o+})^{23}$ and $(\tilde{\kappa}_{o+})^{31}$, respectively. In fact, the CE energy (\[final CE of photons\]) takes exactly the same form as Eq. (\[CE in SME\]).
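The first-order form (\[MDR\]) follows from taking the positive root of the quadratic (\[MDR of photon\]) in $\omega$. A symbolic sketch confirms the expansion; here we collapse $h_{0i}\hat{\lambda}^{i}$ into a single product $h\,\hat{n}$ (our shorthand for the projection, purely for brevity):

```python
import sympy as sp

# Solve omega**2 - lam**2 + 2*h*nhat*omega*lam = 0 for the positive root
# and expand to first order in the small deviation h. Here nhat is our
# shorthand such that h*nhat stands for h_{0i} lambdahat^i.
omega, lam, h, nhat = sp.symbols('omega lam h nhat', positive=True)
roots = sp.solve(omega**2 - lam**2 + 2 * h * nhat * omega * lam, omega)

# Pick the root that tends to +lam as h -> 0 (the physical branch).
pos = [r for r in roots if sp.limit(r, h, 0) == lam][0]
first_order = sp.series(pos, h, 0, 2).removeO()
assert sp.simplify(first_order - lam * (1 - h * nhat)) == 0  # omega ~ (1 - h nhat) lam
```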
This result reveals that the simplest Finslerian photon sector may be a geometric representation of the SME one. This coincides with the proposition in Refs. [@SME; @Bogoslovsky01; @SME; @Bogoslovsky02; @Kostelecky_Finsler] that the SME could be related to Finsler geometry. According to Eq. (\[deltalambdaCE\]), a constraint $$\sqrt{(h_{01})^{2}+(h_{02})^{2}}<1.6\times10^{-14}~\rm{(95\%~C.L.)}$$ is set on the anisotropic departure of the locally Minkowski spacetime from the Minkowski one. It is noteworthy that the “ether drift” experiment has provided an upper constraint of $10^{-10}$ on the spacetime anisotropy [@Eather01; @Eather02]. The GRAAL result improves the sensitivity by four orders of magnitude. However, one should note that both constraints are model-dependent. In particular, the upper limit $10^{-10}$ was obtained within the framework of the relativistically invariant Finslerian theory [@A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @I; @A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @II; @A; @special-relativistic; @theory; @of; @the; @locally; @anisotropic; @space-time.; @Appendix], where the speed of light does not depend on the direction of propagation and equals $c$.

Conclusions and remarks
=======================

The GRAAL experiment takes advantage of the Compton scattering of laser photons off high-energy electrons. The CE energy would vary azimuthally, due to the sidereal rotation of the Earth, if the speed of light were anisotropic. The anisotropy of the speed of light can be accounted for by LIV effects. We followed the convention of decomposing the LIV coefficient $k_{F}$ of the photon sector into nineteen LIV matrices $(\tilde{\kappa}_{e+})^{jk}$, $(\tilde{\kappa}_{e-})^{jk}$, $(\tilde{\kappa}_{o+})^{jk}$, $(\tilde{\kappa}_{o-})^{jk}$ and $\tilde{\kappa}_{tr}$. However, the GRAAL result showed no evidence for such variations of the CE energy.
It gave upper limits on the anisotropy of the speed of light and on certain LIV parameters. The locally Minkowski spacetime is a class of flat Finsler spacetime. It is the simplest anisotropic flat spacetime and may be viewed as a modification of the Minkowski spacetime at ultra-high energies. It naturally leads to LIV and CPTV. We proposed a model of the electromagnetic field in the locally Minkowski spacetime in a previous paper. The LIV coefficient $k_{F}$ of the photon sector can be formally related to the anisotropic departure of the Finsler metric from the Minkowski one, while the coefficient $k_{AF}$ vanishes. Table (\[k\]) presents in detail the non-vanishing components of $k_{F}$. There are only ten independent components for the Finslerian photon sector. They may have complicated CPT properties, in sharp contrast to the SME, where $k_{F}$ is CPT-even. Formally, we obtained all the LIV matrices $\tilde{\kappa}$ in terms of the anisotropic deviation $h$ of the locally Minkowski metric from the Minkowski one. We listed these LIV matrices, as well as their representation in terms of $h$, in Table (\[decomposition\]). In the SME, there are nineteen independent components for this set of LIV matrices. In the Finslerian photon sector, however, we found that only ten of them are independent. This reveals that the number of LIV parameters is severely reduced for the Finslerian photon sector. Indeed, there are only ten independent components of the anisotropic metric deviation $h$ in the locally Minkowski spacetime; thus, they completely determine the LIV and CPTV parameters as well as the anisotropy of the speed of light. As in the SME, one can have LIV models without CPT violation, or reciprocally have CPT violation in a Lorentz covariant theory [@Chaichian2012a; @Chaichian2012b; @Chaichian2013].
The discussion of the GRAAL experiment reveals a deep relationship between the Finsler spacetime and the SME. In the simplest case, the formula for the CE energy in the Finslerian photon sector was found to be the same as that in the minimal SME. The LIV could be characterized by two common parameters in both models. Thus, the Finslerian photon sector might be a geometric representation of the photon sector of the SME. This is compatible with the proposition in Refs. [@SME; @Bogoslovsky01; @SME; @Bogoslovsky02; @Kostelecky_Finsler] that the SME could be related to Finsler geometry. In addition, the level of the spacetime anisotropy was constrained to be less than $10^{-14}$ by the GRAAL experiment. This constraint improves the one from the previous “ether drift” experiment by four orders of magnitude. We thank Yunguo Jiang, Danning Li, Ming-Hua Li, Xin Li and Hai-Nan Lin for useful discussions. This work is supported by the National Natural Science Fund of China under Grant No. 11075166. [999]{} V.A. Kostelecky and N. Russell, *Data tables for Lorentz and CPT violation*, Rev. Mod. Phys. [**83**]{}, 11 (2011). A.A. Michelson and E.W. Morley, *The relative motion of the Earth and the luminiferous aether*, Am. J. Sci. [**34**]{}, 333 (1887). M.F. Ahmeda, B.M. Quinea, S. Sargoytchev, and A.D. Stauffer, *A Review of One-Way and Two-Way Experiments to Test the Isotropy of the Speed of Light*, Indian J. Phys. [**86**]{}, 835 (2012). V.G. Gurzadyan [*et al.*]{}, *Probe the light speed anisotropy with respect to the cosmic microwave background radiation dipole*, Mod. Phys. Lett. A [**20**]{}, 19 (2005). V.G. Gurzadyan [*et al.*]{}, *Lowering the Light Speed Isotropy Limit: European Synchrotron Radiation Facility Measurements*, Il Nuovo Cimento B [**122**]{}, 515 (2007). V.G. Gurzadyan [*et al.*]{}, *A new limit on the light speed isotropy from the GRAAL experiment at the ESRF*, arXiv:1004.2867. J.-P.
Bocquet [*et al.*]{}, *Limits on Light-Speed Anisotropies from Compton Scattering of High-Energy Electrons*, Phys. Rev. Lett. [**104**]{}, 241601 (2010). V.A. Kostelecky and S. Samuel, *Spontaneous breaking of Lorentz symmetry in string theory*, Phys. Rev. D [**39**]{}, 683 (1989). D. Colladay and V.A. Kostelecky, *CPT violation and the standard model*, Phys. Rev. D [**55**]{}, 6760 (1997). D. Colladay and V.A. Kostelecky, *Lorentz-violating extension of the standard model*, Phys. Rev. D [**58**]{}, 116002 (1998). G.Yu. Bogoslovsky, *Lorentz symmetry violation without violation of relativistic symmetry*, Phys. Lett. A [**350**]{}, 5 (2006). G.Yu. Bogoslovsky, *Subgroups of the Group of Generalized Lorentz Transformations and Their Geometric Invariants*, SIGMA [**1**]{}, 017 (2005). V.A. Kostelecky, *Riemann-Finsler geometry and Lorentz-violating kinematics*, Phys. Lett. B [**701**]{}, 137 (2011). Z. Chang and S. Wang, *Standard model with Lorentz and CPT violations in Finsler spacetime*, arXiv:1209.3574. G.Yu. Bogoslovsky, *A special-relativistic theory of the locally anisotropic space-time. I: The metric and group of motions of the anisotropic space of events*, Il Nuovo Cimento B [**40**]{}, 99 (1977). G.Yu. Bogoslovsky, *A special-relativistic theory of the locally anisotropic space-time. II: Mechanics and electrodynamics in the anisotropic space*, Il Nuovo Cimento B [**40**]{}, 116 (1977). G.Yu. Bogoslovsky, *A special-relativistic theory of the locally anisotropic space-time. Appendix*, Il Nuovo Cimento B [**43**]{}, 377 (1978). A.G. Cohen and S.L. Glashow, *Very Special Relativity*, Phys. Rev. Lett. [**97**]{}, 021601 (2006). G.W. Gibbons, J. Gomis and C.N. Pope, *General very special relativity is Finsler geometry*, Phys. Rev. D [**76**]{}, 081701 (2007). V. Balan, G.Yu. Bogoslovsky, S.S. Kokarev, D.G. Pavlov, S.V. Siparov, and N. Voicu, *Geometrical Models of the Locally Anisotropic Space-Time*, J. Mod. Phys. [**3**]{}, 1314 (2012). H. 
Rund, *The Differential Geometry of Finsler Spaces*, Springer, (1959). D. Bao, S.S. Chern, and Z. Shen, *An Introduction to Riemann–Finsler Geometry*, Springer (2000). Z. Shen, *Lectures on Finsler Geometry*, World Scientific (2001). H.C. Wang, *On Finsler spaces with completely integrable equations of Killing*, *J. London Math. Soc.* [**s1-22**]{} (1), 5 (1947). X. Li and Z. Chang, *Symmetry and special relativity in Finsler spacetime with constant curvature*, Differ. Geom. Appl. [**30**]{}, 737 (2012). Z. Chang and S. Wang, *Lorentz invariance violation and electromagnetic field in an intrinsically anisotropic spacetime*, Eur. Phys. J. C [**72**]{}, 2165 (2012). L.L. Zhou and B.-Q. Ma, *A Theoretical Diagnosis on Light Speed Anisotropy from GRAAL Experiment*, Astropart. Phys. [**36**]{}, 37 (2012). V.A. Kostelecky and M. Mewes, *Signals for Lorentz violation in electrodynamics*, Phys. Rev. D [**66**]{}, 056005 (2002). Z. Chang, X. Li, and S. Wang, *Neutrino superluminality without Cherenkov-like processes in Finslerian special relativity*, Phys. Lett. B [**710**]{}, 430 (2012). Z. Chang, X. Li, and S. Wang, *Symmetry, causal structure and superluminality in Finsler spacetime*, arXiv:1201.1368. D.C. Champeney, G.R. Isaak, and A.M. Khan, *An “Ether drift” experiment based on the Mossbauer effect*, Phys. Lett. [**7**]{}, 241 (1963). G.R. Isaak, *Mossbauer effect: Application to relativity*, Phys. Bull. [**21**]{}, 255 (1970). M. Chaichian, A.D. Dolgov, V.A. Novikov, and A. Tureanu, *CPT Violation Does Not Lead to Violation of Lorentz Invariance and Vice Versa*, Phys. Lett. B [**699**]{}, 177 (2012). M. Chaichian, K. Fujikawa, and A. Tureanu, *Lorentz invariant CPT violation: Particle and antiparticle mass splitting*, Phys. Lett. B [**712**]{}, 115 (2012). M. Chaichian, K. Fujikawa, and A. Tureanu, *Electromagnetic interaction in theory with Lorentz invariant CPT violation*, Phys. Lett. B [**718**]{}, 1500 (2013). 
[^1]: E-mail: changz@ihep.ac.cn [^2]: E-mail: wangsai@ihep.ac.cn [^3]: E-mail: saiwangihep@gmail.com [^4]: Corresponding author at IHEP, CAS, 100049 Beijing, China.
--- abstract: 'In this paper, we introduce a new problem called the split feasibility and fixed point equality problems (SFFPEP) and propose a new iterative algorithm for solving the SFFPEP for the class of quasi-nonexpansive mappings in Hilbert spaces. Furthermore, we study the convergence of the proposed algorithm. At the end, we give a numerical example that illustrates our theoretical result. The SFFPEP is a generalization of the split feasibility problem (SFP), the split feasibility and fixed point problems (SFFPP) and the split equality fixed point problem (SEFPP).' address: 'Department of Mathematics, Faculty of Science Universiti Putra Malaysia, 43400 Serdang Selangor, Malaysia.' author: - 'L.B. Mohammed and A. K[i]{}l[i]{}çman' title: 'the split feasibility and fixed point equality problems for quasi-nonexpansive mappings in Hilbert spaces' ---

Introduction
============

The split feasibility problem (SFP) in finite-dimensional Hilbert spaces was first introduced in 1994 by Censor and Elfving [@4]. This problem is useful in several areas of applied mathematics, such as convex optimization, image recovery, etc. Recently, it was found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [@5; @6; @7] and the references therein. For many years, a wide variety of iterative methods have been used to approximate the solution of the SFP; see, for example, [@9; @10; @11; @12] and references therein. The SFP is formulated as follows: $${\rm~Find~}x^{*}\in C {\rm~such~ that~} Ax^{*}\in Q,\label{1.1}$$ where $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_{1}$ and $H_{2},$ respectively, and $A:H_{1}\to H_{2}$ is a bounded linear operator.
The split feasibility and fixed point problems (SFFPP) requires one to find a vector $$x^{*}\in C\cap Fix(U) {\rm~such~ that~} Ax^{*}\in Q\cap Fix(T),\label{1.2}$$ where $U:H_{1}\to H_{1}$ and $T:H_{2}\to H_{2}$ are two nonlinear mappings, and $A:H_{1}\to H_{2}$ is a bounded linear operator. It is easy to see that Problem (\[1.2\]) reduces to Problem (\[1.1\]) when $C:=Fix(U)$ and $Q:=Fix(T).$ Therefore, it is worth mentioning here that Problem (\[1.2\]) generalizes Problem (\[1.1\]). The split equality fixed point problem (SEFPP) was introduced by Moudafi [@1] and takes the following form: $${\rm~Find~} x^{*}\in C {\rm~and~} y^{*}\in Q {\rm~such~ that~} Ax^{*}=By^{*},\label{1.3}$$ where $A:H_{1}\to H_{3}$ and $B:H_{2}\to H_{3}$ are two bounded linear operators, and $C$ and $Q$ are nonempty closed convex subsets of $H_{1}$ and $H_{2},$ respectively. It is easy to see that Problem (\[1.3\]) reduces to Problem (\[1.1\]) when $H_{2}=H_{3}$ and $B=I$ ($I$ is the identity operator on $H_{2}$) in (\[1.3\]). Therefore, Problem (\[1.3\]) proposed by Moudafi [@1] is a generalization of Problem (\[1.1\]). We now introduce a new problem called the split feasibility and fixed point equality problems (SFFPEP), formulated as: $${\rm~Find~} x^{*}\in C\cap Fix(U) {\rm~and~} y^{*}\in Q\cap Fix(T) {\rm~such~ that~} Ax^{*}=By^{*},\label{1.4}$$ where $U:H_{1}\to H_{1}$ and $T:H_{2}\to H_{2}$ are two quasi-nonexpansive mappings with $Fix(U)\neq\emptyset$ and $Fix(T)\neq\emptyset,$ $A:H_{1}\to H_{3}$ and $B:H_{2}\to H_{3}$ are two bounded linear operators, and $C$ and $Q$ are nonempty closed convex subsets of $H_{1}$ and $H_{2},$ respectively. Note that if $C:=Fix(U)$ and $Q:=Fix(T),$ then Problem (\[1.4\]) reduces to Problem (\[1.3\]); it also reduces to Problem (\[1.2\]) when $H_{2}=H_{3}$ and $B=I$ ($I$ stands for the identity operator on $H_{2}$) in (\[1.4\]). In light of this, it is worth mentioning that the SFFPEP generalizes the SFP, the SFFPP and the SEFPP.
Therefore, the results and conclusions that hold for the SFFPEP continue to hold for these problems (SFP, SFFPP and SEFPP), which shows the significance and the range of applicability of the SFFPEP. In order to approximate the solution of the SEFPP (\[1.3\]), Moudafi and Al-Shemas [@2] introduced the following simultaneous iterative method, which generates sequences $\{x_{n}\}$ and $\{y_{n}\}$ by $${} \left\{ \begin{array}{ll} x_{n+1}=U(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})),\\ \\ y_{n+1}=T(y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n})), \forall n\geq 1, & \textrm{ $ $} \end{array} \right.\label{1.5}$$ where $U:H_{1}\to H_{1},$ $T:H_{2}\to H_{2}$ are two firmly quasi-nonexpansive mappings, $A:H_{1}\to H_{3},$ $B:H_{2}\to H_{3}$ are two bounded linear operators with adjoints $A^{*}$ and $B^{*},$ respectively, $\lambda_{n}\subset\left(\epsilon, \frac{2}{L_{A^{*}A}L_{B^{*}B}}\right),$ and $L_{A^{*}A}$ and $L_{B^{*}B}$ denote the spectral radii of the operators $A^{*}A$ and $B^{*}B,$ respectively. Projection operators have very attractive properties that make them particularly well suited for iterative algorithms; see, for example, [@3]. Setting $U=P_{C}$ and $T=P_{Q},$ where $P_{C}$ and $P_{Q}$ denote the metric projections of $H_{1}$ and $H_{2}$ onto $C$ and $Q,$ respectively, Algorithm (\[1.5\]) trivially reduces to the following simultaneous iterative method: $${} \left\{ \begin{array}{ll} x_{n+1}=P_{C}(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})),\\ \\ y_{n+1}=P_{Q}(y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n})), \forall n\geq 1, & \textrm{ $ $} \end{array} \right.\label{k}$$ This algorithm was investigated in [@13] as a projected Landweber algorithm.
We already mentioned that if $B=I,$ Problem (\[1.3\]) reduces to the classical SFP (\[1.1\]); if, in addition, $\lambda_{n}=\lambda=1,$ the second equation of Algorithm (\[k\]) reduces to $y_{n+1}=P_{Q}(Ax_{n}),$ while the first equation gives the following algorithm: $$x_{n+1}=P_{C}(x_{n}-\lambda A^{*}(I-P_{Q})Ax_{n}).\label{r}$$ Algorithm (\[r\]) is exactly the algorithm proposed by Byrne; for more details, see [@9] and references therein. Very recently, Yuan et al. [@15] modified the algorithm of Moudafi and Al-Shemas [@2] and considered the following algorithm: $${} \left\{ \begin{array}{ll} x_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}U(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})),\\ \\ y_{n+1}=(1-\alpha_{n})y_{n}+ \alpha_{n}T(y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n})), \forall n\geq 1, & \textrm{ $ $} \end{array} \right.\label{1.5b}$$ where $U,T,$ $A,A^{*},$ $B, B^{*},$ $\lambda_{n},$ $L_{A^{*}A}$ and $L_{B^{*}B}$ are as in Algorithm (\[1.5\]), and $\alpha_{n}\subset[\alpha,1]$ for $\alpha>0.$ By imposing some appropriate conditions on the parameters and the operators involved, they proved a weak convergence result; they also obtained a strong convergence result by imposing semicompactness conditions. In 2015, Chidume et al. [@14] modified Algorithm (\[1.5b\]) and considered the following algorithm: $${} \left\{ \begin{array}{ll} u_{n}= x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n}) \\x_{n+1}=(1-\alpha)u_{n}+ \alpha Uu_{n},\\ \\r_{n}=y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n}) \\ y_{n+1}=(1-\alpha)r_{n}+ \alpha Tr_{n}, \forall n\geq 1, & \textrm{ $ $} \end{array} \right.\label{1.5ab}$$ where $U,T$ are two demicontractive mappings, $A,A^{*},$ $B, B^{*},$ $\lambda_{n},$ $L_{A^{*}A}$ and $L_{B^{*}B}$ are as in Algorithm (\[1.5b\]), and $\alpha\in (0,1)$. Under some appropriate conditions, they also proved a weak convergence result; strong convergence follows only if $U$ and $T$ are semi-compact.
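To make Byrne's iteration (\[r\]) concrete, here is a minimal numerical sketch of the CQ iteration $x_{n+1}=P_{C}(x_{n}-\lambda A^{*}(I-P_{Q})Ax_{n})$. All concrete choices ($C$ and $Q$ closed balls in three dimensions, a random matrix $A$, the tolerances) are our illustrative assumptions, not from the papers cited:

```python
import numpy as np

# CQ iteration for the SFP: find x in C with Ax in Q.
# C = ball(0, 2); Q is centered so that x0 = 0.1*ones is feasible by construction.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def proj_ball(x, center, radius):
    """Metric projection onto a closed ball."""
    d = np.linalg.norm(x - center)
    return x if d <= radius else center + radius * (x - center) / d

P_C = lambda x: proj_ball(x, np.zeros(3), 2.0)
q_center = A @ (0.1 * np.ones(3))            # guarantees the SFP is consistent
P_Q = lambda y: proj_ball(y, q_center, 1.0)

L = np.linalg.norm(A, 2) ** 2                # spectral radius of A^* A
lam = 1.0 / L                                # step size in (0, 2/L)

x = rng.standard_normal(3)
for _ in range(5000):
    x = P_C(x - lam * A.T @ (A @ x - P_Q(A @ x)))

assert np.linalg.norm(x) <= 2.0 + 1e-9               # x in C
assert np.linalg.norm(A @ x - P_Q(A @ x)) < 1e-3     # Ax in Q (approximately)
```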
To solve Problem (\[1.2\]), Chen et al. [@8] introduced the following Ishikawa extra-gradient iterative method, which generates a sequence $\{x_{n}\}$ by: $${} \left\{ \begin{array}{ll} x_{0}\in C~ {\rm ~chosen~arbitrarily,} \\ y_{n}=P_{C}(x_{n}-\lambda_{n} A^{*}(I-UP_{Q})Ax_{n}), \\ z_{n}=P_{C}(x_{n}-\lambda_{n} A^{*}(I-UP_{Q})Ay_{n}), \\ w_{n}=(1-\beta_{n})z_{n}+\beta_{n}Tz_{n}, \\ x_{n+1}=(1-\alpha_{n})z_{n}+\alpha_{n}Tw_{n}, \forall n\geq 0, & \textrm{ $ $} \end{array} \right.\label{chen}$$ where $\lambda_{n}\subset (0,\frac{1}{2\|A\|^{2}})$ and $\beta_{n},\alpha_{n}\subset (0,1)$ such that $0<a<\beta_{n}<c<\alpha_{n}<\frac{1}{\sqrt{1+L^{2}}+1},$ $U$ is a nonexpansive mapping and $T$ is an L-Lipschitzian pseudocontractive mapping. Motivated and inspired by the work of Moudafi [@1], Moudafi and Al-Shemas [@2], Chen et al. [@8], Byrne [@9], Yuan et al. [@15] and Chidume et al. [@14], we propose the following algorithm to solve the split feasibility and fixed point equality problems (\[1.4\]) in the case where $U$ and $T$ are quasi-nonexpansive mappings: $${} \left\{ \begin{array}{ll} x_{1}\in H_{1} {\rm ~and~} y_{1}\in H_{2}; \\z_{n}=P_{C}(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})), \\ w_{n}=(1-\beta_{n})z_{n}+\beta_{n}U(z_{n}), \\ x_{n+1}=(1-\alpha_{n})z_{n}+\alpha_{n}U(w_{n}), \\ \\u_{n}=P_{Q}(y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n})), \\ r_{n}=(1-\beta_{n})u_{n}+\beta_{n}T(u_{n}), \\ y_{n+1}=(1-\alpha_{n})u_{n}+\alpha_{n}T(r_{n}), \forall n\geq 1, & \textrm{ $ $} \end{array} \right.$$ where $0<a<\beta_{n}<1,$ $0<b<\alpha_{n}<1,$ and $\lambda_{n}\in\left(0, \frac{2}{L_{1}+L_{2}}\right),$ where $L_{1}$ and $L_{2}$ denote the spectral radii of the operators $A^{*}A$ and $B^{*}B,$ respectively. It is worth noting that the class of quasi-nonexpansive mappings generalizes the class of firmly quasi-nonexpansive mappings studied by Moudafi and Al-Shemas [@2].
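As a numerical illustration of the proposed scheme, the following sketch runs it in three dimensions with $U$, $T$, $P_{C}$, $P_{Q}$ all taken to be metric projections onto balls centered at the origin (metric projections are firmly nonexpansive, hence quasi-nonexpansive). All concrete choices are our assumptions, and $(x^{*},y^{*})=(0,0)$ is a solution by construction, so the quantity $\Omega_{n}=\|x_{n}\|^{2}+\|y_{n}\|^{2}$ should be non-increasing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def proj_ball(x, radius):
    d = np.linalg.norm(x)
    return x if d <= radius else radius * x / d

P_C = lambda x: proj_ball(x, 2.0)   # C = ball(0, 2)
P_Q = lambda y: proj_ball(y, 2.0)   # Q = ball(0, 2)
U = lambda x: proj_ball(x, 1.0)     # Fix(U) = ball(0, 1)
T = lambda y: proj_ball(y, 1.0)     # Fix(T) = ball(0, 1)

L1 = np.linalg.norm(A, 2) ** 2      # spectral radius of A^* A
L2 = np.linalg.norm(B, 2) ** 2      # spectral radius of B^* B
lam, alpha, beta = 1.0 / (L1 + L2), 0.5, 0.5

x, y = rng.standard_normal(3), rng.standard_normal(3)
omegas = []                          # Omega_n with x* = y* = 0
for _ in range(20000):
    omegas.append(np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)
    res = A @ x - B @ y              # common residual Ax_n - By_n
    z = P_C(x - lam * A.T @ res)
    w = (1 - beta) * z + beta * U(z)
    x_next = (1 - alpha) * z + alpha * U(w)
    u = P_Q(y + lam * B.T @ res)
    r = (1 - beta) * u + beta * T(u)
    y_next = (1 - alpha) * u + alpha * T(r)
    x, y = x_next, y_next

assert all(b <= a + 1e-9 for a, b in zip(omegas, omegas[1:]))  # Omega_n non-increasing
assert np.linalg.norm(A @ x - B @ y) < 1e-4                    # Ax ~ By
assert np.linalg.norm(x) <= 1.0 + 1e-3 and np.linalg.norm(y) <= 1.0 + 1e-3
```

The observed monotonicity of $\Omega_{n}$ matches the Fejér-type inequality established in the convergence proof below.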
Under some appropriate conditions imposed on the parameters and operators involved, we prove a weak convergence result for the proposed algorithm. Furthermore, we give a numerical example that illustrates our theoretical results. The results presented in this paper improve, extend and generalize a number of well-known results announced in the literature.

Preliminaries
=============

In this section, we present some definitions and lemmas which will be used in proving our main result. Let $H$ be a Hilbert space and $T:H\to H$ be a map with $Fix(T)=\{x\in H: Tx=x\}\neq\emptyset.$ $T$ is said to be: nonexpansive, if $$\left\|Tx-Ty\right\|\leq \left\|x-y\right\|, \forall x,y\in H,$$ quasi-nonexpansive, if $$\left\|Tx-q\right\|\leq \left\|x-q\right\|, \forall x\in H {\rm~and~} q\in Fix(T),\nonumber$$ firmly quasi-nonexpansive, if $$\left\|Tx-q\right\|^{2}\leq\left\|x-q\right\|^{2}-\left\|Tx-x\right\|^{2}, \forall x\in H {\rm~and~} q\in Fix(T).\nonumber$$ Moreover, $T$ is said to be demiclosed at 0 if, for any sequence $\{x_{n}\}$ in $H$ such that $x_{n}$ converges weakly to $x$ and $Tx_{n}$ converges strongly to 0, it follows that $Tx=0.$ It is said to be semi-compact if, for any bounded sequence $\{x_{n}\}\subset H$ with $(I-T)x_{n}$ converging strongly to 0, there exists a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ that converges strongly to some point of $H.$ \[opial\][(Opial  [@17])]{} Let $H$ be a real Hilbert space and $\{x_{n}\}$ be a sequence in $H$ for which there exists a nonempty set $C\subset H$ such that the following conditions are satisfied: 1. For each $x\in C$, $\underset{n\to\infty}{\lim}{\|x_{n}-x\|}$ exists, 2.
Any weak-cluster point of the sequence $\{x_{n}\}$ belongs to $C.$ Then, there exists $y\in C$ such that $\{x_{n}\}$ converges weakly to $y.$ In the sequel, we adopt the following notations: - $I:$ The identity operator on a Hilbert space $H,$ - $Fix(T):$ The fixed point set of $T$ i.e., $Fix(T)=\{x\in H: Tx=x\},$ - $"\rightarrow "$and $"\rightharpoonup"$ Strong and weak convergence, respectively, - $\omega_{\omega}(x_{n}):$ The set of cluster points of $\{x_{n}\}$ in the weak topology, i.e., $\{x: {\rm~there~ exists~} \{x_{n_{k}}\}\subset\{x_{n}\} {\rm~such~that~} x_{n_{k}}\rightharpoonup x\},$ - $\Omega:$ The solution set of Problem (\[1.4\]), i.e., $$\Omega=\Big\{(x^{*},y^{*}): x^{*}\in C\cap Fix(U),\ y^{*}\in Q\cap Fix(T) {\rm~and~} Ax^{*}=By^{*}\Big\}.\label{2.2}$$

Main Results
============

To approximate the solution of the split feasibility and fixed point equality problems (\[2.2\]), we make the following assumptions: 1. $H_{1},$ $H_{2},$ $H_{3},$ are real Hilbert spaces, $C$ and $Q$ are two nonempty closed convex subsets of $H_{1}$ and $H_{2},$ respectively. 2. $U:H_{1}\to H_{1}$ and $T:H_{2}\to H_{2}$ are two quasi-nonexpansive mappings with $Fix(U)\neq\emptyset$ and $Fix(T)\neq\emptyset.$ 3. $A:H_{1}\to H_{3}$ and $B:H_{2}\to H_{3}$ are two bounded linear operators with their adjoints $A^{*}$ and $B^{*},$ respectively. 4. $(U-I)$ and $(T-I)$ are demiclosed at zero. 5. $P_{C}$ and $P_{Q}$ are the metric projections of $H_{1}$ and $H_{2}$ onto $C$ and $Q,$ respectively. 6. For arbitrary $x_{1}\in H_{1}$ and $ y_{1}\in H_{2},$ define a sequence $\{(x_{n}, y_{n})\}$ by: $${} \left\{ \begin{array}{ll} z_{n}=P_{C}(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})), \\ w_{n}=(1-\beta_{n})z_{n}+\beta_{n}U(z_{n}), \\ x_{n+1}=(1-\alpha_{n})z_{n}+\alpha_{n}U(w_{n}), \\ \\u_{n}=P_{Q}(y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n})), \\ r_{n}=(1-\beta_{n})u_{n}+\beta_{n}T(u_{n}), \\ y_{n+1}=(1-\alpha_{n})u_{n}+\alpha_{n}T(r_{n}), \forall n\geq 1.
& \textrm{ $ $} \end{array} \right.\label{BUL}$$ where $0<a<\beta_{n}<1,$ $0<b<\alpha_{n}<1,$ and $\lambda_{n}\in\left(0, \frac{2}{L_{1}+L_{2}}\right),$ where $L_{1}$ and $L_{2}$ denote the spectral radii of the operators $A^{*}A$ and $B^{*}B,$ respectively. We are now in a position to state and prove our main result. \[T1\] Suppose that assumptions $(B_{1})-(B_{6})$ are satisfied and, in addition, assume that the solution set $\Omega\neq\emptyset.$ Then, the sequence $\{(x_{n}, y_{n})\}$ generated by Algorithm (\[BUL\]) converges weakly to some $(x^{*}, y^{*})\in\Omega.$ Let $(x^{*}, y^{*})\in\Omega.$ By (\[BUL\]), we have $$\begin{aligned} \left\|x_{n+1}-x^{*}\right\|^{2}&=\left\|(1-\alpha_{n})(z_{n}-x^{*})+\alpha_{n}(Uw_{n}-x^{*})\right\|^{2}\nonumber \\=&(1-\alpha_{n})\left\|z_{n}-x^{*}\right\|^{2}+\alpha_{n}\left\|Uw_{n}-x^{*}\right\|^{2}-\alpha_{n}(1-\alpha_{n})\left\|Uw_{n}-z_{n}\right\|^{2}\nonumber \\\leq& (1-\alpha_{n})\left\|z_{n}-x^{*}\right\|^{2} +\alpha_{n}\left\|w_{n}-x^{*}\right\|^{2}-\alpha_{n}(1-\alpha_{n})\left\|Uw_{n}-z_{n}\right\|^{2}.\label{a}\end{aligned}$$ On the other hand, $$\begin{aligned} \left\|w_{n}-x^{*}\right\|^{2}&=\left\|(1-\beta_{n})(z_{n}-x^{*})+\beta_{n}(Uz_{n}-x^{*})\right\|^{2}\nonumber \\=&(1-\beta_{n})\left\|z_{n}-x^{*}\right\|^{2}+\beta_{n}\left\|Uz_{n}-x^{*}\right\|^{2}-\beta_{n}(1-\beta_{n})\left\|Uz_{n}-z_{n}\right\|^{2}\nonumber \\\leq& \left\|z_{n}-x^{*}\right\|^{2}-\beta_{n}(1-\beta_{n})\left\|Uz_{n}-z_{n}\right\|^{2},\label{b}\end{aligned}$$ and $$\begin{aligned} \left\|z_{n}-x^{*}\right\|^{2}&=\left\|P_{C}(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n}))-P_{C}(x^{*})\right\|^{2}\nonumber \\&\leq \left\|x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})-x^{*}\right\|^{2}\nonumber \\&\leq \left\|x_{n}-x^{*}\right\|^{2}-2\lambda_{n}\left\langle Ax_{n}-Ax^{*}, Ax_{n}-By_{n}\right\rangle+\lambda_{n}^{2}L_{1}\left\|Ax_{n}-By_{n}\right\|^{2}.\label{c}\end{aligned}$$ From $(\ref{a})-(\ref{c}),$ we obtain that $$\begin{aligned}
\left\|x_{n+1}-x^{*}\right\|^{2}&\leq\left\|x_{n}-x^{*}\right\|^{2}-2\lambda_{n}\left\langle Ax_{n}-Ax^{*}, Ax_{n}-By_{n}\right\rangle+\lambda_{n}^{2}L_{1}\left\|Ax_{n}-By_{n}\right\|^{2}\nonumber \\&-\alpha_{n}\beta_{n}(1-\beta_{n})\left\|U(z_{n})-z_{n}\right\|^{2}-\alpha_{n}(1-\alpha_{n})\left\|Uw_{n}-z_{n}\right\|^{2}.\label{d}\end{aligned}$$ Similarly, the second half of Algorithm (\[BUL\]) gives $$\begin{aligned} \left\|y_{n+1}-y^{*}\right\|^{2}&\leq\left\|y_{n}-y^{*}\right\|^{2}+2\lambda_{n}\left\langle By_{n}-By^{*}, Ax_{n}-By_{n}\right\rangle+\lambda_{n}^{2}L_{2}\left\|Ax_{n}-By_{n}\right\|^{2}\nonumber \\&-\alpha_{n}\beta_{n}(1-\beta_{n})\left\|T(u_{n})-u_{n}\right\|^{2}-\alpha_{n}(1-\alpha_{n})\left\|Tr_{n}-u_{n}\right\|^{2}.\label{e}\end{aligned}$$ By (\[d\]), (\[e\]) and noticing that $Ax^{*} = By^{*},$ we deduce that $$\begin{aligned} \left\|x_{n+1}-x^{*}\right\|^{2}+\left\|y_{n+1}-y^{*}\right\|^{2}&\leq \left\|x_{n}-x^{*}\right\|^{2}+\left\|y_{n}-y^{*}\right\|^{2}-2\lambda_{n}\left\|Ax_{n}-By_{n}\right\|^{2}\nonumber \\&+\lambda_{n}^{2}(L_{1}+L_{2})\left\|Ax_{n}-By_{n}\right\|^{2}\nonumber \\&-\alpha_{n}\beta_{n}(1-\beta_{n})\left\|U(z_{n})-z_{n}\right\|^{2}\nonumber \\&-\alpha_{n}\beta_{n}(1-\beta_{n})\left\|T(u_{n})-u_{n}\right\|^{2}.\label{f}\end{aligned}$$ Thus, we deduce that $$\begin{aligned} \Omega_{n+1}&\leq \Omega_{n}-\lambda_{n}\left(2-\lambda_{n}(L_{1}+L_{2})\right)\left\|Ax_{n}-By_{n}\right\|^{2}\nonumber \\&-\alpha_{n}\beta_{n}(1-\beta_{n})\left\|U(z_{n})-z_{n}\right\|^{2}-\alpha_{n}\beta_{n}(1-\beta_{n})\left\|T(u_{n})-u_{n}\right\|^{2},\label{g}\end{aligned}$$ where $$\Omega_{n}:=\left\|x_{n}-x^{*}\right\|^{2}+\left\|y_{n}-y^{*}\right\|^{2}.$$ Thus, $\{\Omega_{n}\}$ is a non-increasing sequence bounded below by 0; therefore, it converges.
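The Fejér-type monotonicity expressed by (\[g\]) can be observed numerically. The following sketch runs Algorithm (\[BUL\]) on a toy instance; all concrete choices here ($U(x)=x/2,$ $T(y)=y/2,$ $A=1,$ $B=2,$ $C=Q=\Re$ so that $P_{C}=P_{Q}=I,$ $\lambda_{n}=0.3,$ $\alpha_{n}=\beta_{n}=0.5$) are hypothetical illustrations, not taken from the paper:

```python
# Toy instance of Algorithm (BUL); every choice below is hypothetical:
# H1 = H2 = H3 = R, C = Q = R (so P_C = P_Q = identity),
# U(x) = x/2 and T(y) = y/2 are quasi-nonexpansive with Fix = {0},
# A = 1 and B = 2, hence L1 = 1, L2 = 4 and lam must lie in (0, 2/5).
U = lambda x: x / 2
T = lambda y: y / 2
A, B = 1.0, 2.0
lam, alpha, beta = 0.3, 0.5, 0.5

x, y = 3.0, -2.0
Omega = [x**2 + y**2]              # squared distance to the solution (0, 0)
for _ in range(100):
    res = A * x - B * y            # residual A x_n - B y_n
    z = x - lam * A * res          # P_C = identity on R
    w = (1 - beta) * z + beta * U(z)
    x_new = (1 - alpha) * z + alpha * U(w)
    u = y + lam * B * res          # P_Q = identity on R
    r = (1 - beta) * u + beta * T(u)
    y_new = (1 - alpha) * u + alpha * T(r)
    x, y = x_new, y_new
    Omega.append(x**2 + y**2)
```

In this run $\Omega_{n}$ decreases monotonically to $0$ and $|Ax_{n}-By_{n}|\to 0$, in agreement with (\[g\]).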
From (\[g\]) and the fact that $\{\Omega_{n}\}$ converges, we deduce that $$\begin{aligned} \underset{n\to\infty}{\lim}\left\|Ax_{n}-By_{n}\right\|=0,\label{q}\end{aligned}$$ $$\begin{aligned} \underset{n\to\infty}{\lim}\left\|Uz_{n}-z_{n}\right\|=0 {\rm~and~} \underset{n\to\infty}{\lim}\left\|Tu_{n}-u_{n}\right\|=0.\label{K} \end{aligned}$$ Furthermore, since $\{\Omega_{n}\}$ converges, the sequences $\{x_{n}\}$ and $\{y_{n}\}$ are bounded.\ Now, let $(x,y)$ be a weak cluster point of $\{(x_{n}, y_{n})\};$ passing to a subsequence if necessary, we may assume that $x_{n}\rightharpoonup x$ and $y_{n}\rightharpoonup y.$ We show that $(x,y)\in\Omega,$ i.e., that $x\in C\cap Fix(U),$ $y\in Q\cap Fix(T)$ and $Ax=By.$ The fact that $x_{n}\rightharpoonup x$ and $\underset{n\to\infty}{\lim}\left\|Ax_{n}-By_{n}\right\|=0,$ together with $$z_{n}=P_{C}(x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n})),$$ implies that $z_{n}\rightharpoonup P_{C}x.$ Since $x\in C,$ by the projection theorem, we obtain that $P_{C}x=x.$ Hence, $z_{n}\rightharpoonup x.$ Similarly, the fact that $y_{n}\rightharpoonup y$ and $\underset{n\to\infty}{\lim}\left\|Ax_{n}-By_{n}\right\|=0,$ together with $$u_{n}=P_{Q}(y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n})),$$ implies that $u_{n}\rightharpoonup P_{Q}y.$ Since $y\in Q,$ by the projection theorem, we obtain that $P_{Q}y=y.$ Hence, $u_{n}\rightharpoonup y.$ Now, $z_{n}\rightharpoonup x$ and $\underset{n\to\infty}{\lim}\left\|Uz_{n}-z_{n}\right\|=0,$ together with the demiclosedness of $(U-I)$ at zero, imply that $x\in Fix(U).$ On the other hand, $u_{n}\rightharpoonup y$ and $\underset{n\to\infty}{\lim}\left\|Tu_{n}-u_{n}\right\|=0,$ together with the demiclosedness of $(T-I)$ at zero, imply that $y\in Fix(T).$ Since $z_{n}\rightharpoonup x,$ $u_{n}\rightharpoonup y$ and $A$ and $B$ are bounded linear operators, we have $$Az_{n}\rightharpoonup Ax {\rm~~and~~} Bu_{n}\rightharpoonup By.$$ This implies that $$Az_{n}-Bu_{n}\rightharpoonup Ax-By,$$ which in turn implies that
$$\left\|Ax-By\right\|\leq\underset{n\to\infty}{\liminf}\left\|Az_{n}-Bu_{n}\right\|=0,$$ which further implies that $Ax=By.$ Noticing that $x\in C,$ $x\in Fix(U),$ $y\in Q$ and $y\in Fix(T),$ we have $x\in C\cap Fix(U)$ and $y\in Q\cap Fix(T).$ Hence, we conclude that $(x,y)\in \Omega.$ Summing up, we have proved that: 1. for each $(x^{*}, y^{*})\in\Omega,$ the limit $\underset{n\to\infty}{\lim}\left(\left\|x_{n}-x^{*}\right\|^{2}+\left\|y_{n}-y^{*}\right\|^{2}\right)$ exists; 2. each weak cluster point of the sequence $\{(x_{n}, y_{n})\}$ belongs to $\Omega.$ Thus, by Lemma \[opial\], we conclude that the sequence $\{(x_{n}, y_{n})\}$ converges weakly to some $(x^{*}, y^{*})\in\Omega,$ and the proof is complete. \[T2\] Suppose that all the hypotheses of Theorem \[T1\] are satisfied and that, in addition, $U$ and $T$ are semi-compact. Then, the sequence $\{(x_{n}, y_{n})\}$ converges strongly to $(x^{*}, y^{*})\in\Omega.$ As in the proof of Theorem \[T1\], $\{z_{n}\}$ and $\{u_{n}\}$ are bounded; by (\[K\]) and the fact that $U$ and $T$ are semi-compact, there exist subsequences $\{z_{n_{k}}\}$ and $\{u_{n_{k}}\}$ of $\{z_{n}\}$ and $\{u_{n}\}$ (without loss of generality) such that $z_{n_{k}}\to x$ and $u_{n_{k}}\to y.$ Since $z_{n}\rightharpoonup x^{*}$ and $u_{n}\rightharpoonup y^{*},$ we have $x=x^{*}$ and $y=y^{*}.$ By (\[q\]) and the fact that $z_{n_{k}}\to x^{*}$ and $u_{n_{k}}\to y^{*},$ we have $$\begin{aligned} \left\|Ax^{*}-By^{*}\right\|=\underset{k\to\infty}{\lim}\left\|Az_{n_{k}}-Bu_{n_{k}}\right\|=0,\end{aligned}$$ which in turn implies that $Ax^{*}=By^{*}$. Hence $(x^{*}, y^{*})\in\Omega$. Thus, the iterative algorithm of Theorem \[T1\] converges strongly to a solution of Problem (\[2.2\]). Suppose that conditions $(B_{1})-(B_{6})$ are satisfied and let the sequence $\{(x_{n}, y_{n})\}$ be generated by Algorithm (\[BUL\]). Assume that $\Omega\neq\emptyset$ and let $U$ and $T$ be firmly quasi-nonexpansive mappings.
Then, the sequence $\{(x_{n}, y_{n})\}$ generated by Algorithm (\[BUL\]) converges weakly to a solution of Problem (\[2.2\]). Suppose that conditions $(B_{1})-(B_{5})$ are satisfied and let the sequence $\{(x_{n}, y_{n})\}$ be generated by $${} \left\{ \begin{array}{ll} z_{n}=x_{n}-\lambda_{n} A^{*}(Ax_{n}-By_{n}), \\ x_{n+1}=(1-\alpha_{n})z_{n}+\alpha_{n}U(z_{n}), \\ \\u_{n}=y_{n}+\lambda_{n} B^{*}(Ax_{n}-By_{n}), \\ y_{n+1}=(1-\alpha_{n})u_{n}+\alpha_{n}T(u_{n}), \forall n\geq 0. & \textrm{ $ $} \end{array} \right.\label{BUL3}$$ where $0<b<\alpha_{n}<1$ and $\lambda_{n}\in\left(0, \frac{2}{L_{1}+L_{2}}\right),$ where $L_{1}$ and $L_{2}$ denote the spectral radii of the operators $A^{*}A$ and $B^{*}B,$ respectively. Assume that $\Omega\neq\emptyset.$ Then, the sequence $\{(x_{n}, y_{n})\}$ generated by Algorithm (\[BUL3\]) converges weakly to a solution of SEFPP (\[1.3\]). Trivially, Algorithm (\[BUL\]) reduces to Algorithm (\[BUL3\]) as $\beta_{n}=0,$ $P_{C}=P_{Q}=I,$ and SFFPEP (\[1.4\]) reduces to SEFPP (\[1.3\]) as $C:=Fix(U)$ and $Q:=Fix(T).$ Therefore, all the hypotheses of Theorem \[T1\] are satisfied, and the proof of this corollary follows directly from Theorem \[T1\]. Numerical Example ================= In this section, we give a numerical example to illustrate our theoretical results. \[example\] Let $H_{1}=\Re$ with the inner product defined by $\left\langle x, y\right\rangle=xy$ for all $x, y\in \Re$ and $\|.\|$ the corresponding norm. Let $C:=[0,\infty),$ $Q:=[0,\infty)$ and define the mappings $T:C\to \Re$ and $S:Q\to \Re$ by $Tx = \frac{x^{2}+5}{1+x}$ for all $x\in C$ and $Sx=\frac{x+5}{5}$ for all $x\in Q.$ Then $T$ and $S$ are quasi-nonexpansive mappings.
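Before verifying this claim analytically, it can also be spot-checked numerically. The sketch below (hypothetical code, for illustration only) samples points of $[0,\infty)$ and tests the inequalities $|Tx-5|\leq|x-5|$ and $|Sx-\frac{5}{4}|\leq|x-\frac{5}{4}|$:

```python
# Numerical spot-check (hypothetical sketch) of the quasi-nonexpansive
# inequalities |Tx - 5| <= |x - 5| and |Sx - 5/4| <= |x - 5/4| on [0, inf),
# where 5 and 5/4 are the fixed points of T and S.
import random

T = lambda x: (x**2 + 5) / (1 + x)
S = lambda x: (x + 5) / 5

random.seed(0)
ok = all(
    abs(T(x) - 5) <= abs(x - 5) + 1e-9 and
    abs(S(x) - 5 / 4) <= abs(x - 5 / 4) + 1e-9
    for x in (random.uniform(0, 100) for _ in range(10_000))
)
```

The check passes on all sampled points, consistent with the analytic verification that follows.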
Trivially, $Fix(T)=\{5\}$ and $Fix(S)=\left\{\frac{5}{4}\right\}.$ Now, $$\begin{aligned} \left|Tx-5 \right|&=\left|\frac{x^{2}+5}{1+x}-5\right|=\frac{x}{1+x}\left|x-5\right| \\&\leq \left|x-5\right|.\end{aligned}$$ On the other hand, $$\begin{aligned} \left|Sx-\frac{5}{4} \right|&=\left|\frac{x+5}{5}-\frac{5}{4}\right|=\frac{1}{5}\left|x-\frac{5}{4}\right| \\&\leq \left|x-\frac{5}{4}\right|.\end{aligned}$$ Hence, $T$ and $S$ are quasi-nonexpansive mappings. Let $H_{1}=H_{2}=H_{3}=\Re$ with the inner product defined by $\left\langle x, y\right\rangle=xy$ for all $x, y\in \Re$ and $\|.\|$ the corresponding norm. Let $C:=[0,\infty),$ $Q:=[0,\infty)$ and define the mappings $U:C\to \Re$ and $T:Q\to \Re$ by $Ux = \frac{x^{2}+5}{1+x}$ for all $x\in C$ and $Tx=\frac{x+5}{5}$ for all $x\in Q.$ Let also $P_{C}=P_{Q}=I,$ $Ax=x,$ $By=4y,$ $\lambda_{n}=1$, $\alpha_{n}=\frac{1}{5},$ $\beta_{n}=\frac{1}{8},$ and let $\{(x_{n},y_{n})\}$ be the sequence generated by $${} \left\{ \begin{array}{ll} x_{0}\in C{\rm~~and~~} y_{0}\in Q, \\z_{n}=P_{C}(x_{n}- A^{*}(x_{n}-4y_{n})), \\ w_{n}=(1-\frac{1}{8})z_{n}+\frac{1}{8}U(z_{n}), \\ x_{n+1}=(1-\frac{1}{5})z_{n}+\frac{1}{5}U(w_{n}), \\ \\u_{n}=P_{Q}(y_{n}+ B^{*}(x_{n}-4y_{n})), \\ r_{n}=(1-\frac{1}{8})u_{n}+\frac{1}{8}T(u_{n}), \\ y_{n+1}=(1-\frac{1}{5})u_{n}+\frac{1}{5}T(r_{n}), \forall n\geq 0. & \textrm{ $ $} \end{array} \right.\label{BUL11}$$ Then, $\{(x_{n},y_{n})\}$ converges to $(5,5/4)\in\Omega$. By Example \[example\], $U$ and $T$ are quasi-nonexpansive mappings. Clearly, $A$ and $B$ are bounded linear operators on $\Re$ with $A=A^{*}=1$ and $B=B^{*}=4,$ respectively.
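The iteration is straightforward to run on a computer. The sketch below (hypothetical code, not part of the paper) iterates the simplified, decoupled form of Algorithm (\[BUL11\]) in which $z_{n}=x_{n}$ and $u_{n}=y_{n}$ (the form used to produce the tables below), starting from $x_{0}=10,$ $y_{0}=15$:

```python
# Iterate the simplified form of Algorithm (BUL11), in which z_n = x_n and
# u_n = y_n, so the x- and y-updates decouple.  Expected limits: x_n -> 5
# and y_n -> 5/4, the fixed points of U and T.
U = lambda x: (x**2 + 5) / (1 + x)
T = lambda y: (y + 5) / 5

x, y = 10.0, 15.0                  # initial values of the first table
for _ in range(250):
    w = (7 / 8) * x + (1 / 8) * U(x)
    x = (4 / 5) * x + (1 / 5) * U(w)
    r = (7 / 8) * y + (1 / 8) * T(y)
    y = (4 / 5) * y + (1 / 5) * T(r)
```

After 250 iterations the iterates are close to $(5, 5/4)$, in agreement with the tabulated values.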
Furthermore, it is easy to see that $Fix(U)=\{5\}$ and $Fix(T)=\left\{\frac{5}{4}\right\}.$ Hence, $$\Omega=\Big\{(5, 5/4)\Big\},$$ since $5\in C\cap Fix(U),$ $5/4\in Q\cap Fix(T)$ and $A(5)=B(5/4).$ After simplification, Algorithm (\[BUL11\]) reduces to $${} \left\{ \begin{array}{ll} x_{0}\in C{\rm~~and~~} y_{0}\in Q, \\z_{n}=x_{n}, \\ w_{n}=\frac{7}{8}z_{n}+\frac{1}{8}(\frac{z_{n}^{2}+5}{z_{n}+1}), \\ x_{n+1}=\frac{4}{5}z_{n}+\frac{1}{5}(\frac{w_{n}^{2}+5}{w_{n}+1}), \\ \\u_{n}=y_{n}, \\ r_{n}=\frac{7}{8}u_{n}+\frac{1}{8}(\frac{u_{n}+5}{5}), \\ y_{n+1}=\frac{4}{5}u_{n}+\frac{1}{5}(\frac{r_{n}+5}{5}), \forall n\geq 0. & \textrm{ $ $} \end{array} \right.$$

------------- ------------- -------------
      n          $x_{n}$       $y_{n}$
------------- ------------- -------------
      0        10.00000000   15.00000000
      1        9.898293685   12.74500000
      2        9.797736851   10.85982000
      3        9.698337655   9.283809520
      .             .             .
     248       5.001051418   1.250000002
     249       5.001012726   1.250000002
     250       5.000975458   1.250000002
------------- ------------- -------------

: Starting with initial values $x_{0}=10$ and $y_{0}= 15$[]{data-label="fig1"}

![The convergence of $\{(x_{n}, y_{n})\}$ with the initial value $x_{0}=10$ and $y_{0}=15$[]{data-label="fig2"}](LBM6)

------------- ------------- -------------
      n          $x_{n}$       $y_{n}$
------------- ------------- -------------
      0        5.000000000   1.250000000
      1        5.000000000   1.250000000
      2        5.000000000   1.250000000
      .             .             .
      98       5.000000000   1.250000000
      99       5.000000000   1.250000000
     100       5.000000000   1.250000000
------------- ------------- -------------

: Starting with initial values $x_{0}=5$ and $y_{0}= 1.25$[]{data-label="tab:StartingWithInialValue101"}

![The convergence of $\{(x_{n}, y_{n})\}$ with the initial value $x_{0}=5$ and $y_{0}=1.25$[]{data-label="fig4"}](LBM5)

Conclusion ========== In this paper, we introduced a new problem called the split feasibility and fixed point equality problem (SFFPEP) and studied it for the class of quasi-nonexpansive mappings in Hilbert spaces.
Under some suitable assumptions imposed on the parameters and operators involved, we proved a weak convergence theorem for the proposed algorithm. Furthermore, we gave a numerical example that illustrates our theoretical results. The results presented in this paper extend and complement the results of Moudafi [@1], Moudafi and Al-Shemas [@2], Chen et al. [@8], Byrne [@9], Yuan et al. [@15] and Chidume et al. [@14]. The split feasibility and fixed point equality problem (SFFPEP) is a very interesting topic. It generalizes the split feasibility problem (SFP), the fixed point problem (FPP), the split feasibility and fixed point problem (SFFPP) and the split equality fixed point problem (SEFPP). All the results and conclusions that are true for the SFFPEP continue to hold for these problems (SFP, FPP, SFFPP and SEFPP), which shows the significance and the range of applicability of the SFFPEP. Theorem \[T2\] gives a strong convergence result for the class of quasi-nonexpansive mappings under the assumption that each mapping is semi-compact. This compactness-type condition appears rather strong, since only few mappings are semi-compact. This leads us to the following questions: 1. Can the strong convergence of the sequence in Theorem \[T1\] be obtained without imposing the semi-compactness condition? 2. If the answer is affirmative, can strong convergence hold for an infinite family of quasi-nonexpansive mappings? This will be our future research. [cite]{} Moudafi, A. (2014). Alternating CQ-algorithm for convex feasibility and split fixed-point problems. J. Nonlinear Convex Anal, 15(4), 809–818. Moudafi, A., & Al-Shemas, E. (2013). Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl, 1(2), 1–11. Moudafi, A. (2002). Mixed equilibrium problems: sensitivity analysis and algorithmic aspect.
Computers & Mathematics with Applications, 44(8), 1099–1108. Censor, Y., & Elfving, T. (1994). A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms, 8(2), 221–239. Censor, Y., Bortfeld, T., Martin, B., & Trofimov, A. (2006). A unified approach for inversion problems in intensity-modulated radiation therapy. Physics in Medicine and Biology, 51(10), 2353–2365. Censor, Y., Elfving, T., Kopf, N., & Bortfeld, T. (2005). The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems, 21(6), 2071–2084. Censor, Y., Motova, A., & Segal, A. (2007). Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. Journal of Mathematical Analysis and Applications, 327(2), 1244–1256. Chen, J. Z., Ceng, L. C., Qiu, Y. Q., & Kong, Z. R. (2015). Extra-gradient methods for solving split feasibility and fixed point problems. Fixed Point Theory and Applications, 2015(1), 1–21. Ma, Y. F., Wang, L., & Zi, X. J. (2013). Strong and weak convergence theorems for a new split feasibility problem. In Int. Math. Forum, 8(33), 1621–1627. Byrne, C. (2002). Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Problems, 18(2), 441–453. Byrne, C. (2003). A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems, 20(1), 103–120. Xu, H. K. (2010). Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Problems, 26(10), 105018. Qu, B., & Xiu, N. (2005). A note on the CQ algorithm for the split feasibility problem. Inverse Problems, 21(5), 1655–1665. Byrne, C., & Moudafi, A. (2013). Extensions of the CQ algorithm for the split feasibility and split equality problems. Nonlinear and Convex Anal. Chidume, C. E., Ndambomve, P., & Bello, U. A. (2015). The split equality fixed point problem for demicontractive mappings.
Journal of Nonlinear Analysis and Optimization: Theory & Applications, 6(1), 61–69. He, Z., & Du, W. S. (2012). Nonlinear algorithms approach to split common solution problems. Fixed Point Theory and Applications, 2012(1), 1–14. Opial, Z. (1967). Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society, 73(4), 591–597.
[Bifurcation of asymptotically stable periodic solutions in nearly impact oscillators: DRAFT ]{}\ [Oleg Makarenkov]{}\ 0.2truecm Department of Mathematics, Imperial College London, UK\ \ 0.2truecm Mathematical Institute, University of Utrecht, Netherlands 09 September 2009 Introduction ============ Since the observation by Glover-Lazer-McKenna [@lazer] that a simple harmonic oscillator with a piecewise linear stiffness (jumping nonlinearity) contributes to the explanation of the failure of the Tacoma bridge, the study of periodic oscillations in such models has received much attention from mathematicians; see the recent survey [@mawhin] and the papers [@yagasaki; @blm]. Moreover, new engineering studies of impact oscillators open up a large potential for challenging extensions of these results. In fact, Ivanov [@ivanov] argued that a harmonic oscillator with a jumping nonlinearity in which one part of the force field is nearly infinite is a better model for describing the bouncing ball than its limit version, the impact oscillator. In our modeling, the resulting system of differential equations is singularly perturbed, but, as we discuss below, the classical singular perturbation theory does not apply. In this paper we develop an averaging-like approach which solves the problem in the weakly nonlinear case. For a discussion of the use of the averaging method for regular impacting systems we refer to [@BK01].
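The smoothing idea can be illustrated numerically before any analysis. In the sketch below (a hypothetical illustration: forcing terms set to zero, $\omega_{\varepsilon}=1$, and a simple symplectic Euler integrator), a harmonic oscillator whose stiffness is $1/{\varepsilon}^{2}$ for $x<0$ spends a time of order ${\varepsilon}\pi$ below zero and exits with its velocity nearly reversed, mimicking the impact law $\dot{x}(t+0)=-\dot{x}(t-0)$ of the limiting bouncing-ball model:

```python
# Hypothetical illustration: harmonic oscillator with stiffness 1 for x > 0
# and 1/eps^2 for x < 0 (no forcing, omega = 1).  For small eps the excursion
# into x < 0 lasts about eps*pi, so in the limit the trajectory obeys the
# impact law xdot(t+) = -xdot(t-) at x = 0.
import math

eps = 0.01

def acc(x):
    return -x if x > 0 else -x / eps**2   # stiff restoring force below zero

x, v, t, dt = 1.0, 0.0, 0.0, 1e-5
time_below = 0.0
while t < math.pi / 2 + 0.5:              # covers one excursion below zero
    v += dt * acc(x)                      # symplectic Euler step
    x += dt * v
    t += dt
    if x < 0:
        time_below += dt
```

The recorded time spent below zero is close to ${\varepsilon}\pi$, the half-period of the stiff oscillator, independently of the entry velocity.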
To be explicit, our approach concerns the existence and stability of periodic oscillations of the following system $$\label{ps} \begin{array}{l} \ddot{x}+x={\varepsilon}f(t,x,\dot{x},{\varepsilon}), \qquad x>0, \\ \ddot{x}+\dfrac{1}{{\varepsilon}^2(\omega_{\varepsilon})^2}x=g(t,x,\dot{x},{\varepsilon}), \quad x<0, \end{array}$$ where $f,g \in C^1(\mathbb{R} \times \mathbb{R}\times \mathbb{R}\times[0,1],\mathbb{R}),$ ${\varepsilon}>0$ is a small parameter, and $\omega_{\varepsilon}\to \omega_0 \in \mathbb{R} \ {\rm as}\ {\varepsilon}\to 0.$ System (\[ps\]) can be considered as a smoothed version of a system with impacts. We will study resonance oscillations and assume, therefore, that $$f(t+\pi,u,v,{\varepsilon}) \equiv f(t,u,v, {\varepsilon}), \qquad g(t+\pi,u,v,{\varepsilon}) \equiv g(t,u,v,{\varepsilon}).$$ System (\[ps\]) represents a natural singular perturbation description of impact phenomena which is different from the usual approaches. Our main result (Theorem 1) states that the emergence of asymptotically stable $\pi$-periodic solutions in (\[ps\]) from $\pi$-periodic cycles of the non-smooth limiting system $$\label{np} \begin{array}{l} \ddot{x}+x=0, \qquad x>0,\\ \dot{x}(t-0)=-\dot{x}(t+0), \quad x(t)=0, \end{array}$$ can be studied by a special form of the averaging method combined with a suitable scaling of time when solutions pass through the half plane $x<0$. This involves the use of the implicit function theorem for a non-smooth problem in the limit as $\varepsilon \rightarrow 0$; this is made possible by introducing a suitable Poincaré map. The result is a change of stability of a fixed point when its eigenvalues enter the unit disc from outside through the imaginary axis. Main result =========== We prove the following theorem. Let $f,g \in C^1(\mathbb{R}\times\mathbb{R}\times\mathbb{R},\mathbb{R})$ be $\pi$-periodic with respect to time.
Define $$\begin{array}{rcl} \hskip-0.2cm\overline{P}(A,\theta)&=&-\int\limits_0^{\frac{\pi}{2}-\theta}\begin{pmatrix} \sin(\tau+\theta)\\ \dfrac{1}{A}\cos(\tau+\theta)\end{pmatrix} (f(\tau,A\cos(\tau+\theta),-A\sin(\tau+\theta),0)-2\omega_0A\cos(\tau+\theta))d\tau-\\ &&\hskip-0.3cm-\int\limits_{\frac{\pi}{2}-\theta}^\pi \begin{pmatrix} \sin(\tau+\theta+\pi)\\ \dfrac{1}{A}\cos(\tau+\theta+\pi)\end{pmatrix} (f(\tau,A\cos(\tau+\theta+\pi),-A\sin(\tau+\theta+\pi),0)-\\ &&\hskip-0.3cm-2\omega_0A\cos(\tau+\theta+\pi))d\tau-\omega_0 \int\limits_0^\pi \begin{pmatrix} \sin\left(s+\dfrac{\pi}{2}\right)\\0\end{pmatrix}g \left( \dfrac{\pi}{2}-\theta,0,-A\sin \left( s+\dfrac{\pi}{2}\right),0\right)ds. \end{array}$$ If $\overline{P}(A_0,\theta_0)=0$ for some $(A_0,\theta_0)\in\mathbb{R}\times(0,\pi)$ and the real parts of eigenvalues of $\overline{P}'(A_0,\theta_0)$ are negative, then, for any ${\varepsilon}>0$ sufficiently small, equation (\[ps\]) has a unique $\pi$-periodic solution satisfying $$x_{\varepsilon}(t) \to x_0(t)\quad \mbox{as}\quad {\varepsilon}\to 0\quad \mbox{pointwise on} \quad [0,\pi]\setminus \left\{ \dfrac{\pi}{2}-\theta_0 \right\}$$ where $x_0$ is the unique $\pi$-periodic solution of the equation (\[np\]) with initial condition $$(x(0),\dot{x}(0))=(A_0\cos \theta_0,-A_0\sin \theta_0).$$ Moreover, the solution $x_{\varepsilon}$ is asymptotically stable. [**Proof.**]{} Rewrite system (\[ps\]) as follows (see Fig. 
\[fig1\]) \[fig1\] ![Illustration of the change of variables (\[CH1\])-(\[CH3\]).](ris1.eps) $$\begin{aligned} &&\ddot{x}+\dfrac{1}{(1-{\varepsilon}\omega_{\varepsilon})^2}x={\varepsilon}f(t,x,\dot{x},{\varepsilon})-2{\varepsilon}\dfrac{\omega_{\varepsilon}}{(1-{\varepsilon}\omega_{\varepsilon})^2}x-{\varepsilon}^2\dfrac{(\omega_{\varepsilon})^2}{(1-{\varepsilon}\omega_{\varepsilon})^2}x, \quad x>0, \label{EQ1}\\ &&\ddot{x}+\dfrac{1}{({\varepsilon})^2(\omega_{\varepsilon})^2}x=g(t,x,\dot{x},{\varepsilon}), \quad x<0,\label{EQ2}\end{aligned}$$ so, that any solution of the reduced system ($\varepsilon =0$) $$\begin{aligned} &&\ddot{x}+\dfrac{1}{(1-{\varepsilon}\omega_{\varepsilon})^2}x=0, \quad x>0,\\ &&\ddot{x}+\dfrac{1}{({\varepsilon})^2(\omega_{\varepsilon})^2}x=0, \quad x<0\end{aligned}$$ is $\pi$-periodic. Let us introduce variables $(A,\theta)$ as follows $$\label{CH1} \left\{ \begin{array}{rcll} x&=&A\cos\left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\theta\right),&\\ \dot{x}&=&-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\theta\right), & \theta \in \left[0,\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right], \end{array} \right.$$ $$\label{CH2} \hskip-0.15cm\left\{ \begin{array}{rcll} x&=&{\varepsilon}A\dfrac{\omega_{\varepsilon}}{1-{\varepsilon}\omega_{\varepsilon}}\cos\left( \dfrac{1}{{\varepsilon}\omega_{\varepsilon}}(\theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}))+\dfrac{\pi}{2}\right),&\\ \dot{x}&=&-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{1}{{\varepsilon}\omega_{\varepsilon}}(\theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}))+\dfrac{\pi}{2}\right), & \theta \in \left[\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right], \end{array} \right.$$ $$\label{CH3} \hskip-0.15cm\left\{ \begin{array}{rcll} x&=&A\cos \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left( \theta 
-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}+\pi\right),&\\ \dot{x}&=&-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left(\theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}+\pi\right), & \theta \in \left[\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}),\pi\right], \end{array} \right.$$ which transforms equations (\[EQ1\])-(\[EQ2\]) to the following system $$\begin{aligned} &&\begin{pmatrix} \dot{A}\\ \dot{\theta}\end{pmatrix}=\begin{pmatrix} 0\\1\end{pmatrix}+{\varepsilon}G_1(t,A,\theta,{\varepsilon}), \qquad \mbox{if} \quad \theta \in \left[0,\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right], \label{S1}\\ &&\begin{pmatrix} \dot{A}\\ \dot{\theta}\end{pmatrix}=\begin{pmatrix} 0\\1\end{pmatrix}+{\varepsilon}G_2(t,A,\theta,{\varepsilon}), \qquad \mbox{if} \quad \theta \in \left[\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right], \label{S2}\\ &&\begin{pmatrix} \dot{A}\\ \dot{\theta}\end{pmatrix}=\begin{pmatrix} 0\\1\end{pmatrix}+{\varepsilon}G_3(t,A,\theta,{\varepsilon}), \qquad \mbox{if} \quad \theta \in \left[\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}),\pi\right], \label{S3}\end{aligned}$$ where $$G_1(t,A,\theta,{\varepsilon})= \left( \begin{array}{l} -(1-{\varepsilon}\omega_{\varepsilon})\sin \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\theta \right)\left( f \left( t,A \cos \left( \dfrac{\theta}{1-{\varepsilon}\omega_{\varepsilon}} \right),\right.\right.\\ \left.\quad \qquad\qquad \qquad \qquad\qquad \qquad-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{\theta}{1+{\varepsilon}\omega_{\varepsilon}} \right),{\varepsilon}\right)-\\ \quad \qquad\qquad \qquad \qquad\qquad \qquad \left. 
-A \dfrac{\omega_{\varepsilon}}{(1-{\varepsilon}\omega_{\varepsilon})^2}(2+{\varepsilon}\omega_{\varepsilon})\cos \left( \dfrac{\theta}{1-{\varepsilon}\omega_{\varepsilon}} \right) \right)\\ -\dfrac{(1-{\varepsilon}\omega_{\varepsilon})^2}{A}\cos \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\theta \right)\left( f \left( t,A \cos \left( \dfrac{\theta}{1-{\varepsilon}\omega_{\varepsilon}} \right),\right.\right.\\ \qquad \qquad\qquad \qquad\qquad \qquad\quad\left.-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{\theta}{1+{\varepsilon}\omega_{\varepsilon}} \right),{\varepsilon}\right)-\\ \qquad \qquad\qquad \qquad\qquad \qquad\quad\left. -A \dfrac{\omega_{\varepsilon}}{(1-{\varepsilon}\omega_{\varepsilon})^2}(2+{\varepsilon}\omega_{\varepsilon})\cos \left( \dfrac{\theta}{1-{\varepsilon}\omega_{\varepsilon}} \right) \right) \end{array}\right)$$ $$G_2(t,A,\theta,{\varepsilon})= \left( \begin{array}{l} -\dfrac{1}{{\varepsilon}}(1-{\varepsilon}\omega_{\varepsilon})\sin \left( \dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left( \theta -\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2} \right) \cdot\\ \qquad \ \cdot g \left( t,{\varepsilon}A \dfrac{\omega_{\varepsilon}}{1-{\varepsilon}\omega_{\varepsilon}}\cos \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left( \theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}) \right) +\dfrac{\pi}{2}\right),\right.\\ \qquad \qquad \left.-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{1}{{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}) \right)+\dfrac{\pi}{2}\right),{\varepsilon}\right)\\ -\dfrac{1}{A}(1-{\varepsilon}\omega_{\varepsilon})\omega_{\varepsilon}\cos \left( \dfrac{1}{{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}) \right) +\dfrac{\pi}{2}\right) \cdot\\ \qquad \ \cdot g \left( t,{\varepsilon}A 
\dfrac{\omega_{\varepsilon}}{1-{\varepsilon}\omega_{\varepsilon}}\cos \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left( \theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}) \right) +\dfrac{\pi}{2}\right),\right.\\ \qquad \qquad \left.-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{1}{{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}) \right)+\dfrac{\pi}{2}\right),{\varepsilon}\right) \end{array}\right)$$ $$G_3(t,A,\theta,{\varepsilon})= \left( \begin{array}{l} -(1-{\varepsilon}\omega_{\varepsilon})\sin \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left( \theta -\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2} +\pi\right) \cdot\\ \quad \cdot \left( f \left( t,A \cos \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left( \theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}) \right) \right)+\dfrac{\pi}{2}+\pi\right),\right.\\ \quad \quad \left.-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}) \right)+\dfrac{\pi}{2}+\pi \right),{\varepsilon}\right)-\\ \quad\quad\quad-A\dfrac{\omega_{\varepsilon}}{(1-{\varepsilon}\omega_{\varepsilon})^2}(2+{\varepsilon}\omega_{\varepsilon})\cos\left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}) \right) +\dfrac{\pi}{2}+\pi \right)\\ -\dfrac{(1-{\varepsilon}\omega_{\varepsilon})^2}{A}\cos \left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}) \right) +\dfrac{\pi}{2}+\pi\right) \cdot\\ \quad\cdot \left( f \left( t,A \cos \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left( \theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}) \right) \right)+\dfrac{\pi}{2}+\pi\right),\right.\\ \quad \quad \left.-A \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left( 
\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}) \right)+\dfrac{\pi}{2}+\pi \right),{\varepsilon}\right)-\\ \quad\quad\quad-A\dfrac{\omega_{\varepsilon}}{(1-{\varepsilon}\omega_{\varepsilon})^2}(2+{\varepsilon}\omega_{\varepsilon})\cos\left( \dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}} \left( \theta-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}) \right) +\dfrac{\pi}{2}+\pi \right)\end{array}\right).$$ If $t \mapsto (A(t),\theta(t)-t)$ is an asymptotically stable $\pi$-periodic solution of (\[S1\])-(\[S3\]) then $(x,\dot{x})$ defined by (\[CH1\])-(\[CH3\]) is an asymptotically stable $\pi$-periodic solution of (\[EQ1\])-(\[EQ2\]). To prove the existence of asymptotically stable $\pi$-periodic solutions of equations (\[S1\])-(\[S3\]) we show that each solution $(\overline{A}(\cdot,A,\theta,{\varepsilon}),\overline{\theta}(\cdot,A,\theta,{\varepsilon}))$ of (\[S1\])-(\[S3\]) with initial condition $(\overline{A}(0,A,\theta,{\varepsilon}),\overline{\theta}(0,A,\theta,{\varepsilon}))=(A,\theta)$ is defined on $[0,\pi]$ whenever $(A,\theta)$ belongs to a small neighborhood of $(A_0,\theta_0)$, and that the map $$\label{PO} P_{\varepsilon}(A,\theta)=(\overline{A}(\pi,A,\theta,{\varepsilon}),\overline{\theta}(\pi,A,\theta,{\varepsilon})-\pi)$$ is a contraction in this neighborhood. [**Step 1.**]{} First we show that the solution $t \mapsto (\overline{A}(\cdot,A,\theta,{\varepsilon}),\overline{\theta}(\cdot,A,\theta,{\varepsilon}))$ of (\[S1\])-(\[S3\]) on $[0,\pi]$ can be obtained by consecutively sewing together solutions of systems (\[S1\]), (\[S2\]) and (\[S3\]).
Denote by $t \mapsto (\overline{A}_i(\cdot,t_0,A,\theta,{\varepsilon}),\overline{\theta}_i(\cdot,t_0,A,\theta,{\varepsilon})),$ $i=1,2,3,$ the solutions of (\[S1\]), (\[S2\]), (\[S3\]) respectively with initial condition $(\overline{A}_i(t_0,t_0,A,\theta,{\varepsilon}),\overline{\theta}_i(t_0,t_0,A,\theta,{\varepsilon}))=(A,\theta).$ Put $$F_1(T,A,\theta,{\varepsilon})=\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\overline{\theta}_1(T,0,A,\theta,{\varepsilon})-\dfrac{\pi}{2}.$$ Since $F_1\left(\dfrac{\pi}{2}-\theta_0,A_0,\theta_0,0\right)=\overline{\theta}_1\left(\dfrac{\pi}{2}-\theta_0,0,A_0,\theta_0,0\right)-\dfrac{\pi}{2}=0$ and $$(F_1)'_T\left(\dfrac{\pi}{2}-\theta_0,A_0,\theta_0,0\right)=(\overline{\theta}_1)'_{T}\left(\dfrac{\pi}{2}-\theta_0,0,A_0,\theta_0,0\right)=1,$$ by the implicit function theorem [@kolm Ch. X, § 2, Theorems 1 and 2] there exists $T_1 \in C^1(\mathbb{R}\times\mathbb{R}\times [0,1],\mathbb{R})$ such that $$F_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon})=0, \qquad |A-A_0|<\delta,\ |\theta-\theta_0|<\delta, \ {\varepsilon}\in [0,\delta),$$ where $\delta>0$ is sufficiently small.
Or, equivalently, $$\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\overline{\theta}_1(T_1(A,\theta,{\varepsilon}),0,A,\theta,{\varepsilon})=\dfrac{\pi}{2}, \qquad |A-A_0|<\delta,\ |\theta-\theta_0|<\delta,\ {\varepsilon}\in [0,\delta).$$ Therefore, the solution of system (\[S1\]) with initial condition $(A,\theta)$ at $t=0$ approaches the threshold of switching to (\[S2\]) at time $T_1(A,\theta,{\varepsilon}).$ Now we show that the solution $$\begin{array}{l} \left(\overline{A}_2\left(\cdot,T_1(A,\theta,{\varepsilon}),\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}), \dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right),\right.\\ \qquad \left.\overline{\theta}_2\left(\cdot,T_1(A,\theta,{\varepsilon}),\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right)\right) \end{array}$$ stays till some time $T_2(A,\theta,{\varepsilon})$ in $$[0,\infty)\times\left[\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right]$$ and that $T_2(A,\theta,{\varepsilon})$ is given by $$T_2(A,\theta,{\varepsilon})=T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),$$ where $\widetilde{T}_2\in C^1(\mathbb{R}\times\mathbb{R}\times[0,1],\mathbb{R})$. 
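Numerically, the construction of the switching time $T_1(A,\theta,{\varepsilon})$ and the subsequent sewing of solutions corresponds to event-based integration of a piecewise-smooth system: integrate one vector field until a threshold event fires, then restart the next vector field from the event state. A minimal sketch for a toy switched system (the right-hand sides and the threshold below are illustrative, not the paper's exact (\[S1\])-(\[S3\])):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy piecewise system in (A, theta): switch vector fields when theta = pi/2.
def rhs_region1(t, y):
    A, theta = y
    return [0.0, 1.0]          # theta grows at unit speed, A constant

def rhs_region2(t, y):
    A, theta = y
    return [-0.1 * A, 1.0]     # amplitude decays in the second region

def hit_threshold(t, y):
    return y[1] - np.pi / 2    # event function: theta reaches pi/2
hit_threshold.terminal = True
hit_threshold.direction = 1

# Integrate the first region until the switching event fires ...
sol1 = solve_ivp(rhs_region1, [0, np.pi], [1.0, 0.3],
                 events=hit_threshold, rtol=1e-10, atol=1e-12)
t_switch = sol1.t_events[0][0]   # analogue of T_1(A, theta, eps)
y_switch = sol1.y_events[0][0]   # state at the switching moment

# ... then sew the second-region solution on with matching initial data.
sol2 = solve_ivp(rhs_region2, [t_switch, np.pi], y_switch,
                 rtol=1e-10, atol=1e-12)
print(t_switch)                  # here exactly pi/2 - 0.3
```

For this toy choice the switching time is known in closed form ($\pi/2-0.3$), which makes the event-detection step easy to check.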
To do this consider $$\begin{aligned} F_2(T,A,\theta,{\varepsilon})&=&\dfrac{1}{{\varepsilon}\omega_{\varepsilon}} \left( \overline{\theta}_2 (T_1(A,\theta,{\varepsilon})+{\varepsilon}T,T_1(A,\theta,{\varepsilon}), \right.\\ &&\left.\overline{A}_1(T_1(A,\theta,{\varepsilon}),0,A,\theta,{\varepsilon}),\overline{\theta}_1(T_1(A,\theta,{\varepsilon}),0,A,\theta,{\varepsilon}),{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)-\pi.\end{aligned}$$ Since $\overline{\theta}_1(T_1(A,\theta,{\varepsilon}),0,A,\theta,{\varepsilon})=\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})$, this can equivalently be written as $$F_2(T,A,\theta,{\varepsilon})=\left\{ \begin{array}{l} \dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}_2(T_1(A,\theta,{\varepsilon})+{\varepsilon}T,T_1(A,\theta,{\varepsilon}),\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\right.\\ \qquad \left.\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)-\pi, \qquad {\rm if\ \ }{\varepsilon}>0,\\ \dfrac{1}{\omega_0}T-\pi, \qquad {\rm if\ \ }{\varepsilon}=0. \end{array} \right.$$ Let us convince ourselves that the function $F_2$ verifies the assumptions of the implicit function theorem [@kolm Ch. X, § 2, Theorems 1 and 2] at the point $ (T,A,\theta,{\varepsilon})=(\omega_0\pi,A_0,\theta_0,0).
$ Since\ $\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})=\overline{\theta}_2\left(T_1(A,\theta,{\varepsilon}),T_1(A,\theta,{\varepsilon}),\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right)$ then $$\begin{aligned} \lim_{{\varepsilon}\rightarrow 0} F_2(T,A,\theta,{\varepsilon})&=&\lim_{{\varepsilon}\rightarrow 0} \dfrac{1}{{\varepsilon}\omega_{\varepsilon}}(\overline{\theta}_2)'_{(1)}(T_1(A,\theta,{\varepsilon})+\lambda (A,\theta,{\varepsilon}){\varepsilon}T,T_1(A,\theta,{\varepsilon}),\\ &&\left.\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right){\varepsilon}T-\pi=\dfrac{1}{\omega_0}T-\pi,\end{aligned}$$ that is $F_2$ is continuous at ${\varepsilon}=0$. Here $\lambda (A,\theta,{\varepsilon}) \in [0,1].$ Furthermore, we have $$(F_2)'_T\left(\dfrac{\pi}{2}-\theta_0+\pi,A_0,\theta_0,0\right)=\dfrac{1}{\omega_0}\neq 0.$$ Therefore, the implicit function theorem [@kolm Ch. 
X, § 2, Theorems 1 and 2] allows us to conclude that there exists $\widetilde{T}_2 \in C^1(\mathbb{R} \times \mathbb{R} \times [0,1],\mathbb{R})$ such that $$\begin{aligned} &&\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}_2(T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),T_1(A,\theta,{\varepsilon}),\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\right.\\ &&\left.\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)=\pi, \qquad |A-A_0|<\delta,\ |\theta-\theta_0|<\delta,\ {\varepsilon}\in [0,\delta).\end{aligned}$$ Since $\theta_0 \in (0,\pi)$, $\delta>0$ can be diminished in such a way that $$\begin{aligned} &&\overline{\theta}_3(\pi,T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),\overline{A}_2(T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),\\ &&\left.T_1(A,\theta,{\varepsilon}),\overline{A}_1(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}),\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right)<\\ &&<\dfrac{\pi}{2}-\theta+\pi+\pi, \qquad \mbox{for}\ \ {\varepsilon}>0 \ \ \mbox{sufficiently small}\end{aligned}$$ and any $|A-A_0|<\delta, |\theta-\theta_0|<\delta$. Therefore, the solution of (\[S3\]) with the initial conditions under consideration does not meet the line of discontinuity during the time interval $(T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),\pi]$. \[fig2\] ![Illustration of the sewing of the solution of system (\[S1\])-(\[S3\]) by solutions of each of its components (\[S1\]), (\[S2\]) and (\[S3\]).](ris2.eps) Summarizing, we can define the solution $t \mapsto \left(\overline{A}(t,A,\theta,{\varepsilon}),\overline{\theta}(t,A,\theta,{\varepsilon})\right)$ of system (\[S1\])-(\[S3\]) as follows (see Fig.
\[fig2\]) $$\begin{pmatrix} \overline{A}(t,A,\theta,{\varepsilon})\\ \overline{\theta}(t,A,\theta,{\varepsilon}) \end{pmatrix}=\left\{ \begin{array}{l} \begin{pmatrix} \overline{A}_1(t,0,A,\theta,{\varepsilon})\\ \overline{\theta}_1(t,0,A,\theta,{\varepsilon}) \end{pmatrix},\qquad \mbox{if} \quad t \in [0,T_1(A,\theta,{\varepsilon})],\\ \begin{pmatrix} \overline{A}_2\left(t,T_1(A,\theta,{\varepsilon}),\overline{A}(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right) \\ \overline{\theta}_2\left(t,T_1(A,\theta,{\varepsilon}),\overline{A}(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right)\end{pmatrix},\\ \qquad \qquad \mbox{if} \quad t \in [T_1(A,\theta,{\varepsilon}),T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon})],\\ \begin{pmatrix} \overline{A}_3(t,T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),\overline{A}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\\ \qquad \qquad \left.\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right)\\ \overline{\theta}_3(t,T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),\overline{A}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),A,\theta,{\varepsilon}),\\ \qquad \qquad \left.\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon}),{\varepsilon}\right) \end{pmatrix},\\ \qquad \qquad \mbox{if} \quad t \in [T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon}),\pi]. \end{array} \right.$$ [**Step 2.**]{} In this step we show that fixed points of the Poincaré map (\[PO\]) can be studied by means of the function $\overline{P}$ introduced in the formulation of the theorem.
To this end we decompose $P_{\varepsilon}$ as $$P_{\varepsilon}(A,\theta)=\begin{pmatrix} A\\ \theta \end{pmatrix} +{\varepsilon}(\overline{P}_{{\varepsilon},1}(A,\theta)+\overline{P}_{{\varepsilon},2}(A,\theta)+\overline{P}_{{\varepsilon},3}(A,\theta)),$$ where $$\begin{aligned} &&\overline{P}_{{\varepsilon},1}(A,\theta)=\int\limits_0^{T_1(A,\theta,{\varepsilon})} G_1(\tau,\overline{A}(\tau,A,\theta,{\varepsilon}),\overline{\theta}(\tau,A,\theta,{\varepsilon}),{\varepsilon})d\tau,\\ &&\overline{P}_{{\varepsilon},2}(A,\theta)=\int\limits_{T_1(A,\theta,{\varepsilon})}^{T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon})} G_2(\tau,\overline{A}(\tau,A,\theta,{\varepsilon}),\overline{\theta}(\tau,A,\theta,{\varepsilon}),{\varepsilon})d\tau,\\ &&\overline{P}_{{\varepsilon},3}(A,\theta)=\int\limits_{T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(A,\theta,{\varepsilon})}^\pi G_3(\tau,\overline{A}(\tau,A,\theta,{\varepsilon}),\overline{\theta}(\tau,A,\theta,{\varepsilon}),{\varepsilon})d\tau.\end{aligned}$$ Since $\sin$, $\cos$ and $g$ are bounded on bounded sets, it follows from system (\[S1\])-(\[S3\]) that $$\begin{pmatrix} \overline{A}(t,A,\theta,{\varepsilon})\\ \overline{\theta}(t,A,\theta,{\varepsilon}) \end{pmatrix} \to \begin{pmatrix} A\\ t+\theta \end{pmatrix} \quad \mbox{as} \quad {\varepsilon}\to 0$$ uniformly with respect to $t \in [0,\pi], \ |A-A_0|<\delta, \ | \theta-\theta_0|<\delta.$ This gives us immediately that $$\label{COP} \begin{array}{l} \overline{P}_{{\varepsilon},1}(A,\theta) \to \int\limits_0^{T_1(A,\theta,0)} G_1(\tau,A,\tau+\theta,0)d\tau,\\ \overline{P}_{{\varepsilon},3}(A,\theta) \to \int\limits_{T_1(A,\theta,0)}^\pi G_3(\tau,A,\tau+\theta,0)d\tau,\quad \mbox{as} \quad {\varepsilon}\to 0, \end{array}$$ uniformly with respect to $|A-A_0|<\delta,$ $|\theta-\theta_0|<\delta.$ Since we have proved that $T_1$ and $\widetilde{T}_2$ are continuously differentiable, (\[COP\]) implies that $$\begin{aligned}
&&(\overline{P}_{{\varepsilon},_1})'(A,\theta) \to (P_{0,1})'(A,\theta),\\ &&(\overline{P}_{{\varepsilon},_3})'(A,\theta) \to (P_{0,3})'(A,\theta), \quad \mbox{as} \quad {\varepsilon}\to 0,\end{aligned}$$ uniformly with respect to $|A-A_0|<\delta,$ $|\theta-\theta_0|<\delta$. Let us now study the behavior of $\overline{P}_{{\varepsilon},2}$ and $(\overline{P}_{{\varepsilon},2})'$ as ${\varepsilon}\to 0.$ We have $$\begin{aligned} \overline{P}_{{\varepsilon},2}(A,\theta)&=&-(1-{\varepsilon}\omega_{\varepsilon}) \int\limits_{T_1(A,\theta,{\varepsilon})}^{T_1(A,\theta,{\varepsilon})+{\varepsilon}\widetilde{T}_2(a,\theta,{\varepsilon})} \begin{pmatrix} \dfrac{1}{{\varepsilon}} \sin \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(\tau,A,\theta,{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right)\\\dfrac{1}{\overline{A}(\tau,A,\theta,{\varepsilon})}(1-{\varepsilon}\omega_{\varepsilon})\omega_{\varepsilon}\cos\left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}(\overline{\theta}(\tau,A,\theta,{\varepsilon})-\right.\\ \qquad \qquad\left.\left.-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right) \end{pmatrix} \cdot\\ &&\cdot g\left(\tau,{\varepsilon}\overline{A}(\tau,A, \theta,{\varepsilon})\dfrac{\omega_{\varepsilon}}{1-{\varepsilon}\omega_{\varepsilon}}\cos \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(\tau,A,\theta,{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right)\right.,\\ &&\left.-\overline{A}(\tau,A,\theta,{\varepsilon})\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin\left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(\tau,A,\theta,{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right),{\varepsilon}\right)d\tau.\end{aligned}$$ Scaling the time in the integral as $\tau=T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s$ we get 
$$\begin{aligned} &&\overline{P}_{{\varepsilon},2}(A,\theta)=-\omega_{\varepsilon}(1-{\varepsilon}\omega_{\varepsilon}) \int\limits_0^{T_2(A,\theta.{\varepsilon})\backslash \omega_{\varepsilon}} \left(\begin{array}{l} \sin \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(T_1(A,\theta,{\varepsilon})+{\varepsilon}s \omega_{\varepsilon},A,\theta,{\varepsilon})-\right.\right.\\ \left.\left.\quad-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right)\\ {\varepsilon}\dfrac{1}{\overline{A}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\omega,{\varepsilon})}(1-{\varepsilon}\omega_{\varepsilon})\omega_{\varepsilon}\cdot\\ \quad\cdot\cos \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})-\right.\right.\\ \qquad \left.\left.-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right) \end{array}\right) \cdot \\ &&\cdot g\left(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,{\varepsilon}\overline{A}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})\dfrac{\omega_{\varepsilon}}{1-{\varepsilon}\omega_{\varepsilon}} \cos \left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}(\overline{\theta}(T_1(A,\theta,{\varepsilon})+\right.\right.\\ &&\left.\left.+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right),-\overline{A}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\cdot\right.\\ &&\left.\cdot\sin\left(\dfrac{1}{{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}\right),{\varepsilon}\right)ds.\end{aligned}$$ Put 
$$K_{\varepsilon}(A,\theta)= \dfrac{1}{{\varepsilon}}\left(\overline{\theta}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})-\overline{\theta}(T_1(A,\theta,{\varepsilon}),A,\theta,{\varepsilon})\right).$$ Since $$\begin{aligned} &&\dfrac{1}{{\varepsilon}}\left(\overline{\theta}(T_1(A,\theta,{\varepsilon})+{\varepsilon}\omega_{\varepsilon}s,A,\theta,{\varepsilon})-\dfrac{\pi}{2}(1-{\varepsilon}\omega_{\varepsilon})\right)=\\ &&=K_{\varepsilon}(A,\theta) \to (\overline{\theta})'_{(1)}(T_1(A,\theta,0),A,\theta,0)\omega_0 s=\omega_0 s, \quad \mbox{as} \ {\varepsilon}\to 0,\end{aligned}$$ then $$\overline{P}_{{\varepsilon},2}(A,\theta) \to -\omega_0 \int\limits_0^\pi \begin{pmatrix} \sin\left(s+\dfrac{\pi}{2}\right)\\0 \end{pmatrix} g\left(\dfrac{\pi}{2}-\theta,0,A\sin\left(s+\dfrac{\pi}{2}\right),0\right)ds, \quad \mbox{as} \ \ {\varepsilon}\to 0,$$ uniformly with respect to $|A-A_0|<\delta,|\theta-\theta_0|<\delta.$ Since $(K_{\varepsilon})'(A,\theta)$ converges as ${\varepsilon}\to 0$ uniformly in $|A-A_0|<\delta$ and $|\theta -\theta_0|<\delta$ then $(K_{\varepsilon})'(A,\theta) \to (K_0)'(A,\theta)$ as ${\varepsilon}\to 0.$ Therefore, $$(\overline{P}_{{\varepsilon},2})'(A,\theta) \to (\overline{P}_{0,2})'(A,\theta) \quad \mbox{as} \quad {\varepsilon}\to 0$$ uniformly with respect to $|A-A_0|<\delta,|\theta-\theta_0|<\delta.$ Summarizing, we proved, that $$\begin{aligned} &&\dfrac{1}{{\varepsilon}}\left(P_{\varepsilon}(A,\theta)-\begin{pmatrix} A\\ \theta \end{pmatrix}\right)= \overline{P}(A,\theta)+\\ &&+(\overline{P}_{{\varepsilon},1}-\overline{P}_{0,1}(A,\theta)+\overline{P}_{{\varepsilon},2}(A,\theta)-\overline{P}_{0,2}(A,\theta)+\overline{P}_{{\varepsilon},3}(A,\theta)-\overline{P}_{0,3}(A,\theta)).\end{aligned}$$ Therefore, for any ${\varepsilon}>0$ sufficiently small, the function $$(A,\theta) \mapsto P_{\varepsilon}(A,\theta)-\begin{pmatrix} a\\ \theta \end{pmatrix}$$ has a unique zero 
$(A_{\varepsilon},\theta_{\varepsilon})$ such that $(A_{\varepsilon},\theta_{\varepsilon})\to (A_0,\theta_0)$ as ${\varepsilon}\to 0$ and the real parts of the eigenvalues of $(P_{\varepsilon})'(A_{\varepsilon},\theta_{\varepsilon})-I$ are negative. This is equivalent to saying that the eigenvalues of the matrix $((P_{\varepsilon})'(A_{\varepsilon},\theta_{\varepsilon}))^2$ belong to the interval $[0,1)$. This implies (see [@kraope Lemma 9.2]) that $t \mapsto (\overline{A}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon}),\overline{\theta}(t,A_{{\varepsilon}},\theta_{\varepsilon},{\varepsilon})-t)$ is an asymptotically stable $\pi$-periodic solution of (\[S1\])-(\[S3\]). To verify the latter, one may make the change of variables $\Xi(t)=\theta(t)-t$ in (\[S1\])-(\[S3\]). Since the change of variables (\[CH1\])-(\[CH3\]) is $\pi$-periodic, the corresponding solution $(x_{\varepsilon},\dot{x}_{\varepsilon})$ of (\[EQ1\])-(\[EQ2\]) given by these formulas is also $\pi$-periodic and asymptotically stable. Uniqueness of $x_{\varepsilon}$ follows from the uniqueness of $(A_{\varepsilon},\theta_{\varepsilon})$ and the fact that the change of variables (\[CH1\])-(\[CH3\]) is one-to-one.
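The contraction argument behind the fixed point of $P_{\varepsilon}$ can be illustrated numerically: for a map of the form $P_{\varepsilon}(z)=z+{\varepsilon}\overline{P}(z)$, if the Jacobian of $\overline{P}$ at its zero has eigenvalues with negative real parts, then iterating $P_{\varepsilon}$ from a nearby point converges to the fixed point for small ${\varepsilon}$. A toy sketch (the field below is illustrative, not the paper's $\overline{P}$):

```python
import numpy as np

def P_bar(z):
    """Toy averaged field with a stable zero at (1, 0.5);
    its Jacobian there is -I, so both eigenvalues equal -1."""
    A, theta = z
    return np.array([-(A - 1.0), -(theta - 0.5)])

eps = 0.05
z = np.array([1.3, 0.2])      # start near, but not at, the fixed point
for _ in range(500):          # iterate the Poincare-type map z -> z + eps*P_bar(z)
    z = z + eps * P_bar(z)

print(z)                      # converges to the fixed point (1, 0.5)
```

For this linear toy field the error contracts by the factor $1-{\varepsilon}$ at each step, so 500 iterations reduce it far below rounding noise.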
To finish the proof it remains to observe that for any $t \in \left[0,\dfrac{\pi}{2}-\theta_0\right)$ we have $$\begin{pmatrix} \overline{A}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})\cos \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\overline{\theta}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})\right)\\ -A(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\sin \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\overline{\theta}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})\right) \end{pmatrix} \to \begin{pmatrix} A_0\cos (t+\theta_0)\\ -A_0 \sin(t+\theta_0) \end{pmatrix} \quad \mbox{as} \quad {\varepsilon}\to 0$$ and that for any $t \in \left(\dfrac{\pi}{2}-\theta_0,\pi\right]$ we have $$\begin{pmatrix} \overline{A}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})\cos \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}+\pi\right)\\ -A(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})\sin \left(\dfrac{1}{1-{\varepsilon}\omega_{\varepsilon}}\left(\overline{\theta}(t,A_{\varepsilon},\theta_{\varepsilon},{\varepsilon})-\dfrac{\pi}{2}(1+{\varepsilon}\omega_{\varepsilon})\right)+\dfrac{\pi}{2}+\pi\right) \end{pmatrix} \to$$ $$\to\begin{pmatrix} A_0\cos (t+\theta_0+\pi)\\ -A_0 \sin(t+\theta_0+\pi) \end{pmatrix} \quad \mbox{as} \quad {\varepsilon}\to 0,$$ that is $x_{\varepsilon}$ converges to the solution of (\[np\]) with the initial condition $x_{\varepsilon}(0)=(A_0\cos \theta_0,-A_0\sin \theta_0)$ as ${\varepsilon}\to 0$ pointwise on $[0,\pi]\backslash\{t_0\}$. The proof is complete. An application ============== In this section we apply the result of section 2 to an impact oscillator shown in Fig. 3. A body of mass $m=1$ is bouncing against a nearly elastic surface $S$ (large stiffness $1/ {\varepsilon}^2 \omega^2$). 
Assuming in addition that the body is subjected to Rayleigh excitation, viscous friction and forcing, the equations of motion can be written as follows $$\label{new313} \begin{array}{ll} \ddot{x}+x=-{\varepsilon}a x -{\varepsilon}c_1 \dot{x} + {\varepsilon}\mu_1\dot x (1-\dot x^2)+{\varepsilon}\gamma \sin t, & \mbox{if} \quad x \geqslant 0,\\ \ddot{x}+\dfrac{1}{{\varepsilon}^2\omega^2}x=-(c_2+{\varepsilon}c_1)\dot{x}+(\mu_2+{\varepsilon}\mu_1)\dot x(1-\dot x^2)+{\varepsilon}\gamma \sin t, & \mbox{if} \quad x<0. \end{array}$$ System (\[new313\]) is of the form (\[ps\]); the corresponding limiting system is obtained by putting ${\varepsilon}=0$. There is no obvious reason why the Rayleigh excitation should be $O(1)$ during impact, but as this does not complicate the analysis, we admit this possibility. The averaging function $\overline{P}$ takes now the longer form $$\begin{aligned} \overline{P}(A,\theta)&=&-\int_0^{\frac{\pi}{2}-\theta} \begin{pmatrix} \sin(\tau+\theta)\\ \frac{1}{A}\cos(\tau+\theta) \end{pmatrix} (-aA\cos(\tau+\theta)+c_1A\sin(\tau+\theta)+\\ &&-\mu_1 A\sin (\tau+\theta)(1-A^2\sin^2 (\tau+\theta))+\gamma \sin\tau-2\omega A\cos(\tau+\theta))d\tau-\\ &&-\int_{\frac{\pi}{2}-\theta}^{\pi} \begin{pmatrix} \sin(\tau+\theta+\pi)\\ \frac{1}{A}\cos(\tau+\theta+\pi) \end{pmatrix} (-aA\cos(\tau+\theta+\pi)+c_1A\sin(\tau+\theta+\pi)+\\ &&-\mu_1 A\sin(\tau+\theta+\pi)(1-A^2\sin^2(\tau+\theta+\pi))+\gamma \sin\tau-2\omega A\cos(\tau+\theta+\pi))d\tau-\\ &&-\omega \int_0^\pi \begin{pmatrix} \sin(\tau+\frac{\pi}{2})\\0 \end{pmatrix} \left(c_2 A \sin \left(\tau+\frac{\pi}{2}\right)-\mu_2A\sin\left(\tau+\frac{\pi}{2}\right)\left(1-A^2\sin^2\left(\tau+\frac{\pi}{2}\right)\right)\right)d\tau=\\ &&=\begin{pmatrix} \gamma \theta \cos \theta -\frac{\pi}{2}A(c_1+c_2 \omega)+ \frac{\pi}{2}(\mu_1+\mu_2\omega)A\left(1-\frac{3}{4}A^2\right)\\ -\frac{1}{A} \gamma \cos\theta-\frac{1}{A}\gamma\theta\sin\theta+\frac{\pi}{2}(a+2\omega) \end{pmatrix}.\end{aligned}$$ To formulate our result we need some preliminary notations.
First, we introduce the function $M:\mathbb{R}\to\mathbb{R}$ as follows $$\begin{aligned} M(\theta)&=&-\theta\cos\theta+(c_1+c_2\omega)\frac{\cos\theta+\theta\sin\theta}{a+2\omega}+\\ &&-(\mu_1+\mu_2\omega)\left(\frac{\cos\theta+\theta\sin\theta}{a+2\omega}-\frac{3\gamma^2}{\pi^2}\left(\frac{\cos\theta+\theta\sin\theta}{a+2\omega}\right)^3\right).\end{aligned}$$ \[B\] Let $M(\theta_0)=0$ for some $\theta_0 \in (0,\frac{\pi}{2})$ and let $A_0$ be defined by $$\label{A0} A_0=\frac{\gamma\cos\theta_0+\gamma\theta_0\sin\theta_0}{\frac{\pi}{2}(a+2\omega)}.$$ If $$\label{Mprime} M'(\theta_0)>0$$ and $$-(c_1+c_2\omega)+(\mu_1+\mu_2\omega)(1-3{A_0}^2)<0$$ then, for any ${\varepsilon}>0$ sufficiently small, equation (\[new313\]) has exactly one $\pi$-periodic solution $x_{\varepsilon}$ such that $$(x_{\varepsilon}(0),\dot x_{\varepsilon}(0)) \to (A_0\cos\theta_0,-A_0\sin\theta_0) \ \mbox{as} \ {\varepsilon}\to 0.$$ The solution $x_{\varepsilon}$ is asymptotically stable. The proof of proposition \[B\] relies on the following lemma. \[C\] Consider a real $2\times2$ matrix $D$. If $Sp \ D<0$ and $\det\|D\|>0$ then the eigenvalues of $D$ have negative real parts. The statement of the lemma follows from the direct computation of the eigenvalues of $D$ according to the standard formula for the roots of a quadratic equation.\ [**Proof of proposition \[B\].**]{} Direct computation shows that $(A_0,\theta_0)$ is a zero of $\overline{P}$. To prove the proposition it remains to show that - $Sp \ \overline P'(A_0,\theta_0)<0$. - $\det \|\overline P'(A_0,\theta_0)\|>0$. But these two relations follow from the formulas - $Sp \ \overline P'(A_0,\theta_0)=-\pi(c_1+c_2\omega)+\pi(\mu_1+\mu_2\omega)(1-3A_0^2) $. - $\dfrac{2A_0}{\pi\gamma(a+2\omega)}\det \|\overline P'(A_0,\theta_0)\|=M'(\theta_0)$, which are straightforward. Our next proposition \[A\] shows that proposition \[B\] is not vacuous, namely we give sufficient conditions ensuring that (\[Mprime\]) is satisfied.
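The zero $\theta_0$ of $M$ required by proposition \[B\] can be located numerically. A minimal sketch with the Rayleigh terms switched off ($\mu_1=\mu_2=0$) and illustrative coefficient values (the numbers for $c_1$, $c_2$, $a$, $\omega$, $\gamma$ below are assumptions, not taken from the paper):

```python
import math
from scipy.optimize import brentq

# Illustrative coefficients (assumed values), with mu_1 = mu_2 = 0.
c1, c2, a, omega, gamma = 0.1, 0.1, 1.0, 1.0, 1.0
C = (c1 + c2 * omega) / (a + 2 * omega)

def M(th):
    # M(theta) of the text with the mu-terms removed
    return -th * math.cos(th) + C * (math.cos(th) + th * math.sin(th))

# theta_* solves theta - cos(theta) = 0; the relevant zero theta_0 of M
# lies in (theta_*, pi/2), where M changes sign for these coefficients.
theta_star = brentq(lambda th: th - math.cos(th), 0.1, 1.5)
theta_0 = brentq(M, theta_star, math.pi / 2 - 1e-12)

# Amplitude A_0 from formula (A0), and a finite-difference check of M'(theta_0) > 0.
A_0 = gamma * (math.cos(theta_0) + theta_0 * math.sin(theta_0)) \
      / (math.pi / 2 * (a + 2 * omega))
h = 1e-7
M_prime = (M(theta_0 + h) - M(theta_0 - h)) / (2 * h)
print(theta_0, A_0, M_prime > 0)
```

With $\mu_1=\mu_2=0$ the second stability condition of proposition \[B\] reduces to $-(c_1+c_2\omega)<0$, which holds automatically for positive damping coefficients.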
Before proceeding to the formulation of proposition \[A\] we need to introduce some notations and properties. First we observe that $$\frac{M(\theta)}{\theta\cos\theta}=-1+K(\theta),$$ where $$K(\theta)=\left(\frac{c_1+c_2\omega}{a+2\omega}-\frac{\mu_1+\mu_2\omega}{a+2\omega}\left(1-\frac{3\gamma^2}{\pi^2}\left(\frac{\cos\theta+\theta\sin\theta}{a+2\omega}\right)^2\right)\right)\left(\frac{1}{\theta}+ tg \ \theta\right).$$ Observe that there exists $\theta_* \in \left(0,\frac{\pi}{2}\right)$ such that $$\label{KK} K'(\theta) \ne 0 \ \mbox{for all} \ \theta \in \left(\theta_*,\frac{\pi}{2}\right).$$ In fact, if $K'(\theta_n)=0$ for some sequence $\theta_n \uparrow \frac{\pi}{2}$ as $n \to \infty$, then $$M'(\theta_n)=(\cos\theta_n-\theta_n\sin\theta_n)(-1+K(\theta_n)) \to \infty \ \mbox{as} \ n \to \infty,$$ which contradicts the boundedness of the derivative of $M$. \[A\] Let $\theta_*\in\left(0,\frac{\pi}{2}\right)$ be a number such that (\[KK\]) holds true. Assume that $$K(\theta_*)<1$$ and denote by $\theta_0 \in (\theta_*,\frac{\pi}{2})$ the unique point satisfying $$\label{sati} -1+K(\theta_0)=0.$$ Then $M(\theta_0)=0$ and $M'(\theta_0)>0$. Consequently, for any ${\varepsilon}>0$ sufficiently small, equation (\[new313\]) has exactly one $\pi$-periodic solution $x_{\varepsilon}$ such that $$(x_{\varepsilon}(0),\dot x_{\varepsilon}(0)) \to (A_0\cos\theta_0,-A_0\sin\theta_0) \ \mbox{as} \ {\varepsilon}\to 0,$$ where $$A_0=\frac{\gamma\cos\theta_0+\gamma\theta_0\sin\theta_0}{\frac{\pi}{2}(a+2\omega)}.$$ The solution $x_{\varepsilon}$ is asymptotically stable. Note that $\theta_0 \in \left(\theta_*,\frac{\pi}{2}\right)$ satisfying (\[sati\]) always exists and is unique since $-1+K(\theta_*)<0,\ K'(\theta)\ne 0, \ \mbox{for} \ \theta \in \left(\theta_*,\frac{\pi}{2}\right),$ and $$\label{KKK} \lim\limits_{\theta\uparrow \frac{\pi}{2}} K(\theta)=+\infty.$$ [**Proof of proposition \[A\].**]{} First, observe that $M(\theta_0)=\theta_0\cos\theta_0(-1+K(\theta_0))=0$.
Second, we have $$M'(\theta)=(\cos\theta-\theta\sin\theta)(-1+K(\theta))+\theta\cos\theta K'(\theta)$$ and so $M'(\theta_0)=\theta_0\cos\theta_0 K'(\theta_0)$. But properties (\[KK\]) and (\[KKK\]) imply that $K'(\theta_0)>0$, and since $\theta_0\cos\theta_0>0$ the proof is complete. Let us finally formulate our result in the simpler setting when the Rayleigh excitation is switched off, that is $\mu_1=\mu_2=0$. We have that $$K(\theta)=\frac{c_1+c_2\omega}{a+2\omega}\left(\frac{1}{\theta}+ tg \ (\theta)\right),$$ in particular, $sign \ K'(\theta)=sign \ (\theta-\cos\theta)$, so there exists a unique $\theta_* \in\left(0,\frac{\pi}{2}\right)$ such that $K'(\theta_*)=0$, namely the root of $\theta_*-\cos\theta_*=0$. \[D\] Let $\theta_* \in\left(0,\frac{\pi}{2}\right)$ be the unique point such that $\theta_*-\cos\theta_*=0$. Assume that $$\frac{c_1+c_2\omega}{a+2\omega}\left(\frac{1}{\theta_*}+tg \ \theta_*\right)<1.$$ Denote by $\theta_0 \in\left( \theta_*,\frac{\pi}{2} \right)$ the unique point such that $$-1+\frac{c_1+c_2\omega}{a+2\omega}\left(\frac{1}{\theta_0}+tg \ \theta_0\right)=0.$$ Then the conclusion of proposition \[A\] holds true. Discussion ========== - It is remarkable that the analysis of system (\[ps\]) can be handled by the introduction of a Poincaré map and the use of the implicit function theorem although the limiting system for $\varepsilon \rightarrow 0$ is non-smooth. - At the same time we have formulated an unusual type of singular perturbation problem. Putting $\varepsilon =0$ we have non-smooth impact; for $\varepsilon > 0$ we have fast motion in a neighborhood of the subset $x=0$. For $x>0$ slow motion takes place but this is not described by standard slow manifold theory, see [@V05]. Still, the dynamics for $x>0$ can be considered as taking place in an explicitly formulated slow manifold.\ On the other hand, the solutions for $x <0$ have as slow manifold the boundary $x=0$.
This does not satisfy the necessary hyperbolicity condition, but the solutions for $x >0$ are forced to the manifold $x=0$ and, after a fast transition through the domain $x <0$, they are forced to leave $x=0$ again. We note also that sliding along the slow manifold, as happens for instance in dry friction problems, is not possible. This simplifies the bifurcation behavior. - Regarding the averaging result obtained in this paper, we draw attention to the papers [@P79]–[@MS85] and further references there. In [@P79] the framework of differential inclusions is used; in [@MS85] explicit estimates of the vector field and the solutions are given in the case of impulsive forces. Our approach economically avoids the estimate of general solution behavior as we aim at the more restricted result of obtaining periodic solutions. [99]{} V.I. Babitsky, V.L. Krupenin, “Vibration of strongly nonlinear discontinuous systems," Springer, Berlin etc., (2001). A. Buica, J. Llibre, O. Makarenkov, Asymptotic stability of periodic solutions for nonsmooth differential equations with application to the nonsmooth van der Pol oscillator, SIAM J. Math. Anal. 40 (2009), no. 6, 2478–2495. J. Glover, A.C. Lazer, P.J. McKenna, Existence and stability of large scale nonlinear oscillations in suspension bridges. Z. Angew. Math. Phys. 40 (1989), no. 2, 172–200. A. Ivanov, Bifurcations in impact systems, Chaos, Solitons & Fractals Vol. 7, No. 10, pp. 1615-1634, 1996. A. N. Kolmogorov, S. V. Fom[i]{}n, “Elements of the theory of functions and functional analysis," Fourth edition, revised. Izdat. “Nauka”, Moscow, 1976 (in Russian); transl. 1st ed. Dover Publ., New York (1996). M. A. Krasnosel’skii, “The operator of translation along the trajectories of differential equations," Translations of Mathematical Monographs, 19. Translated from the Russian by Scripta Technica, American Mathematical Society, Providence, R.I. (1968). P.O.K.
Krehl, “Details of history of shock waves, explosions and impact: a chronological and biographical reference," Springer, Berlin etc., (2008). J. Mawhin, Resonance and nonlinearity: a survey. Ukrainian Math. J. 59 (2007), no. 2, 197–214. Yu.A. Mitropolsky, A.M. Samoilenko, “Forced oscillations of systems with impulsive force”, Int. J. Non-Linear Mechanics 20, pp. 419-426 (1985) V.A. Plotnikov, “The averaging method for differential inclusions and its application to optimal-control problems”, Differential Equations 15, pp. 1427-1433 (1979) F. Verhulst, “Methods and applications of singular perturbations, boundary layers and multiple timescale dynamics," Springer, New York etc., (2005). K. Yagasaki, Nonlinear dynamics of vibrating microcantilevers in tapping-mode atomic force microscopy, Physical Review B 70, 245419 (2004).
--- abstract: 'The Misner and Sharp approach to the study of gravitational collapse is extended to the viscous dissipative case in both the streaming out and the diffusion approximations. The dynamical equation is then coupled to causal transport equations for the heat flux, the shear and the bulk viscosity, in the context of Israel–Stewart theory, without excluding the thermodynamics viscous/heat coupling coefficients. The result is compared with previous works where these latter coefficients were neglected and viscosity variables were not assumed to satisfy causal transport equations. Prospective applications of this result to some astrophysical scenarios are discussed.' author: - | L. Herrera$^1$[^1] [^2], A. Di Prisco$^1$[^3], E. Fuenmayor$^1$[^4],\ and O. Troconis$^1$[^5]\ [$^1$Escuela de Física, Facultad de Ciencias,]{}\ [Universidad Central de Venezuela, Caracas, Venezuela.]{}\ title: 'Dynamics of viscous dissipative gravitational collapse: A full causal approach' --- Introduction ============ The gravitational collapse of massive stars represents one of the few observable phenomena where general relativity is expected to play a relevant role. This fact is at the origin of the great attraction that this problem exerts on the community of relativists, since the seminal paper by Oppenheimer and Snyder [@Opp]. Ever since that work, much has been done by researchers trying to grasp essential aspects of this phenomenon (see [@May] and references therein). However this endeavour proved to be difficult and uncertain. Different kinds of obstacles appear, depending on the approach adopted for the modelling and/or on the complexity of the physical description of the fluid, assumed to form the self-gravitating object. All these factors in turn are conditioned by the relevant time scales of different physical phenomena under consideration.
Thus, during their evolution, self–gravitating objects may pass through phases of intense dynamical activity, with time scales of the order of magnitude of (or even smaller than) the hydrostatic time scale, and for which the quasi–static approximation is clearly not reliable, e.g., the collapse of very massive stars [@8''], and the quick collapse phase preceding neutron star formation, see for example [@9'] and references therein. In these cases, with which we are mainly concerned here, it is mandatory to take into account terms which describe departure from equilibrium, i.e., a full dynamic description has to be used. We shall assume that the process is dissipative. In fact, it is already an established fact that gravitational collapse is a highly dissipative process (see [@Hs], [@Hetal], [@Mitra] and references therein). This dissipation is required to account for the very large (negative) binding energy of the resulting compact object (of the order of $-10^{53}\,erg$). Indeed, it appears that the only plausible mechanism to carry away the bulk of the binding energy of the collapsing star, leading to a neutron star or black hole, is neutrino emission [@1]. In the diffusion approximation, it is assumed that the energy flux of radiation (and that of thermal conduction) is proportional to the gradient of temperature. This assumption is in general very sensible, since the mean free path of particles responsible for the transfer of energy in stellar interiors is usually very small as compared with the typical length of the object. Thus, for a main sequence star such as the sun, the mean free path of photons at the centre is of the order of $2\, cm$. Also, the mean free path of trapped neutrinos in compact cores of densities about $10^{12} \, g\,cm^{-3}$ becomes smaller than the size of the stellar core [@3; @4].
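The order of magnitude quoted above for the binding energy can be checked with a back-of-the-envelope computation: $10^{53}\,erg$ corresponds, via $E=mc^2$, to a few percent of a solar mass being radiated away. A quick sketch (standard constants; purely illustrative):

```python
# Mass-energy equivalent of the ~10^53 erg binding energy quoted above.
E = 1e53 * 1e-7          # erg -> J
c = 2.998e8              # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg

m = E / c**2             # mass equivalent, kg
print(m / M_sun)         # roughly 0.06 solar masses
```

This is consistent with the statement that a sizeable fraction of the collapsing star's rest mass must be carried away, predominantly by neutrinos.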
Furthermore, the observational data collected from supernova 1987A suggest that the regime of radiation transport prevailing during the emission process is closer to the diffusion approximation than to the streaming out limit [@5]. Dissipative effects in the diffusion approximation are further enhanced by very large values of thermal conductivity, which may be of the order of $\kappa \approx 10^{23} erg \, s^{-1} \, cm^{-1} \, K^{-1}$ for electron conductivity (see [@78] and references therein) or even $\kappa \approx 10^{37} erg \, s^{-1} \, cm^{-1} \, K^{-1}$ for neutrino conductivity in a pre–supernova event [@Ma]. However, in many other circumstances the mean free path of particles transporting energy may be large enough to justify the free streaming approximation. Therefore we shall include simultaneously both limiting cases of radiative transport (diffusion and streaming out), allowing us to describe a wide range of situations. Since we are mainly concerned with time scales that might be of the order of (or even smaller than) relaxation times, we have to appeal to a hyperbolic theory of dissipation in order to treat the transport equations for dissipative variables. The use of a hyperbolic theory of dissipation is further justified by the necessity of overcoming the difficulties inherent to parabolic theories (see references [@Hs], [@18]–[@8'] and references therein). Doing so we shall be able to give a description of processes occurring before thermal relaxation. Some years ago, Misner and Sharp [@MisnerSharp] and Misner [@Misner] provided a full account of the dynamical equations governing the adiabatic [@MisnerSharp] and the dissipative relativistic collapse in the streaming out approximation [@Misner]. An extension of the Misner dynamical equations so as to include dissipation in the form of a radial heat flow (besides pure radiation) was given in [@Hs].
In that work the heat flux was assumed to satisfy a causal transport equation, but viscosity was absent and thereby the thermodynamics viscous/heat coupling coefficients were not taken into consideration. Furthermore, for simplicity the fluid was assumed shear-free, despite the fact that the relevance of the shear tensor in the evolution of self–gravitating systems has been brought out by many authors (see [@SFCF] and references therein). More recently [@DHN], shear viscosity was introduced into the Misner approach, but again the thermodynamics viscous/heat coupling coefficients were neglected, and furthermore the assumed transport equation for the shear viscosity was the one corresponding to the standard Eckart theory of relativistic irreversible thermodynamics [@10; @11]. The motivation to consider viscosity effects in the study of relativistic gravitational collapse is well founded. In fact, though they are often excluded in general relativistic models of stars, they are known to play a very important role in the structure and evolution of neutron stars. Indeed, depending on the dominant process, the coefficient of shear viscosity may be as large as $\eta \approx 10^{20}$ g cm$^{-1}$ s$^{-1}$ (see [@Anderson] for a review on shear viscosity in neutron stars). On the other hand, the coefficient of bulk viscosity may be as large as $\zeta \approx 10^{30}$ g cm$^{-1}$ s$^{-1}$ due to Urca processes in strange quark matter [@sad]. Similar and even larger values may be attained for two-color superconducting quark matter phases [@alford; @blaschke] and for hybrid stars [@drago] (see also [@jones; @vandalen; @Dong] and references therein for a review on bulk viscosity in nuclear and quark matter). The purpose of this work is to present a dynamical description of gravitational collapse within the framework of the Misner approach, for the most general dissipative fluid distribution consistent with spherical symmetry.
This includes the presence of both shear and bulk viscosity, with a full causal treatment for all dissipative variables as well as the inclusion of the thermodynamics viscous/heat coupling coefficients. These coefficients may be relevant in non–uniform stellar models [@8]. The manuscript is organized as follows: in the next Section, besides the field equations, the conventions, and other useful formulae, we obtain the resulting dynamical equation. In Section 3 transport equations in the context of the Müller–Israel–Stewart theory [@Muller67; @12; @13] are obtained. The coupling of these equations with the dynamical equation is performed in Section 4. After doing that we show how the effective inertial mass density of a fluid element is reduced by a factor which depends on dissipative variables. This result was already known (see [@eim] and references therein), but for the case in which only the heat flux was assumed to satisfy a causal transport equation, without the presence of the thermodynamics viscous/heat coupling coefficients. In Section 5 an expression is derived which relates the Weyl tensor with the density inhomogeneity and thermodynamical variables. This allows us to bring out the role of dissipative variables in a definition of the arrow of time. Finally, the results are discussed in the last section. The basic equations =================== In this section we shall deploy the relevant equations for describing a viscous dissipative self–gravitating fluid. This includes a full description of the matter distribution, the line element both inside and outside the fluid boundary, and the field equations this line element must satisfy. Interior spacetime ------------------ We consider a spherically symmetric distribution of collapsing fluid, bounded by a spherical surface $\Sigma$. We assume the fluid to undergo dissipation in the form of heat flow, free streaming radiation, and shear and bulk viscosity.
Choosing comoving coordinates inside $\Sigma$, the general interior metric can be written as $$ds^2_-=-A^2dt^2+B^2dr^2+(Cr)^2(d\theta^2+\sin^2\theta d\phi^2), \label{1}$$ where $A$, $B$ and $C$ are functions of $t$ and $r$ and are assumed positive. We number the coordinates $x^0=t$, $x^1=r$, $x^2=\theta$ and $x^3=\phi$. The assumed matter energy momentum $T_{\alpha\beta}^-$ inside $\Sigma$ has the form $$\begin{aligned} T^{-}_{\alpha \beta} = \left( \mu + p + \Pi\right)V_{\alpha}V_{\beta} + (p + \Pi)g_{\alpha \beta} + q_{\alpha}V_{\beta} + q_{\beta}V_{\alpha} + \epsilon l_{\alpha}l_{\beta} + \pi_{\alpha \beta} \label{Tmunu}\end{aligned}$$ where $\mu$ is the energy density, $p$ the pressure, $\Pi$ the bulk viscosity, $q^{\alpha}$ the heat flux, $\pi_{\alpha \beta}$ the shear viscosity tensor, $\epsilon$ the radiation density, $V^{\alpha}$ the four velocity of the fluid, and $l^{\alpha}$ a radial null four vector. These quantities satisfy $$\begin{aligned} V^{\alpha}V_{\alpha}&=&-1, \;\; V^{\alpha}q_{\alpha}=0, \;\; \;\; l^{\alpha} V_{\alpha}=-1, \;\; l^{\alpha}l_{\alpha}=0,\;\;\nonumber \\ \;\; \pi_{\mu \nu}V^\nu&=&0, \;\; \pi_{[\mu \nu]}=0, \;\; \pi^\alpha_\alpha=0. \label{4}\end{aligned}$$ In the standard irreversible thermodynamics we have [@8; @chan1] $$\pi_{\alpha \beta}=-2\eta \sigma_{\alpha \beta}, \;\; \Pi=-\zeta \Theta \label{sv}$$ where $\eta$ and $\zeta$ denote the coefficient of shear and bulk viscosity, respectively, $\sigma_{\alpha \beta}$ is the shear tensor and $\Theta$ is the expansion. However, since we are interested in a full causal picture of dissipative variables we shall not assume (\[sv\]). Instead, we shall use the corresponding transport equation derived from the Müller–Israel–Stewart theory. 
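The algebraic constraints (\[4\]) and the physical content of (\[Tmunu\]) can be verified mechanically. The following symbolic sketch (an illustration, not part of the paper) uses Python/sympy together with the comoving components adopted below, and confirms in particular that a comoving observer measures the energy density $\mu+\epsilon$, in agreement with the $00$ field equation.

```python
import sympy as sp

# Symbolic sanity check of the energy-momentum tensor and constraints (4),
# using the metric (1) and the comoving components V^a = A^{-1} delta^a_0,
# q^a = q B^{-1} delta^a_1, l^a = A^{-1} delta^a_0 + B^{-1} delta^a_1,
# chi^a = B^{-1} delta^a_1 adopted in the text.
A, B, C, mu, p, Pi, q, eps, Omega, r, th = sp.symbols(
    'A B C mu p Pi q epsilon Omega r theta', positive=True)

g = sp.diag(-A**2, B**2, (C*r)**2, (C*r)**2 * sp.sin(th)**2)   # metric (1)
ginv = g.inv()

V_up = sp.Matrix([1/A, 0, 0, 0])       # four velocity
q_up = sp.Matrix([0, q/B, 0, 0])       # heat flux
l_up = sp.Matrix([1/A, 1/B, 0, 0])     # radial null vector
chi_up = sp.Matrix([0, 1/B, 0, 0])     # radial unit vector

V, qv, l, chi = (g * v for v in (V_up, q_up, l_up, chi_up))  # lower indices
h = g + V * V.T                        # projector h_{ab}
pi = Omega * (chi * chi.T - h / 3)     # shear viscosity tensor, compact form

T = ((mu + p + Pi) * V * V.T + (p + Pi) * g
     + qv * V.T + V * qv.T + eps * l * l.T + pi)

# The constraints (4) hold for these components...
assert sp.simplify((V_up.T * V)[0] + 1) == 0       # V^a V_a = -1
assert sp.simplify((V_up.T * qv)[0]) == 0          # V^a q_a = 0
assert sp.simplify((l_up.T * V)[0] + 1) == 0       # l^a V_a = -1
assert sp.simplify((l_up.T * l)[0]) == 0           # l^a l_a = 0
assert all(sp.simplify(comp) == 0 for comp in pi * V_up)   # pi_{ab} V^b = 0
assert sp.simplify((ginv * pi).trace()) == 0       # pi^a_a = 0
# ...and pi^1_1 = (2/3) Omega, consistent with Omega = (3/2) pi^1_1.
assert sp.simplify((ginv * pi)[1, 1] - sp.Rational(2, 3) * Omega) == 0
# A comoving observer measures energy density mu + epsilon.
assert sp.simplify((V_up.T * T * V_up)[0] - (mu + eps)) == 0
print("energy-momentum checks passed")
```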
The shear $\sigma_{\alpha\beta}$ is given by $$\sigma_{\alpha\beta}=V_{(\alpha ;\beta)}+a_{(\alpha}V_{\beta)}-\frac{1}{3}\Theta h_{\alpha\beta} \label{4a}$$ where the acceleration $a_{\alpha}$ and the expansion $\Theta$ are given by $$a_{\alpha}=V_{\alpha ;\beta}V^{\beta}, \;\; \Theta={V^{\alpha}}_{;\alpha} \label{4b}$$ and $h_{\alpha \beta}=g_{\alpha \beta}+V_{\alpha}V_{\beta}$ is the projector onto the hypersurface orthogonal to the four velocity. Since we assumed the metric (\[1\]) to be comoving, then $$V^{\alpha}=A^{-1}\delta_0^{\alpha}, \;\; q^{\alpha}=qB^{-1}\delta^{\alpha}_1, \;\; l^{\alpha}=A^{-1}\delta^{\alpha}_0+B^{-1}\delta^{\alpha}_1, \;\; \label{5}$$ where $q$ is a function of $t$ and $r$. Also it follows from (\[4\]) that $$\pi_{0 \alpha}=0, \;\; \pi^1_1=-2\pi^2_2=-2\pi^3_3\;. \label{5shear}$$ In a more compact form we can write $$\pi_{\alpha \beta}=\Omega(\chi_{\alpha}\chi_{\beta}-\frac{1}{3}h_{\alpha \beta}) \label{comp}$$ where $\chi^{\alpha}$ is a unit four vector along the radial direction, satisfying $$\chi^{\alpha}\chi_{\alpha}=1, \;\; \chi^{\alpha} V_{\alpha}=0, \;\;\chi^{\alpha}=B^{-1}\delta^{\alpha}_1, \label{chi}$$ and $\Omega=\frac{3}{2}\pi^1_1$. With (\[5\]) we obtain for (\[4a\]) its non-null components $$\sigma_{11}=\frac{2}{3}B^2\sigma, \;\; \sigma_{22}=\frac{\sigma_{33}}{\sin^2\theta}=-\frac{1}{3}(Cr)^2\sigma, \label{5a}$$ where $$\sigma=\frac{1}{A}\left(\frac{\dot{B}}{B}-\frac{\dot{C}}{C}\right),\label{5b1}$$ and the dot stands for differentiation with respect to $t$, which gives the scalar quantity $$\sigma_{\alpha\beta}\sigma^{\alpha\beta}=\frac{2}{3}\sigma^2. \label{5b}$$ For (\[4b\]) with (\[5\]) we have $$a_1=\frac{A^{\prime}}{A}, \;\; \Theta=\frac{1}{A}\left(\frac{\dot{B}}{B}+2\frac{\dot{C}}{C}\right), \label{5c}$$ where the prime stands for $r$ differentiation. The Einstein equations ---------------------- Einstein’s field equations for the interior spacetime (\[1\]) are given by $$G_{\alpha\beta}^-=8\pi T_{\alpha\beta}^-.
\label{2}$$ The non null components of (\[2\]) with (\[1\]), (\[Tmunu\]) (\[5\]), (\[5shear\]) and (\[comp\]) become $$\begin{aligned} G^{-}_{00} = 8\pi T^{-}_{00} & =& 8\pi(\mu + \epsilon)A^{2} = \left(2\frac{\dot B}{B} + \frac{\dot C}{C} \right)\frac{\dot C}{C}\nonumber \\ &+ & \left(\frac{A}{B} \right)^{2}\left\{-2 \frac{C^{\prime \prime}}{C} +\left(2 \frac{B^{\prime}}{B} - \frac{C^{\prime}}{C} \right)\frac{C^{\prime}}{C} + \frac{2}{r}\left(\frac{B^{\prime}}{B} - 3\frac{C^{\prime}}{C}\right) - \left[ 1 - \left(\frac{B}{C} \right)^{2}\right]\frac{1}{r^{2}} \right\} \nonumber\\\label{G00}\end{aligned}$$ $$\begin{aligned} G^{-}_{11} = 8\pi T^{-}_{11} & =& 8\pi\left[p + \Pi + \epsilon + \frac{2}{3} \Omega\right]B^{2} = -\left(\frac{B}{A} \right)^{2}\left[2\frac{\ddot C}{C} + \left(\frac{\dot C}{C} \right)^{2} - 2\frac{\dot A}{A}\frac{\dot C}{C}\right] + \nonumber\\ &+& \left(\frac{C^{\prime}}{C} \right)^{2} + 2\frac{A^{\prime}}{A}\frac{C^{\prime}}{C} + \frac{2}{r}\left(\frac{A^{\prime}}{A} + \frac{C^{\prime}}{C}\right) + \left[ 1 - \left(\frac{B}{C} \right)^{2}\right]\frac{1}{r^{2}} \label{G11}\end{aligned}$$ $$\begin{aligned} G^{-}_{22} = 8\pi\ T^{-}_{22} &=& 8\pi\left[p + \Pi - \frac{\Omega}{3} \right](Cr)^{2} = - \left(\frac{Cr}{A}\right)^{2} \left[\frac{\ddot B}{B} + \frac{\ddot C}{C} - \frac{\dot A}{A}\left(\frac{\dot B}{B} + \frac{\dot C}{C} \right) + \frac{\dot B}{B}\frac{\dot C}{C}\right] + \nonumber\\ &+& \left(\frac{Cr}{B}\right)^{2}\left[\frac{A^{\prime \prime}}{A} + \frac{C^{\prime \prime}}{C} - \frac{A^{\prime}}{A}\left(\frac{B^{\prime}}{B} - \frac{C^{\prime}}{C} \right) - \frac{B^{\prime}}{B} \frac{C^{\prime}}{C} + \frac{1}{r}\left(\frac{A^{\prime}}{A} - \frac{B^{\prime}}{B} + 2\frac{C^{\prime}}{C}\right)\right]\nonumber\\ \label{G22}\end{aligned}$$ $$\begin{aligned} G^{-}_{33} = \sin^2\theta G^{-}_{22}\label{G33}\end{aligned}$$ $$\begin{aligned} G^{-}_{01} = 8\pi T^{-}_{01} = - 8\pi(q + \epsilon )AB = -2\left(\frac{\dot C^{\prime}}{C} - \frac{\dot 
B}{B}\frac{C^{\prime}}{C} - \frac{\dot C}{C}\frac{A^{\prime}}{A} \right)+ \frac{2}{r}\left(\frac{\dot B}{B} - \frac{\dot C}{C}\right), \label{G01}\end{aligned}$$ observe that this last equation may be written as $$4\pi(q + \epsilon)B=\frac{1}{3}(\Theta-\sigma)^{\prime}-\sigma\frac{(Cr)^{\prime}}{Cr}. \label{nueva01}$$ Next, the mass function $m(t,r)$ introduced by Misner and Sharp [@MisnerSharp] is defined by $$m=\frac{(Cr)^3}{2}{R_{23}}^{23} =\frac{Cr}{2}\left\{\left(\frac{r\dot{C}}{A}\right)^2 -\left[\frac{(Cr)^{\prime}}{B}\right]^2+1\right\}. \label{18}$$ The exterior spacetime and junction conditions ---------------------------------------------- Outside $\Sigma$ we assume we have the Vaidya spacetime (i.e. we assume all outgoing radiation is massless), described by $$ds^2=-\left(1-\frac{2M(v)}{r}\right)dv^2-2drdv+r^2(d\theta^2 +\sin^2\theta d\phi^2), \label{1int}$$ where $M(v)$ denotes the total mass, and $v$ is the retarded time. The matching of the full non-adiabatic sphere (including viscosity) to the Vaidya spacetime was discussed in [@chan1]. From the continuity of the first and second differential forms it follows (see [@chan1] for details) that $$m(t,r)\stackrel{\Sigma}{=}M(v), \label{junction1}$$ and $$\begin{aligned} &&2\frac{\dot C^{\prime}}{C}+2\frac{\dot C}{Cr}-2\frac{\dot B}{B}\frac{C^\prime}{C}-2\frac{\dot B}{Br}-2\frac{A^{\prime}}{A}\frac{\dot C}{C} + \nonumber\\&& +\frac{B}{A}\left[2\frac{\ddot C}{C}-2\frac{\dot C}{C}\frac{\dot A}{A}+\left(\frac{A}{Cr}\right)^2+\left(\frac{\dot C}{C}\right)^2-\left(\frac{A}{B}\right)^2 \left(\frac{C^\prime}{C}+\frac{1}{r}\right)\left(\frac{C^\prime}{C}+\frac{1}{r}+2\frac{A^\prime}{A}\right)\right] \nonumber\\&& \stackrel{\Sigma}{=}0, \label{j2}\end{aligned}$$ where $\stackrel{\Sigma}{=}$ means that both sides of the equation are evaluated on $\Sigma$ (observe a misprint in eq. (40) in [@chan1] and a slight difference in notation).
Comparing (\[j2\]) with (\[G11\]) and (\[G01\]) one obtains $$p+\Pi +\frac{2}{3} \Omega\stackrel{\Sigma}{=}q. \label{j3}$$ Thus the matching of (\[1\]) and (\[1int\]) on $\Sigma$ implies (\[junction1\]) and (\[j3\]). In the context of the standard irreversible thermodynamics where (\[sv\]) is valid, we obtain $$p+\Pi-\frac{4\eta \sigma}{3}\stackrel{\Sigma}{=}q, \label{j4}$$ which reduces to eq.(41) in [@chan1] with the appropriate change in notation. Observe a misprint in eq.(27) in [@DHN] (the $\sigma$ appearing there is the one defined in [@chan1], which is $-\frac{1}{3}$ of the one used here and in [@DHN]). Dynamical equations ------------------- The non trivial components of the Bianchi identities , $(T^{-\alpha\beta})_{;\beta}=0$ yield $$\begin{aligned} &&T^{- \mu \nu}_{;\nu}V_{\mu} = -\frac{1}{A}\left( \dot \mu + \dot \epsilon \right) - \frac{1}{B}\left( q^{\prime} + \epsilon^{\prime} \right) - 2\left(q + \epsilon \right)\frac{(ACr)^{\prime}}{ABCr} + \nonumber\\ &&- \frac{2}{A}\frac{\dot C}{C}\left(\mu + p + \Pi + \epsilon - \frac{\Omega}{3} \right) - \frac{1}{A}\frac{\dot B}{B}\left(\mu + p + \Pi + 2\epsilon + \frac{2}{3} \Omega \right) \nonumber \\ && = 0 \label{bianchiv}\end{aligned}$$ and $$\begin{aligned} &&T^{- \mu \nu} _{;\nu}\chi_{\mu} = \frac{1}{A}\left( \dot q + \dot \epsilon \right) + \frac{2}{A} \frac{(BC)^{.}}{BC}\left( q + \epsilon \right) \nonumber \\ &&+ \frac{1}{B} \left(p^{\prime} + \Pi^{\prime} + \epsilon^{\prime} + \frac{2}{3} \Omega^{\prime} \right) + \frac{1}{B} \frac{A^{\prime}}{A}\left(\mu + p + \Pi + 2\epsilon + \frac{2}{3} \Omega \right) \nonumber\\ &&+ \frac{2}{B}\frac{(Cr)^{\prime}}{Cr}\left(\epsilon + \Omega \right) = 0. 
\label{bianchichi}\end{aligned}$$ To study the dynamical properties of the system, let us introduce, following Misner and Sharp [@MisnerSharp], the proper time derivative $D_T$ given by $$D_T=\frac{1}{A}\frac{\partial}{\partial t}, \label{16}$$ and the proper radial derivative $D_R$, $$D_R=\frac{1}{R^{\prime}}\frac{\partial}{\partial r}, \label{23a}$$ where $$R=Cr \label{23aa}$$ defines the proper radius of a spherical surface inside $\Sigma$, as measured from its area. Using (\[16\]) we can define the velocity $U$ of the collapsing fluid as the variation of the proper radius with respect to proper time, i.e. $$U=rD_TC<0 \;\; \mbox{(in the case of collapse)}. \label{19}$$ Then (\[18\]) can be rewritten as $$E \equiv \frac{(Cr)^{\prime}}{B}=\left[1+U^2-\frac{2m(t,r)}{Cr}\right]^{1/2}. \label{20}$$ With (\[23a\])-(\[23aa\]) we can express (\[nueva01\]) as $$4\pi(q+\epsilon)=E\left[\frac{1}{3}D_R(\Theta-\sigma) -\frac{\sigma}{R}\right].\label{21a}$$ Observe that in the non–dissipative, shear-free case, the equation above may be written, with the help of (\[5b1\]), (\[5c\]) and (\[19\]), as $$D_R\left(\frac{U}{R}\right)=0 \label{homo}$$ implying $U\sim R$, which describes a homologous collapse [@7']. Next, using (\[G00\]-\[G01\]) and (\[16\]-\[23aa\]) we obtain from (\[18\]) $$\begin{aligned} D_Tm=-4\pi R^2\left[\left(p+\Pi+\epsilon+\frac{2}{3} \Omega \right)U+(q+\epsilon)E\right] \label{22}\end{aligned}$$ and $$\begin{aligned} D_Rm=4\pi R^2\left[\mu+\epsilon+(q+\epsilon)\frac{U}{E}\right]. \label{27}\end{aligned}$$ Expression (\[22\]) describes the rate of variation of the total energy inside a surface of radius $R$. On the right hand side of (\[22\]), $(p+\Pi+\epsilon+\frac{2}{3} \Omega)U$ (in the case of collapse $U<0$) increases the energy inside $R$ through the rate of work being done by the “effective” radial pressure $p+\Pi+ \frac{2}{3} \Omega$ and the radiation pressure $\epsilon$.
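The algebraic step from the mass function (\[18\]) to (\[20\]) is elementary but can be checked mechanically. The sympy sketch below (an illustration, not part of the paper) rewrites (\[18\]) as $m=(R/2)(U^2-E^2+1)$, with $U=r\dot{C}/A$ and $E=(Cr)^{\prime}/B$ as defined in the text, and inverts it; the symbols are treated as positive since $U$ enters only squared.

```python
import sympy as sp

# Check the step from the mass function (18) to (20): with U = r*Cdot/A
# and E = (Cr)'/B, eq. (18) reads m = (R/2)*(U**2 - E**2 + 1), which
# inverts to E = sqrt(1 + U**2 - 2m/R).  Symbols taken positive for the
# algebra (U appears only squared, so its sign is irrelevant here).
R, U, E, m = sp.symbols('R U E m', positive=True)

mass = sp.Rational(1, 2) * R * (U**2 - E**2 + 1)   # eq. (18) rewritten
E_solved = sp.solve(sp.Eq(m, mass), E)             # invert for E

expected = sp.sqrt(1 + U**2 - 2*m/R)               # eq. (20)
assert any(sp.simplify(sol - expected) == 0 for sol in E_solved)
print("E = (1 + U^2 - 2m/R)^(1/2) recovered")
```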
In the stationary regime, where we can use the standard thermodynamical relation $\pi_{\alpha \beta}=-2\eta \sigma_{\alpha \beta}$, we recover the result obtained in [@DHN]. The second term, $(q+\epsilon)E$, is the matter energy leaving the spherical surface. Equation (\[27\]) shows how the total energy enclosed varies between neighboring spherical surfaces inside the fluid distribution. The first two terms on the right hand side of (\[27\]), $\mu+\epsilon$, are due to the energy density of the fluid element plus the energy density of the null fluid describing dissipation in the free streaming approximation. The last term, $(q+\epsilon)U/E$, is negative (in the case of collapse) and measures the outflow of heat and radiation. The acceleration $D_TU$ of an infalling particle inside $\Sigma$ can be obtained by using (\[G11\]), (\[18\]), (\[16\]) and (\[20\]), producing $$D_TU=-\frac{m}{R^2}-4\pi R\left(p+\Pi+\epsilon+\frac{2}{3} \Omega\right) +\frac{EA^{\prime}}{AB}, \label{28}$$ and then, substituting $A^{\prime}/A$ from (\[28\]) into (\[bianchichi\]), we obtain $$\begin{aligned} &&\left(\mu + p + \Pi + 2\epsilon + \frac{2}{3} \Omega\right)D_T U=\nonumber \\ && - \left(\mu + p + \Pi + 2\epsilon + \frac{2}{3} \Omega \right) \left[ \frac{m}{R^2} + 4 \pi R \left(p + \Pi + \epsilon + \frac{2}{3} \Omega \right) \right] \nonumber \\ && - E^2 \left[ D_R \left(p + \Pi + \epsilon + \frac{2}{3} \Omega \right) + \frac{2}{R} \left( \epsilon +\Omega\right) \right] \nonumber \\ && - E \left[ D_T q + D_T \epsilon + 4 \left(q + \epsilon \right) \frac{U}{R}+2 \left(q + \epsilon \right) \sigma \right].\label{nd}\end{aligned}$$ As can be easily seen, the main difference between (\[nd\]) and eq. (40) in [@DHN] (regarding the contributions from shear viscosity) stems from the $\pi_{\alpha \beta}$ terms, which now are not given by (\[sv\]) but have to satisfy a transport equation obtained within the context of the causal dissipative theory (see next section).
Thus, the factor within the round bracket on the left (which equals the factor in the first round bracket on the right, as it should be) represents the effective inertial mass (the passive gravitational mass, according to the equivalence principle). The first term on the right hand side of (\[nd\]) represents the gravitational force. In this term, the factor within the square bracket shows how dissipation affects the “active” gravitational mass term. There are two different contributions in the second square bracket. The first one is just the gradient of the total “effective” pressure (which includes the radiation pressure and the influence of shear and bulk viscosity). The second contribution comes from the local anisotropy of pressure induced by the radiation pressure and shear viscosity. The last square bracket contains different contributions due to dissipative processes. The third term within this bracket is positive ($U<0$), showing that the outflow of $q>0$ and $\epsilon>0$ diminishes the total energy inside the collapsing sphere, thereby reducing the rate of collapse. The last term describes an effect resulting from the coupling of the dissipative flux with the shear of the fluid. The effects of $D_T\epsilon$ have been discussed in [@Misner] and we shall not analyze them in detail here. Therefore it only remains to analyze the effects of the transport equations when coupled to (\[nd\]); we will proceed to carry out that task in the next section. Transport equations =================== As stated in the Introduction, the main purpose of this work consists in providing a full causal description of viscous dissipative gravitational collapse. This implies that all dissipative variables (not only $q$) have to satisfy the corresponding transport equations derived from causal thermodynamics. Furthermore, the thermodynamics viscous/heat coupling coefficients will not be neglected, as they are expected to be relevant in non–uniform stellar models [@8].
Accordingly, we shall use transport equations derived from the Müller-Israel-Stewart second order phenomenological theory for dissipative fluids [@Muller67; @12; @13]. This theory was proposed to overcome the pathologies [@9] found in the approaches of Eckart [@10] and Landau [@11] for relativistic dissipative processes. The important point to retain is that this theory provides transport equations for the dissipative variables, which are of Cattaneo type [@18], leading thereby to hyperbolic equations for dissipative perturbations. The starting point is the general expression for the entropy four–current, which in the context of the Müller-Israel-Stewart theory, reads (see [@8] for details) $$S^\mu=SnV^\mu+\frac{q^\mu}{T}-(\beta_0 \Pi^2+\beta_1 q_\nu q^\nu+\beta_2 \pi_{\nu \kappa}\pi^{\nu \kappa})\frac{V^\mu}{2T}+\frac{\alpha_0 \Pi q^\mu}{T}+\frac{\alpha_1\pi^{\mu \nu}q_\nu}{T} \label{ent}$$ where $n$ is particle number density, $\beta_A(\rho,n)$ are thermodynamic coefficients for different contributions to the entropy density, and $\alpha_A(\rho, n)$ are thermodynamics viscous/heat coupling coefficients. Next, from the Gibbs equation and Bianchi identities, it follows that $$\begin{aligned} &&T S^{\alpha}_{;\alpha} = - \Pi \left[ V^\alpha_{; \alpha}- \alpha_{0}q^{\alpha}_{;\alpha} + \beta_{0} \Pi _{; \alpha} V^\alpha+ \frac{T}{2}\left( \frac{\beta_{0}}{T}V^{\alpha}\right)_{;\alpha} \Pi\right] \nonumber\\ &&- q^{\alpha} \left[ h^\mu_{\alpha} (\ln{T })_{,\mu} (1+\alpha_0 \Pi)+ V_{\alpha;\mu} V^\mu- \alpha_{0} \Pi_{;\alpha} - \alpha_{1}\pi^{\mu}_{\alpha ; \mu} + \alpha_{1} \pi^{\mu}_{\alpha}h^\beta_\mu(\ln{T})_{,\beta}\right. \nonumber\\ &&\left. + \beta_{1} q_{\alpha;\mu} V^\mu+ \frac{T}{2} \left( \frac{\beta_{1}}{T}V^{\mu}\right)_{;\mu}q_{\alpha}\right] \nonumber\\&& - \pi^{\alpha \mu} \left[ \sigma_{\alpha \mu} - \alpha_{1}q_{\mu ; \alpha} + \beta_{2} \pi_{\alpha \mu;\nu} V^\nu+ \frac{T}{2} \left( \frac{\beta_{2}}{T}V^{\nu}\right)_{;\nu}\pi_{\alpha \mu}\right]. 
\label{diventropia}\end{aligned}$$\ Finally, by the standard procedure, the constitutive transport equations follow from the requirement $S^{\alpha}_{;\alpha} \geq 0$: $$\begin{aligned} \tau_{0} \Pi_{,\alpha}V^{\alpha} + \Pi = -\zeta \Theta + \alpha_{0} \zeta q^{\alpha}_{;\alpha} - \frac{1}{2}\zeta T\left( \frac{\tau_{0}}{\zeta T}V^{\alpha}\right)_{;\alpha} \Pi, \label{ectransppi}\end{aligned}$$ $$\begin{aligned} \tau_{1} h^{\beta}_{\alpha} q_{\beta; \mu}V^{\mu} + q_{\alpha}& =& - \kappa \left[ h_{\alpha}^{\beta}T_{, \beta} (1 + \alpha_{0}\Pi) + \alpha_{1} \pi^{\mu}_{\alpha}h_{\mu}^{\beta}T_{,\beta} + T (a_\alpha - \alpha_{0} \Pi_{;\alpha} - \alpha_{1}\pi^{\mu}_{\alpha;\mu})\right] \nonumber\\&-& \frac{1}{2}\kappa T^{2}\left( \frac{\tau_{1}}{\kappa T^{2}}V^{\beta}\right)_{;\beta} q_{\alpha}\label{ectranspq}\end{aligned}$$ and $$\begin{aligned} \tau_{2} h^{\mu}_{\alpha} h^{\nu}_{\beta} \pi_{\mu \nu; \rho}V^{\rho} + \pi_{\alpha \beta} = -2\eta \sigma_{\alpha \beta} + 2\eta \alpha_{1} q_{<\beta ;\alpha>} - \eta T\left( \frac{\tau_{2}}{2\eta T}V^{\nu}\right)_{;\nu}\pi_{\alpha \beta}\label{ectransppialphabeta}\end{aligned}$$ with $$q_{<\beta;\alpha>}= h^\mu_\beta h^\nu_\alpha \left(\frac{1}{2}(q_{\mu;\nu} + q_{\nu;\mu})- \frac{1}{3} q_{\sigma;\kappa} h^{\sigma \kappa}h_{\mu\nu}\right) \label{marra}$$ and where the relaxational times are given by $$\tau_0=\zeta \beta_0, \;\; \tau_1=\kappa T \beta_1, \;\; \tau_2=2\eta \beta_2. \label{relax}$$ Equations (\[ectransppi\])–(\[ectransppialphabeta\]) reduce to equations (2.21), (2.22) and (2.23) in [@8] when the thermodynamics coupling coefficients vanish.
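The structure shared by (\[ectransppi\])–(\[ectransppialphabeta\]) is that of Cattaneo-type relaxation equations. As a toy illustration of what this hyperbolic structure buys (a sketch with arbitrary illustrative numbers, not taken from the paper), one can integrate the scalar prototype $\tau \dot q + q = -\kappa \nabla T$ and watch the flux relax towards its Eckart value on the time scale $\tau$, instead of adjusting instantaneously as in the parabolic theory.

```python
import math

# Toy model (not from the paper) of the Cattaneo-type structure shared by
# the transport equations above: tau * dq/dt + q = -kappa * gradT.
# Unlike the Eckart law q = -kappa*gradT, the flux relaxes towards its
# stationary value on the time scale tau.  All numbers are illustrative,
# in arbitrary units.
tau, kappa, gradT = 2.0, 1.0, -1.0   # relaxation time, conductivity, dT/dx
q_stationary = -kappa * gradT        # Eckart value, reached only for t >> tau

def cattaneo_flux(t, q0=0.0):
    """Exact solution of tau*q' + q = q_stationary with q(0) = q0."""
    return q_stationary + (q0 - q_stationary) * math.exp(-t / tau)

# A forward-Euler integration reproduces the exact relaxation
dt, q, t_end = 1e-4, 0.0, 10.0
for _ in range(int(t_end / dt)):
    q += dt * (q_stationary - q) / tau
assert abs(q - cattaneo_flux(t_end)) < 1e-3
print(f"q(t={t_end}) = {q:.4f}  (stationary value {q_stationary})")
```

For $t \ll \tau$ the flux is still far from its stationary value, which is precisely the regime "before thermal relaxation" that the parabolic theories cannot describe.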
In our case each of the equations (\[ectransppi\])–(\[ectransppialphabeta\]) has only one independent component, they read $$\begin{aligned} \tau_{0} \dot \Pi &= & - \left(\zeta+ \frac{\tau_{0}}{2} \Pi \right)A\Theta + \frac{A}{B} \alpha_{0} \zeta \left[q^{\prime} + q \left( \frac{A^{\prime}}{A} + \frac{2(rC)^{\prime}}{rC} \right) \right]\nonumber \\ &-& \Pi \left[\frac{\zeta T}{2} \left(\frac{\tau_{0}}{\zeta T}\right)^{.} + A \right], \label{resultpi}\end{aligned}$$ $$\begin{aligned} \tau_{1} \dot q &= &-\frac{A}{B} \kappa\{ T^{\prime} (1 + \alpha_{0}\Pi+\frac{2}{3}\alpha_1 \Omega) + T \Big[ \frac{A^{\prime}}{A} - \alpha_{0} \Pi^{\prime} - \frac{2}{3}\alpha_{1} \Big(\Omega^\prime +\Big( \frac{A^{\prime}}{A} + 3\frac{(rC)^{\prime}}{rC} \Big)\Omega \Big)\Big]\}\nonumber \\ &-& q \left[ \frac{\kappa T^{2}}{2} ( \frac{\tau_{1}}{\kappa T^{2}})^{.} + \frac{\tau _{1} }{2} A\Theta + A\right] \label{resultq}\end{aligned}$$ and $$\begin{aligned} \tau_{2} \dot \Omega = -2\eta A\sigma + 2\eta \alpha_{1} \frac{A}{B}\left(q^{\prime} - q \frac{(rC)^{\prime}}{rC}\right) -\Omega\left[ \eta T \Big( \frac{\tau_{2}}{2\eta T} \Big)^{.} + \frac{ \tau_{2}}{2} A \Theta +A\right]. \label{ectransppialphabeta11}\end{aligned}$$\ We shall now proceed to couple transport equations in the form above, to the dynamical equation (\[nd\]), in order to bring out the effects of dissipation on the dynamics of the collapsing sphere. 
For that purpose, let us first substitute (\[resultq\]) into (\[nd\]), then we obtain, after some rearrangements, $$\begin{aligned} &&\left(\mu+p+\Pi + 2\epsilon+\frac{2}{3}\Omega\right)(1-\Lambda)D_TU =(1-\Lambda)F_{grav}+F_{hyd}\nonumber \\ & +&\frac{\kappa E^2}{\tau_1}\left\{D_RT \left(1+\alpha_0 \Pi + \frac{2}{3} \alpha_1\Omega\right) -T\left[\alpha_0 D_R \Pi + \frac{2}{3}\alpha_1 \left(D_R \Omega+\frac{3}{R}\Omega\right)\right] \right\}\nonumber\\ &+&E\left[\frac{\kappa T^2q}{2\tau_1}D_T\left(\frac{\tau_1}{\kappa T^2}\right)-D_T\epsilon\right] -E\left[\left(\frac{3q}{2}+2\epsilon\right)\Theta-\frac{q}{\tau_1}-2(q+\epsilon)\frac{U}{R}\right],\nonumber\\ \label{V4N}\end{aligned}$$ where $F_{grav}$ and $F_{hyd}$ are defined by $$\begin{aligned} F_{grav}&=&-\left(\mu+p+\Pi+2\epsilon +\frac{2}{3}\Omega\right)\nonumber\\ &&\times \left[m+4\pi\left(p+\Pi+\epsilon+\frac{2}{3}\Omega \right)R^3\right]\frac{1}{R^2}, \label{grav}\\ F_{hyd}&=& -E^2 \left[D_R \left(p+\Pi+\epsilon +\frac{2}{3}\Omega \right) +2(\epsilon+\Omega)\frac{1}{R}\right], \label{hyd}\end{aligned}$$ and $\Lambda$ is given by $$\Lambda=\frac{\kappa T}{\tau_1}\left(\mu+p+\Pi+2\epsilon+\frac{2}{3}\Omega\right)^{-1} \left(1-\frac{2}{3}\alpha_1\Omega\right) . 
\label{alpha}$$ Next we express $\Theta$ by means of (\[resultpi\]) and feed this back into (\[V4N\]), obtaining: $$\begin{aligned} &&\left(\mu+p+\Pi + 2\epsilon+\frac{2}{3}\Omega\right)(1-\Lambda+\Delta)D_TU =(1-\Lambda+\Delta)F_{grav}+F_{hyd}\nonumber \\ & +&\frac{\kappa E^2}{\tau_1}\left\{D_RT \left(1+\alpha_0 \Pi + \frac{2}{3}\alpha_1 \Omega\right) -T\left[\alpha_0 D_R \Pi + \frac{2}{3}\alpha_1 \left(D_R \Omega+\frac{3}{R}\Omega\right)\right] \right\}\nonumber\\ &-&E^2\left(\mu+p+\Pi + 2\epsilon+\frac{2}{3}\Omega\right)\Delta\left(\frac{D_R q}{q}+\frac{2q}{R}\right) \nonumber\\ &+&E\left[\frac{\kappa T^2q}{2\tau_1}D_T\left(\frac{\tau_1}{\kappa T^2}\right)-D_T\epsilon\right] +E\left[\frac{q}{\tau_1}+2(q+\epsilon)\frac{U}{R}\right]\nonumber\\ &+&E \frac{\Delta}{\alpha_0\zeta q}\left(\mu+p+\Pi + 2\epsilon+\frac{2}{3}\Omega\right)\left\{\left[1+\frac{\zeta T}{2}D_T\left(\frac{\tau_0}{\zeta T}\right)\right]\Pi + \tau_0 D_T \Pi\right\},\nonumber\\ \label{V4}\end{aligned}$$ where $\Delta$ is given by $$\Delta=\alpha_0 \zeta q \left(\mu+p+\Pi+2\epsilon+\frac{2}{3}\Omega\right)^{-1}\left( \frac{3q+4\epsilon}{2\zeta+\tau_0 \Pi}\right). \label{delta}$$ Thus, once the transport equations have been taken into account, the inertial energy density and the “passive gravitational mass density” appear diminished by the factor $1-\Lambda+\Delta$. This result generalizes the one obtained in [@DHN], by means of a complete causal treatment of all dissipative variables and the inclusion of the thermodynamics viscous/heat coupling coefficients. The Weyl tensor =============== In this section we shall find some interesting relationships linking the Weyl tensor with matter variables, from which we shall extract some conclusions about the arrow of time.
From the Weyl tensor we may construct the Weyl scalar ${\mathcal C}^2=C^{\alpha\beta\gamma\delta}C_{\alpha\beta\gamma\delta}$, which can be given in terms of the Kretschmann scalar ${\mathcal R}=R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}$, the Ricci tensor $R_{\alpha\beta}$ and the curvature scalar $\rm{R}$ by $${\mathcal C}^2={\mathcal R}-2R^{\alpha\beta}R_{\alpha\beta}+\frac{1}{3}\rm{R}^2.\label{I18}$$ With the help of the formulae given in the Appendix of [@DHN] and the field equations, we may write (\[I18\]) as $${\mathcal E}=m- \frac{4\pi}{3}R^3\left(\mu-\Omega \right),\label{I19}$$ where ${\mathcal E}$ is given by $${\mathcal E}=\frac{{\mathcal C}}{48^{1/2}}R^3. \label{I20}$$ From (\[I19\]) with (\[22\]) and (\[27\]) we have $$\begin{aligned} D_R{\mathcal E} =4\pi R^2\left[(q+\epsilon)\frac{U}{E} +\epsilon + \Omega - D_R\left(\mu -\Omega\right)\frac{R}{3} \right]. \label{II21}\end{aligned}$$ From (\[II21\]) we obtain at once, for the non-dissipative perfect fluid case, $$D_R{\mathcal E}+\frac{4\pi}{3}R^3D_R\mu=0, \label{arrow}$$ implying that $D_R \mu=0$ produces ${\mathcal C}=0$ (using the regular axis condition) and, conversely, that the conformally flat condition implies homogeneity of the energy density. Since tidal forces tend to make the gravitating fluid more inhomogeneous as the evolution proceeds, a relationship like (\[arrow\]) led Penrose to propose a gravitational arrow of time in terms of the Weyl tensor [@Pe]. However, the fact that such a relationship is no longer valid in the presence of local anisotropy of the pressure and/or dissipative processes, already discussed in [@Hetal], explains its failure in scenarios where the above-mentioned factors are present [@arrow]. Here we see how shear viscosity and dissipative fluxes affect the link between the Weyl tensor and density inhomogeneity. From the above it is evident that density inhomogeneities may appear in a conformally flat spacetime, if dissipative processes occur.
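The derivation of (\[arrow\]) from (\[I19\]) and (\[27\]) in the non-dissipative limit can be checked symbolically. The sympy sketch below (illustrative, not part of the paper) treats $\mu$ and $m$ as functions of the areal radius $R$ and sets $q=\epsilon=\Omega=0$, so that (\[27\]) reduces to $D_R m = 4\pi R^2 \mu$.

```python
import sympy as sp

# Symbolic check of eq. (arrow) in the non-dissipative perfect-fluid
# limit (q = epsilon = Omega = 0): with D_R m = 4*pi*R**2*mu from (27)
# and curlyE = m - (4*pi/3)*R**3*mu from (I19), one obtains
# D_R curlyE + (4*pi/3)*R**3*D_R mu = 0, so a homogeneous density
# profile forces the Weyl contribution curlyE to vanish.
R = sp.Symbol('R', positive=True)
mu = sp.Function('mu')(R)     # energy density profile, arbitrary
m = sp.Function('m')(R)       # Misner-Sharp mass

curlyE = m - sp.Rational(4, 3) * sp.pi * R**3 * mu            # eq. (I19), Omega = 0
dE = sp.diff(curlyE, R).subs(sp.diff(m, R), 4*sp.pi*R**2*mu)  # impose eq. (27)

assert sp.simplify(dE + sp.Rational(4, 3) * sp.pi * R**3 * sp.diff(mu, R)) == 0
print("D_R curlyE = -(4*pi/3) * R^3 * D_R mu   verified")
```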
Examples of this kind have been presented in [@anali]. Conclusions =========== We have established the set of equations governing the structure and evolution of self–gravitating spherically symmetric dissipative viscous fluids. Dissipative variables have been assumed to satisfy transport equations derived from causal thermodynamics, and viscous/heat coupling coefficients have been included. As a result of this approach we obtain a dynamic equation (\[V4\]) which shows the influence of dissipative variables and viscous/heat coupling coefficients on the value of the “effective” inertial mass (“passive” gravitational mass). In a pre–supernova event, dissipative parameters (in particular $\kappa$) may be large enough to produce a significant decrease of the gravitational force term, resulting in a reversal of the collapse. A numerical model exhibiting this kind of “bouncing” has been presented in [@bounc]. Nevertheless, the role that this effect might play in the outcome of gravitational collapse of massive stars will critically depend on the specific numerical values of those quantities. Such estimations are, however, well beyond the scope of this work. Here we just want to display the way those quantities enter into the dynamic equation and stress the fact that they should not be excluded a priori, particularly during the most rapid phases of the collapse. From (\[II21\]) it is apparent that the production of density inhomogeneities is related to a quantity involving dissipative fluxes and shear viscosity. Thus, if following Penrose we adopt the point of view that self–gravitating systems evolve in the sense of increasing density inhomogeneity, then any alternative definition for an arrow of time should include those variables. Finally, it is worth mentioning that we have considered the fluid to be neutral. The reason for this is that, as can be easily verified, there are no terms coupling electromagnetic and dissipative variables in the relevant equations.
Therefore the role of electric charge in the dynamics of collapse is the same already discussed in [@DHN]. Acknowledgements ================ LH wishes to thank FUNDACION EMPRESAS POLAR for financial support and Universitat de les Illes Balears for financial support and hospitality. ADP acknowledges hospitality of the Physics Department of the Universitat de les Illes Balears and financial support from the CDCH at Universidad Central de Venezuela. LH and ADP also acknowledge financial support from the CDCH at Universidad Central de Venezuela under grants PG 03-00-6497-2007. [88]{} J. Oppenheimer and H. Snyder, [*Phys. Rev.*]{} [**56**]{}, 455 (1939). M. May and R. White, [*Phys. Rev.*]{} [**141**]{}, 1232 (1966); J. Wilson, [*Astrophys. J.*]{} [**163**]{}, 209 (1971); A. Burrows and J. Lattimer, [*Astrophys. J.*]{} [**307**]{}, 178 (1986); R. Adams, B. Cary and J. Cohen, [*Astrophys. Space Sci.*]{} [**155**]{}, 271 (1989); W. B. Bonnor, A. Oliveira and N. O. Santos, [*Phys. Rep.*]{} [**181**]{}, 269 (1989); M. Govender, S. Maharaj and R. Maartens, [*Class. Quantum Grav.*]{} [**15**]{}, 323 (1998); D. Schafer and H. Goenner, [*Gen. Rel. Grav.*]{} [**32**]{}, 2119 (2000); M. Govender, R. Maartens and S. Maharaj [*Phys. Lett. A*]{} [**283**]{}, 71 (2001); S. Wagh [*et al.*]{} [*Class. Quantum Grav.*]{} [**18**]{}, 2147 (2001); L. Herrera, W. Barreto, A. Di Prisco and N. O. Santos [*Phys. Rev. D*]{} [**65**]{}, 104004 (2002); N. Naidu, M. Govender and K. Govinder [*Int. J. Mod. Phys. D*]{} [**15**]{}, 1053 (2006); R. Goswami [*arXiv: 0707.1122*]{}; P. Joshi and R. Goswami [*arXiv: 0711.0426*]{}; S. Misthry, S. Maharaj and P. Leach [*Math. Meth. App. Sci.*]{} [**31**]{}, 363 (2008); S. Rajah and S. Maharaj [*J. Math. Phys.*]{} [**49**]{}, 012501 (2008). I. Iben, [*Astrophys. J.*]{} [**138**]{}, 1090 (1963). E. Myra and A. Burrows [*Astrophys. J.*]{} [**364**]{}, 222 (1990). L. Herrera and N. O. Santos [*Phys. Rev. D*]{} [**70**]{}, 084004 (2004). L. Herrera, A. Di Prisco, J. Martín, J. 
Ospino, N. O. Santos and O. Troconis [*Phys. Rev. D*]{} [**69**]{}, 084026 (2004). A. Mitra [*Phys. Rev. D*]{} [**74**]{}, 024010 (2006). D. Kazanas and D. Schramm [*Sources of Gravitational Radiation*]{}, L. Smarr ed., (Cambridge University Press, Cambridge) (1979). W. D. Arnett [*Astrophys. J.*]{} [**218**]{}, 815 (1977). D. Kazanas [*Astrophys. J.*]{} [**222**]{}, L109 (1978). J. Lattimer [*Nucl. Phys.*]{} [**A478**]{}, 199 (1988). N. Flowers and N. Itoh [*Astrophys. J.*]{} [**230**]{}, 847 (1979); [*Astrophys. J.*]{} [**250**]{}, 750 (1981); P. Shternin and D. Yakovlev [*Phys. Rev. D*]{} [**74**]{}, 043004 (2006); A. Chugunov and P. Haensel [*Month. Not. R. Astron. Soc.*]{} [**381**]{}, 1143 (2007). J. Martínez [*Phys. Rev. D*]{} [**53**]{}, 6921 (1996). C. Cattaneo, [*Atti. Semin. Mat. Fis. Univ. Modena*]{} [**3**]{}, 3 (1948). I. Müller, [*Z. Physik*]{} [**198**]{}, 329 (1967). W. Israel [*Ann. Phys., NY*]{} [**100**]{}, 310 (1976). W. Israel and J. Stewart [*Phys. Lett.*]{} [**A58**]{}, 213 (1976); [*Ann. Phys. NY*]{} [**118**]{}, 341 (1979). B. Carter [*Journées Relativistes*]{}, Ed. M. Cahen, Deveber R. and Geheniahau J., (ULB, Brussels) (1976). D. Pavón, D. Jou and J. Casas-Vázquez, [*Ann. Inst. H. Poincaré*]{} [**A36**]{}, 79 (1982). W. Hiscock and L. Lindblom, [*Ann. Phys. NY*]{} [**151**]{}, 466 (1983). D. Jou, J. Casas-Vázquez and G. Lebon, [*Rep. Prog. Phys.*]{} [**51**]{}, 1105 (1988). D. Joseph and L. Preziosi, [*Rev. Mod. Phys.*]{} [**61**]{}, 41 (1989). J. Triginer and D. Pavón, [*Class. Quantum Grav.*]{} [**12**]{}, 689 (1995). D. Jou, J. Casas–Vázquez and G. Lebon, [*Extended Irreversible Thermodynamics*]{}, second edition (Springer–Verlag, Berlin, 1996). D. Y. Tzou, [*Macro to Micro Scale Heat Transfer: The Lagging Behaviour*]{}, (Taylor & Francis, Washington, 1996). R. Maartens [*astro-ph*]{}/9609119. A. Anile, D. Pavón and V. Romano [*gr–qc*]{}/9810014. L. Herrera and D. Pavón, [*Physica A*]{} [**307**]{}, 121 (2002). C. W. 
Misner and D. H. Sharp [*Phys. Rev.*]{} [**136**]{}, B571, (1964). C. W. Misner [*Phys. Rev.*]{} [**137**]{}, B1360, (1965). C. B. Collins and J. Wainwright [*Phys. Rev. D*]{} [**27**]{}, 1209 (1983); E. N. Glass [*J. Math. Phys.*]{} [**20**]{}, 1508 (1979); R. Chan [*Mon. Not. R. Astron. Soc.*]{} [**299**]{}, 811 (1998); P. Joshi, N. Dadhich and R. Maartens [*gr–qc*]{}/0109051; P. Joshi, R. Goswami and N. Dadhich [*gr–qc*]{}/0308012; L. Herrera and N. O. Santos [*Month. Not. R. Astron. Soc.*]{} [**343**]{}, 1207 (2003). A. Di Prisco, L. Herrera, G. Le Denmat, M. MacCallum and N.O. Santos. [*Phys. Rev. D*]{} [**76**]{}, 064017 (2007). C. Eckart [*Phys. Rev.*]{} [**58**]{}, 919 (1940). L. Landau and E. Lifshitz, [*Fluid Mechanics*]{} (Pergamon Press, London) (1959). N. Anderson, G. Comer and K. Glampedakis [*Nucl. Phys.*]{} [**A763**]{}, 212 (2005). B. Sa’d, I. Shovkovy and D. Rischke [*astro–ph/0703016*]{}. M. Alford and A. Schmitt [*arXiv:0709.4251*]{}. D. Blaschke and J. Berdermann [*arXiv:0710.5293*]{}. A. Drago, A. Lavagno and G. Pagliara [*astro–ph/0312009*]{}. P. Jones [*Phys. Rev. D*]{} [**64**]{}, 084003 (2001). E. van Dalen and A. Dieperink [*Phys. Rev. C*]{} [**69**]{}, 025802 (2004). H. Dong, N. Su and O. Wang [*astro–ph/0702181*]{}. L. Herrera [*Int. J. Modern. Phys. D*]{} [**15**]{}, 2197(2006). R. Chan [*Mon. Not. R. Astron. Soc.*]{} [**316**]{}, 588 (2000). C. Hansen and S. Kawaler [*Stellar Interiors: Physical Principles, Structure and Evolution*]{}, (Springer Verlag, Berlin) (1994); R. Kippenhahn and A. Weigert [*Stellar Structure and Evolution*]{}, (Springer Verlag, Berlin), (1990); M. Schwarzschild [*Structure and Evolution of the Stars*]{}, (Dover, New York), (1958). R. Penrose, [*General Relativity, An Einstein Centenary Survey*]{}, Ed. S. W. Hawking and W. Israel (Cambridge: Cambridge University Press) p. 581–638 (1979). W. B. Bonnor, [*Phys. Lett.*]{} [**122A**]{}, 305 (1987); S. W. Goode, A. Coley and J. Wainwright, [*Class. 
Quantum Grav.*]{} [**9**]{}, 445 (1992); N. Pelavas and K. Lake, [*gr–qc/9811085*]{}. L. Herrera, A. Di Prisco and J. Ospino [*Phys. Rev. D*]{} [**74**]{}, 044001 (2006). L. Herrera, A. Di Prisco and W. Barreto [*Phys. Rev. D*]{} [**73**]{}, 024008 (2006). [^1]: Postal address: Apartado 80793, Caracas 1080A, Venezuela. [^2]: e-mail: laherrera@cantv.net.ve [^3]: e-mail: adiprisc@fisica.ciens.ucv.ve [^4]: e-mail: efuenma@fisica.ciens.ucv.ve [^5]: e-mail: otroconis@fisica.ciens.ucv.ve
--- author: - | Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran\ Applied Artificial Intelligence Institute, Deakin University, Australia\ `{lethao,vuong.le,svetha.venkatesh,truyen.tran}@deakin.edu.au`\ bibliography: - 'LognetIjcai.bib' title: Dynamic Language Binding in Relational Visual Reasoning --- Introduction ============ Related Work ============ Language-binding Object Graph Network ===================================== Experiments =========== Discussion ==========
--- abstract: 'Visual compatibility is critical for fashion analysis, yet is missing in existing fashion image synthesis systems. In this paper, we propose to explicitly model visual compatibility through fashion image inpainting. To this end, we present Fashion Inpainting Networks (FiNet), a two-stage image-to-image generation framework that is able to perform compatible and diverse inpainting. Disentangling the generation of shape and appearance to ensure photorealistic results, our framework consists of a shape generation network and an appearance generation network. More importantly, for each generation network, we introduce two encoders interacting with one another to learn latent code in a shared compatibility space. The latent representations are jointly optimized with the corresponding generation network to condition the synthesis process, encouraging a diverse set of generated results that are visually compatible with existing fashion garments. In addition, our framework is readily extended to clothing reconstruction and fashion transfer, with impressive results. Extensive experiments on the fashion synthesis task quantitatively and qualitatively demonstrate the effectiveness of our method.' author: - | Xintong Han$^{1,2}$ Zuxuan Wu$^3$ Weilin Huang$^{1,2}$ Matthew R. Scott$^{1,2}$ Larry S. Davis$^3$\ $^1$Malong Technologies, Shenzhen, China\ $^2$Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China\ $^3$University of Maryland, College Park\ bibliography: - 'egbib.bib' title: Compatible and Diverse Fashion Image Inpainting --- Conclusion ========== We introduce FiNet, a two-stage generation network for synthesizing compatible and diverse fashion images. By decomposing shape and appearance generation, FiNet can inpaint garments in the target region with diverse shapes and appearances. 
Moreover, we integrate a compatibility module that encodes compatibility information to the network, constraining the generated shapes and appearances to be close to the existing clothing pieces in a learned latent style space. The superior performance of FiNet suggests that it can be potentially used for compatibility-aware fashion design and new fashion item recommendation. Acknowledgement {#acknowledgement .unnumbered} =============== Larry S. Davis and Zuxuan Wu are partially supported by the Office of Naval Research under Grant N000141612713.
--- abstract: 'Jaeger [@Jaeger:spin] discovered a remarkable checkerboard state model based on the Higman-Sims graph that yields a value of the Kauffman polynomial, which is a quantum invariant of links. We present a simple argument that the state model has the desired properties using the combinatorial $B_2$ spider [@Kuperberg:spiders].' author: - Greg Kuperberg title: 'Jaeger’s Higman-Sims state model and the $B_2$ spider' --- [^1] Introduction ============ Two related approaches to defining quantum topological invariants are skein relations and state models. One important example of the former is the Kauffman polynomial, while an important class of the latter is the class of checkerboard state models. Given two numbers or indeterminates $Q$ and $d$, the Kauffman polynomial is a function on link projections on the 2-sphere defined axiomatically by the rules: $$\begin{aligned} {\pspicture[.4](-.6,-.5)(.6,.5) \qline(.5;135)(.5;315) \psline[border=.1](.5;45)(.5;225) \endpspicture}- {\pspicture[.4](-.6,-.5)(.6,.5) \qline(.5;45)(.5;225) \psline[border=.1](.5;135)(.5;315) \endpspicture}&= (Q-Q^{-1})\left({\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;45)(.25;45)(.25;315)(.5;315) \psbezier(.5;225)(.25;225)(.25;135)(.5;135) \endpspicture}- {\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;45)(.25;45)(.25;135)(.5;135) \psbezier(.5;225)(.25;225)(.25;315)(.5;315) \endpspicture}\right) \label{eskein} \\ {\pspicture[.4](-.6,-.5)(.6,.5) \pscircle(0,0){.4} \endpspicture}&= \frac{Q^{d-1} + Q - Q^{-1} - Q^{1-d}}{Q - Q^{-1}} \nonumber\\ {\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;120)(-.25,0)(.25,-.5)(.25,0) \psbezier[border=.1](.5;240)(-.25,0)(.25,.5)(.25,0) \endpspicture}&= Q^{d-1}{\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;240)(.25;240)(.25;120)(.5;120) \endpspicture}\nonumber\end{aligned}$$ by the rule that it is invariant under the second and third Reidemeister moves: $$\begin{aligned} \pspicture[.4](-1.1,-.5)(1.1,.5) \pccurve[angleA=-45,angleB=-135,ncurv=1](-.85,.264)(.85,.264) 
\pccurve[border=.1,angleA=45,angleB=135,ncurv=1](-.85,-.264)(.85,-.264) \endpspicture &= \pspicture[.4](-1.1,-.5)(1.1,.5) \pccurve[angleA=-45,angleB=-135,ncurv=.33](-.85,.264)(.85,.264) \pccurve[border=.1,angleA=45,angleB=135,ncurv=.33](-.85,-.264)(.85,-.264) \endpspicture \\ \pspicture[.42](-1.1,-1)(1.1,1) \pcarc[border=.1,arcangle=30](1;0)(1;180) \pcarc[border=.1,arcangle=30](1;120)(1;300) \pcarc[border=.1,arcangle=30](1;240)(1;60) \endpspicture &= \pspicture[.42](-1.1,-1)(1.1,1) \pcarc[border=.1,arcangle=30](1;180)(1;0) \pcarc[border=.1,arcangle=30](1;300)(1;120) \pcarc[border=.1,arcangle=30](1;60)(1;240) \endpspicture\end{aligned}$$ and by the rule that its value at the empty link is 1. These rules are an example of a skein theory, a concept which can be understood with some elementary background. A (tame) knot or link is represented by a knot projection (a tetravalent graph embedded in the 2-sphere with vertices decorated to distinguish over-crossings from under-crossings); it is known that a function on knot projections which is invariant under the three Reidemeister moves (the second and third of which are indicated above) descends to a function on links. It is further understood that invariance under the second and third Reidemeister moves (regular isotopy invariance) is almost as strong as invariance under all three, in a manner analogous to the difference between linear and projective representations of a group. In this paper, we will loosely call a regular isotopy invariant a link invariant. A skein theory describes a function on knot projections by axioms relating projections that differ only in a small region. Typically one implicitly considers an invariant $I$ and one writes graphical equations where a knot projection $P$ denotes $I(P)$. Furthermore, if an equation involves link fragments (tangles), the fragments should all have the same boundary, and the equation is read as relating link projections that differ only in the indicated fragments. 
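To make the skein formalism concrete, the $d=-2$ specialization of these rules (the Kauffman bracket, discussed below) admits a finite state-sum evaluation: resolve every crossing in both possible ways and count the resulting loops. A minimal Python sketch, assuming a PD-style encoding of the projection; the pairing convention at a crossing is one of the two possible choices, and the other choice produces the mirror image, $A \mapsto A^{-1}$:

```python
import sympy as sp
from itertools import product

A = sp.symbols('A')
d = -A**2 - A**-2   # value of a closed loop in the Kauffman bracket

def bracket(pd):
    """Kauffman bracket from a PD code, as a sum over the 2^c smoothings.
    Convention (one of the two possible; the other yields the mirror image):
    at a crossing (a, b, c, e) the 'A' smoothing joins the arc pairs
    (a, b) and (c, e), the 'B' smoothing joins (a, e) and (b, c)."""
    arcs = sorted({i for cr in pd for i in cr})
    total = sp.Integer(0)
    for state in product((1, -1), repeat=len(pd)):
        parent = {i: i for i in arcs}        # union-find over arc labels
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        def union(i, j):
            parent[find(i)] = find(j)
        for (a, b, c, e), s in zip(pd, state):
            if s == 1:
                union(a, b); union(c, e)     # A-smoothing
            else:
                union(a, e); union(b, c)     # B-smoothing
        loops = len({find(i) for i in arcs})
        total += A**sum(state) * d**(loops - 1)
    return sp.expand(total)

# Trefoil, PD code X(1,4,2,5) X(3,6,4,1) X(5,2,6,3)
trefoil = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
print(bracket(trefoil))
```

For the trefoil code above this returns $-A^{-5}-A^{3}+A^{7}$, the bracket of one chirality of the trefoil; the mirror diagram gives the same polynomial with $A \mapsto A^{-1}$.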
For example, equation \[eskein\] is really infinitely many equations relating any quadruple of link projections that differ in only one crossing. Kauffman [@Kauffman:regular] proved that the Kauffman polynomial exists uniquely; [*i.e.*]{}, that the skein relations are consistent and complete. It is therefore a Laurent polynomial in $Q$ and $Q^d$. (Our parameterization of the Kauffman polynomial is slightly different from Kauffman’s.) The specialization $d = -2$ is called the Kauffman bracket, another polynomial which is up to normalization the same as the Jones polynomial. (Both the Kauffman polynomial and the Kauffman bracket are clearly not invariant under the first Reidemeister move, but a simple extra normalization factor achieves this invariance as well and produces a function fully invariant under link isotopy. This is the main difference between the Kauffman bracket and the Jones polynomial.) An invariant with a skein theory often has an alternative definition using a state model; we will consider a particular type of state model for link projections called a checkerboard model [@Jones:pacific]. A checkerboard model is given by a state set $S$, a number $x$, and two symmetric functions $W_+$ and $W_-$ from $S \times S$ to a commutative ring $R$. 
Given a link projection on the 2-sphere, a checkerboard coloring is one of the two alternating black-white colorings of the complementary regions: $$\pspicture(-1.2,-1.4)(1.2,1.6) \pscustom[fillstyle=crosshatch,linestyle=none]{{\code{/fill {eofill} def /clip {eoclip} def}}\psbezier(1.1;90)(1.27;60)(.7;30)(.35;330) \psbezier(.35;330)(.7;270)(1.27;240)(1.1;210) \psbezier(1.1;210)(1.27;180)(.7;150)(.35;90) \psbezier(.35;90)(.7;30)(1.27;0)(1.1;330) \psbezier(1.1;330)(1.27;300)(.7;270)(.35;210) \psbezier(.35;210)(.7;150)(1.27;120)(1.1;90) \psarc[liftpen=2](1.3;330){.5}{0}{360}} \psbezier[border=.1](.35;330)(.7;270)(1.27;240)(1.1;210) \psbezier[border=.1](.35;90)(.7;30)(1.27;0)(1.1;330) \psbezier[border=.1](.35;210)(.7;150)(1.27;120)(1.1;90) \pscircle[border=.1](1.3;330){.5} \psbezier[border=.1](1.1;90)(1.27;60)(.7;30)(.35;330) \psbezier[border=.1](1.1;210)(1.27;180)(.7;150)(.35;90) \psbezier[border=.1](1.1;330)(1.27;300)(.7;270)(.35;210) \rput(.5,-1.1){$-$}\rput(1.2,.1){$-$} \rput(.9;270){$+$}\rput(.9;150){$+$}\rput(.9;30){$+$} \endpspicture$$ Given a checkerboard coloring, the crossings can be labelled as positive or negative, as indicated, according to their sense relative to the neighboring black regions. (The labelling depends on an orientation of the 2-sphere.) The black regions are called the atoms of the state model. A state is a function from the atoms to the state set. Given a state, the weight of a positive (respectively negative) crossing, which takes values in ${\mathbb{C}}$ or some other field, is given by some function $W_+(a,b)$ (resp. $W_-(a,b)$) if the two incident atoms are assigned states $a$ and $b$; these functions are called interactions. The weight of a state is then the product of the weights of the crossings, and the state sum $Z$ is the total weight of all states. 
If $\chi$ is the total Euler characteristic of all black regions, it may happen that the normalized state sum $Z' = x^{-\chi} Z$ is a regular isotopy invariant and in particular one that satisfies skein relations. For example, the Potts model is a checkerboard model whose normalized state sum is a value of the Kauffman bracket, and in particular is a regular isotopy invariant. Choose a real or complex $q$ such that $n = q+2+q^{-1}$ is a positive integer and choose a root $q^{1/4}$. Then the Potts model of order $n$ has a state set with $n$ elements and the following weights: $$\begin{aligned} W_+(a,a) &= q^{3/4} \\ W_-(a,a) &= q^{-3/4} \\ W_+(a,b) &= -q^{-1/4} \qquad \mbox{when $a \ne b$} \\ W_-(a,b) &= -q^{1/4} \qquad \mbox{when $a \ne b$}\end{aligned}$$ The Euler normalization $x = -(q^{1/2} + q^{-1/2}) = \pm \sqrt{n}$. One can check that the normalized state sum is then the Kauffman bracket with $Q = q^{1/4}$. There are only a few known non-trivial checkerboard models that produce link invariants [@dHJ:graph]. By far the most interesting of these is the Higman-Sims state model discovered by Jaeger [@Jaeger:spin]. This model produces the value of the Kauffman polynomial at $Q = \tau$, the golden ratio, and $d = -4$. In quantum group terms, this corresponds to the $q = \tau^2$ point of the quantum group $U_q({\mathrm{sp}}(4))$. In this paper, we present an alternative argument that the Higman-Sims state model produces a topological invariant and a value of the Kauffman polynomial. The idea is to use the combinatorial $B_2$ spider [@Kuperberg:spiders], a skein theory of graphs that produces the Kauffman polynomial for arbitrary $Q$ with $d = -4$. The author hopes that this argument will help draw attention to Jaeger’s remarkable state model and help elucidate a mysterious relationship between the Higman-Sims sporadic simple group and the quantum group $U_q({\mathrm{sp}}(4))$. 
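Returning to the Potts weights above: if the two smoothings of a crossing are read as the matrices $xI$ (the smoothing that merges the two black regions, which changes $\chi$) and $N$ (the one that keeps them separate), the bracket skein relation becomes a matrix identity that can be verified symbolically. A sketch with sympy; the matrix reading of the smoothings is our gloss, and the size $n$ is an arbitrary illustrative choice since the identity holds entrywise:

```python
import sympy as sp

p = sp.symbols('p', positive=True)   # p = q^{1/4}, so the skein variable Q = p
n = 4                                # illustrative size; consistency of the
                                     # model itself requires n = q + 2 + 1/q
x = -(p**2 + 1/p**2)                 # Euler normalization -(q^{1/2} + q^{-1/2})

I = sp.eye(n)
N = sp.ones(n, n)
# Potts interactions as n x n matrices:
# W_pm(a,a) = q^{+-3/4}, W_pm(a,b) = -q^{-+1/4} for a != b
Wp = (p**3 + 1/p)*I - (1/p)*N
Wm = (1/p**3 + p)*I - p*N

# Matrix form of the bracket skein relation: W_+ - W_- = (Q - 1/Q)(N - x I)
skein = (Wp - Wm - (p - 1/p)*(N - x*I)).applyfunc(sp.expand)
assert skein == sp.zeros(n, n)
assert sp.expand(x**2 - (p**4 + 2 + 1/p**4)) == 0   # x^2 = q + 2 + 1/q (= n)
```

The second assertion is the condition $x = \pm\sqrt{n}$ stated above.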
We also refer the reader to another review of Jaeger’s model by de la Harpe [@delaHarpe:jaeger]. Spiders, skein modules, and invariants ====================================== The combinatorial $B_2$ spider is essentially a skein theory of trivalent, planar graphs (called webs) with unoriented edges of two types, which are called type 1 and type 2 strands and are denoted by single and double edges. Such a skein theory uses the same concepts and allows the same notation as a skein theory for link projections, except that it describes a function on a class of planar graphs and we require no particular topological invariance a priori (other than invariance under isotopy in the 2-sphere). The only allowed vertices in $B_2$ webs are those with two single edges and one double edge: $$\pspicture(-.7,-.7)(.7,.7) \qline(-.5,0)(0,0)\qline(0,-.5)(0,0)\psline[doubleline=true](0,0)(.35,.35) \endpspicture$$ The skein relations are: $$\begin{aligned} {\pspicture[.4](-.6,-.5)(.6,.5) \pscircle(0,0){.4} \endpspicture}&= -(q^2+q+q^{-1}+q^{-2}) \label{eloop} \\ {\pspicture[.4](-.6,-.5)(.6,.5) \pscircle[doubleline=true](0,0){.4} \endpspicture}&= q^3+q+1+q^{-1}+q^{-3} \nonumber \\ \pspicture[.4](-.6,-.5)(.6,.5) \psline[doubleline=true](-.5,0)(0,0) \psbezier(0,0)(.7,.7)(.7,-.7)(0,0) \endpspicture &= 0 \nonumber \\ \pspicture[.4](-.8,-.5)(.8,.5) \psline[doubleline=true](-.7,0)(-.3,0) \pcarc[arcangle=45](-.3,0)(.3,0) \pcarc[arcangle=-45](-.3,0)(.3,0) \psline[doubleline=true](.3,0)(.7,0) \endpspicture &= -(q+2+q^{-1})\pspicture[.4](-.6,-.5)(.6,.5) \psline[doubleline=true](-.5,0)(.5,0) \endpspicture \nonumber \\ \pspicture[.4](-.9,-.9)(.9,.9) \psline[doubleline=true](.4;90)(.8;90) \psline[doubleline=true](.4;210)(.8;210) \psline[doubleline=true](.4;330)(.8;330) \pcarc[arcangle=-15](.4;90)(.4;210) \pcarc[arcangle=-15](.4;210)(.4;330) \pcarc[arcangle=-15](.4;330)(.4;90) \endpspicture &= 0 \nonumber \\ {\pspicture[.4](-.6,-.7)(.6,.7) \qline(0, .25)(.35, .6)\qline(0, .25)(-.35, .6) 
\qline(0,-.25)(.35,-.6)\qline(0,-.25)(-.35,-.6) \psline[doubleline=true](0,-.25)(0,.25) \endpspicture}&- {\pspicture[.4](-.8,-.5)(.8,.5) \qline( .25,0)( .6,.35)\qline( .25,0)( .6,-.35) \qline(-.25,0)(-.6,.35)\qline(-.25,0)(-.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \endpspicture}= {\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;45)(.25;45)(.25;135)(.5;135) \psbezier(.5;225)(.25;225)(.25;315)(.5;315) \endpspicture}- {\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;45)(.25;45)(.25;315)(.5;315) \psbezier(.5;225)(.25;225)(.25;135)(.5;135) \endpspicture}\label{eexchange}\end{aligned}$$ Here $q$ is a complex number or an indeterminate with a preferred square root $q^{1/2}$. Together with the stipulation that the empty web has value 1, these skein relations again define a unique function on $B_2$ webs [@Kuperberg:g2]. To understand something of the relation between the $B_2$ spider and the Lie algebra $B_2$, we can consider skein modules, which are another general concept in skein theory. Instead of considering functions on link projections or $B_2$ webs, we consider formal linear combinations of such objects over a field such as ${\mathbb{C}}(q^{1/2})$, or sometimes over a ring. We quotient the space of all formal linear combinations by the skein relations, more properly interpreted as relators. Discarding stipulations about empty diagrams, we can say that the skein module for the Kauffman polynomial and the skein module for the $B_2$ spider are both 1-dimensional. More generally, if we fix a boundary of a tangle or a $B_2$ web (meaning that the tangle or web is embedded in a disk and has prespecified univalent endpoints on the boundary of the disk), we can consider the skein module of tangles or webs with this boundary. In the $B_2$ spider, the skein module is called a web space and all of its elements are called webs. 
If a web space has $n$ endpoints of type 1 and $k$ of type 2, then it is isomorphic to the invariant space ${\mathrm{Inv}}(V(\lambda_1)^{{\otimes}k} {\otimes}V(\lambda_2)^{{\otimes}n})$, where $V(\lambda_1)$ and $V(\lambda_2)$ are, respectively, the 5-dimensional and 4-dimensional irreducible representations of the quantum group $U_q({\mathrm{sp}}(4))$ [@Kuperberg:spiders]. (Such a quantum group is an algebra which specializes to the usual universal enveloping algebra when $q=1$.) Using $U_q({\mathrm{sp}}(4))$, one can define an algebraic $B_2$ spider, and one can say that the algebraic and combinatorial $B_2$ spiders are isomorphic when $q$ is a transcendental element in a field or is not a root of unity. Working in the $B_2$ spider, we may define particular webs called crossings by the equations $$\begin{aligned} {\pspicture[.4](-.6,-.5)(.6,.5) \qline(.5;135)(.5;315) \psline[border=.1](.5;45)(.5;225) \endpspicture}=&\ -q^{1/2}{\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;45)(.25;45)(.25;315)(.5;315) \psbezier(.5;225)(.25;225)(.25;135)(.5;135) \endpspicture}- \frac{q^{-1}}{q^{1/2} + q^{-1/2}}{\pspicture[.4](-.6,-.5)(.6,.5) \psbezier(.5;45)(.25;45)(.25;135)(.5;135) \psbezier(.5;225)(.25;225)(.25;315)(.5;315) \endpspicture}\nonumber \\ &\ - \frac{1}{q^{1/2} + q^{-1/2}}{\pspicture[.4](-.6,-.7)(.6,.7) \qline(0, .25)(.35, .6)\qline(0, .25)(-.35, .6) \qline(0,-.25)(.35,-.6)\qline(0,-.25)(-.35,-.6) \psline[doubleline=true](0,-.25)(0,.25) \endpspicture}\label{ecrossing} \\ \pspicture[.4](-.6,-.5)(.6,.5) \psline[doubleline=true](.5;135)(.5;315) \psline[doubleline=true,border=.1](.5;45)(.5;225) \endpspicture =&\ q \pspicture[.4](-.6,-.5)(.6,.5) \psbezier[doubleline=true](.5;45)(.25;45)(.25;315)(.5;315) \psbezier[doubleline=true](.5;225)(.25;225)(.25;135)(.5;135) \endpspicture + q^{-1} \pspicture[.4](-.6,-.5)(.6,.5) \psbezier[doubleline=true](.5;45)(.25;45)(.25;135)(.5;135) \psbezier[doubleline=true](.5;225)(.25;225)(.25;315)(.5;315) \endpspicture \nonumber \\ &\ + \frac{1}{q 
+ 2 +q^{-1}} {\pspicture[.4](-.9,-.9)(.9,.9) \psline[doubleline=true](.4;45)(.8;45) \psline[doubleline=true](.4;135)(.8;135) \psline[doubleline=true](.4;225)(.8;225) \psline[doubleline=true](.4;315)(.8;315) \psline(.4;45)(.4;135)\psline(.4;135)(.4;225) \psline(.4;225)(.4;315)\psline(.4;315)(.4;45) \endpspicture}\nonumber\end{aligned}$$ These webs then satisfy the regular isotopy equations and yield invariants of framed graphs in $S^3$ or in a ball which take values in the appropriate web spaces. Moreover, a crossing of two type 1 strands satisfies Kauffman’s relations with $Q = q^{1/2}$ and $d = -4$, while a crossing of two type 2 strands satisfies Kauffman’s relation with $Q = q$ and $d = 5$. So one can say that there is a homomorphism (in fact an epimorphism which is far from injective) from a one-variable slice of the Kauffman spider to the $B_2$ spider. The model ========= Given a $B_2$ web on the sphere or in a disk, a checkerboard coloring is a coloring of its faces such that two regions that meet at a type 1 strand have opposite colors, while two regions that meet at a type 2 strand have the same color. For technical reasons we only consider webs with no type 2 strands at the boundary and with no closed loops of type 2. 
We can then write the same $B_2$ skein relations for this class of colored webs, for example: $$\begin{aligned} \pspicture[.4](-1.3,-.5)(1.3,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(-.7,0){.5}{135}{225} \psline(-1.05,-.35)(-.7,0)(-1.05,.35)} \psline(-1.05,-.35)(-.7,0)(-1.05,.35) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(.7,0){.5}{315}{45} \psline(1.05,.35)(.7,0)(1.05,-.35)} \psline(1.05,.35)(.7,0)(1.05,-.35) \psline[doubleline=true](-.7,0)(-.3,0) \pscustom[fillstyle=crosshatch]{ \psbezier(-.3,0)(-.15,.26)(.15,.26)(.3,0) \psbezier(.3,0)(.15,-.26)(-.15,-.26)(-.3,0)} \psline[doubleline=true](.3,0)(.7,0) \psline(1.05,-.35)(.7,0)(1.05,.35) \endpspicture &= -(q+2+q^{-1}) \pspicture[.4](-.9,-.5)(.9,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(-.25,0){.5}{135}{225} \psline(-.6,-.35)(-.25,0)(-.6,.35)} \psline(-.6,-.35)(-.25,0)(-.6,.35) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc[linewidth=0](.25,0){.5}{315}{45} \psline(.6,.35)(.25,0)(.6,-.35)} \psline(.6,.35)(.25,0)(.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \endpspicture \\ \pspicture[.4](-1.2,-1.2)(1.2,1.2) \pscustom[fillstyle=crosshatch,linestyle=none]{{\code{/fill {eofill} def /clip {eoclip} def}}\psarc(0,0){1.15}{345}{75} \psline(1.15;75)(.8;90)(1.15;105) \psarc(0,0){1.15}{105}{195} \psline(1.15;195)(.8;210)(1.15;225) \psarc(0,0){1.15}{225}{315} \psline(1.15;315)(.8;330)(1.15;345) \psbezier[liftpen=2](.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90)} \psline(1.15;75)(.8;90)(1.15;105) \psline(1.15;195)(.8;210)(1.15;225) \psline(1.15;315)(.8;330)(1.15;345) \psline[doubleline=true](.4;90)(.8;90) \psline[doubleline=true](.4;210)(.8;210) \psline[doubleline=true](.4;330)(.8;330) \psbezier(.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90) \endpspicture &= 0\end{aligned}$$ These relations constitute a perfectly valid skein theory even though 
the colorings of the faces are somewhat redundant. We consider state models on checkerboard colorings of $B_2$ webs. Let $S$ be a state set; as before, a state is a function from the black regions (atoms) to the state set. Assuming that there are no crossings, we consider two interactions (functions) $W_H$ and $W_I$ from $S \times S$ to ${\mathbb{C}}$. The weight of a state has a factor of $W_I(a,b)$ for every type 2 edge that bridges an atom in state $a$ with an atom in state $b$ and a factor of $W_H(a,b)$ for every type 2 edge that lies on the border between an atom in state $a$ and an atom in state $b$: $$\pspicture(-.6,-1.3)(.6,.8) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(0,-.25){.5}{225}{315} \psline(.35,-.6)(0,-.25)(-.35,-.6)} \psline(.35,-.6)(0,-.25)(-.35,-.6) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(0,.25){.5}{45}{135} \psline(-.35,.6)(0,.25)(.35,.6)} \psline(-.35,.6)(0,.25)(.35,.6) \psline[doubleline=true](0,-.25)(0,.25) \cput*(0,.7){$a$} \cput*(0,-.7){$b$} \rput(0,-1.3){$W_I(a,b)$} \endpspicture \qquad \pspicture(-.9,-1.3)(.9,.8) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psline(.6,-.35)(.25,0)(.6,.35) \psarc(0,0){.695}{30.256}{149.744} \psline(-.6,.35)(-.25,0)(-.6,-.35) \psarc(0,0){.695}{-149.744}{-30.256}} \psline(-.6,-.35)(-.25,0)(-.6,.35) \psline(.6,.35)(.25,0)(.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \cput*(0,.4){$a$} \cput*(0,-.4){$b$} \rput(0,-1.3){$W_H(a,b)$} \endpspicture$$ As before, the state sum $Z$ is the total weight of all states and $Z' = x^{-\chi} Z$ is the normalized state sum for some constant $x$. Note that for a web in a disk, for every fixed state of all atoms at the boundary, there is a state sum over all states in the interior; we do not sum over the colorings of the boundary regions. If we establish that the state sum $Z'$ satisfies the above skein relations (equations (\[eloop\]) to (\[eexchange\])), then we can say that it is a linear functional on web spaces. 
We can further define interactions $W_-$ and $W_+$ for crossings by using equation (\[ecrossing\]) (see also the relation between $W_H$ and $W_I$ below): $$W_{\pm} = -q^{\pm 1/2} x I - \frac{q^{\mp 1}}{q^{1/2}+q^{-1/2}}N - \frac{W_I}{q^{1/2}+q^{-1/2}}$$ Here and below the interactions are interpreted as $n \times n$ matrices, $I$ is the identity matrix, and $N$ is the matrix of all 1’s. The interactions $W_\pm$ then constitute a checkerboard model for link projections, one whose normalized state sum is automatically a value of the Kauffman polynomial at $d=-4$. Consider the restrictions on $W_H$, $W_I$, and $x$ given by the skein relations. Assume that the state set $S$ has $n$ elements. Let $$\begin{aligned} h &= -(q+2+q^{-1}) \\ \ell &= -(q^2+q+q^{-1}+q^{-2})\end{aligned}$$ The relations $$\pspicture[.4](-.6,-.5)(.6,.5) \pscircle[fillstyle=crosshatch](0,0){.4} \endpspicture = \ell \hspace{2cm} \pspicture[.4](-.8,-.7)(.8,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{{\code{/fill {eofill} def /clip {eoclip} def}}\psframe(-.7,-.7)(.7,.7) \psarc[liftpen=2](0,0){.4}{0}{360}} \pscircle(0,0){.4} \endpspicture = \ell$$ say that $\ell = \frac{n}{x}$ and $\ell = x$, which implies that $n = \ell^2$. The relation $$\pspicture[.4](-.6,-.5)(.8,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{{\code{/fill {eofill} def /clip {eoclip} def}}\psframe(-.5,-.5)(.7,.5) \psbezier(0,0)(.7,.7)(.7,-.7)(0,0)} \psline[doubleline=true](-.5,0)(0,0) \psbezier(0,0)(.7,.7)(.7,-.7)(0,0) \endpspicture = 0$$ says that $W_H(a,a) = 0$. 
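One can check symbolically that the interactions $W_\pm$ just defined satisfy the checkerboard form of Kauffman's skein relation at $Q = q^{1/2}$, namely $W_+ - W_- = (q^{1/2}-q^{-1/2})(N - xI)$, independently of $W_I$ (here, as before, the two smoothings are read as the matrices $N$ and $xI$). A sketch with sympy, substituting $r = q^{1/2}$ so that every entry is a rational function:

```python
import sympy as sp

r, x = sp.symbols('r x', positive=True)   # r = q^{1/2}
q = r**2
n = 3                                      # illustrative size
I = sp.eye(n)
N = sp.ones(n, n)
# a generic symmetric W_I: the identity below holds for any interaction matrix
WI = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'w{min(i, j)}{max(i, j)}'))

def W(s):
    # W_pm = -q^{+-1/2} x I - q^{-+1}/(q^{1/2}+q^{-1/2}) N - W_I/(q^{1/2}+q^{-1/2})
    return -r**s*x*I - q**(-s)/(r + 1/r)*N - WI/(r + 1/r)

delta = (W(1) - W(-1) - (r - 1/r)*(N - x*I)).applyfunc(sp.cancel)
assert delta == sp.zeros(n, n)
```

The $W_I$ terms drop out of the difference, so the skein relation at a crossing constrains only the combination $N - xI$, as the relations below also show.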
The relation $$\pspicture[.4](-1.3,-1)(1.3,1) \pscustom[fillstyle=crosshatch,linestyle=none]{{\code{/fill {eofill} def /clip {eoclip} def}}\psline(-1.05,-.35)(-.7,0)(-1.05,.35) \psbezier(-1.05,.35)(-.35,1.05)(.35,1.05)(1.05,.35) \psline(1.05,.35)(.7,0)(1.05,-.35) \psbezier(1.05,-.35)(.35,-1.05)(-.35,-1.05)(-1.05,-.35) \psbezier[liftpen=2](-.3,0)(-.15,.26)(.15,.26)(.3,0) \psbezier(.3,0)(.15,-.26)(-.15,-.26)(-.3,0)} \psline(-1.05,.35)(-.7,0)(-1.05,-.35) \psline[doubleline=true](-.7,0)(-.3,0) \psbezier(-.3,0)(-.15,.26)(.15,.26)(.3,0) \psbezier(.3,0)(.15,-.26)(-.15,-.26)(-.3,0) \psline[doubleline=true](.3,0)(.7,0) \psline(1.05,-.35)(.7,0)(1.05,.35) \endpspicture = h {\pspicture[.4](-.9,-.7)(.9,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psline(.6,-.35)(.25,0)(.6,.35) \psarc(0,0){.695}{30.256}{149.744} \psline(-.6,.35)(-.25,0)(-.6,-.35) \psarc(0,0){.695}{-149.744}{-30.256}} \psline(-.6,-.35)(-.25,0)(-.6,.35) \psline(.6,.35)(.25,0)(.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \endpspicture}$$ says that $W_H(a,b)$ is either 0 or $h$ for all $a$ and $b$. Since $W_H(a,b)$ is symmetric, it is proportional to the adjacency matrix $A$ of some graph $J$ with vertex set $S$. (The graph $J$ need not be planar or otherwise resemble a web.) 
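At Jaeger's point $Q = \tau$, i.e. $q = \tau^2$, the constraints collected in this section already pin down the numerology of $J$. A short sympy check (note the sign: $W_IN = 0$ with $W_I = hA + N - xI$ forces $AN = \frac{x-n}{h}N$, which fixes the vertex degree):

```python
import sympy as sp

# Jaeger's point: Q = tau (the golden ratio), i.e. q = tau^2
tau = (1 + sp.sqrt(5))/2
q = tau**2

h = sp.simplify(-(q + 2 + 1/q))                  # double-edge bigon value
ell = sp.simplify(-(q**2 + q + 1/q + 1/q**2))    # single-strand loop value
x = ell                                          # Euler normalization: x = ell
n = ell**2                                       # number of states: n = ell^2
v = sp.simplify((x - n)/h)                       # vertex degree of J

print(h, ell, n, v)                              # -5 -10 100 22
```

These are precisely the parameters of the Higman-Sims graph: 100 vertices, 22-regular, and triangle-free, as the relations here require.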
The relation $$\pspicture[.4](-1.2,-1.2)(1.2,1.2) \pscustom[fillstyle=crosshatch,linestyle=none]{{\code{/fill {eofill} def /clip {eoclip} def}}\psarc(0,0){1.15}{345}{75} \psline(1.15;75)(.8;90)(1.15;105) \psarc(0,0){1.15}{105}{195} \psline(1.15;195)(.8;210)(1.15;225) \psarc(0,0){1.15}{225}{315} \psline(1.15;315)(.8;330)(1.15;345) \psbezier[liftpen=2](.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90)} \psline(1.15;75)(.8;90)(1.15;105) \psline(1.15;195)(.8;210)(1.15;225) \psline(1.15;315)(.8;330)(1.15;345) \psline[doubleline=true](.4;90)(.8;90) \psline[doubleline=true](.4;210)(.8;210) \psline[doubleline=true](.4;330)(.8;330) \psbezier(.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90) \endpspicture = 0$$ says that $J$ is triangle-free. The relation $${\pspicture[.4](-.6,-.8)(.6,.8) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(0,-.25){.5}{225}{315} \psline(.35,-.6)(0,-.25)(-.35,-.6)} \psline(.35,-.6)(0,-.25)(-.35,-.6) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(0,.25){.5}{45}{135} \psline(-.35,.6)(0,.25)(.35,.6)} \psline(-.35,.6)(0,.25)(.35,.6) \psline[doubleline=true](0,-.25)(0,.25) \endpspicture}- {\pspicture[.4](-.9,-.7)(.9,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psline(.6,-.35)(.25,0)(.6,.35) \psarc(0,0){.695}{30.256}{149.744} \psline(-.6,.35)(-.25,0)(-.6,-.35) \psarc(0,0){.695}{-149.744}{-30.256}} \psline(-.6,-.35)(-.25,0)(-.6,.35) \psline(.6,.35)(.25,0)(.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \endpspicture}= {\pspicture[.4](-.6,-.5)(.6,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psbezier(.5;135)(.25;135)(.25;45)(.5;45) \psarc(0,0){.5}{45}{135} \psbezier[liftpen=2](.5;315)(.25;315)(.25;225)(.5;225) \psarc(0,0){.5}{225}{315}} \psbezier(.5;45)(.25;45)(.25;135)(.5;135) \psbezier(.5;225)(.25;225)(.25;315)(.5;315) \endpspicture}- {\pspicture[.4](-.6,-.5)(.6,.5) 
\pscustom[fillstyle=crosshatch,linestyle=none]{ \psbezier(.5;315)(.25;315)(.25;45)(.5;45) \psarc(0,0){.5}{45}{135} \psbezier(.5;135)(.25;135)(.25;225)(.5;225) \psarc(0,0){.5}{225}{315}} \psbezier(.5;45)(.25;45)(.25;315)(.5;315) \psbezier(.5;225)(.25;225)(.25;135)(.5;135) \endpspicture}$$ reads algebraically as $$W_I - W_H = N - xI,$$ where $W_I$ and $W_H$ can be interpreted as matrices, $N$ is the matrix whose entries are all 1, and $I$ is the identity matrix. This equation can be taken as a definition of $W_I$ in terms of $W_H = hA$: $$W_I = hA + N - xI \label{ewi}$$ The relation $$\pspicture[.4](-.6,-.5)(.6,.5) \psline[doubleline=true](-.5,0)(0,0) \psbezier[fillstyle=crosshatch](0,0)(.7,.7)(.7,-.7)(0,0) \endpspicture = 0$$ is equivalent to the equation $W_IN = 0$. Using equation (\[ewi\]) and the identity $N^2 = nN$, we obtain $$AN = \frac{x-n}{h}N.$$ This equation says that $J$ is regular (1-point regular as defined below) and the degree of a vertex is $v = \frac{x-n}{h}$. The relation $$\pspicture[.4](-1.3,-.5)(1.3,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(-.7,0){.5}{135}{225} \psline(-1.05,-.35)(-.7,0)(-1.05,.35)} \psline(-1.05,-.35)(-.7,0)(-1.05,.35) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(.7,0){.5}{315}{45} \psline(1.05,.35)(.7,0)(1.05,-.35)} \psline(1.05,.35)(.7,0)(1.05,-.35) \psline[doubleline=true](-.7,0)(-.3,0) \pscustom[fillstyle=crosshatch]{ \psbezier(-.3,0)(-.15,.26)(.15,.26)(.3,0) \psbezier(.3,0)(.15,-.26)(-.15,-.26)(-.3,0)} \psline[doubleline=true](.3,0)(.7,0) \psline(1.05,-.35)(.7,0)(1.05,.35) \endpspicture = h \pspicture[.4](-.9,-.5)(.9,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(-.25,0){.5}{135}{225} \psline(-.6,-.35)(-.25,0)(-.6,.35)} \psline(-.6,-.35)(-.25,0)(-.6,.35) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc[linewidth=0](.25,0){.5}{315}{45} \psline(.6,.35)(.25,0)(.6,-.35)} \psline(.6,.35)(.25,0)(.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \endpspicture$$ reads algebraically as
the matrix equation $$W_I^2 = xh W_I,$$ which implies that $W_I$ as a linear operator has only two eigenvalues, namely 0 and $xh$. Moreover, given that $J$ is regular, the property that some linear combination of $A$, $N$, and $I$ has a quadratic minimal polynomial is equivalent to the property that $J$ is strongly regular [@Jaeger:spin], or 2-point regular as defined below. Thus, we see that the $B_2$ skein relations determine the parameters $x$ and $n$ of a checkerboard state model in terms of $q$, that they imply that the model is essentially determined by a certain graph $J$, and that they place strong restrictions on $J$. Finally, there is the relation $$\pspicture[.4](-1.2,-1.2)(1.2,1.2) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(0,0){1.15}{75}{105} \psline(1.15;75)(.8;90)(1.15;105) \psarc[liftpen=2](0,0){1.15}{195}{225} \psline(1.15;195)(.8;210)(1.15;225) \psarc[liftpen=2](0,0){1.15}{315}{345} \psline(1.15;315)(.8;330)(1.15;345) \psbezier[liftpen=2](.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90)} \psline(1.15;75)(.8;90)(1.15;105) \psline(1.15;195)(.8;210)(1.15;225) \psline(1.15;315)(.8;330)(1.15;345) \psline[doubleline=true](.4;90)(.8;90) \psline[doubleline=true](.4;210)(.8;210) \psline[doubleline=true](.4;330)(.8;330) \psbezier(.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90) \endpspicture = 0 \label{etriang}$$ To analyze it, we establish some conventions about graphs: In general, if $G$ is a graph, $V_G$ denotes the vertex set of $G$, and if $x,y \in V_G$, $xEy$ is the relation that $x$ and $y$ are connected by an edge. An injection $f:V_G \to V_H$ is edge-respecting means that $f(x)Ef(y)$ if and only if $xEy$. The graph $G$ is $n$-point transitive means that if $H$ is a full subgraph of $G$ with at most $n$ vertices, every edge-respecting injection $f:V_H \to V_G$ extends to an automorphism of $G$. 
More generally, $G$ is $n$-point regular means the following: For every $H$ with $k \le n$ vertices, for every $H'$ with $k+1$ vertices, for every edge-respecting map $f:V_H \to V_{H'}$, and for every edge-respecting map $g:V_H \to V_G$, the number of ways to complete the commutative diagram $$\begin{array}{cc} \\ \Rnode{a}{V_H} & \hspace{1cm} \Rnode{b}{V_G} \\[1cm] \Rnode{c}{V_{H'}} \end{array} \ncLine[nodesep=4pt,linecolor=black]{->}{a}{b}\Aput{g} \ncLine[nodesep=4pt,linecolor=black]{->}{a}{c}\Bput{f} \ncLine[nodesep=4pt,linecolor=black,linestyle=dashed,arrows=->]{c}{b}\Bput{h}$$ with an edge-respecting map $h$ depends only on $H$, $H'$, and $f$ and not on $g$. Clearly, if $G$ is $n$-point transitive, then it is $n$-point regular. Consider the numerical equalities implicit in equation (\[etriang\]); for any choice $(a,b,c)$ of three vertices of $J$, not necessarily distinct, the left side becomes a state sum by labelling the three outside regions by $a$, $b$, and $c$ and summing over the state of the inside region: $$\pspicture[.4](-1.2,-1.2)(1.2,1.2) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(0,0){1.15}{75}{105} \psline(1.15;75)(.8;90)(1.15;105) \psarc[liftpen=2](0,0){1.15}{195}{225} \psline(1.15;195)(.8;210)(1.15;225) \psarc[liftpen=2](0,0){1.15}{315}{345} \psline(1.15;315)(.8;330)(1.15;345) \psbezier[liftpen=2](.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90)} \psline(1.15;75)(.8;90)(1.15;105) \psline(1.15;195)(.8;210)(1.15;225) \psline(1.15;315)(.8;330)(1.15;345) \psline[doubleline=true](.4;90)(.8;90) \psline[doubleline=true](.4;210)(.8;210) \psline[doubleline=true](.4;330)(.8;330) \psbezier(.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90) \cput*(1.25;330){$a$} \cput*(1.25;90){$b$} \cput*(1.25;210){$c$} \endpspicture = 0 \label{ethree}$$ If $J$ is 3-point regular, this sum only depends on which of $a$, $b$ and $c$ are equal 
and which are connected by edges of $J$. If the graph $J$ of a checkerboard state model is 3-point regular, equation (\[etriang\]) is a corollary of the other checkerboard skein relations. (Conversely, Jaeger [@Jaeger:spin] proved that $J$ must be 3-point regular if all of the checkerboard skein relations hold.) For convenience, we define a new type of strand, denoted by dashes, as a linear combination of other webs: $$\pspicture[.4](-.9,-.7)(.9,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psline(.6,-.35)(.25,0)(.6,.35) \psarc(0,0){.695}{30.256}{149.744} \psline(-.6,.35)(-.25,0)(-.6,-.35) \psarc(0,0){.695}{-149.744}{-30.256}} \psline[border=.05,linestyle=dashed](-.25,0)(.25,0) \psline(-.6,-.35)(-.25,0)(-.6,.35) \psline(.6,.35)(.25,0)(.6,-.35) \endpspicture = {\pspicture[.4](-.6,-.5)(.6,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psbezier(.5;135)(.25;135)(.25;45)(.5;45) \psarc(0,0){.5}{45}{135} \psbezier[liftpen=2](.5;315)(.25;315)(.25;225)(.5;225) \psarc(0,0){.5}{225}{315}} \psbezier(.5;45)(.25;45)(.25;135)(.5;135) \psbezier(.5;225)(.25;225)(.25;315)(.5;315) \endpspicture}- \frac{1}{x}{\pspicture[.4](-.6,-.5)(.6,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psbezier(.5;315)(.25;315)(.25;45)(.5;45) \psarc(0,0){.5}{45}{135} \psbezier(.5;135)(.25;135)(.25;225)(.5;225) \psarc(0,0){.5}{225}{315}} \psbezier(.5;45)(.25;45)(.25;315)(.5;315) \psbezier(.5;225)(.25;225)(.25;135)(.5;135) \endpspicture}- \frac{1}{h}{\pspicture[.4](-.9,-.7)(.9,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psline(.6,-.35)(.25,0)(.6,.35) \psarc(0,0){.695}{30.256}{149.744} \psline(-.6,.35)(-.25,0)(-.6,-.35) \psarc(0,0){.695}{-149.744}{-30.256}} \psline(-.6,-.35)(-.25,0)(-.6,.35) \psline(.6,.35)(.25,0)(.6,-.35) \psline[doubleline=true](-.25,0)(.25,0) \endpspicture}$$ This strand has its own weight matrix $W_D$ which is a linear combination of $W_H$, $N$, and $I$; the weights are chosen so that $W_D(a,b) = 1$ if $a$ and $b$ are distinct but not connected by an edge of 
$J$ and $W_D(a,b) = 0$ if they are connected by an edge. Then each case of equation (\[ethree\]) can be converted to a statement about a state sum of a web on the sphere. For example, if $a$, $b$, and $c$ form an anti-triangle and $J$ has $t$ anti-triangles, then the left side of equation (\[ethree\]) differs by a factor of $t$ from the state sum of the graph $$\pspicture[.4](-2,-2)(2,2) \pscustom[fillstyle=crosshatch]{{\code{/fill {eofill} def /clip {eoclip} def}}\psbezier(.4;90)(.3;130)(.3;170)(.4;210) \psbezier(.4;210)(.3;250)(.3;290)(.4;330) \psbezier(.4;330)(.3;10)(.3;50)(.4;90) \psbezier[liftpen=2]( .8; 90)(1.3;110)(1.5;130)(1.5;150) \psbezier(1.5;150)(1.5;170)(1.3;190)( .8;210) \psbezier( .8;210)(1.3;230)(1.5;250)(1.5;270) \psbezier(1.5;270)(1.5;290)(1.3;310)( .8;330) \psbezier( .8;330)(1.3;350)(1.5; 10)(1.5; 30) \psbezier(1.5; 30)(1.5; 50)(1.3; 70)( .8; 90) \psarc[liftpen=2](0,0){2}{0}{360}} \psline[doubleline=true](.4;90)(.8;90) \psline[doubleline=true](.4;210)(.8;210) \psline[doubleline=true](.4;330)(.8;330) \psline[border=.05,linestyle=dashed](1.5;150)(2;150) \psline[border=.05,linestyle=dashed](1.5;270)(2;270) \psline[border=.05,linestyle=dashed](1.5; 30)(2; 30) \pscircle(0,0){2} \endpspicture = 0$$ Let $w$ be either this web or its counterpart from one of the other cases of equation (\[ethree\]). Given that the $B_2$ skein relations are consistent, and given that $w$ certainly does vanish modulo the skein relations together, it suffices to show that $w$ is a multiple of the empty web modulo all skein relations other than equation (\[etriang\]); the coefficient is then necessarily zero. Finally, since $w$ has at most four black regions, it satisfies this condition by Lemma \[lseven\]. Any colored $B_2$ web $w$ on the sphere with at most seven black regions is proportional to the empty web using only skein relations other than equation (\[etriang\]). \[lseven\] The proof is by induction on the number of vertices. 
Following the usual description of the $B_2$ spider [@Kuperberg:spiders], we assign formal angles of 45, 135, and 135 degrees to each vertex: $$\pspicture(-.6,-.5)(.6,.5) \psline(-.2,0)(-.2,-.2)(0,-.2) \qline(-.5,0)(0,0)\qline(0,-.5)(0,0) \psline[doubleline=true](0,0)(.5,.5) \rput[rb](-.1,.1){$135^\circ$} \rput[lt](.1,-.1){$135^\circ$} \endpspicture \hspace{1cm} \pspicture(-.6,-.5)(.6,.5) \psline(.2;135)(.2828;180)(.2;225) \qline(.5;45)(.5;225)\qline(.5;135)(.5;315) \endpspicture$$ We use these formal angles to define a non-standard Euler characteristic of each face of $w$ with the property that the total Euler characteristic is 2. (It should not be confused with the ordinary Euler characteristic used above for normalization.) It may be defined for a face as $1 - \frac{A}{360^\circ}$, where $A$ is the sum of all exterior angles, and an exterior angle at a given vertex is defined as $180^\circ$ minus the usual interior angle. Alternatively, we may recall that the Euler characteristic of a face or a vertex is 1 and that of an edge is $-1$, and then modify this definition by dividing the Euler characteristic of an edge equally between the two faces that contain it and, at each vertex, giving the right-angled faces $1/4$ of the characteristic of the vertex and the other faces $3/8$ each. This second definition makes clear that the total is still 2. Note also that the Euler characteristic of any face is a multiple of $1/4$. Therefore $w$ has either a white face with positive Euler characteristic or a black face whose Euler characteristic is at least 1/2 (otherwise, since $w$ has at most seven black faces, the total characteristic would be at most $7 \cdot \frac{1}{4} = \frac{7}{4} < 2$).
Modulo webs with fewer vertices, we can apply the exchange equation (\[eexchange\]) to all sides of such a face which are type 2 strands, for example: $$\begin{aligned} \pspicture[.5](-1.2,-.7)(1.2,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psline(-.85,-.6)(-.5,-.25) \psbezier(-.5,-.25)(-.2,-.55)(.2,-.55)(.5,-.25) \psline(.5,-.25)(.85,-.6) \psarc(.25,0){.849}{315}{45} \psline(.85,.6)(.5,.25) \psbezier(.5,.25)(.2,.55)(-.2,.55)(-.5,.25) \psline(-.5,.25)(-.85,.6) \psarc(-.25,0){.849}{135}{225}} \psline(-.85,-.6)(-.5,-.25) \psbezier(-.5,-.25)(-.2,-.55)(.2,-.55)(.5,-.25) \psline(.5,-.25)(.85,-.6) \psline(.85,.6)(.5,.25) \psbezier(.5,.25)(.2,.55)(-.2,.55)(-.5,.25) \psline(-.5,.25)(-.85,.6) \psline[doubleline=true](-.5,-.25)(-.5,.25) \psline[doubleline=true](.5,-.25)(.5,.25) \endpspicture &{\hspace{.5cm}\pspicture[.5](0,-.1)(1,.1) \psline[linecolor=black]{->}(0,0)(1,0) \endpspicture\hspace{.5cm}}\pspicture[.5](-1.2,-.7)(1.3,.7) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(-.25,0){.849}{135}{225} \psline(-.85,-.6)(-.5,-.25) \psbezier(-.5,-.25)(-.2,-.55)(.15,-.26)(.3,0) \psbezier(.3,0)(.15,.26)(-.2,.55)(-.5,.25) \psline(-.5,.25)(-.85,.6)} \psline(-.85,-.6)(-.5,-.25) \psbezier(-.5,-.25)(-.2,-.55)(.15,-.26)(.3,0) \psbezier(.3,0)(.15,.26)(-.2,.55)(-.5,.25) \psline(-.5,.25)(-.85,.6) \psline[doubleline=true](-.5,-.25)(-.5,.25) \psline[doubleline=true](.3,0)(.7,0) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(.7,0){.5}{315}{45} \psline(1.05,.35)(.7,0)(1.05,-.35)} \psline(1.05,.35)(.7,0)(1.05,-.35) \endpspicture \\ &{\hspace{.5cm}\pspicture[.5](0,-.1)(1,.1) \psline[linecolor=black]{->}(0,0)(1,0) \endpspicture\hspace{.5cm}}\pspicture[.5](-1.3,-.5)(1.3,.5) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(-.7,0){.5}{135}{225} \psline(-1.05,-.35)(-.7,0)(-1.05,.35)} \psline(-1.05,-.35)(-.7,0)(-1.05,.35) \pscustom[fillstyle=crosshatch,linestyle=none]{ \psarc(.7,0){.5}{315}{45} \psline(1.05,.35)(.7,0)(1.05,-.35)} \psline(1.05,.35)(.7,0)(1.05,-.35) 
\psline[doubleline=true](-.7,0)(-.3,0) \pscustom[fillstyle=crosshatch]{ \psbezier(-.3,0)(-.15,.26)(.15,.26)(.3,0) \psbezier(.3,0)(.15,-.26)(-.15,-.26)(-.3,0)} \psline[doubleline=true](.3,0)(.7,0) \endpspicture\end{aligned}$$ This operation does not change the Euler characteristic of the face. The face can then be simplified by one of the other skein relations, for it has at most three sides if it is white and at most two sides if it is black. There are only two known graphs $J$ that satisfy all of the above conditions, namely the pentagon and the Higman-Sims graph. The Higman-Sims graph is a graph on $n=100$ vertices whose symmetry group contains the Higman-Sims sporadic simple group as a subgroup of index two [@HS:group]. The symmetry group acts 3-point transitively, so that the graph is 3-point regular. We claim that there is a corresponding state model that satisfies the $B_2$ skein relations with $q^{1/2} = \tau$ (the golden ratio), which implies that $\ell = -10$ and $h = -5$. Clearly $n = \ell^2$. The graph has no triangles and each vertex has valence $22 = \frac{x-n}{h}$, since $x = \ell$. The eigenvalues of its adjacency matrix $A$ are 22, 2, and -8; the image of $N$ is the unique line with eigenvalue 22. Therefore the eigenvalues of $W_I = hA + N - xI$ are $-110+100+10 = 0$, $-10+10=0$, and $40+10 = 50$, which implies $W_I^2 = xhW_I$, as desired. Discussion ========== The properties of the Higman-Sims state model imply a number of mysterious numerological connections between the Higman-Sims group ($HS$) and the (quantum) representation theory of ${\mathrm{sp}}(4)$. In this discussion, “representation” will in general mean a finite-dimensional linear representation.
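The parameter values and the eigenvalue bookkeeping above are easy to verify numerically. The following sketch (an illustration in plain Python, not part of the argument) uses only the quoted spectrum $\{22, 2, -8\}$ of the Higman-Sims adjacency matrix, together with the fact that $N$ acts as $n$ on the all-ones vector and as $0$ elsewhere:

```python
# Numerical sanity check of the state-model parameters at q^{1/2} = tau,
# tau the golden ratio, using only the spectrum {22, 2, -8} of the
# Higman-Sims adjacency matrix.
tau = (1 + 5 ** 0.5) / 2
q = tau ** 2
h = -(q + 2 + 1 / q)                    # evaluates to -5
l = -(q ** 2 + q + 1 / q + 1 / q ** 2)  # evaluates to -10
x, n = l, l * l                         # x = l and n = l^2 = 100
# N acts as n only on the all-ones vector, the A-eigenvector for 22;
# pairs are (A-eigenvalue, N-eigenvalue) on a common eigenspace.
spectrum = [(22, n), (2, 0), (-8, 0)]
eigs_WI = [h * a + nn - x for a, nn in spectrum]   # eigenvalues of W_I = hA + N - xI
assert [round(e) for e in eigs_WI] == [0, 0, 50]
assert all(abs(e * e - x * h * e) < 1e-6 for e in eigs_WI)  # W_I^2 = xh W_I
```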
The smallest representations of ${\mathrm{sp}}(4)$ are $V(\lambda_1)$, the defining 4-dimensional representation, $V(\lambda_2)$, the 5-dimensional representation that identifies ${\mathrm{sp}}(4)$ with ${\mathrm{so}}(5)$, and $V(2\lambda_1)$, the 10-dimensional representation which is the symmetric tensor square of $V(\lambda_1)$. In quantum representation theory, representations have a quantum dimension, which is a natural generalization of the non-quantum or honest dimension. (It also coincides with the character of a certain circle in the non-quantum representation theory and appears in some proofs of the Weyl dimension formula [@Bourbaki:lie].) The quantum dimensions of these representations are $$\begin{aligned} \dim_q V(\lambda_1) &= q^2+q+q^{-1}+q^{-2} \\ \dim_q V(\lambda_2) &= q^3 + q + 1 + q^{-1} + q^{-3} \\ \dim_q V(2\lambda_1) &= q^4+q^3+q^2+q+2+q^{-1}+q^{-2}+q^{-3}+q^{-4}\end{aligned}$$ At the same time, the quantum dimension is the value of a closed loop in the $B_2$ spider; in the first two cases the loop is a type 1 or 2 strand, and in the third case it is a dashed loop. By the existence of the Higman-Sims state model, these three numbers must also be 10, the square root of the number of vertices of the Higman-Sims graph; 22, the degree of a vertex; and 77, the anti-degree of a vertex. Now the Higman-Sims graph has a special duality realized by switching colors in the state model, and this duality tells us that 22 and 77 must also be dimensions of representations of $HS$; as it happens, the two smallest irreducible representations. Thus we learn that quantum dimensions of representations of $U_{q=\tau^2}({\mathrm{sp}}(4))$ coincide with honest dimensions of representations of $HS$. This pattern extends to all representations of $U_{q=\tau^2}({\mathrm{sp}}(4))$, except that the corresponding representations of $HS$ are eventually not irreducible. 
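These coincidences can be checked directly; the sketch below (an illustration, not part of the argument) evaluates the three quantum dimension polynomials at the Higman-Sims value $q = \tau^2$, with $\tau$ the golden ratio:

```python
# Quantum dimensions of the three smallest sp(4) representations,
# evaluated at the Higman-Sims value q = tau^2 (tau the golden ratio).
from math import isclose

tau = (1 + 5 ** 0.5) / 2
q = tau ** 2
dim_V1  = q**2 + q + 1/q + 1/q**2                                      # V(lambda_1)
dim_V2  = q**3 + q + 1 + 1/q + 1/q**3                                  # V(lambda_2)
dim_V21 = q**4 + q**3 + q**2 + q + 2 + 1/q + 1/q**2 + 1/q**3 + 1/q**4  # V(2 lambda_1)

assert isclose(dim_V1, 10, abs_tol=1e-9)   # sqrt(100): number of vertices of the HS graph
assert isclose(dim_V2, 22, abs_tol=1e-9)   # degree of a vertex
assert isclose(dim_V21, 77, abs_tol=1e-9)  # anti-degree of a vertex
```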
In fact, the $B_2$ spider can be understood as (the Hom spaces of) the representation category of $U_q({\mathrm{sp}}(4))$, and the Higman-Sims state model establishes a functor from (the even half of) the representation category of $U_{q=\tau^2}({\mathrm{sp}}(4))$ to the representation category of $HS$. Such a functor would exist if there were an algebra homomorphism from the group algebra ${\mathbb{C}}[HS]$ to the quantum group $U_{q=\tau^2}({\mathrm{sp}}(4))$, but this possibility is nonsense (as the referee mentioned), because it would relate honest dimensions to honest dimensions and not quantum dimensions to honest dimensions. But perhaps one can construct $HS$ from $U_{q=\tau^2}({\mathrm{sp}}(4))$ using this functor. As a warm-up to this problem, one can try to construct a relationship between the quantum group $U_q({\mathrm{sl}}(2))$ and the symmetric group $S_n$ on $n = q + 2 + q^{-1}$ letters, which is the symmetry group of the Potts model. The Potts model relates these objects in the same way that the Higman-Sims model relates $U_{q=\tau^2}({\mathrm{sp}}(4))$ and $HS$. It would be especially interesting if one could construct not only the Higman-Sims group but also other sporadic simple groups using quantum groups at special values of $q$. Besides its numerology, the Higman-Sims state model also has the following interesting aspect. The most common axiomatic description of a spider, which is a collection of web spaces for disks with different boundaries but with the same skein relations, is that it is a kind of monoidal category, or a braided category if crossings exist. However, another interesting point of view is that a spider is a certain kind of $2$-category with only one $0$-morphism (or “object”), with a $1$-morphism for every choice of boundary, and such that each web is a $2$-morphism [@BD:higher0]. In this setting, a collection of checkerboard skein modules has a suggestive definition also, namely as a 2-category with two 0-morphisms. 
The author would like to thank Vaughan Jones, Pierre de la Harpe, and François Jaeger for fruitful discussions, as well as the reviewer and John Baez for useful remarks and corrections. The author used the TeX macro package PSTricks [@pstricks] to typeset the equations and figures. [10]{} John C. Baez and James Dolan, *Higher-dimensional algebra and topological quantum field theory*, J. Math. Phys. **36** (1995), no. 11, 6073–6105. Nicolas Bourbaki, *Groupes et algèbres de [Lie]{}*, vol. 4–6, Hermann, Paris, 1968. Pierre de la Harpe, *Spin models for link polynomials, strongly regular graphs, and [Jaeger’s]{} [Higman-Sims]{} model*, Pacific J. Math. **162** (1994), no. 1, 57–96. Pierre de la Harpe and Vaughan F. R. Jones, *Graph invariants related to statistical mechanical models: examples and problems*, J. Combin. Theory Ser. B **57** (1993), no. 2, 207–227. Donald G. Higman and Charles C. Sims, *A simple group of order 44,352,000*, Math. Zeitschr. **105** (1968), 110–113. François Jaeger, *Strongly regular graphs and spin models for the [Kauffman]{} polynomial*, Geom. Dedicata **44** (1992), no. 1, 23–52. Vaughan F. R. Jones, *On knot invariants related to some statistical mechanical models*, Pacific J. Math. **137** (1989), no. 2, 311–334. Louis H. Kauffman, *An invariant of regular isotopy*, Trans. Amer. Math. Soc. **318** (1990), no. 2, 417–471. Greg Kuperberg, *The quantum [$G_2$]{} link invariant*, Internat. J. Math. **5** (1994), no. 1, 61–85. [to3em]{}, *Spiders for rank 2 [Lie]{} algebras*, Comm. Math. Phys. **180** (1996), no. 1, 109–151. *[PSTricks]{}*, http://www.tug.org/applications/PSTricks/. [^1]: The author was supported by NSF grant \#DMS-9423300.
--- abstract: 'Content-Centric Networking (CCN) introduces a paradigm shift from a host-centric to an information-centric communication model for Future Internet architectures. It supports the retrieval of a particular content regardless of its physical location. Content caching and content delivery networks are the most popular approaches to deal with the inherent issues of content delivery on the Internet that are caused by its design. Moreover, intermittently connected mobile environments or disruptive networks present a significant challenge to CCN deployment. In this paper, we consider the possibility of using mobile users to improve the efficiency of content delivery. Mobile users produce a significant fraction of the total internet traffic, and modern mobile devices have enough storage to cache downloaded content that may, for a short period, also interest other mobile users. We present an analytical model of the content centric networking framework that integrates a Delay Tolerant Networking (DTN) architecture into the native CCN, and we present large scale simulation results. Caching on mobile devices can improve the content retrieval time by more than 50$\%$, while the fraction of the requests that are delivered from other mobile devices can be more than 75$\%$ in many cases.' author: - '\' bibliography: - 'biblio.bib' title: Boosting the Performance of Content Centric Networking using Delay Tolerant Networking Mechanisms --- Introduction ============ Today’s Internet architecture relies on the fundamental assumption that there exists an end-to-end path between the source and destination during the communication session. However, the vast majority of Internet usage is dominated by content distribution and retrieval involving a large amount of digital content, and this makes the conventional Internet architecture inefficient.
In response, Information-Centric Networking (ICN) [@xylomenos2014survey] emerges as a paradigm shift from a host-centric to an information-centric communication model. It supports the retrieval of a particular content without any reference to the physical location of that content: *named data*, rather than a host address, is the central element of ICN communication. When a node needs content, it sends a request for that particular content, and any node on the route of the request that holds the content in its content store replies with it. The main argument for this architectural shift is that named data provide a better abstraction than named hosts. Among all the ICN proposals, the Content Centric Networking (CCN) architecture [@jacobson2009networking] is gaining more and more interest for its architectural design. CCN supports two types of messages: *Interest* and *Data*. Each CCN node maintains three data structures: the *Content Store (CS), Pending Interest Table (PIT) and Forwarding Information Base (FIB)*. CCN communication is consumer driven, i.e., a consumer sends an *Interest* packet towards the content source based on the information stored in the FIB. When a node receives an interest, it checks its local cache for matching content and, if found, replies with a *Data* packet. Otherwise, the node forwards the *Interest* packet to the interface(s) indicated by the FIB, until the *Interest* packet reaches a content source that can satisfy the interest. Intermediate nodes store the interests in the PIT so that the data can be sent back to the proper requester. In addition, the PIT is used to suppress the forwarding of duplicate interests over the same interface and provides response aggregation. CCN interests that are not satisfied within a reasonable amount of time are retransmitted: as CCN senders are stateless [@jacobson2009networking], the consumer is responsible for re-expressing unsatisfied interests.
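For concreteness, the per-node processing just described can be sketched as follows. This is a minimal illustration in Python, not the reference CCNx implementation: the class and method names are ours, and longest-prefix FIB matching is elided for brevity.

```python
from collections import defaultdict

class CCNNode:
    """Minimal sketch of the per-node CCN state: CS, PIT, and FIB."""

    def __init__(self, fib):
        self.cs = {}                 # Content Store: name -> data
        self.pit = defaultdict(set)  # Pending Interest Table: name -> arrival faces
        self.fib = fib               # Forwarding Information Base: name -> outgoing face

    def on_interest(self, name, face):
        if name in self.cs:          # cache hit: answer with a Data packet
            return ("data", name, self.cs[name], face)
        if name in self.pit:         # aggregate: suppress duplicate forwarding
            self.pit[name].add(face)
            return None
        self.pit[name].add(face)     # record the pending interest
        out = self.fib.get(name)     # longest-prefix match elided for brevity
        return ("interest", name, out)

    def on_data(self, name, data):
        faces = self.pit.pop(name, set())  # consume the PIT entry
        self.cs[name] = data               # opportunistically cache the content
        return faces                       # faces to send the Data back on
```

A brief trace of the intended behaviour: a first interest for `/news` is forwarded via the FIB and recorded in the PIT, a second interest for the same name is aggregated, the arriving data is sent back on both recorded faces and cached, and a later interest is served directly from the CS.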
An intermittently connected network topology, or network disruption in general, poses a significant challenge for ICN deployment. For instance, name resolution may fail due to network disruptions, especially when the elements of the distributed resolution services are affected by network partitioning. Delay-tolerant networking (DTN) [@cerf2007delay] architectures have been proposed for such scenarios, which are characterised by long-delay paths, frequent unpredictable disconnections, and network partitions. Such architectures provide flexible and resilient protocols that build an opportunistic network on top of existing underlying Layer 2 and Layer 3 protocols. This is achieved through asynchronous communication along with the use of underlying Convergence Layer Adapters (CLAs), such as TCP, UDP, or Bluetooth. DTN is based on store-carry-and-forward models that utilise persistent storage distributed in the network: data are cached in the network and are available for opportunistic transmissions. In particular, content-based routing has been explored in DTN architectures [@costa2006adaptive]. The multitude of network interfaces in modern mobile devices allows DTN mechanisms to work in parallel with conventional ones; for instance, mobile users who are connected to the internet via the cellular interface can also use the WiFi-Direct interface to exchange messages with their neighbours. DTN architectures thus assimilate properties of ICN architectures and vice versa. ![The examined ecosystem that combines Content Centric and Delay Tolerant architectures. []{data-label="fig:scenario"}](arch.pdf){width="\columnwidth"} In this work, we adapt DTN mechanisms to the ICN architecture in order to improve the efficiency of the content retrieval procedure of mobile users. In more detail, we consider the scenario where mobile users request *content* via a CCN mechanism.
These requests can be of many types, such as a single piece of data (e.g., a request to Google Maps for a map of the user's current location), a data stream (e.g., the homepage of a news website), or a specific type of information (e.g., open restaurants close to the user). Some requests can be served more effectively by the cellular network, but there are cases, like the third type of request, that can be served locally. Such requests can potentially be served more effectively by nearby devices, or by cellular towers that have cached the requested content because another user requested it earlier. We consider the case where content is cached in the cellular towers but can also be requested from other nearby devices that have stored it. To achieve this, we modified the PIT table of the native CCN to operate on an opportunistic network: the modified PIT table stores the pending requester(s) information. The motivation behind this change is that the original PIT table of CCN keeps track of the arrival interfaces of the *Interest* packets, which is not feasible in a highly dynamic network. The contributions of this work can be summarised in the following list: - We explain the modifications required in the conventional CCN mechanism for it to be functional in a DTN environment. - We propose a Content Centric DTN network architecture for mobile devices and introduce the required modifications to the native CCN so that it can bridge with the Content Centric DTN protocol. The Content Centric DTN protocol operates independently of existing DTN routing protocols, i.e., DTN routing protocols run on top of the Content Centric DTN protocol. While designing our proposed architecture, we leverage the inherent properties of the CCN and DTN architectures [@cerf2007delay].
- We discuss the ways in which a mobile user can receive requested content and show that the download time of a content item can be decreased significantly via caching in the cellular access points and in other mobile devices. - We show that the underlying routing protocol does not have a substantial effect on the download time of contents, due to the limited number of hops in the DTN. Figure \[fig:scenario\] depicts the examined scenario, where at any time mobile users are connected to the cellular network and can potentially communicate directly with other mobile users, depending on the distance between them and the underlying framework for device-to-device communication. All contents are stored in an origin server, which is located in a cloud infrastructure, and can be cached at cellular access points and mobile devices. Depending on the placement and the number of cellular access points, the proportion of content receptions from other mobile users differs significantly and, as we can see from our large scale simulations (Section \[sec:evaluation\]), mobile users are able to successfully handle the requested contents under various routing schemes. After discussing the related work in the next section, we provide a more detailed explanation of the examined ecosystem in Section \[sec:model\]. Next, in Section \[sec:alg\], we present the proposed protocol, which is evaluated in Section \[sec:evaluation\]. Finally, in Section \[sec:concl\] we conclude the paper and list our future work. Related Work ============ There exist numerous research efforts [@asadi2014survey] on device-to-device (D2D) communication in cellular networks, often based on the publish/subscribe paradigm, where D2D is defined as direct communication between two mobile users without an intervening Base Station (BS) or core network. This concept was first proposed in [@lin2000multihop].
Although D2D, from an architectural perspective, seems similar to Mobile Ad-hoc Networks, the key difference between the two is the involvement of the cellular access point. Casetti *et al.* [@casetti2015content] present content-centric routing in a D2D architecture based on Wi-Fi Direct. Their content-centric routing is based on two data structures: the PIT of the native CCN and a Content Routing Table (CRT), which provides the routing information to reach the content items. However, it is not feasible to maintain the CRT and the PIT in dynamic networks where mobile users have intermittent connectivity. In contrast, our proposed scheme uses a modified PIT that stores the requester(s) information instead of the arrival face of the original CCN, so that the reverse path can differ from the forwarding path of the *Interest* packets. More recently, Garcia *et al.* [@garcia2016ifip] have concluded that *Interest* aggregation should not be an integral component of Content-Centric Networks and propose far smaller and more efficient forwarding data structures (e.g., CCN-DART [@garcia2016light]). Another similar effort has been proposed in [@helgason2010mobile], which allows wireless content dissemination between mobile nodes without relying on infrastructure support. That architecture is based on the publish/subscribe paradigm, and its focus is mainly on implementation aspects based on 802.11 in ad-hoc mode. In contrast, our architecture is based on the CCN and DTN architectures, and hence there are many architectural differences between their effort and our proposal. Most recently, Liu et al.
[@liu2017information] present a detailed treatment of content routing in ICMANETs: they describe a conceptual model for content routing, categorize content routing into proactive, reactive and opportunistic types, and analyze representative schemes, which can serve as a reference for the study of the joint optimization of content routing and caching in ICMANETs. There are also several research efforts in the DTN environment [@tysontowards; @trossen2016towards]. In [@tysontowards], the author investigates the possibility of integrating the ICN and DTN principles into a shared ICDTN architecture. Combining ICN and DTN has also been demonstrated in a recent effort, the RIFE architecture [@trossen2016towards]. RIFE is a universal communication architecture that combines the publish/subscribe-based POINT architecture [@point] and the DTN through a number of handlers for existing IP-based protocols (e.g., HTTP, CoAP, basic IP), which are mapped onto appropriate named objects within the ICN core; the IP endpoints are connected through the ICN using a gateway. In contrast, our proposed model exploits the DTN architecture within the native CCN architecture, which results in a Multihop Cellular Network (MCN) [@lin2000multihop]. Amadeo et al. [@amadeo2016information] have discussed the potential of the ICN paradigm as a networking solution for connected vehicles: they summarize the ICN-VANET literature, present the open challenges in this area, and show that the native design principles of ICN match well the main distinctive features of VANETs and the targeted wide set of future vehicular applications. The authors of [@saxena2017implementation] present IP-based DTN routing mechanisms using CCN on a sparsely-connected real vehicular testbed and validate the performance and usability of CCN over VANET.
However, their proposed schemes do not consider forwarding loops and duplicates at the content level while operating on IP-based routing mechanisms. Our proposed model operates independently of DTN routing and can detect duplicates and forwarding loops at the content level. User-centric data dissemination in DTNs has been widely explored from various points of view [@sourlas2015information; @costa2008socially; @yoneki2007socio; @lu2014information]. The authors of [@gao2011user] proposed a user-assisted in-network caching scheme, where users who request, download, and keep a content contribute to in-network caching by sharing their downloaded content with other users in the same network domain. Sourlas *et al.* [@sourlas2015information] proposed an information-resilience scheme in the context of Content-Centric Networks (CCN) for the retrieval of content in disruptive, fragmented networks, relying on the in-network caching of the attached users. The proposed scheme enhances the Named Data Networking (NDN) router design as well as the *Interest* forwarding mechanisms so that users can retrieve cached content when the content origin is not reachable. To achieve this, the authors introduce a new table, called the Satisfied Interest Table (SIT), which keeps track of the Data packets that are forwarded to users. In case the content origin is not reachable, the proposed scheme exploits the caches of the other users by following the SIT entries. However, that scheme performs well only if the users listed in the SIT entries are connected. In [@anastasiades2016information], the authors present agent-based content retrieval on top of CCN, which provides information-centric DTN support as an application module without modifications to CCN message processing. However, their scheme may suffer from a PIT bottleneck in a delay-tolerant environment. In contrast, our proposed scheme exploits the opportunistic communication of mobile users using DTN mechanisms.
From a social-based point of view, the authors of SocialCast [@costa2008socially] proposed a routing framework that exploits the social ties among users for effective relay selection, while Yoneki et al. [@yoneki2007socio] discussed the design of a publish-subscribe communication overlay based on the distributed detection of social groups by means of centrality measures. These routing mechanisms can be complementary to our proposed scheme, which operates independently of any routing algorithm. Lu *et al.* [@lu2014information] used the K-means clustering algorithm to build a social-level forwarding scheme in order to reduce the number of transmitted messages. This approach has several inevitable limitations: *(i)* an interest may fail to reach an encountered node with the same social level that might have the content to satisfy it, *(ii)* a request from a higher social level will never reach a content provider with a lower social level, *(iii)* the proposed scheme cannot detect routing loops of the *Interest* packets, and *(iv)* the authors do not consider how to optimise similar interests from multiple users. These limitations are addressed in our solution. D2D communication highly depends on the participation of mobile users in sharing contents. Mobile users may be selfish and unwilling to forward data to others due to limited resources (e.g., memory, battery power). To handle this issue, a number of incentive mechanisms [@noura2016survey; @jaimes2015survey; @zhang2016incentives] have been proposed to motivate users to cooperate. D2D is still immature and faces many technical challenges and issues regarding aspects such as device discovery, relay selection, security and interference mitigation. The authors of [@jethawa2017incentive] present an incentive mechanism for data-centric message delivery in DTNs that exploits social relationships.
This mechanism prevents users from becoming selfish and motivates them to relay the most popular content. Such incentive mechanisms are complementary to our proposed model and can be applied on top of our solution. In this work, we assume that all mobile users participate in a cooperative way.

System Model {#sec:model}
============

Preliminaries
-------------

We analyse a CCN architecture where mobile users issue requests for named data contents $c \in \mathcal{C}$. We consider a set of mobile users $\mathcal{M}$ that roam in a large-scale metropolitan area and produce requests for content[^1]. A set of Cellular Access Points (CAPs) $\mathcal{A}$ is deployed in the area; we assume that at any time $t$ every mobile device $m \in \mathcal{M}$ is associated with one cellular access point, which we denote by $m_{a}(t) \in \mathcal{A}$, while any CAP $a$ has $\mathcal{N}_{a}(t)$ mobile devices associated with it at time $t$. Each CAP $a$ also operates as a CCN node by being connected to the fixed network, and it maintains a *Pending Interest Table* $P_a$ and a *Forwarding Information Table* $F_a$. Part of its storage $S_a$ is also used for caching contents and works as a *Content Store*. The cache of each CAP is measured by the proportion of the total contents that it can store, $S_a = \alpha_{\mathcal{A}}|\mathcal{C}|$, $0< \alpha_{\mathcal{A}} \ll 1$. In addition to the three traditional tables used in CCN architectures, motivated by the work of [@sourlas2015information], we add a table that stores the satisfied interests. We denote that table by $D_a$. The entries of $D_a$ are of the form $<$[content, user, time]{}$>$ and work in a similar way to the PIT entries, with the difference that they record who has satisfied each interest. We also add a *Pending Requester Information Table (PRIT)*, which stores the requester information instead of the arrival interface.
PRIT is used when the CAP receives requests from the DTN interface. Any mobile node $m$ is able to communicate directly with its neighbours $\mathcal{N}_{m}(t)$, whose number depends on the mobility of the users and on the interface used for the connectivity between them[^2]. We also denote by $\mathcal{N}_{m}^{k}(t)$ the mobile users with whom $m$ is able to communicate in $k>1$ hops at time $t$. Similarly to the CAPs, each mobile node $m$ keeps three tables, a *Pending Requester Information Table* $P_m$, a *Forwarding Information Table* $F_m$ and a *Satisfied Interest Table* $D_m$, and has a *Content Store* $S_m$. The cache of each node is measured by the proportion of the total contents that it can store, $S_m = \alpha_{\mathcal{M}}|\mathcal{C}|$. We assume that the storage capabilities of the cellular access points are much higher than those of the mobile devices (e.g., some Terabytes compared to a few Megabytes), $0< \alpha_{\mathcal{M}} \ll \alpha_{\mathcal{A}} \ll 1$. Table \[tab:notation\] contains the introduced notation[^3].

Problem Formulation
-------------------

Each mobile user $m \in \mathcal{M}$ requests contents $c \in \mathcal{C}$ at rate $r_{m}^{c}$. We use the vector $\textbf{r}^{c} \in \mathcal{R}_{+}^{|\mathcal{M}|}$ to denote the request rates of all the mobile users for content $c$, and the zero norm[^4] of $\textbf{r}^{c}$, $||\textbf{r}^{c}||_{0}$, to indicate the number of mobile users that are requesting content $c$. The request rate may depend on multiple factors, but in this work we consider only the popularity of the content, $\pi_c$, and the profile of the user, $u_m$, which indicates the probability of a mobile user requesting each content. We denote the profiles of all users with the vector $\textbf{u}$.
The request rate of content $c$ by user $m$ is thus given by: $$r_{m}^{c} = u_m \cdot \pi_c$$ The service rate of an interest expressed by a user $m$ for a content $c$ depends on the popularity of the content $\pi_c$ and on the content placement strategy, which will be explained in detail in Section \[sec:alg\]. An interest in a content from a mobile user can be served in three ways:

### Core Network

The mobile device, via the cellular network, sends the *Interest* packet and the content is retrieved in the traditional CCN way from the Content Store of any intermediate node or from the server of origin, where the content was initially placed upon its creation. At any time $t$ there exists at least one node that has the required content. In this case, the service rate of content $c$ is denoted by $s_{N_{1}}^{c}$ and depends on the popularity of the content, the characteristics of the network (load, bandwidth, etc.) and the caching policy (e.g., LRU, FIFO, LFU), captured by $c_{N_{1}}$. Without loss of generality we assume: $$s_{N_{1}}^{c}(t) = c_{N_{1}}(t) \cdot \pi_c$$

### Cellular Access Point

The mobile device downloads the cached content from the cellular tower because another user requested the content earlier. Given that the available cache of each cellular tower is limited compared to the storage size of the server of origin, the cached contents are limited ($\alpha_{\mathcal{A}}|\mathcal{C}|$) but, depending on the caching policy, can achieve a high hit rate due to the popularity distribution of the contents and their spatial skewness [@Fayazbakhsh:2013:LPM:2486001.2486023]. In this case, we denote the service rate by: $$s_{N_{2}}^{c}(t) = \alpha_{\mathcal{A}} \cdot c_{N_{2}}(t) \cdot \pi_c$$

### Delay Tolerant Network

The mobile device gets the content from another mobile device via a single-hop or a multi-hop path.
The number of hops depends on *(i)* the physical distance between the users, *(ii)* the number of the users and *(iii)* the popularity of the content; popular contents are more likely to be found close to the user who initiated the request. Although mobile devices are not able to cache many content items, the social relationships between mobile users, who with high probability have similar mobility patterns, make it likely that two socially close mobile users express interest in similar items [@hui2011bubble]. In this case, we denote the service rate by: $$s_{N_{3}}^{c}(t) = \alpha_{\mathcal{M}} \cdot c_{N_{3}}(t) \cdot \pi_c \cdot \prod_{m \in \mathcal{N}_{m}(t)}u_{m}$$

  $\mathcal{C}$ — set of available contents
  $\mathcal{M}$ — set of mobile users
  $\mathcal{A}$ — set of cellular access points
  $\alpha_{\mathcal{X}}$ — cache capacity of mobile users $\mathcal{M}$ or CAPs $\mathcal{A}$
  $\mathcal{N}_{x}(t)$ — mobile users accessible by user $m$ or CAP $a$ at time $t$
  $P_x$ — Pending Interest Table of user $m$ or CAP $a$
  $F_x$ — Forwarding Information Table of user $m$ or CAP $a$
  $S_x$ — Content storage of user $m$ or CAP $a$
  $D_x$ — Satisfied Interest Table of user $m$ or CAP $a$
  $\pi_c$ — popularity of content $c$
  $u_m$ — content request profile of mobile user $m$
  $r_{m}^{c}$ — request rate of content $c$ from mobile user $m$
  $s_{N_{y}}^{c}(t)$ — service rate of content $c$ at time $t$ through network $N_{y}$

We employ a Markov process $\{ X_{c}(t), 0 \leq t < \infty \}$ with stationary transition probabilities, where $X_c(t)$ is the number of nodes in the whole ecosystem (mobile users, cellular access points, the server of origin, as well as network components, such as switches, that are part of the CCN ecosystem) that have the content $c$ in their caches.
If at any time $\tilde{t}$, $X_{c}(\tilde{t}) = 0$, the content is not available at all, which can be true only in the case of a very unpopular item that is not cached in any node, while the server of origin is not accessible because of network partitioning. Although this is not realistic, it allows us to treat the Markov process as a birth-death process with a single absorbing state, which we define to be $X_{c}(t) = 0$, and then use the absorption-time formula [@taylor2014introduction], which includes the cost parameters for each type of network, as an objective function to optimise. In more detail [@taylor2014introduction]: $$T_{n}^{c} = \sum_{i=1}^{\infty} \frac{1}{\lambda_{i}^{c} \rho_{i}^{c}} + \sum_{k=1}^{n-1}\rho_{k}^{c} \sum_{j=k+1}^{\infty}\frac{1}{\lambda_{j}^{c} \rho_{j}^{c}}, \quad \text{if} \quad \sum_{i=1}^{\infty}\frac{1}{\lambda_{i}^{c} \rho_{i}^{c}} <\infty,$$ and $T_{n}^{c} = \infty$ if $\sum_{i=1}^{\infty}\frac{1}{\lambda_{i}^{c} \rho_{i}^{c}} = \infty$, where $\lambda_{n}^{c}$ is the birth rate of the process at state $n$, $\mu_{n}^{c}$ is the death rate, and $$\rho_{n}^{c} = \prod_{i=1}^{n} \frac{\mu_{i}^{c}}{\lambda_{i}^{c}}$$ The birth rate of the process at state $n$ for content $c$, $\lambda_{n}^{c}$, depends on the request rates of the users for the examined item, $r_{m}^{c}$: $$\lambda_{n}^{c} \sim \sum_{ \substack{ m \in \mathcal{M} \\ || \textbf{r}^{c}||_{0} = n}} r_{m}^{c}$$ while the death rate depends on the type of the service rate and on the caching policies. The time required for the Markov process to reach the absorption state depends on the initial state and on the difference between the service rates and the request rates. The service rate depends on the probability of a content being placed close to the mobile users that generate requests for it.
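The absorption-time expression above can be evaluated numerically by truncating the infinite sums. A minimal sketch, assuming constant birth and death rates for illustration (the function name, the `terms` truncation depth, and the rate choices are ours, not part of the model):

```python
def absorption_time(n, lam, mu, terms=500):
    """Mean time to absorption T_n for a birth-death process with absorbing
    state 0, birth rates lam(i) and death rates mu(i); the infinite sums in
    the formula are truncated after `terms` terms."""
    # rho[k] = prod_{i=1..k} mu_i / lam_i, with rho[0] = 1
    rho = [1.0]
    for i in range(1, terms + 1):
        rho.append(rho[-1] * mu(i) / lam(i))
    inv = [0.0] + [1.0 / (lam(i) * rho[i]) for i in range(1, terms + 1)]
    head = sum(inv[1:])                                   # sum_i 1/(lam_i rho_i)
    tail = sum(rho[k] * sum(inv[k + 1:]) for k in range(1, n))
    return head + tail

# Sanity check: for constant rates lam < mu, T_1 equals 1/(mu - lam).
t1 = absorption_time(1, lam=lambda i: 1.0, mu=lambda i: 2.0)  # ≈ 1.0
```

The truncation is meaningful only when the convergence condition stated in the formula holds, i.e., when the death rates dominate the birth rates.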
The probability of a content $c$ being cached in the CAP with which mobile user $m$ is associated at time $t$, $m_{a}(t)$, is: $$p_{m_{a}}^{c} (t)\coloneqq P[c \in S_{m_{a}(t)}]$$ and, assuming independent cache states, the probability of $c$ being stored in at least one of $m$’s neighbours is $$p_{\mathcal{N}_{m}}^{c}(t) \coloneqq 1 - \prod_{j \in \mathcal{N}_{m}(t)} P[c \notin S_{j}(t)],$$ while, for the nodes $K$ hops away from $m$, the probability of $c$ being cached is: $$p_{\mathcal{N}_{m}^{K}}^{c}(t) \coloneqq 1 - \prod_{j \in \mathcal{N}_{m}^{K}(t)} P[c \notin S_{j}(t)].$$ So the probability of a mobile user not being able to retrieve $c$ from the access point that he or she is associated with, nor from any mobile user whose distance is at most $K$ hops, is [^5]: $$p_{m}^{c}(K,t) \coloneqq 1 - p_{m_{a}}^{c} (t) - p_{\mathcal{N}_{m}}^{c}(t) - \sum_{k=2}^{K} p_{\mathcal{N}_{m}^{k}}^{c}(t).$$ We also define the probability of a content $c$ being cached in the cellular access point $a$ or in at least one of the mobile devices that are associated with $a$: $$\label{eq:prob_c_in_a} p_{a}^{c}[\mathcal{N}_{a}(t),t] = 1 - P[c \notin S_{a}(t)]\cdot \prod_{j \in \mathcal{N}_{a}(t)} P[c \notin S_{j}(t)]$$ The size of the content storage in the CAPs and mobile devices, and more specifically the proportion of the total items they can store, is what affects $p_{m}^{c}(K,t)$ and $p_{a}^{c}[\mathcal{N}_{a}(t),t]$. Another determinant parameter is the number of mobile users that are associated with the same access point as the user that requested a content item and, consequently, the diversity of the subset of objects that are cached in all these devices. We denote by $\mathcal{C}_{a}(t) \subset \mathcal{C}$ the set of content items that are cached in at least one device that is accessible from CAP $a$, or are cached in $a$ itself. Then equation (\[eq:prob\_c\_in\_a\]) can be expressed shortly as $P[c \in \mathcal{C}_{a}(t)]$.
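Equation (\[eq:prob\_c\_in\_a\]) is straightforward to compute from per-node miss probabilities. A small sketch, assuming independent cache states (the function and variable names are illustrative, not part of the model):

```python
from math import prod

def p_content_in_cell(p_miss_cap, p_miss_devices):
    """P[c in C_a(t)]: probability that content c is cached in CAP a or in
    at least one of the mobile devices associated with a, given the per-node
    miss probabilities P[c not in S] and assuming independence."""
    return 1.0 - p_miss_cap * prod(p_miss_devices)

# Example: CAP misses with prob. 0.5, two devices each miss with prob. 0.5
p = p_content_in_cell(0.5, [0.5, 0.5])   # 1 - 0.5 * 0.25 = 0.875
```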
Next, in Section \[sec:alg\], we present a protocol that determines which contents should be cached in each device and for how long. The protocol is designed to consider highly dynamic mobile users with limited resources, as well as the static access points that operate as the glue between the dynamic users and the fixed infrastructure.

![The cellular access points connect the CCN, with its static links between nodes, to the highly dynamic and unpredictable DTN.[]{data-label="sec:scenario"}](static_dynamic.pdf){width="\columnwidth"}

Protocol {#sec:alg}
========

The original design of CCN is based on the fact that multiple network interfaces can be integrated via the mechanism of the Forwarding Information Base (FIB) [@jacobson2009networking]. Each entry of the FIB points to a list of interfaces that can be used to forward *Interest* packets towards the desired content producer. At this point, the traditional CCN can be combined with DTN network protocols, as presented in Figure \[sec:scenario\]. The integration of the DTN architecture with the native CCN architecture results in a Multihop Cellular Network (MCN) [@lin2000multihop]. The general concept of an MCN comprises a cellular network in which user devices can communicate with each other either via the conventional cellular mode or via direct D2D communication, if they are mutually reachable. To enable this paradigm, the functionalities of the proposed protocol are decomposed into three parts:

**(1) The control plane**, which performs packet (*Interest/Data*) management. The control plane is implemented on top of the DTN mechanisms, and its functionalities are responsible for performing specific actions based on the packet type (*Interest/Data*). To achieve this, the control plane inserts the meta-information into the DTN messages.
**(2) The forwarding plane**, which consists of two parts: depending on the type of the node, it can be either the native *CCN forwarding* or the *DTN forwarding (store-carry-and-forward)*. This module provides an interface between the CAP and the mobile nodes, so that the CAP can hand over a packet to a mobile node. The mobile node exploits the DTN architecture to forward the packets in a D2D fashion while operating in an opportunistic network, without the intervention of the cellular network. The CAP includes a separate PIT-like table, the Pending Requester Information Table (PRIT), which stores the requester(s) information instead of the arrival faces of the *Interest* packets. The mobile nodes only use our proposed architecture while operating in a DTN environment, i.e., the control plane is implemented on top of the DTN forwarding plane and enables the host-centric DTN to perform in a content-centric fashion. To bridge between CCN and DTN, each message carries meta-information of the CCN mechanism that assists the content-centric operation in the DTN environment.

**(3) The routing decision engine**, i.e., the process by which one router sends packets to another router by means of routing protocols, which decide the appropriate path for a packet. The routing protocol assists the router in choosing the best path out of many. The routing decision engine operates independently, on top of our proposed model.

The proposed protocol deals with two control decisions:

1. *Request/Response Processing*: Although the CAPs operate as conventional CCN nodes regarding the forwarding and the routing of *Interest* and *Response* packets, this is not the case for mobile users. Whenever a mobile user or a CAP receives a content request or a content response, the question is what actions should be taken.

2. *Content Management*: Given that a mobile user or a CAP has a content item, should it store it in its Content Store or drop it?
The CAPs have higher storage capabilities than the mobile users, but they still cannot cache all the available contents.

Request Processing
------------------

In relatively static CCNs, the *Interest* packets are propagated upstream towards the potential data sources, leaving a trail of *bread crumbs* for the matching Data packets to follow back to the original requester(s). In dynamic environments, on the other hand, the nodes are mobile and the connections are intermittent, which means that it is not feasible to keep track of the changes in the network topology. Unlike the conventional PITs of CCN, mobile users keep the address information of the requester(s) in the *Pending Requester Information Table (PRIT)*, so that they can forward matching content towards the potential requester(s). The PRIT is also used to detect forwarding loops and to aggregate similar interests. Mobile users exploit the *Satisfied Request Information Table (SRIT)* to remember the satisfied interests of the requester(s), so that they can point to potential content sources for similar interests in the future. By doing so, an intermediate node having an SRIT entry matching an *Interest* packet can forward the *Interest* packet towards those potential content provider(s). A CAP acts similarly to a mobile node if it receives the *Interest* packet from the DTN interface; nevertheless, if the CAP has an FIB entry for this *Interest*, it can also apply the native CCN mechanism. An overview of the request processing is presented in Algorithm \[interest\].
On a mobile node:
  $key \leftarrow [Interest]$; $content \leftarrow Cache(key)$
  if the content is found: $response \leftarrow createResponse(content)$; $requester \leftarrow [Interest]$; send the $response$ to the $requester$ following the PRIT, or following the PIT breadcrumb
  else: $satisfied\_req\_provider \leftarrow lookup\_SRIT(Interest)$; if found, send the $Interest$ to the $satisfied\_req\_provider$
  $pending\_requester \leftarrow lookup\_PRIT(Interest)$; if the same requester is already pending, drop the $Interest$ packet; otherwise add the $requester$ to the PRIT table and forward the $Interest$ to the next hop

On a CAP:
  $FIB\_entry \leftarrow native\_CCN\_mechanism(Interest)$
  if the native mechanism does not serve the request: $satisfied\_req\_provider \leftarrow lookup\_SRIT(Interest)$; if found, send the $Interest$ to the $satisfied\_req\_provider$
  $pending\_requester \leftarrow lookup\_PRIT(Interest)$; if the same requester is already pending, drop the $Interest$ packet; otherwise add the $requester$ to the PRIT table and forward the $Interest$ to a mobile user

On the reception of an *Interest* packet, a mobile node initially searches its Content Store and, if there is no match, the node checks its SRIT to verify whether any entry matches the *Interest* packet. If a match is found, the node forwards the request towards those potential content provider(s) from the SRIT. The node also enters the *Interest* packet in the PRIT. The PRIT is used to keep track of the IDs [^6] of the interest creator(s), which are used as destinations in the response packets. In more detail, upon the reception of an *Interest* packet the mobile node checks its PRIT: if there is an older entry for the same content, it updates the entry only if the requester is different, otherwise it drops the *Interest* packet. On the other hand, the CAP first applies the native CCN mechanism, i.e., it searches its Content Store to verify whether it can satisfy the request. If there is a match, the CAP sends the content back to the requester. If no match is found, the CAP forwards the request further, based on the information of the FIB.
The CAP can also forward the request to a mobile node, which runs our proposed architecture. Before forwarding the request to the mobile node, the CAP stores the requester information in the PRIT, but only if it received the request from the DTN interface. Regardless of the total number of users, our proposal does not spread the *Interest* packets all over the ecosystem: this would be inefficient and is not worthwhile, since the mobile nodes submit their requests in parallel both to the CAPs they are connected to and to their neighbouring mobile devices. More importantly, the respective CAPs inform the mobile nodes whether another mobile node in the same cell has the requested content and, depending on the level of assistance from the CAPs, as discussed in the next section, the mobile nodes can receive the requested content either via a multi-hop-but-short path from another node in the same cell, or via a two-hop path with the help of the CAP. So, as shown in Figure \[fig:downloadpaths\], a request can be served in four ways: *(A)* from the Content Store of the associated CAP, *(B)* via the associated CAP, which retrieves the content from the conventional CCN network, *(C)* from another mobile node that sends the content via a multi-hop path among the other mobile nodes, and *(D)* from another mobile node that sends the content to the CAP, which then forwards it to the requester.
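The mobile-node request-processing steps can be sketched in a few lines. This is an illustrative model only: the `Node` container, the return labels, and the single-forward handling of the SRIT branch are our simplifications, not an actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    cache: dict = field(default_factory=dict)  # Content Store
    srit: dict = field(default_factory=dict)   # Satisfied Request Info Table
    prit: dict = field(default_factory=dict)   # Pending Requester Info Table
    out: list = field(default_factory=list)    # packets handed to the DTN layer

def on_interest(node, name, requester, next_hop="next-hop"):
    """Mobile-node handling of an incoming Interest (cf. Algorithm [interest])."""
    if name in node.cache:                       # 1. Content Store hit: answer
        node.out.append(("response", requester, node.cache[name]))
        return "served"
    if name in node.srit:                        # 2. known past provider
        node.out.append(("interest", node.srit[name], name))
        return "to_provider"
    pending = node.prit.setdefault(name, set())  # 3. PRIT: duplicate/loop check
    if requester in pending:
        return "dropped"                         # same requester again: drop
    pending.add(requester)                       # aggregate and forward
    node.out.append(("interest", next_hop, name))
    return "forwarded"
```

A repeated *Interest* from the same requester is dropped, while a different requester is added to the existing PRIT entry, matching the aggregation rule described above.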
At the destination node:
  $destination\_id \leftarrow [Response]$; $content\_provider \leftarrow [Response]$
  insert the $content\_provider$ in the SRIT table; notify the application
  $key \leftarrow [Response]$; $pending\_requester \leftarrow lookup\_PRIT(key)$
  if no other pending requester exists, drop the packet; otherwise forward the $response$ to the $pending\_requester$

At an intermediate mobile node:
  $key \leftarrow [Response]$; $content\_provider \leftarrow [Response]$
  insert the $content\_provider$ in the SRIT table
  $pending\_requester \leftarrow lookup\_PRIT(key)$
  if no match is found, forward the $response$ to the next best contact; otherwise forward the $response$ to the $pending\_requester$

At a CAP: follow the native CCN mechanism

Response Processing
-------------------

Algorithm \[responsealg\] presents an overview of the response processing on a network node. When the *Interest* packet reaches a node having content matching the *Interest* packet, the node constructs a response packet with the content and sends it back to the originator of the request. If the intermediate node is a mobile node, the node checks the PRIT and removes the entry if there is a match for the response packet. If the PRIT entry holds information on multiple requesters, the intermediate node adds all the source IDs of those requesters to the response packet as meta-information. If the intermediate node does not find any match in the PRIT, it simply forwards the response packet to the next best contact. Subsequently, when the response packet reaches the target node, the node checks the meta-information to verify whether there is any other pending requester who requested this content. If there is no pending requester information, the recipient node drops the packet to avoid further transmission by the DTN mechanism. Otherwise, if the node finds other pending requesters, it forwards the response to them: if the meta-information lists multiple pending requesters, the node sets one requester as the destination address of the response and keeps the other requester(s) as meta-information.
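The mobile-node part of the response processing can be sketched similarly; plain dictionaries stand in for the SRIT and PRIT, and the returned tuples (decision label, next destination, remaining meta-information) are illustrative conventions of ours:

```python
def on_response(srit, prit, name, provider, meta_requesters, is_destination):
    """Mobile-node handling of a Response (cf. Algorithm [responsealg]).
    `meta_requesters` carries the other pending requesters as meta-information."""
    srit[name] = provider                  # remember who satisfied this interest
    if is_destination:
        # deliver to the application; keep forwarding only if the
        # meta-information names other pending requesters
        if not meta_requesters:
            return ("deliver", None, [])
        return ("deliver_and_forward", meta_requesters[0], meta_requesters[1:])
    requesters = prit.pop(name, None)      # intermediate node: consult the PRIT
    if not requesters:
        return ("forward_next_contact", None, [])
    # the first pending requester becomes the destination,
    # the rest ride along as meta-information
    return ("forward", requesters[0], requesters[1:])
```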
If the intermediate node is a CAP, it checks both the PIT and the PRIT in order to forward the response in the appropriate manner: if the CAP finds a match in its PIT, it follows the native CCN mechanism, while a match in the PRIT triggers our proposed scheme.

![The four potential ways via which a mobile user can get the requested content item. []{data-label="fig:downloadpaths"}](deliveryCases.pdf){width="\columnwidth"}

Content Management
------------------

Mobile nodes can provide storage memory depending on their resource availability and policies. Using its storage memory, a mobile node can serve as a network medium to share content. This cache can also be used by store-carry-and-forward DTN protocols; however, the persistent storage of a DTN protocol keeps a message only until its successful delivery to the next best opportunistic contact, whereas, in our proposed architecture, the storage memory of a mobile node keeps the response packet in order to satisfy future requests. Only in the ideal case is the mobile node’s storage big enough to store all received content. Under the assumption that a mobile node can store only a small proportion of the total contents, a caching policy is required. Additional information, such as the popularity distribution of the contents and the request profiles of the mobile users, can be used by a caching policy to determine the probability of each content being requested by the user and, based on that, to decide whether a newly received content should be dropped or should replace an unpopular one when there is no available space in the mobile node. However, the CAP has information about the stored content in the whole cell that would otherwise remain unutilised.
We utilise this additional information by defining the expected retrieval cost of each content, combining: *(i)* the popularity of each content item, *(ii)* the profile of the users and *(iii)* the estimated time required to retrieve the content via one of the aforementioned ways. These costs are calculated with the assistance of the CAPs, which are able to recommend to a mobile node whether to keep an item or not.

Evaluation {#sec:evaluation}
==========

We evaluate our proposed architecture using the Opportunistic Network Simulator (ONE) [@keranen2009one]. The goal of our evaluation is to investigate the performance of our proposal in terms of *(i)* average end-to-end delay, *(ii)* service load, *(iii)* packet drop ratio and *(iv)* traffic split. Table \[tab:metrics\] contains a description of each of these metrics.

  Average end-to-end delay — the average time passed to receive a content in response to a request.
  Packet drop — the amount of packets (Interest/Data) that is effectively suppressed by the content router.
  Traffic split — the service rate, i.e., the amount of requests processed by the different types of nodes in the network: mobile user, CAP, content source.
  Service load — the amount of requests processed by the content provider.

The ONE simulator contains map data of the Helsinki downtown area (e.g., roads, tram routes and pedestrian walkways) and various map-based movement models: *(1)* Random Map-Based Movement, *(2)* Shortest Path Map-Based Movement, and *(3)* Routed Map-Based Movement. We employ the Shortest Path Map-Based Movement, since it is the more realistic one: the mobile users, after choosing a destination point on the map, follow the shortest path to that point from their current location. The destination point is chosen randomly from a list of Points of Interest (POI), which includes popular real-world destinations (e.g., shops, restaurants, tourist attractions).
The simulation area is approximately 20 km$^2$. In the simulation, we considered mobile users that are either walking at a speed in the range of 1.8 to 5.4 kilometres per hour, driving a car, or using the tram. We categorised the mobile users into two groups: *(i)* requesters and *(ii)* intermediate users. There were 10 requesters and 150 intermediate users. All of them were divided into four different groups and assigned different probabilities of choosing the next group-specific POI or a random place to visit. Regarding content generation, we considered content generated by 10 other mobile users or by non-mobile content generators (e.g., a news website). Apart from the mobile users, we also considered 30 CAPs with caching capabilities. None of the users had any content at the beginning of the simulation; whenever a requester issued a request to a CAP, the content was retrieved from the content provider in the cloud, if it had not already been cached from a previous request, and delivered to the requester. The simulation time was 5 days and we used the first day as a warm-up phase. All the details of the simulation parameters are listed in Table \[sec:SimulationParameters\].

  Parameter                               Value
  --------------------------------------- ------------------
  Simulation Duration                     5 days (432000 s)
  Number of Requesters                    10
  Time interval of generating Interests   5 min
  Number of Relay Nodes                   160
  Number of Access Points                 30
  Cache of mobile users                   10 items
  Cache of Access Points                  50 items
  TTL value                               500 s
  Transmission range of Access Points     100 m
  Transmission speed of Access Points     10 Mbps
  Transmission range of Mobile devices    10 m
  Transmission speed of Mobile devices    2.5 Mbps
: Simulation Parameters \[sec:SimulationParameters\]

DTN Routing {#sec:dr}
-----------

The content-centric functionalities of our proposal are routing independent, and for that reason we examine the performance of our proposal with four different routing strategies: *(i)* Epidemic [@vahdat2000epidemic], *(ii)* Spray-and-Wait [@spyropoulos2005spray], *(iii)* First Contact [@jain2004routing] and *(iv)* a hybrid one that works like Epidemic in the forwarding step until reaching the destination and like Spray-and-Wait in the reverse-path creation step. Epidemic routing has no limit on the number of copies generated for each message. In this routing scheme, each node carries a list of all messages whose delivery is pending. Whenever a node encounters another node, they exchange all messages that are not common to their lists. Spray-and-Wait generates a limited number of copies of every message and spreads them initially. If a node does not find the destination in the spray phase, it waits for the destination to perform a direct transmission. In our experiment, Spray-and-Wait generated 10 copies of every message in the spray phase. First Contact generates only one copy per message. The hybrid one sprayed *Interest* packets (limited to 10 copies) until the request reached the *content providers* and then used Spray-and-Wait routing to deliver the content back to the requester.

Query Distribution {#sec:dist}
------------------

We generate user interest based on the available contents $\mathcal{C}$, which we assume to be 1000 (i.e. $|\mathcal{C}| = 1000$). We assume that the $i$-th content $c_{i}$ is the $i$-th most popular one, i.e. $\pi_i \geq \pi_j$ $\forall i \leq j$. The users’ request profiles are randomly generated via the uniform distribution. Content popularity is correlated with user requests [@liu2005client] and follows the well-known Zipf distribution [@breslau1999web].
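A Zipf initialisation of the content popularities $\pi_i$ can be sketched as follows; this is an illustrative sketch, and the simulation's own initialisation may differ in details such as normalisation.

```python
# Sketch of initialising Zipf content popularities pi_i for a catalogue
# of N items: pi_i is proportional to 1/i^s and normalised to sum to 1,
# so the most popular item dominates and popularity decays with rank.

def zipf_popularity(n_items, s=1.0):
    weights = [1.0 / (i ** s) for i in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

pi = zipf_popularity(1000)  # one probability per content item
```

With exponent $s = 1$, the most popular item is requested ten times more often than the item of rank 10, which is why nearby caching pays off most under Zipf popularity.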
In this work we consider two cases for the content popularity when initialising $\pi_c\ \forall c \in \mathcal{C}$: *(i)* uniform and *(ii)* Zipf. For the Zipf distribution we set the exponent to 1 and the normalizing constant to 0.2.

Performance boost of Proposed Architecture
------------------------------------------

We measure the performance of our proposal using the metrics listed in Table \[tab:metrics\]. Figure \[fig:trafficSplit\] shows how the content requests are served for each type of DTN routing protocol; in practice, it shows how the hit rates of each content-provider type compare. Upon every request, the proposed mechanism uses all the possible ways in parallel in order to download the content as soon as possible. As we can see from both Figure \[fig:trafficSplitUn\] and Figure \[fig:trafficSplitZipf\], the content caches in the CAPs together with the caches in the mobile nodes can handle more than 90% of the requests. Only in the case of the Spray-and-Wait routing protocol are around 25% of the requests served by the content producer when the content popularity follows the uniform distribution, and around 20% when it follows the Zipf distribution. In order to measure the contribution of the CCN mechanisms and of the caches in the mobile users and in the CAPs, we implemented three simpler mechanisms and compare them with our proposal in Figure \[fig:load\]. The first one is a simple content search using the DTN mechanism, i.e., one that operates as a request-response application on top of the DTN routing protocols; it is denoted by *DTN*. The second one is an improved version of the first that adds content caches, each able to store 10 objects; it is denoted by *DTN with user caching*. The last one is the same as our proposal but without caching in the mobile users and is denoted by *Proposed Model without user Caching*.
As we can see from both Figure \[fig:loadUni\] and Figure \[fig:loadZipf\], our model loads the content providers less than the other competitors. It is worth mentioning that in the case of Epidemic routing the content provider is overloaded, because an unlimited number of copies of each request is generated until the request reaches the content provider. Next, we present in Figure \[fig:packetDrop\] the benefit of using CCN mechanisms in conjunction with the DTN routing protocols: they filter the requests and stop forwarding identical packets. We observe that if user caching is not used, the number of duplicate packets increases significantly. This happens because the request packets stay in the network longer to reach the potential content provider, and hence the communication overhead in terms of additional traffic (Interest/Data packets) increases. Our proposal detects those duplicates and drops them accordingly. For instance, in the case of the uniform distribution and multi-copy routing (e.g., Epidemic, Spray-and-Wait and Hybrid routing), we observe that duplicate packets are reduced by more than 50% in our proposal compared to our proposed model without user cache. This happens because these protocols produce multiple copies per request and each content has the same chance of being requested multiple times and of being found in a nearby user’s pending interest table. On the other hand, in the case of First Contact (single-copy routing), we observe a 27% duplicate-packet reduction. In the case of Zipf content popularity and First Contact, we observe a duplicate-packet reduction of more than 60%. This is because, without the user cache, the requests take a longer time to reach the potential content provider, whereas the other three DTN protocols can potentially reach the content provider faster than First Contact. We also examine the service load on mobile users, comparing our proposal with *DTN with user caching*.
Figure \[fig:userload\] shows that our model reduces the service load on the mobile users in all routing schemes. Especially in First Contact routing, the service load on the mobile user is significantly reduced, by 57$\%$, when the content popularity follows the uniform distribution. When Hybrid and Epidemic routing are used, the service load is reduced by 37$\%$ and 28$\%$, respectively. This is because First Contact generates a single copy of each request, whereas the others use multiple copies; multiple copies increase the probability of reaching the content provider faster. The service load is not significantly reduced (10$\%$) with Spray-and-Wait routing, due to its limited number of message copies. We observe that in the case of Zipf content popularity, the service-load reduction on mobile users with Spray-and-Wait routing (approximately 10$\%$) is almost the same as with Epidemic routing. Furthermore, we examine the changes in the average delay of the content retrieval in Figure \[fig:e2edelay\]. As expected, the delay decreases in most of the cases because of the caching mechanisms. Especially in the case of contents whose popularity follows the Zipf distribution, the contents are accessed faster because they are cached somewhere nearby. However, the delay can also increase in cases where there are few repeated requests for contents, as for the Spray-and-Wait routing protocol with contents that follow the uniform distribution (Figure \[fig:e2edelaySnW\]).

Conclusion and Future Work {#sec:concl}
==========================

In this paper, we investigated the possibility of using mobile users to improve the performance of content delivery. To this end, we explained the modifications required in the conventional CCN mechanism in order for it to be functional in a DTN environment. Furthermore, we presented a mathematical model of the content-centric networking framework that exploits the opportunistic communications among mobile users.
The proposed framework is implemented in the ONE simulator to evaluate the concept. The simulation results show that caching on mobile devices and cellular access points can improve the content retrieval time by more than 50$\%$, while the proportion of the requests that are delivered from other mobile devices can be more than 75$\%$ in many cases. Our next steps will focus on the development of caching policies and on various types of contents that are application dependent. Moreover, we plan to consider incentives that motivate mobile users to cooperate and store content for others.

[Hasan M A Islam]{} received his Bachelor degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology, the top-ranked university in Bangladesh, in 2008. In 2013, he received his M.Sc. degree in Networking and Services from the University of Helsinki. He is currently a doctoral candidate in the Department of Computer Science, Aalto University. His research interests include information-centric networking, communication network architectures, and network protocols.

[Dimitris Chatzopoulos]{} received his Diploma and his MSc in Computer Engineering and Communications from the Department of Electrical and Computer Engineering of the University of Thessaly, Volos, Greece. He is currently a PhD student at the Department of Computer Science and Engineering of The Hong Kong University of Science and Technology and a member of Symlab. His main research interests are in the areas of device-to-device ecosystems, mobile computing, mobile augmented reality and cryptocurrencies.

[Dmitrij Lagutin]{} received his M.Sc. (Tech) degree in 2005 from Helsinki University of Technology and a D.Sc. (Tech) degree in 2010 from Aalto University, Finland. He is currently working as a project manager and postdoctoral researcher in the EU Horizon 2020 POINT project.
Previously he worked as a researcher in several research projects at Helsinki University of Technology and Aalto University, including the EU FP7 PSIRP and PURSUIT projects. His research interests include network security and privacy, future network technologies, and the Internet of Things.

[Pan Hui]{} received his Ph.D. degree from the Computer Laboratory, University of Cambridge, and earned both his MPhil and BEng from the Department of Electrical and Electronic Engineering, University of Hong Kong. He is currently a faculty member of the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, where he directs the HKUST-DT System and Media Lab. He also serves as a Distinguished Scientist of Telekom Innovation Laboratories (T-labs) Germany and an adjunct Professor of social computing and networking at Aalto University, Finland. Before returning to Hong Kong, he spent several years at T-labs and Intel Research Cambridge. He has published more than 150 research papers and has some granted and pending European patents. He has founded and chaired several IEEE/ACM conferences/workshops, and has been serving on the organising and technical program committees of numerous international conferences and workshops, including ACM SIGCOMM, IEEE Infocom, ICNP, SECON, MASS, Globecom, WCNC, ITC, ICWSM and WWW. He is an associate editor for IEEE Transactions on Mobile Computing and IEEE Transactions on Cloud Computing, and an ACM Distinguished Scientist.

[Antti Ylä-Jääski]{} is the vice head of the Department of Computer Science, Aalto University. Prior to his current position, he was with Nokia in 1994-2004 in several research and research management positions, with a focus on the future Internet, mobile networks, applications, services and service architectures.
He has supervised 231 master's theses and 21 doctoral dissertations during his professorship at Aalto University in 2002-2015; he has published over 100 peer-reviewed articles and he holds several international patents. His current research interests are focused on mobile and cloud computing, crowdsourcing, and energy-efficient computing and communications.

[^1]: The terms user, node and device are used interchangeably depending on the context.

[^2]: Bluetooth has a coverage radius of some tens of meters, WiFi-Direct of a few hundred, and the soon-to-be-available LTE-Direct is expected to have a coverage radius of half a kilometer.

[^3]: To avoid listing the same variables for both mobile devices and CAPs we use $x$, $\mathcal{X}$ and $y$ (i.e. $x = \{m,a\}$, $\mathcal{X} = \{\mathcal{M},\mathcal{A}\}$ and $y=\{1,2,3\}$).

[^4]: The zero norm of a vector equals the number of its non-zero elements.

[^5]: Small values of $K$ are enough for successful content discovery [@Wang:2015:PUS:2810156.2810162].

[^6]: Without loss of generality we assume that the id of user $m$ is $m$.
--- abstract: 'We have reconsidered theoretical upper bounds on the scalar boson masses within the two-Higgs-doublet model (THDM), employing the well-known technical condition of tree-level unitarity. Our treatment provides a modest extension and generalization of some previous results of other authors. We present a rather detailed discussion of the solution of the relevant inequalities and offer some new analytic formulae as well as numerical values for the Higgs mass bounds in question. A comparison is made with the earlier results on the subject that can be found in the literature.' author: - | J. Hořejší, M. Kladiva\ Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics,\ Charles University, V Holešovičkách 2, CZ-180 00 Prague 8, Czech Republic date: 'October 12, 2005' title: 'Tree-unitarity bounds for THDM Higgs masses revisited' ---

Introduction
============

The two-Higgs-doublet model (THDM) of electroweak interactions is one of the simplest extensions of the Standard Model (SM). It incorporates two complex scalar doublets in the Higgs sector, but otherwise its structure is the same as that of the SM. Obviously, such a theory is rather appealing on purely aesthetic grounds: in view of the familiar doublet pattern of the elementary fermion spectrum, one can speculate that an analogous organizational principle might work for the “scalar Higgs matter” as well. Further, any Higgs sector built upon doublets only is known to preserve naturally the famous lowest-order electroweak relation $\rho=1$ (where $\rho=m_W^2/(m_Z^2 \cos^2\theta_W)$), which has been tested with good accuracy. On the phenomenological side, an important aspect of the THDM is that its Higgs sector may provide an additional source of CP violation; in fact, this was the primary motivation for introducing such a model in the early literature on spontaneously broken gauge theories in particle physics [@Lee:1973].
Of course, there is at least one more reason why the THDM has become popular[^1] during the last two decades or so: its Higgs sector essentially coincides with that of the minimal supersymmetric SM (MSSM), but the values of the relevant parameters are less restricted. The spectrum of physical Higgs particles within the THDM consists of five scalar bosons, three of them being electrically neutral (denoted usually as $h$, $H$ and $A^0$) and the other two charged ($H^\pm$). At present, some partial information concerning direct experimental lower bounds for the Higgs masses is available, coming mostly from the LEP data (cf. [@Experiment]). On the other hand, it is also interesting to know what could be possible theoretical limitations for the masses of the so far elusive Higgs particles within such a “quasi-realistic” model. For this purpose, some rather general methods have been invented, based mostly on the requirements of internal consistency of the quantum field theoretical description of the relevant physical quantities. One particular approach, which is perhaps most straightforward in this regard, relies on perturbative unitarity of the $S$-matrix. In its simplest form it is implemented at the lowest order, by imposing unitarity constraints on the tree-level amplitudes of a suitable set of scattering processes. Let us recall that this technique was originally developed by B.W. Lee, C. Quigg and H. Thacker (LQT), who employed it in their well-known analysis of the perturbative upper bound for the SM Higgs boson mass [@LQT]. The LQT method was subsequently applied also to electroweak models with extended Higgs sectors; some results can be found under refs. [@OldPapers], [@KKT], [@AAN]. In particular, the authors of the papers [@KKT], [@AAN] analyzed in this way a restricted version of the THDM with a CP-conserving Higgs sector and obtained slightly differing values of the bounds in question (due to slightly different implementations of the LQT method).
Recently, the issue of tree-unitarity constraints for THDM Higgs boson masses has been taken up again in the work [@Ginzburg:2003] (see also [@Ginzburg:2004], [@Ginzburg:2005]), where a rather general model involving CP violation has been considered; this seems to be another vindication of the persisting interest in the subject. The purpose of the present paper is to supplement and extend the existing results concerning the THDM Higgs mass upper bounds. We carry out a rather detailed analysis of a relevant set of inequalities that follow from the requirement of tree-level unitarity. In particular, the procedure of explicit solution of these constraints is discussed in considerable detail and, among other things, some results of the corresponding numerical calculations within a general THDM are presented. For the model without CP violation we were able to find a set of analytic expressions as well. Note that in this latter case, most of the calculational details are contained also in an earlier unpublished work by one of us (see [@diplomka]). Let us also remark that there is no substantial overlap of the material presented in [@Ginzburg:2003; @Ginzburg:2004; @Ginzburg:2005] with our results, so we believe that it makes sense to offer our detailed analysis as a contribution to the current literature on the particular problem in question. The plan of our paper is as follows: In Sect. \[sec:potential\] the THDM scalar potential and the scalar fields are described in some detail, in Sect. \[sec:LQT\] we summarize briefly the LQT method and its implementation within the THDM, and in Sect. \[sec:inequalities\] the relevant inequalities expressing the tree-unitarity constraints are examined. The main analytic results for the mass bounds in question are contained in sections \[sec:MaMpm\], \[sec:MhMH\], \[sec:Mlightest\], and Sect. \[sec:numeric\] contains numerical results obtained in the CP-violating case (where we have not been able to find analytic results).
The main results are summarized in Sect. \[sec:conclusion\].

THDM scalar potential {#sec:potential}
=====================

The most general scalar potential within the THDM that is invariant under $\SU2\times\U1$ can be written as (cf. [@Georgi] or [@Guide]) $$\begin{gathered} V(\Phi)=\lambda_1 \left( \Phi_1^\dg \Phi_1 - \tfrac{v_1^2}2 \right)^2 + \lambda_2 \left( \Phi_2^\dg \Phi_2 - \tfrac{v_2^2}2 \right)^2 + \lambda_3 \left( \Phi_1^\dg \Phi_1 - \tfrac{v_1^2}2 + \Phi_2^\dg \Phi_2 - \tfrac{v_2^2}2 \right)^2 + \\ \lambda_4 \left[ (\Phi_1^\dg \Phi_1) (\Phi_2^\dg \Phi_2) - (\Phi_1^\dg \Phi_2) (\Phi_2^\dg \Phi_1) \right] + \lambda_5 \left[ \Re(\Phi_1^\dg \Phi_2) - \tfrac{v_1 v_2}2 \cos\xi \right]^2 + \lambda_6 \left[ \Im(\Phi_1^\dg \Phi_2) - \tfrac{v_1 v_2}2 \sin\xi \right]^2 \label{eq:potential}\end{gathered}$$ Note that such a form involves CP violation, which is due to $\xi\neq0$ [@Guide]. It also possesses an approximate discrete $Z_2$ symmetry under $\Phi_2 \to -\Phi_2$; this is broken “softly”, by means of the quadratic term $$\begin{gathered} v_1 v_2 \left( \lambda_5 \cos\xi\, \Re(\Phi_1^\dg \Phi_2) + \lambda_6 \sin\xi\,\Im(\Phi_1^\dg \Phi_2) \right) = v_1 v_2\,\Re\left[ \left( \lambda_5 \cos\xi - \imag \lambda_6 \sin\xi \right) \Phi_1^\dg \Phi_2 \right] \end{gathered}$$ Let us recall that the main purpose of such an extra partial symmetry within the THDM is to suppress naturally the flavour-changing processes mediated by neutral scalar exchanges that could otherwise arise within the quark Yukawa sector [@Glashow:1976]. Note also that if such a symmetry were exact, there would be no CP violation in the Higgs sector of the considered model. For further remarks concerning the role of the $Z_2$ symmetry see e.g. [@Ginzburg:2003] and references therein.
As a quantitative measure of the $Z_2$ violation we introduce a parameter $\nu$, defined as $$\nu=\sqrt{\lambda_5^2 \cos^2\xi + \lambda_6^2 \sin^2\xi} \label{eq:nu}$$ (note that our definition of the $\nu$ differs slightly from that used in [@Ginzburg:2003].) The minimum of the potential occurs at $$\Phi_1= \frac1{\sqrt2}\doublet0{v_1}, \qquad \Phi_2= \frac1{\sqrt2}\doublet0{v_2} \e^{\imag\xi}$$ where we have adopted, for convenience, the usual simple choice of phases. Such a minimum determines the vector boson masses through the Higgs mechanism; in particular, for the charged $W$ boson one gets $m_W^2=\frac14 g^2 (v_1^2 + v_2^2)$, with $g$ standing for the $\SU2$ coupling constant. In a standard notation one then writes $v_1=v\cos\beta, v_2=v\sin\beta$, where $v$ is the familiar electroweak scale, $v=(G_F \sqrt{2})^{-1/2}\doteq 246 \GeV$, and $\beta$ is a free parameter. The THDM involves eight independent scalar fields: three of them can be identified with the would-be Goldstone bosons $w^\pm, z$ (the labelling is chosen so as to indicate that they are direct counterparts of the massive vector bosons $W^\pm, Z$ within an $R$-gauge) and the remaining five correspond to physical Higgs particles: the charged $H^\pm$ and the neutral ones $h, H, A^0$. We will now describe the above-mentioned Goldstone and Higgs bosons in more detail. To this end, let us start with a simple representation of the doublets, namely $$\Phi_1=\doublet{w_1^-}{\frac1{\sqrt{2}}(v_1 + h_1 + \imag z_1 )} \qquad \Phi_2=\doublet{w_2^-}{\frac1{\sqrt{2}}(\e^{\imag\xi}v_2 + h_2 + \imag z_2 )} \label{eq:param}$$ Of course, the scalar fields introduced in are in general unphysical; the $w_{1,2}^\pm$ are taken to be complex and the remaining ones real, but otherwise arbitrary. Note that an advantage of such a parametrization is that the form of the quartic interactions is then the same as in the CP-conserving case.
The proper Goldstone and Higgs fields are found through a diagonalization of the quadratic part of the potential . When doing it, a convenient starting point is a slightly modified doublet parametrization $$\Phi_1=\doublet{w_1^-}{\frac1{\sqrt{2}}(v_1 + h_1 + \imag z_1 )} \qquad \Phi_2=\doublet{w_2^{\prime-}}{\frac1{\sqrt{2}}(v_2 + h'_2 + \imag z'_2 )} \e^{\imag\xi} \label{eq:inaparam}$$ that is obtained from by means of the unitary transformation $h'_2=h_2\cos\xi + z_2\sin\xi$, $z'_2=z_2\cos\xi-h_2\sin\xi$ and $w_2^{\prime\pm}=\e^{-\imag\xi}w_2^\pm$. Next, the scalar fields in are rotated pairwise as $$\begin{gathered} \doublet{H'}{h'} = \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix} \doublet{h_1}{h'_2} \qquad \doublet{A'}{z} = \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix} \doublet{z_1}{z'_2} \\ \doublet{\zeta}{w} = \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix} \doublet{w_1}{w'_2} \end{gathered}$$ When the quadratic part of is recast in terms of the new variables, one finds out that the $z,w^\pm$ are massless Goldstone bosons and the $H^\pm$ represent massive charged scalars.
At this stage, the fields $h', H', A'$ are still mixed and their mass matrix reads $$\frac12 \left( \begin{smallmatrix} \s_{2\beta}^2 (\lambda_1 + \lambda_2) + \c_{2\beta}^2 \left( \c^2_{\xi} \lambda_5 + \s^2_{\xi} \lambda_6 \right) & \s_{2\beta} \left[ -2\c^2_{\beta} \lambda_1 + 2\s^2_{\beta} \lambda_2 + \c_{2\beta} \left(\c^2_{\xi}\lambda_5 + \s^2_{\xi}\lambda_6 \right) \right] & \frac12 \c_{2\beta}\s_{2\xi} \left( \lambda_6 - \lambda_5 \right) \\ \s_{2\beta} \left[ -2\c^2_{\beta} \lambda_1 + 2\s^2_{\beta} \lambda_2 + \c_{2\beta} \left(\c^2_{\xi}\lambda_5 + \s^2_{\xi}\lambda_6 \right) \right] & 4 \left[\c^4_{\beta}\lambda_1 + \s^4_{\beta}\lambda_2 + \lambda_3 + \c^2_{\beta}\s^2_{\beta} \left( \c^2_{\xi}\lambda_5 + \s^2_{\xi}\lambda_6 \right) \right] & \frac12 \s_{2\beta} \s_{2\xi} \left( \lambda_6 - \lambda_5 \right) \\ \frac12 \c_{2\beta} \s_{2\xi} \left( \lambda_6 - \lambda_5 \right) & \frac12 \s_{2\beta} \s_{2\xi} \left( \lambda_6 - \lambda_5 \right) & \s^2_{\xi}\lambda_5 + \c^2_{\xi}\lambda_6 \end{smallmatrix} \right) \label{eq:matrixm0}$$ By diagonalizing it, one gets the true Higgs bosons $h, H, A^0$. The operation of charge conjugation $C$ means the complex conjugation of these physical fields (i.e. not of those appearing in the parametrization ). However, we can employ the representation involving fields that are linear combinations of real variables without complex coefficients. Note that for $\xi=0$ (the CP-conserving case) the $A$ is a CP-odd Higgs boson ($A'$ = $A$ in such a case) and $H$, $h$ are CP-even. Such a statement is also true when $\xi=\pi/2$ and/or $\lambda_5=\lambda_6$; as we shall see later in this section, for these particular values of parameters there is again no CP violation in the potential .
For $\xi=0$ the Higgs boson masses can be calculated explicitly, and subsequently one can express the coupling constants $\lambda_i$ in terms of masses and a mixing angle defined through $$\doublet{h_1}{h_2} = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \phantom{-}\cos\alpha \end{pmatrix} \doublet hH \label{eq:defmhmH}$$ Let us now express the $\lambda_{1,2,3,4,6}$ in terms of the Higgs boson masses in the case $\xi=0$ (as we have only four distinct masses, we leave the $\lambda_5$ as a free parameter). One gets $$\begin{aligned} \lambda_4&= 2 v^{-2} m_\pm^2 \qquad \lambda_6 = 2 v^{-2} m_A^2 \qquad \lambda_3 = 2 v^{-2} \frac{s_\alpha c_\alpha}{s_\beta c_\beta} (m_H^2 - m_h^2) - \frac{\lambda_5}4 \\ \lambda_1&= \frac12 v^{-2} \left[ c_\alpha^2 m_H^2 + s_\alpha^2 m_h^2 - \frac{s_\alpha c_\alpha}{\tan\beta} (m_H^2 - m_h^2) \right] -\frac{\lambda_5}4 \left( \frac1{\tan^2 \beta} - 1 \right) \\ \lambda_2&= \frac12 v^{-2} \left[ s_\alpha^2 m_H^2 + c_\alpha^2 m_h^2 - s_\alpha c_\alpha \tan\beta\, (m_H^2 - m_h^2) \right] -\frac{\lambda_5}4 \left( \tan^2 \beta - 1 \right) \end{aligned} \label{eq:lambdatomasses}$$ Note also that the matrix of the quadratic form of the scalar fields is the Hessian of the potential at its minimum. The condition for the existence of a minimum is that the Hessian is positive definite, and this in turn means that the Higgs boson masses (squared) are positive. Finally, let us discuss briefly the particular cases $\xi=0$, $\xi=\pi/2$ and $\lambda_5=\lambda_6$. The case $\xi=0$ represents a model without CP violation within the scalar sector, as described in [@Guide]. The case $\xi=\pi/2$ can be analyzed easily in the parametrization ; using this, the potential can be viewed as the case $\xi=0$ with the change of notation $$\Phi'_1 = \Phi_1 \qquad \Phi'_2 = \imag \Phi_2 \qquad \lambda_5\leftrightarrow\lambda_6$$ Thus, the two cases are equivalent.
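The simplest of the mass relations quoted above for $\xi=0$ can be illustrated numerically as follows; this is a sketch under the stated conventions, with purely illustrative mass values that are not fits to data.

```python
# Illustrative check of the relations lambda_4 = 2 m_+-^2 / v^2 and
# lambda_6 = 2 m_A^2 / v^2 in the CP-conserving case (xi = 0).
# The mass values below are assumptions chosen for illustration only.

v = 246.0            # electroweak scale in GeV
m_charged = 400.0    # assumed charged Higgs mass in GeV
m_A = 300.0          # assumed A^0 mass in GeV

lambda_4 = 2.0 * m_charged**2 / v**2
lambda_6 = 2.0 * m_A**2 / v**2

# Inverting these relations, the masses scale as v * sqrt(lambda / 2),
# so any upper bound on the quartic couplings (such as the tree-unitarity
# constraint discussed below) translates directly into a mass bound.
```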
When $\lambda_6=\lambda_5$, the $\xi$-dependent part of the potential can be recast as $$\lambda_5 \left( \Re(\Phi_1^\dg \Phi_2) - \frac{v_1 v_2}{2} \cos{\xi} \right)^2 +\lambda_6 \left( \Im(\Phi_1^\dg \Phi_2) - \frac{v_1 v_2}{2} \sin{\xi} \right)^2 = \lambda_6 \left\lvert \Phi_1^\dg \Phi_2 - \frac{v_1 v_2}{2} \e^{\imag \xi} \right\rvert^2$$ The remaining terms do not depend on the relative phase between $\Phi_1$ and $\Phi_2$, so that the phase factor $\e^{\imag\xi}$ can be transformed away and one thus again has a CP-conserving case. A particular consequence of such an analysis is that for $\nu=0$ there can be no CP violation.

LQT method {#sec:LQT}
==========

For finding the upper bounds on the Higgs boson masses we will employ the well-known LQT method invented three decades ago [@LQT]. This method relies on imposing the condition of perturbative (in particular, tree-level) unitarity on an appropriate set of physical scattering processes. Within a renormalizable theory, the scattering amplitudes are “asymptotically flat”, i.e. they do not exhibit any power-like growth in the high-energy limit. However, the dominant couplings are typically proportional to the scalar boson masses and one can thus obtain useful technical constraints on their values. In the pioneering paper [@LQT] the method was applied to the minimal SM, and several groups of authors employed it subsequently within models involving an extended Higgs sector, in particular the THDM (cf. [@OldPapers], [@KKT], [@AAN]). The results of various authors differ slightly, so it perhaps makes sense to reconsider the corresponding calculation and present, for the sake of clarity, some additional technical details of the whole procedure. In the spirit of the LQT approach, our analysis is based on the condition of tree-level $S$-matrix unitarity within the subspace of two-particle states.
Instead of the unitarity condition used in the original paper [@LQT], we can adopt an improved constraint for the $s$-wave partial amplitude $\M_0$, namely $$\left|\Re\M_0\right| \le \frac12 \label{eq:podunitarity}$$ (cf. [@Marciano:1989ns]). Note that the tree-level matrix elements in question are real, and in the high-energy limit their leading contributions do not involve any angular dependence. Thus, the $\M_0$ generally coincides with the full tree-level (asymptotic) matrix element $\M$, up to a conventional normalization factor of $16\pi$ appearing in the standard partial-wave expansion. The effective unitarity constraint then becomes $$|\M| \le 8\pi \label{eq:osempi}$$ For an optimal implementation of the unitarity constraints we will consider the eigenvalues of the matrix $M_{ij}=\M_{i\to j}$, where the indices $i$ and $j$ label symbolically all possible two-particle states. Having in mind our primary goal, we take into account only binary processes whose matrix elements involve the Higgs boson masses in the leading order, in particular in the $O(E^0)$ terms. Invoking arguments analogous to those used in the original paper [@LQT], one can show that the relevant contributions descend from the interactions of Higgs scalars and longitudinal vector bosons. Using the equivalence theorem for longitudinal vector bosons and Goldstone bosons (see e.g. [@LQT], [@Equivalence]) one finds out, in accordance with the LQT treatment, that the only relevant contributions come from the amplitudes involving Higgs bosons and unphysical Goldstone bosons (that occur in an $R$-gauge formulation of the theory). It means that we will examine the above-mentioned matrix $M_{ij}$, including all two-particle states made of the scalars (both physical and unphysical) $w^\pm, z, H^\pm, A^0, H, h$.
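The resulting procedure, namely diagonalizing the matrix of leading two-particle amplitudes and requiring $|\M| \le 8\pi$ for every eigenvalue, can be sketched numerically as follows. This is a minimal illustration with assumed coupling values and a small toy block, not the full THDM scattering matrix constructed in the text.

```python
import math
import numpy as np

# Sketch of the tree-unitarity test: diagonalize a real symmetric
# (sub)matrix of leading s-wave amplitudes and require that every
# eigenvalue e satisfies |e| <= 8*pi.  The 2x2 block and the coupling
# values below are illustrative assumptions only.

def tree_unitarity_ok(amplitudes):
    eigenvalues = np.linalg.eigvalsh(amplitudes)   # symmetric matrix
    return bool(np.all(np.abs(eigenvalues) <= 8.0 * math.pi))

l3, l4, l5, l6 = 1.0, 2.0, 0.5, 0.5   # assumed quartic couplings
toy = np.array([[2*l3 - 0.5*l5 + 2.5*l6, l4],
                [l4,                     2*l3 + 0.5*l5 - 0.5*l6]])

# Small couplings easily satisfy the bound; scaling them up eventually
# violates it, which is how the bound limits the quartic couplings and,
# through the mass relations, the Higgs boson masses.
```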
It is not difficult to see that the leading terms in the individual amplitudes are determined by the direct (contact) quartic scalar interactions, while the triple vertices enter second order Feynman graphs and their contributions are suppressed by the propagator effects in the high energy expansion. As noted above, we will be mainly concerned with the eigenvalues of the two-particle scattering matrix. It means that for our purpose we can consider, equivalently, any unitary transformation of the matrix $M_{ij}$. In particular, it is more convenient to take, instead of the $M_{ij}$, a matrix consisting of the scattering amplitudes between the two-particle states made of the “particles” $w^\pm_a, z_a, h_a$ corresponding to the parametrization . The eigenvalues of this matrix can be found in the earlier paper [@AAN]. Matrix elements for the scattering processes corresponding to the two-particle states $(w_1^+ w_2^-, w_2^+ w_1^-,$ $ h_1 z_2, h_2 z_1, z_1 z_2, h_1 h_2)$ form the submatrix $$\bordermatrix{ &\m w_1^+ w_2^-&\m w_2^+ w_1^-&\m h_1 z_2&\m h_2 z_1&\m z_1 z_2&\m h_1 h_2 \cr\vbox{\hrule} \m w_1^+ w_2^- & 2 {{\lambda }_3} + \frac{{{\lambda }_5}}{2} + \frac{{{\lambda }_6}}{2} & 4 \left( \frac{{{\lambda }_5}}{4} - \frac{{{\lambda }_6}}{4} \right) & \frac{i } {2} {{\lambda }_4} - \frac{i }{2} {{\lambda }_6} & \frac{-i }{2} {{\lambda }_4} + \frac{i }{2} {{\lambda }_6} & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & \frac{-{{\lambda }_4}} {2} + \frac{{{\lambda }_5}}{2} \cr \m w_2^+ w_1^-& 4 \left( \frac{{{\lambda }_5}}{4} - \frac{{{\lambda }_6}}{4} \right) & 2 {{\lambda }_3} + \frac{{{\lambda }_5}}{2} + \frac{{{\lambda }_6}}{2} & \frac{-i }{2} {{\lambda }_4} + \frac{i }{2} {{\lambda }_6} & \frac{i }{2} {{\lambda }_4} - \frac{i }{2} {{\lambda }_6} & \frac{-{{\lambda }_ 4}}{2} + \frac{{{\lambda }_5}}{2} & \frac{-{ {\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} \cr \m h_1 z_2 & \frac{-i }{2} {{\lambda }_4} + \frac{i }{2} {{\lambda }_6} & \frac{i }{2} {{\lambda 
}_4} - \frac{i }{2} {{\lambda }_6} & 4 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_6}}{4} \right) & \frac{{{\lambda }_5}}{2} - \frac{{{\lambda }_6}}{2} & 0 & 0 \cr \m h_2 z_1 & \frac{i }{2} {{\lambda }_4} - \frac{i }{2} {{\lambda }_6} & \frac{-i }{2} {{\lambda }_4} + \frac{i }{2} {{\lambda }_6} & \frac{{{\lambda }_5}}{2} - \frac{{{\lambda }_6}}{2} & 4 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_6}}{4} \right) & 0 & 0 \cr \m z_1 z_2 & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 0 & 0 & 4 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_5}}{4} \right) & \frac{{{\lambda }_5}}{2} - \frac{{{\lambda }_6}}{2} \cr \m h_1 h_2 & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 0 & 0 & \frac{{{\lambda }_ 5}}{2} - \frac{{{\lambda }_6}}{2} & 4 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_5}}{4} \right) \cr }$$ with eigenvalues $$\begin{aligned} e_1 &= 2 \lambda_3 - \lambda_4 - \frac12 \lambda_5 + \frac52 \lambda_6 \\ e_2 &= 2 \lambda_3 + \lambda_4 - \frac12 \lambda_5 + \frac12 \lambda_6 \\ f_+ &= 2 \lambda_3 - \lambda_4 + \frac52 \lambda_5 - \frac12 \lambda_6 \\ f_- &= 2 \lambda_3 + \lambda_4 + \frac12 \lambda_5 - \frac12 \lambda_6 \\ f_1 &= f_2 = 2 \lambda_3 + \frac 12 \lambda_5 + \frac12 \lambda_6 \end{aligned}$$ Another submatrix is defined by means of the states $(w_1^+ w_1^-, w_2^+ w_2^-, \frac{z_1 z_1}{\sqrt 2}, \frac{z_2 z_2}{\sqrt 2}, \frac{h_1 h_1}{\sqrt 2}, \frac{h_2 h_2}{\sqrt 2})$; it reads $$\m\bordermatrix{ &\m w_1^+ w_1^-&\m w_2^+ w_2^-&\m \frac{z_1 z_1}{\sqrt 2}&\m \frac{z_2 z_2}{\sqrt 2}&\m \frac{h_1 h_1}{\sqrt2}&\m \frac{h_2 h_2}{\sqrt 2} \cr\m w_1^+ w_1^-&\m 4\left( {{\lambda }_1} + {{\lambda }_3} \right) &\m 2{{\lambda }_3} + \frac{{{\lambda }_5}}{2} + \frac{{{\lambda }_6}}{2} &\m {\sqrt{2}}\left( {{\lambda }_1} + {{\lambda }_3} \right) &\m {\sqrt{2}} \left( {{\lambda }_1} + {{\lambda }_3} \right) &\m 
{\sqrt{2}}\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}} \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) \cr\m w_2^+ w_2^-&\m 2{{\lambda }_3} + \frac{{{\lambda }_5}}{2} + \frac{{{\lambda }_6}}{2} &\m 4\left( {{\lambda }_2} + {{\lambda }_3} \right) &\m {\sqrt{2}}\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}} \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}} \left( {{\lambda }_2} + {{\lambda }_3} \right) &\m {\sqrt{2}}\left( {{\lambda }_2} + {{\lambda }_3} \right) \cr \frac{z_1 z_1}{\sqrt 2} &\m {\sqrt{2}} \left( {{\lambda }_1} + {{\lambda }_3} \right) &\m {\sqrt{2}}\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m 3 \left( \lambda_1 + \lambda_3\right) &\m \lambda_1 + \lambda_3 &\m \lambda_3 + \frac{{{\lambda }_5}}{2} &\m 2 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_6}}{4} \right) \cr \frac{z_2 z_2}{\sqrt 2}&\m {\sqrt{2}} \left( {{\lambda }_1} + {{\lambda }_3} \right) &\m {\sqrt{2}}\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m 2 \left( \frac{{{\lambda }_1}}{2} + \frac{{{\lambda }_3}}{2} \right) &\m 3 \left(\lambda_1 + \lambda_3 \right) &\m 2 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_6}}{4} \right) &\m 2 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_5}}{4} \right) \cr \frac{h_1 h_1}{\sqrt2}&\m {\sqrt{2}} \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}} \left( {{\lambda }_2} + {{\lambda }_3} \right) &\m 2\left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_5}}{4} \right) &\m 2 \left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_6}}{4} \right) &\m 3 \left( \lambda_2 + \lambda_3 \right) &\m 2 \left( \frac{{{\lambda }_2}}{2} + \frac{{{\lambda }_3}}{2} \right) \cr \frac{h_2 h_2}{\sqrt 2}&\m {\sqrt{2}} \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}} \left( {{\lambda }_2} + {{\lambda }_3} \right) &\m 2\left( \frac{{{\lambda }_3}}{2} + \frac{{{\lambda }_6}}{4} \right) &\m 2 \left( \frac{{{\lambda 
}_3}}{2} + \frac{{{\lambda }_5}}{4} \right) &\m 2 \left( \frac{{{\lambda }_2}}{2} + \frac{{{\lambda }_3}}{2} \right) &\m 3 \left( \lambda_2 + \lambda_3 \right) \cr }$$ and its eigenvalues are $$\begin{aligned} a_\pm & = 3 (\lambda_1 + \lambda_2 + 2\lambda_3) \pm \sqrt{9(\lambda_1 - \lambda_2)^2 + [4\lambda_3 + \lambda_4 + \tfrac12 (\lambda_5 + \lambda_6)]^2} \\ b_\pm & = \lambda_1 + \lambda_2 + 2\lambda_3 \pm \sqrt{(\lambda_1-\lambda_2)^2 + \tfrac14 (-2 \lambda_4 + \lambda_5 + \lambda_6)^2} \\ c_\pm & = \lambda_1 + \lambda_2 + 2 \lambda_3 \pm \sqrt{(\lambda_1 - \lambda_2)^2 + \tfrac 14 (\lambda_5 - \lambda_6)^2} \end{aligned} \label{eq:defabc}$$ A third submatrix $$\bordermatrix{ &\m h_1 z_1&\m h_2 z_2 \cr \m h_1 z_1&2\left( \lambda_2 + \lambda_3 \right) & \tfrac12 (\lambda_5 - \lambda_6) \cr \m h_2 z_2 & \tfrac12(\lambda_5 - \lambda_6) & 2\left( \lambda_1 + \lambda_3 \right) \cr }$$ has eigenvalues $c_\pm$ (see ). Finally, there are submatrices corresponding to charged states $(h_1 w_1^+$, $h_2 w_1^+$, $z_1 w_1^+$, $z_2 w_1^+$, $h_1 w_2^+$, $h_2 w_2^+$, $z_1 w_2^+$, $z_2 w_2^+)$: $$\bordermatrix { &\m h_1 w_1^+&\m h_2 w_1^+&\m z_1 w_1^+&\m z_2 w_1^+ \cr \m h_1 w_1^+ & 2\left( {{\lambda }_1} + {{\lambda }_3} \right) & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 0 & \frac{i }{2} {{\lambda }_4} - \frac{i }{2}{{\lambda }_6} \cr \m h_2 w_1^+ & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 2 \left( {{\lambda }_2} + {{\lambda }_3} \right) & \frac{i }{2}{{\lambda }_4} - \frac{i }{2}{{\lambda }_6} & 0 \cr \m z_1 w_1^+ & 0 & \frac{-i }{2}{{\lambda }_4} + \frac{i }{2}{{\lambda }_6} & 2 \left( {{\lambda }_1} + {{\lambda }_3} \right) & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} \cr \m z_2 w_1^+ & \frac{-i }{2} {{\lambda }_4} + \frac{i }{2}{{\lambda }_6} & 0 & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 2 \left( {{\lambda }_2} + {{\lambda }_3} \right) \cr }$$ $$\bordermatrix{ &\m h_1 w_2^+&\m h_2 w_2^+&\m z_1 w_2^+&\m z_2 w_2^+ \cr \m 
h_1 w_2^+&2\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 0 & \frac{i }{2}{{\lambda }_4} - \frac{i }{2}{{\lambda }_6} \cr \m h_2 w_2^+& \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 2 \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) & \frac{i }{2}{{\lambda }_4} - \frac{i }{2}{{\lambda }_6} & 0 \cr \m z_1 w_2^+& 0 & \frac{-i }{2}{{\lambda }_4} + \frac{i }{2}{{\lambda }_6} & 2 \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} \cr \m z_2 w_2^+ & \frac{-i }{2} {{\lambda }_4} + \frac{i }{2}{{\lambda }_6} & 0 & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2} & 2 \left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2} \right) \\ }$$ Their eigenvalues are the $f_-$, $e_2$, $f_1$, $c_\pm$, $b_\pm$ shown above and, in addition, $$p_1 = 2 (\lambda_3 + \lambda_4) - \frac 12 \lambda_5 - \frac 12 \lambda_6$$ Unitarity conditions for the eigenvalues listed above give the constraints $$|a_\pm|, |b_\pm|, |c_\pm|, |f_\pm|, |e_{1,2}|, |f_1|, |p_1| \le 8\pi \label{eq:inequalities}$$ Note that an independent derivation of these inequalities based on symmetries of the Higgs potential can be found in the papers [@Ginzburg:2003; @Ginzburg:2004]. Independent inequalities {#sec:inequalities} ======================== However, the inequalities are not all independent. Indeed, it is not difficult to observe some simple relations as $$\begin{aligned} 3 f_1 &= p_1 + e_1 + f_+ \\ 3 e_2 &= 2 p_1 + e_1 \\ 3 f_- &= 2 p_1 + f_+ \end{aligned} \label{vztahyfef}$$ and this means that the inequalities $|p_1|, |f_+|, |e_1| \le 8\pi$ imply $|f_1|, |e_2|, |f_-|\le 8\pi$. 
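The eigenvalues quoted above, as well as the linear relations just listed, can be cross-checked numerically; the following sketch (assuming NumPy is available) rebuilds the second neutral $6\times6$ submatrix entry by entry and compares its spectrum with $a_\pm, b_\pm, c_\pm$ (note the combination $\tfrac12(\lambda_5+\lambda_6)$ inside the square bracket of $a_\pm$):

```python
import numpy as np

rng = np.random.default_rng(7)
l1, l2, l3, l4, l5, l6 = rng.uniform(-1.0, 1.0, 6)

s2 = np.sqrt(2.0)
A1, A2 = l1 + l3, l2 + l3
B = l3 + l4 / 2
C = 2 * l3 + (l5 + l6) / 2
D5, D6 = l3 + l5 / 2, l3 + l6 / 2

# basis: (w1+w1-, w2+w2-, z1z1/sqrt2, z2z2/sqrt2, h1h1/sqrt2, h2h2/sqrt2)
M = np.array([
    [4*A1,  C,     s2*A1, s2*A1, s2*B,  s2*B ],
    [C,     4*A2,  s2*B,  s2*B,  s2*A2, s2*A2],
    [s2*A1, s2*B,  3*A1,  A1,    D5,    D6   ],
    [s2*A1, s2*B,  A1,    3*A1,  D6,    D5   ],
    [s2*B,  s2*A2, D5,    D6,    3*A2,  A2   ],
    [s2*B,  s2*A2, D6,    D5,    A2,    3*A2 ],
])

S = l1 + l2 + 2*l3
r1 = np.sqrt(9*(l1 - l2)**2 + (4*l3 + l4 + (l5 + l6)/2)**2)
r2 = np.sqrt((l1 - l2)**2 + (-2*l4 + l5 + l6)**2 / 4)
r3 = np.sqrt((l1 - l2)**2 + (l5 - l6)**2 / 4)
expected = sorted([3*S + r1, 3*S - r1, S + r2, S - r2, S + r3, S - r3])
assert np.allclose(sorted(np.linalg.eigvalsh(M)), expected)

# linear relations among the eigenvalues of the first submatrix
e1 = 2*l3 - l4 - l5/2 + 5*l6/2
e2 = 2*l3 + l4 - l5/2 + l6/2
fp = 2*l3 - l4 + 5*l5/2 - l6/2
fm = 2*l3 + l4 + l5/2 - l6/2
f1 = 2*l3 + l5/2 + l6/2
p1 = 2*(l3 + l4) - l5/2 - l6/2
assert np.isclose(3*f1, p1 + e1 + fp)
assert np.isclose(3*e2, 2*p1 + e1)
assert np.isclose(3*fm, 2*p1 + fp)
```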
Further, the eigenvalues in the remaining inequalities can be rewritten as $$\begin{aligned} a_\pm &= 3 \lambda_{123} \pm \sqrt{(3\lambda_{12})^2 + \tfrac14(f_++e_1+2p_1)^2} \\ b_\pm &= \lambda_{123} \pm \sqrt{(\lambda_{12})^2 +\tfrac1{36}(f_++e_1-2p_1)^2} \\ c_\pm &= \lambda_{123} \pm \sqrt{(\lambda_{12})^2 + \tfrac1{36}(f_+ - e_1)^2} \end{aligned} \label{eq:newabc}$$ where $\lambda_{123}=\lambda_1 + \lambda_2 + 2 \lambda_3$ and $\lambda_{12}=\lambda_1-\lambda_2$. In the case $\lambda_{123}>0$ the inequalities for the $a_-$, $b_-$, $c_-$ follow from $a_+, b_+, c_+ \le 8\pi$. For $\lambda_{123}<0$ the situation is similar, with interchanges $(a,b,c)_\pm\to (a,b,c)_\mp$ and $\lambda_{123}\to -\lambda_{123}$. The authors of [@KKT] noticed that among the latter inequalities, the strongest one is $a_+<8\pi$; indeed, using $\eqref{eq:inequalities}$ and one can show that for $\lambda_{123}>0$ the remaining ones follow from it. In the case $\lambda_{123}<0$ the same statement is true concerning $|a_-|\le 8\pi$. Thus, it is sufficient to solve the inequalities $$|a_\pm|, |f_+|, |e_1|, |p_1| \le 8\pi \label{uniq}$$ In fact, the inequality $a_{-}<8\pi$ need not be taken into account in the subsequent discussion; it turns out that this one is weaker than the remaining ones and does not influence the bounds in question (one can verify *a posteriori* that our solutions satisfy the constraints $a_{-}<8\pi$ automatically). Upper bounds for $M_A$ and $M_\pm$ with $\xi=0$ {#sec:MaMpm} =============================================== Before starting our calculation, let us recall that the condition $\xi = 0$ means that the $Z_2$ symmetry-breaking parameter $\nu$ becomes $\nu = \lambda_5$ (see ). To proceed, we shall first fix convenient notations. The LQT bound for the SM Higgs mass sets a natural scale for our estimates, so let us introduce it explicitly: $$m_\text{LQT}=\sqrt{\frac{4\pi\sqrt2}{3G_\text{F}}}= \sqrt{\frac{8\pi}{3}} v \doteq 712 \GeV \label{eq:mlqt}$$ (note that in writing eq. 
we do not stick strictly to the original value [@LQT], using rather the improved bound [@Marciano:1989ns]). In the subsequent discussion we shall then work with the dimensionless ratios $$M = \frac{m}{m_\text{LQT}}$$ instead of the true scalar boson masses (denoted here generically as $m$). Further, an overall constant factor $16\pi/3$ can be absorbed in a convenient redefinition of the coupling constants, by writing $$\lambda'_i=\frac{3\lambda_i}{16\pi} \label{lambdaprime}$$ Finally, we introduce new variables $$X=M^2_H+M^2_h, \quad Y=M^2_H - M^2_h, \quad Z=\frac{\sin 2\alpha}{\sin 2\beta} Y$$ that will help to streamline a bit the solution of the inequalities in question. Using equations and the definitions shown above, the $\lambda'$ can be expressed as $$\begin{aligned} \lambda'_4 &= M^2_\pm \\ \lambda'_6 &= M^2_A \\ \lambda'_3 &= \frac14 \frac{\sin 2\alpha}{\sin 2\beta} Y - \frac14\lambda'_5 = \frac Z4 - \frac{\lambda'_5}4 \\ \lambda'_{12} &= \frac1{2\sin^2{2\beta}} \left[ (X-2\lambda'_5) \cos 2 \beta - Y \cos 2\alpha\right] \\ \lambda'_{123} &= \frac1{2\sin^22\beta} (X - Y \cos 2\alpha \cos 2\beta - 2\lambda'_5)+ \frac{\lambda'_5}2 \end{aligned} \label{eq:lambdaparam}$$ Let us now discuss the possible bounds for the $M_\pm, M_A$. These can be obtained from the inequalities for $|e_1|,|f_+|, |p_1|$, which read, in our new notation $$\begin{aligned} \left| \frac Z2 - \lambda'_5 - M^2_\pm + \frac52 M^2_A \right| &\le \frac32 \\ \left| \frac Z2 + 2 \lambda'_5 - M^2_\pm - \frac12 M^2_A \right| &\le \frac32 \\ \left| \frac Z2 - \lambda'_5 + 2M^2_\pm - \frac12 M^2_A \right| &\le \frac32 \\ \end{aligned} \label{eq:mxima1}$$ The relations are linear with respect to the $M^2_\pm, M^2_A$ and one can thus view the domain defined by these inequalities as a hexagon in the plane $(M^2_\pm,M^2_A)$. Then it is clear that the highest possible value of a mass variable in question will correspond to a vertex (or a whole hexagon side). 
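Because the constraints are linear in $(M^2_\pm, M^2_A)$, the extremal vertices can be found by brute force: intersect the six boundary lines pairwise and keep the feasible intersections. A sketch with sample values $Z=-1$, $\lambda'_5=0.1$ (in the primed units the bound $8\pi$ becomes $3/2$):

```python
import itertools

# The three two-sided constraints |e1'|, |f+'|, |p1'| <= 3/2 bound a hexagon in
# the (x, y) = (M_pm^2, M_A^2) plane; enumerate pairwise boundary intersections.
Z, l5p = -1.0, 0.1   # sample values

# each triple (cx, cy, c0) represents the expression cx*x + cy*y + c0
exprs = [(-1.0,  2.5, Z/2 - l5p),    # e1'
         (-1.0, -0.5, Z/2 + 2*l5p),  # f+'
         ( 2.0, -0.5, Z/2 - l5p)]    # p1'

lines = [(cx, cy, c0 - s) for (cx, cy, c0) in exprs for s in (1.5, -1.5)]
verts = []
for (a1, b1, c1), (a2, b2, c2) in itertools.combinations(lines, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue                       # parallel boundaries
    x = (c2*b1 - c1*b2) / det
    y = (a2*c1 - a1*c2) / det
    if all(abs(cx*x + cy*y + c0) <= 1.5 + 1e-9 for cx, cy, c0 in exprs):
        verts.append((x, y))

max_x = max(v[0] for v in verts)   # largest admissible M_pm^2
max_y = max(v[1] for v in verts)   # largest admissible M_A^2
assert abs(max_x - (1 + l5p)) < 1e-9
assert abs(max_y - (1 + l5p)) < 1e-9
```

For this sample point both extremal coordinates come out as $1+\lambda'_5$, which matches the vertex analysis carried out next.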
By examining all possible cases one finds easily that for $M^2_\pm$, such a “critical” vertex satisfies the condition $-f_+=p_1=8\pi$; in view of this means that it corresponds to the values $$(M^2_\pm, M^2_A) = (1+\lambda'_5, 1 + Z + 2 \lambda'_5) \label{eq:mxima2}$$ Such a maximum value of the $M^2_\pm$ is indeed formally admissible (in the sense that by reaching it one does not leave the parametric space of the considered model). To see this, one can substitute in eq. $M^2_A= \lambda'_5$, $M^2_H=1 + \lambda'_5, M^2_h=0, \alpha=\pi-\beta$. Thus, the bound becomes $$M^2_\pm \le 1 + \lambda'_5 \label{Mxibound}$$ Similarly, for $M^2_A$ the extremal solution corresponds to a hexagon vertex defined by $e_1=-f_+=8\pi$ and its coordinates in the $(M^2_\pm, M^2_A)$ plane are then $$(M^2_\pm, M^2_A) = (1+\frac Z2+\frac32 \lambda'_5, 1 + \lambda'_5)$$ The parameter values that saturate this maximum are found analogously; one has to take $M^2_\pm= \lambda'_5+\tfrac12$, $M^2_H=1 + \lambda'_5, M^2_h=0, \alpha=\pi-\beta$. In this way, the bound for $M^2_A$ becomes the same as that for the $M^2_\pm$, namely $$M^2_A \le 1 + \lambda'_5 \label{MAbound}$$ Upper bounds for $M_h, M_H$ with $\xi=0$ {#sec:MhMH} ======================================== Let us now proceed to discuss the upper bounds for $M_H$ and $M_h$. If we considered the relevant constraints without any further specification of the scalar bosons $h$ and $H$, we would get the same result for both particles, since their interchange corresponds just to the replacement $\alpha \to -\alpha$ (cf. eq. ). Thus, let us add the condition $M_h \le M_H$ (i.e. $Y > 0$). In such a case, we will solve just the inequality $a_+<8\pi$ (which puts the most stringent bounds on the variables $X, Y$) and in the obtained solution we will constrain the $M_A, M_\pm$ so as to satisfy the rest of the inequalities. The basic constraint $a_+<8\pi$ is quadratic with respect to the $X, Y$ and reads (cf. 
the expression ) $$\begin{gathered} (X-Y \cos 2\alpha \cos 2\beta) - \lambda'_5(2-\sin^2 2\beta) + \\ \sqrt{\big[ (X-2\lambda'_5)\cos2\beta - Y \cos 2\alpha \big]^2 + \left(\frac23\right)^2 \sin^4 2\beta \Big( Y\frac {\sin2\alpha}{\sin 2\beta} - \frac {\lambda'_5}2 + M^2_\pm + \frac {M^2_A}2 \Big)^2} \le \sin^2 2\beta \label{eq:aplus}\end{gathered}$$ To work it out, we will employ the following trick: As a first step, we will consider a simpler inequality, which is obtained from (34) by discarding the second term under the square root; in other words, we will first assume that $$Y\frac {\sin2\alpha}{\sin 2\beta} - \frac {\lambda'_5}2 + M^2_\pm + \frac {M^2_A}2 = 0 \label{eq:apluszerocondition}$$ Of course, the “reduced” constraint $$X-Y \cos2\alpha \cos2\beta - \lambda'_5 (2-\sin^22\beta) + \left| X\cos2\beta - Y\cos2\alpha - 2\lambda'_5 \cos2\beta \right| \le \sin^2 2\beta \label{eq:aplussimple}$$ is in general weaker than the original one. Nevertheless, in the next step we will be able to show that the obtained mass bound does get saturated for appropriate values of the other parameters (such that the condition is met) - i.e. that in this way we indeed get the desired minimum upper mass bound corresponding to the original constraint . Thus, let us examine the inequality . Obviously, we have to distinguish two possible cases: 1. $ (X-2\lambda'_5)\cos2\beta \ge Y\cos2\alpha $. Then one has $$X(1+\cos2\beta) - Y (1+\cos2\beta)\cos2\alpha - \lambda'_5 (1+\cos2\beta)^2 \le \sin^22\beta \label{eq:aplusfirstcase}$$ Making use of our assumption, we can get from a simple constraint that does not involve $Y$, namely $$X \le 1 + \lambda'_5 \label{Xupperbound}$$ (to arrive at the last relation, we had to divide by the factor $1-\cos2\beta$; when it vanishes, we can use directly the original inequality and get the same result). 2. $(X-2\lambda'_5) \cos 2\beta \le Y \cos2\alpha$. In a similar way as in the preceding case, the inequality implies the same bound . 
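The case analysis above can be probed by a simple Monte-Carlo sweep: every randomly sampled point satisfying the reduced constraint must also obey $X\le 1+\lambda'_5$. A sketch with the sample value $\lambda'_5=0.1$:

```python
import math, random

# Monte-Carlo probe of the two cases: any sampled point satisfying the
# "reduced" constraint must obey X <= 1 + lambda5'.
rng = random.Random(3)
l5p, hits = 0.1, 0   # sample value of lambda5'
for _ in range(20000):
    X = rng.uniform(0.0, 3.0)
    Y = rng.uniform(0.0, X)
    ca = math.cos(rng.uniform(0, 2*math.pi))   # cos(2*alpha)
    cb = math.cos(rng.uniform(0, math.pi))     # cos(2*beta)
    s2 = 1 - cb*cb                             # sin^2(2*beta)
    lhs = X - Y*ca*cb - l5p*(2 - s2) + abs(X*cb - Y*ca - 2*l5p*cb)
    if lhs <= s2:
        hits += 1
        assert X <= 1 + l5p + 1e-9
assert hits > 0   # the feasible region was actually sampled
```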
Thus, having constrained $X=M^2_H+M^2_h$ according to , we can obviously also write $$M^2_H \le 1+\lambda'_5 \label{MHbound}$$ Now, it is not difficult to see that for $M^2_h=0, M^2_\pm=\lambda'_5 + \tfrac12, M^2_A=1+\lambda'_5, \alpha=\pi-\beta$, eq.  is satisfied with $M^2_H=1+\lambda'_5$ and means that represents the mass upper bound pertinent to the original unitarity constraint . The bound for the $M_h$ is obtained from by using there our subsidiary condition $M_h \le M_H$; one thus has $$M^2_h \le \frac 12 (1+\lambda'_5) \label{Mhsmallbound}$$ The upper limit in gets saturated (i.e. $M^2_h=\frac 12 (1+\lambda'_5) $) for $M_H=M_h$, $M^2_A=0$, $M^2_\pm=\lambda'_5/2$, $\alpha=3\pi/4, \beta=\pi/4$. It is worth noticing that here we have fixed a particular value of the angle $\beta$ , while all previous constraints were independent of $\beta$ (i.e. for any $\beta$ we were then able to find an appropriate value of $\alpha$). A more detailed analysis shows that, in general, the upper bound for the $M_h$ indeed depends explicitly on the $\beta$. To derive the corresponding formula, we consider the boundary value $M_h = M_H$ (i.e. $Y = 0$) and use also eq. . The inequality then becomes $$M^2_h - \lambda'_5\left(1-\frac{\sin^2 2\beta}2 \right) + |M^2_h \cos2\beta - \lambda'_5 \cos2\beta| \le \frac{\sin^22\beta}2 \label{Mhbetadifficult}$$ To work it out, we will assume that $M^2_h \ge \lambda'_5$ (taking into account this means $\lambda'_5\le 1$; in fact, one can do even without such a restriction, but for our perturbative treatment only sufficiently small values of the $\lambda'_5$ are of real interest). The inequality then becomes $$M^2_h \le \frac{(1-\lambda'_5)}2 \frac{(1+\cos 2\beta)(1-\cos2\beta)}{1+|\cos2\beta|} +\lambda'_5 \label{Mhbeta}$$ Obviously, the maximum bound is recovered from the last expression for $\beta = \pi/4$. Let us also remark that the choice $\alpha = \pi-\beta$ comes, as in all previous cases, from the requirement $Z = - Y$. 
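Both saturating parameter sets can be verified directly: reconstructing the primed couplings from the parametrization above and evaluating all eigenvalues, the quoted points must make $a_+$ hit the boundary $3/2$ (the primed-unit version of $8\pi$) while keeping every other eigenvalue inside. A sketch, taking $\beta=\pi/4$ in both cases and the sample value $\lambda'_5=0.1$:

```python
import math

def eigenvalues_primed(X, Y, alpha, beta, l4p, l6p, l5p):
    # reconstruct the primed couplings from the mass/angle parametrization
    s2b, c2b, c2a = math.sin(2*beta), math.cos(2*beta), math.cos(2*alpha)
    Z = math.sin(2*alpha)/s2b * Y
    l3p = Z/4 - l5p/4
    l12 = ((X - 2*l5p)*c2b - Y*c2a) / (2*s2b**2)
    l123 = (X - Y*c2a*c2b - 2*l5p) / (2*s2b**2) + l5p/2
    e1 = 2*l3p - l4p - l5p/2 + 5*l6p/2
    e2 = 2*l3p + l4p - l5p/2 + l6p/2
    fp = 2*l3p - l4p + 5*l5p/2 - l6p/2
    fm = 2*l3p + l4p + l5p/2 - l6p/2
    f1 = 2*l3p + l5p/2 + l6p/2
    p1 = 2*(l3p + l4p) - l5p/2 - l6p/2
    ra = math.sqrt(9*l12**2 + (4*l3p + l4p + (l5p + l6p)/2)**2)
    rb = math.sqrt(l12**2 + (-2*l4p + l5p + l6p)**2 / 4)
    rc = math.sqrt(l12**2 + (l5p - l6p)**2 / 4)
    ap = 3*l123 + ra
    vals = [e1, e2, fp, fm, f1, p1, ap, 3*l123 - ra,
            l123 + rb, l123 - rb, l123 + rc, l123 - rc]
    return vals, ap

l5p = 0.1  # sample value

# point saturating the M_H bound: M_h^2 = 0, M_pm^2 = l5p + 1/2, M_A^2 = 1 + l5p
vals, ap = eigenvalues_primed(X=1 + l5p, Y=1 + l5p, alpha=3*math.pi/4,
                              beta=math.pi/4, l4p=l5p + 0.5, l6p=1 + l5p, l5p=l5p)
assert abs(ap - 1.5) < 1e-9 and max(abs(v) for v in vals) <= 1.5 + 1e-9

# point saturating the M_h bound: M_H = M_h, M_A^2 = 0, M_pm^2 = l5p/2
vals, ap = eigenvalues_primed(X=1 + l5p, Y=0.0, alpha=3*math.pi/4,
                              beta=math.pi/4, l4p=l5p/2, l6p=0.0, l5p=l5p)
assert abs(ap - 1.5) < 1e-9 and max(abs(v) for v in vals) <= 1.5 + 1e-9
```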
Upper bound for the lightest scalar for $\xi = 0$ {#sec:Mlightest} ================================================= ![The region of admissible values of $M^2_h$ if $h$ is assumed to be the lightest scalar[]{data-label="obrhyperbola"}](graf.1) One can notice that any Higgs mass upper limit discussed so far gets saturated only when at least one of the other scalar masses vanishes. Thus, another meaningful question arising in this connection is what the upper bound for the lightest Higgs boson can be (within a considered set of the five scalars $h, H, A^0, H^\pm$). Let us first take $h$ to be the lightest scalar state; it means that in our analysis we will include the additional assumption $M_h\le M_H, M_A, M_\pm$. The procedure we are going to employ is a modest generalization of the earlier calculation [@KKT]. Squaring the inequality one gets $$(X-X_0)^2 - \left(1 - \frac 59 \sin^2 2\alpha\right) (Y-Y_0)^2 \ge R^2 \label{eq:hyperbola}$$ where $X_0, Y_0$ and $R$ depend on $\lambda'_5, \alpha, \beta, M^2_\pm+M^2_A/2$. This inequality defines the domain bounded by the hyperbola shown in Fig. \[obrhyperbola\], but the original constraint corresponds just to its left-hand branch. In order to find the solution, one should realize that the slope of the asymptote with respect to the $X$-axis must be greater than the slope of the straight lines $X=\pm Y$ (this follows from the fact that the coefficient $1-\frac 59 \sin^2 2\alpha$, multiplying the $Y^2$ in , is less than one). Because of that, the maximum value of the $M_h$ corresponds to $Y=0$ and $a_+=8\pi$, and we are thus led to the equation $$X-\lambda'_5(2-\sin^22\beta) + \sqrt{\cos^22\beta(X-2\lambda'_5)^2 + \frac 49 \sin^42\beta \left( M^2_\pm + \frac{M^2_A}2 - \frac{\lambda'_5}2\right)^2 } = \sin^22\beta \label{Mhgeneral}$$ It is clear that for smaller $M_\pm, M_A$ one has a bigger value of the $M_h$, so the needed upper estimate is obtained for $M_\pm = M_A = M_h$ (note also that from $Y=0$ one has $X = 2 M^2_h$). 
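That smaller $M_\pm, M_A$ leave more room for $M_h$ can also be seen numerically, by solving the last equation by bisection for several values of $K=M^2_\pm+M^2_A/2$ (the last bracket is squared, as in the parent constraint; sample values $\lambda'_5=0.1$, $\sin^22\beta=0.9$):

```python
import math

def solve_X(K, l5p, s):
    # bisection solve of the saturation equation at Y = 0, with
    # K = M_pm^2 + M_A^2/2 and s = sin^2(2*beta)
    def F(X):
        return (X - l5p*(2 - s)
                + math.sqrt((1 - s)*(X - 2*l5p)**2
                            + (4/9)*s**2*(K - l5p/2)**2) - s)
    lo, hi = 0.0, 5.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

# the admissible X = 2*M_h^2 shrinks monotonically as K grows
roots = [solve_X(K, 0.1, 0.9) for K in (0.0, 0.5, 1.0, 1.5)]
assert all(a > b for a, b in zip(roots, roots[1:]))
```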
In this way one gets an equation for the maximum $M_h$: $$2 M^2_h - \lambda'_5(2-\sin^22\beta) + \sqrt{ 4(M^2_h-\lambda'_5)^2\cos^22\beta + \sin^42\beta (M^2_h - \frac13\lambda'_5)^2} = \sin^22\beta \label{Mhbetabound}$$ From eq. one can calculate the $M_h^2$ as a function of $\sin^22\beta$. It can be shown that for $\lambda'_5<3/5$ this function is increasing, i.e. the maximum is reached for $\beta=\pi/4$ and its value becomes $$M^2_h = \frac13 + \frac 49 \lambda'_5 \label{Mhsmallestbound}$$ We do not display the explicit dependence of the maximum $M_h$ on the $\beta$, but it is clear that the solution of eq. $\eqref{Mhbetabound}$ is straightforward. Finally, we should also examine the cases where the lightest Higgs boson mass is either $M_A$ or $M_\pm$. However, from the above discussion it is clear that both these extremes occur when $M_h=M_A=M_\pm$. ![Dependence of lightest boson mass on $\beta$[]{data-label="grafbeta"}](grafbeta) Similarly, from eq. one can derive a constraint for the mass of the lightest neutral scalar boson (which we denote $M_n$). In this case we substitute there $X=2 M^2_n$, $M^2_A=M^2_n$, $M^2_\pm=0$ and thus obtain the equation $$2 M^2_n - \lambda'_5(2-\sin^22\beta) + \sqrt{ 4(M^2_n-\lambda'_5)^2\cos^22\beta + \sin^42\beta \left(\frac{M^2_n}3 - \frac13\lambda'_5\right)^2} = \sin^22\beta \label{Mnbetabound}$$ From eq. one then obtains the $M_n^2$ as a function of $\sin^22\beta$, which is increasing for $\lambda'_5<1$. Its maximum reached at $\beta=\pi/4$ becomes $$M_n^2 = \frac 37 + \frac 47 \lambda'_5 \label{Mnbound}$$ Numerical solution for $\xi\neq0$ {#sec:numeric} ================================= In the general case with $\xi\neq0$ (i.e. with CP violation in the scalar sector) we have not been able to solve the inequalities analytically, so we had to resort to an appropriate numerical procedure. The main result we have obtained in this way is that for small values of the parameter $\nu$ (see eq. 
), in particular for $\nu'\in [0,0.3]$, the upper mass bounds in question are the same as for $\xi = 0$. The interval has been chosen such that the variations in the upper estimates are at the level of $50-100\%$; the validity of our theoretical estimates is guaranteed up to $\nu'<3/5$ (see the remark below eq. ). Our numerical procedure consists in solving the inequalities on the space of parameters $\lambda'_{1,2,3,4,5,6}$ and $\xi$ restricted by the condition , where one also adds constraints for the existence of a minimum of the potential : $\lambda'_4>0$ (i.e. $m^2_\pm>0$, see ) and the requirement of positive definiteness of the matrix (i.e. $m^2_{A,H,h}>0$). On this parametric subspace we have looked for the maximum values of the following quantities: 1. Mass of the charged Higgs boson $m_\pm$ (see Fig. \[fig:mx\]) 2. Mass of the lightest Higgs boson (see Fig. \[fig:min\]) 3. \[item:lightest\] Mass of the lightest neutral Higgs, i.e. the lightest one among the $A, H, h$ (see Fig. \[fig:m1\]) 4. Mass of the heaviest neutral Higgs, i.e. the heaviest among the $A, H, h$ (see Fig. \[fig:m3\]). Let us remark that in this case we have not distinguished between $A$ and $h, H$, which are superpositions of the CP-odd and CP-even states. In our plots we display, apart from the dependence of masses in question on the $\nu$, also the values of the parameter $\xi$ in the case $\lambda_5=\lambda_6$ and $\lambda_5\neq\lambda_6$ respectively, in order to be able to distinguish the extreme cases without CP violation ($\xi=k \pi/2$ or $\lambda_5=\lambda_6$, see the discussion in Section \[sec:potential\]). From Figs. \[fig:mx\], \[fig:min\], \[fig:m1\], \[fig:m3\] it can be seen that all examined mass upper bounds are reached just in the aforementioned extreme cases. In view of this, we can make use of our previous analytic expressions, except for the case \[item:lightest\], which we have not solved analytically. 
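As a minimal illustration of such a numerical treatment, the $\beta$-dependent equation for the maximal $M_h$ from the previous section can be solved by bisection; for the sample value $\lambda'_5=0.1<3/5$ the root is indeed increasing in $\sin^22\beta$ and reaches $\tfrac13+\tfrac49\lambda'_5$ at $\beta=\pi/4$:

```python
import math

def mh2_max(s, l5p):
    # bisection root of the beta-dependent equation for the largest M_h^2,
    # written with s = sin^2(2*beta)
    def F(m2):
        return (2*m2 - l5p*(2 - s)
                + math.sqrt(4*(m2 - l5p)**2*(1 - s) + s**2*(m2 - l5p/3)**2) - s)
    lo, hi = 0.0, 2.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

l5p = 0.1   # sample value, below 3/5, so the maximum sits at beta = pi/4
grid = [0.2, 0.4, 0.6, 0.8, 1.0]
roots = [mh2_max(s, l5p) for s in grid]
assert all(a < b for a, b in zip(roots, roots[1:]))   # increasing in s
assert abs(roots[-1] - (1/3 + 4/9*l5p)) < 1e-9        # value at beta = pi/4
```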
Our results have been obtained by means of the program MATLAB 6.0, using the function fmincon from the optimization package. The numerical errors are mostly due to an insufficiently smooth condition for the positive definiteness of the matrix . Conclusions {#sec:conclusion} =========== In the present paper we have reconsidered upper bounds for the scalar boson masses within THDM, by using the well-known technical constraint of tree-level unitarity. Our analysis extends and generalizes the results of some previous treatments, in particular those obtained in the papers [@AAN] and [@KKT]. Although we basically employ the traditional methods, we have tried to present some details of the calculations not shown in the earlier papers — we have done so not only for the reader’s convenience, but also to provide a better insight into the origin of the numerical results displayed here. As we have already noted in the Introduction, some new relevant papers on the subject have appeared quite recently (see [@Ginzburg:2003; @Ginzburg:2004; @Ginzburg:2005]). In these works, the structure of the unitarity constraints is discussed in detail within a rather general THDM, but there is no substantial overlap with our results, since our main point is rather a detailed explicit solution of the inequalities in question. So, let us now summarize briefly our main results. We have found upper limits for Higgs boson masses in dependence on the parameter $\nu$ that embodies information about possible flavour-changing neutral scalar-mediated interactions. The upper bounds are seen to grow with increasing $\nu$ (see Tab. \[tab\]). On the other hand, this parameter cannot take on large values (to avoid a conflict with current phenomenology), and thus it makes no real sense to consider the mass estimates for an arbitrary $\nu$; in the present paper we restrict ourselves to $\nu \le 0.4$ (cf. the condition used when deriving the relation ). 
In the case with no CP violation in the scalar sector ($\xi=0$), the relevant results are obtained from the inequalities , , , , and the bound for the lightest scalar is shown in eq. (where one should also pass from $\lambda'_5$ to $\lambda_5$ according to ). In Section \[sec:numeric\] we have then verified that in the CP-violating case these values remain the same. The results are shown in Tab. \[tab\], where we have singled out the case $\nu = 0$ that corresponds to the absence of flavour-changing scalar currents. Let us remark that in the CP-violating case we do not distinguish between the $H$ and $A$, and in the CP-conserving case the bounds for $H$ and $A$ are the same. Further, we have calculated an explicit dependence of the upper limit for the $M_h$ on the angle $\beta$ in the case with $\xi = 0$. The analytic expression reads $$M^2_h \le \frac{\sin^22\beta}{1+|\cos2\beta|} \left( \frac12 - \frac 3{32\pi}\lambda_5 \right) +\lambda_5\frac 3{16\pi} \label{Mhbetavysl}$$ (cf. with the $\lambda_5$ restored). The dependence of the relevant bound for a lightest scalar boson can be obtained from eq. and the results for some particular values of the $\lambda_5$ are depicted in Fig. \[grafbeta\]. For $\nu=0$ and $\xi = 0$, our results can be compared directly with those published in [@KKT]. We get somewhat stronger bounds for $m_A$ and $m_\pm$ since, in addition to the set of constraints utilized in [@KKT], we have employed also the inequality $p_1<8\pi$, which stems from charged processes (cf. the end of Section \[sec:inequalities\]) not considered in [@KKT]. On the other hand, our estimates for $m_H$, $m_h$ and the lightest scalar coincide with the results of [@KKT], since the above-mentioned extra inequality is not used here. 
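The $\nu=0$ numerical entries of Tab. \[tab\] for $h$ and for the lightest scalar can be reproduced from the analytic formulas in a few lines (taking $v=246$ GeV for the reference scale):

```python
import math

m_lqt = math.sqrt(8*math.pi/3) * 246   # SM LQT scale, about 712 GeV

def bound_h(nu):
    # upper bound for m_h / m_LQT, from the table's first row
    return math.sqrt(0.5 + nu*3/(32*math.pi))

def bound_lightest(nu):
    # upper bound for the lightest scalar / m_LQT
    return math.sqrt(1/3 + nu/(12*math.pi))

assert round(m_lqt) == 712
assert round(m_lqt * bound_h(0)) == 503
assert round(m_lqt * bound_lightest(0)) == 411
```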
It is also noteworthy that the upper limits for $m_h$ and $m_H$ coincide with the SM LQT bound if they are estimated separately; when several Higgs scalars are estimated simultaneously, the coefficient $1/2$ appears when we take two of them and $1/3$ when all of them are considered. In the case $\xi=0$ and $\lambda_5\ne 0$ comparison with [@AAN] is possible. Here we can compare only the corresponding numerical values, which turn out to be approximately equal when $\lambda_5 = 0$. However, for $\lambda_5\neq0$ our results obviously differ from those of [@AAN]: in particular, the bounds for $m_A$, $m_\pm$ displayed in [@AAN] appear to decrease with increasing $\lambda_5$. The authors of [@AAN] state that they used some fixed values of the angle $\beta$; for the purpose of a better comparison we have therefore calculated the $\beta$-dependence of the upper bound for $m_h$, with the result shown in . As it turns out, the $m_A$ and $m_\pm$ do not depend on $\beta$ in this case. Finally, let us mention that in the CP-violating case we have not been able to get analytic results; we have only shown, numerically, that the maximum values of the masses in question are obtained for $\xi = 0$, i.e. the upper mass bounds are the same as in the case with no CP violation in the scalar sector. \begin{tabular}{|l|c|c|c|c|c|}
\hline
 & $\mathbf{H}$ & $\mathbf{A}$ & $\mathbf{H}^{\boldsymbol\pm}$ & $\mathbf h$ & [**lightest boson**]{} \\
\hline
$m/m_\text{LQT}$ & & & & $\sqrt{\dfrac12 + \nu\dfrac{3}{32\pi}}$ & $\sqrt{\dfrac13 + \nu\dfrac{1}{12\pi}}$ \\
$m[\text{GeV}]$ & & & & 503 GeV & 411 GeV \\
\hline
$m/m_\text{LQT}$ & 1 & $\sqrt{3}$ & $\sqrt{\dfrac32}$ & $\dfrac1{\sqrt2}$ & $\dfrac1{\sqrt3}$ \\
$m[\text{GeV}]$ & 712 GeV & 1233 GeV & 872 GeV & 503 GeV & 411 GeV \\
\hline
$m[\text{GeV}]$ & 638 GeV & 691 GeV & 695 GeV & 435 GeV & — \\
\hline
\end{tabular} [10]{} T. D. Lee, Phys. Rev. [**D8**]{}, 1226 (1973). M. Sher, Phys. Rept. [**179**]{}, 273 (1989). J. F. Gunion, H. E. Haber, G. L. Kane, and S. 
Dawson, (Perseus Publishing, Cambridge, Massachusetts 2000). J. Abdallah [*et al.*]{} (DELPHI Collaboration), Eur. Phys. J. [**C34**]{}, 399 (2004); P. Achard [*et al.*]{} (L3 Collaboration), Phys. Lett. [**B583**]{}, 14 (2004); G. Abbiendi [*et al.*]{} (OPAL Collaboration), Eur. Phys. J. [**C40**]{}, 317 (2005). B. W. Lee, C. Quigg, and H. B. Thacker, Phys. Rev. [**D16**]{}, 1519 (1977). J. Maalampi, J. Sirkka, and I. Vilja, Phys. Lett. [**B265**]{}, 371 (1991); R. Casalbuoni, D. Dominici, R. Gatto, and C. Giunti, Phys. Lett. [**B178**]{}, 235 (1986); R. Casalbuoni, D. Dominici, F. Feruglio, and R. Gatto, Nucl. Phys. [**B299**]{}, 117 (1988); H. Hüffel and G. Pócsik, Z. Phys. [**C8**]{}, 13 (1981). S. Kanemura, T. Kubota, and E. Takasugi, Phys. Lett. [**B313**]{}, 155 (1993). A. G. Akeroyd, A. Arhrib, and E.-M. Naimi, Phys. Lett. [**B490**]{}, 119 (2000). I. F. Ginzburg and I. P. Ivanov, arXiv:hep-ph/0312374. I. F. Ginzburg and M. Krawczyk, arXiv:hep-ph/0408011. I. F. Ginzburg and I. P. Ivanov, arXiv:hep-ph/0508020. M. Kladiva, Theoretical upper bounds for Higgs boson masses, Master’s thesis, Charles University, Prague, 2003. H. Georgi, Hadronic J. [**1**]{}, 155 (1978). S. L. Glashow and S. Weinberg, Phys. Rev. [**D15**]{}, 1958 (1977). W. J. Marciano, G. Valencia, and S. Willenbrock, Phys. Rev. [**D40**]{}, 1725 (1989). C. E. Vayonakis, Nuovo Cim. Lett. [**17**]{}, 383 (1976); M. S. Chanowitz and M. K. Gaillard, Nucl. Phys. [**B261**]{}, 379 (1985); G. J. Gounaris, R. Kögerler, and H. Neufeld, Phys. Rev. [**D34**]{}, 3257 (1986). [^1]: For useful reviews of the subject see e.g. [@Sher], [@Guide].
--- abstract: 'A new family of inflation models is introduced and studied. The models are characterised by a scalar potential which, far from the origin, approximates an inflationary plateau, while near the origin it becomes monomial, as in chaotic inflation. The models are obtained in the context of global supersymmetry, starting with a superpotential which interpolates from a generalised monomial to an O’Raifeartaigh form for small to large values of the inflaton field respectively. It is demonstrated that the observables obtained, such as the scalar spectral index and the tensor to scalar ratio, are in excellent agreement with the latest observations. Some discussion of initial conditions and eternal inflation is included.' --- [Shaft Inflation]{} [**Konstantinos Dimopoulos**]{} [*Physics Department, Lancaster University, Lancaster LA1 4YB, UK*]{}[^1] The latest CMB observations from the Planck satellite have confirmed the broad predictions of the inflationary paradigm, in that the Universe is found to be spatially flat, with a predominantly Gaussian curvature perturbation that is almost (but not quite) scale invariant [@planckinf]. However, the precision of these observations is so high that they put tension on (or even exclude) entire classes of inflationary models, e.g. chaotic inflation.[^2] The Planck observations seem to support an inflationary scalar potential which asymptotes to a constant, i.e. an inflationary plateau is favoured [@bestinf]. In view of this fact, in this letter we present a new class of inflationary potentials, which we call shaft inflation. The idea is that the inflationary plateau is pierced by shafts such that, when the inflaton field finds itself close to one of them, it slow-rolls inside the shaft until inflation ends and gives way to the hot big bang cosmology.
Assuming a shaft at the origin, the scalar potential approximates a constant at large values of the inflaton field, but at small values the potential becomes similar to monomial chaotic inflation. In that respect, shaft inflation is similar to the so-called T-model inflation [@Tmodel], but the scalar potential in our case features a power-law (in contrast to exponential) dependence on the inflaton field. Although we attempt to design the model in the context of global supersymmetry, this is by no means restrictive, since the phenomenology really stems from the form of the scalar potential, which can be obtained via a different, possibly more realistic (and complicated) setup. Indeed, as we discuss, one of the realisations of shaft inflation can be identified with S-dual superstring inflation [@Sdualinf] or with radion assisted gauge inflation [@RAGI]. After the first draft of this letter was produced, the first data of the BICEP2 experiment were released, which show that inflation may produce substantial gravitational waves. According to the findings of BICEP2, the tensor to scalar ratio is [@bicep2]. We show that shaft inflation can accommodate such a large value of $r$. We use natural units, where and Newton’s gravitational constant is , with being the reduced Planck mass. Let us begin with the following superpotential: $$W=M^2\frac{|\phi|^{nq+1}}{(|\phi|^n+m^n)^q} \label{W0}$$ where $M,m$ are mass-scales, $n,q$ are real parameters and $\phi$ is a real scalar field (corresponding to a superfield made real by suitable field redefinitions). Without loss of generality, we assume that so we can write and assume that there is a $Z_2$ symmetry . In the limit the above superpotential reduces to an O’Raifeartaigh form , which leads to de Sitter expansion. However, in the limit the superpotential becomes , which leads to monomial chaotic inflation.
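The two limiting behaviours of the superpotential in Eq. (\[W0\]) are easy to verify numerically. A minimal sketch in units $M = m = 1$, with the illustrative choice $n = 2$, $q = 1$ (far from the origin $W$ becomes linear, O'Raifeartaigh-like; near the origin it becomes the monomial $\phi^{nq+1}$):

```python
# Limiting behaviour of W = M^2 |phi|^{nq+1} / (|phi|^n + m^n)^q,
# in units M = m = 1 (illustrative parameters, not from the letter).
def W(phi, n=2.0, q=1.0):
    return abs(phi) ** (n * q + 1) / (abs(phi) ** n + 1.0) ** q

# Far from the origin: W -> M^2 phi (linear), so W(phi)/phi -> 1.
large = W(1e4) / 1e4
# Near the origin: W -> M^2 phi^{nq+1} / m^{nq} (monomial),
# so W(phi)/phi^{nq+1} -> 1.
small = W(1e-4) / (1e-4) ** 3

print(large, small)  # both approach 1
```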
To simplify the potential we may assume , in which case the superpotential becomes $$W=M^2\left(\phi^n+m^n\right)^{1/n}. \label{W}$$ Thus, in the limit the above becomes , which leads to a monomial F-term potential. [^3] For the superpotential in Eq. (\[W\]), the corresponding F-term scalar potential is: $$V(\phi)=M^4\phi^{2n-2}(\phi^n+m^n)^{\frac{2}{n}-2}. \label{V}$$ From the above we see that the scalar potential has the desired behaviour for , i.e. it approaches a constant for , while for the potential becomes monomial, with , see Fig. \[fig1\]. When the potential is exactly flat and the shaft disappears. For the slow-roll parameters we find $$\begin{aligned} & & \epsilon\equiv\frac12 m_P^2\left(\frac{V'}{V}\right)^2= 2(n-1)^2\left(\frac{m_P}{\phi}\right)^2\left(\frac{m^n}{\phi^n+m^n}\right)^2 \label{eps}\\ & & \eta\equiv m_P^2\frac{V''}{V}= 2(n-1)\left(\frac{m_P}{\phi}\right)^2\left(\frac{m^n}{\phi^n+m^n}\right) \frac{(2n-3)m^n-(n+1)\phi^n}{\phi^n+m^n}, \label{eta}\end{aligned}$$ where the prime denotes a derivative with respect to the inflaton field. Hence, the spectral index of the curvature perturbation is $$n_s=1+2\eta-6\epsilon=1-4(n-1)\left(\frac{m_P}{\phi}\right)^2 \frac{m^n[(n+1)\phi^n+nm^n]}{(\phi^n+m^n)^2}. \label{ns}$$ It is straightforward to see that inflation is terminated when $|\eta|\simeq 1$, so that, for the end of inflation, we find $$\phi_{\rm end}\simeq m_P\left[2(n^2-1)\alpha^n\right]^{1/(n+2)}, \label{fend}$$ where we assumed that (so that the potential deviates from a chaotic monomial) and we defined $$\alpha\equiv\frac{m}{m_P}\,.
\label{alpha}$$ Using this, we obtain $\phi(N)$: $$\begin{aligned} N=\frac{1}{m_P^2}\int_{\phi_{\rm end}}^\phi\frac{V}{V'}{\rm d}\phi & \simeq & \frac{1}{2(n-1)(n+2)\alpha^n}\left[\left(\frac{\phi}{m_P}\right)^{n+2}- \left(\frac{\phi_{\rm end}}{m_P}\right)^{n+2}\right] \label{N}\\ & \Rightarrow & \phi(N)\simeq m_P\left[2(n-1)(n+2)\alpha^n \left(N+\frac{n+1}{n+2}\right)\right]^{1/(n+2)}, \label{fN}\end{aligned}$$ where $N$ is the number of remaining e-folds of inflation and we considered again. Inserting the above into Eqs. (\[eps\]) and (\[ns\]) respectively, we obtain the tensor to scalar ratio $r$ and the spectral index $n_s$ as functions of $N$: $$\begin{aligned} & & r=16\epsilon=32(n-1)^2\alpha^{\frac{2n}{n+2}}\left[2(n-1)(n+2) \left(N+\frac{n+1}{n+2}\right)\right]^{-2(\frac{n+1}{n+2})} \label{r}\\ & & n_s=1-2\frac{n+1}{n+2}\left(N+\frac{n+1}{n+2}\right)^{-1} \label{nsN}\end{aligned}$$ An interesting choice is , in which case the scalar potential becomes $$V(\phi)=M^4\frac{\phi^2}{\phi^2+m^2}. \label{Vquad}$$ We see that the above can be thought of as a modification of quadratic chaotic inflation, because after the end of inflation the inflaton field oscillates in a quadratic potential. However, for large values of the inflaton the potential approaches a constant. This potential has also been obtained in S-dual superstring inflation [@Sdualinf] with and in radion assisted gauge inflation [@RAGI] with . In this case, Eqs. (\[r\]) and (\[nsN\]) become $$r=\frac{32\alpha}{\left[8\left(N+\frac34\right)\right]^{3/2}} \quad{\rm and}\quad n_s=1-\frac32\left(N+\frac34\right)^{-1}$$ For the moment, let us ignore the BICEP2 results and try to satisfy the Planck observations only. Assuming , for {} we readily obtain and { and }. As shown in Fig. \[fig2\], these values fall within the 95% {68%} c.l. contours of the Planck observations. Things improve further if we enlarge $n$. Indeed, in the limit Eqs.
(\[r\]) and (\[nsN\]) become $$r=\frac{8\alpha^2}{n^2(N+1)^2}\rightarrow 0 \quad{\rm and}\quad n_s=1-\frac{2}{N+1}\,. \label{nbig}$$ The spectral index is now the same as in the original $R^2$ inflation model [@R2] (also in Higgs inflation [@Higgs]), which is not surprising since we expect the power-law behaviour to approach the exponential when . Now, for {} we obtain {}, which is very close to the best fit point for the Planck data, as shown in Fig. \[fig2\]. Now, let us incorporate in our thinking the BICEP2 results, which suggest that [@bicep2]. From Eq. (\[r\]) it is readily seen that . This means that the tensor production can be enhanced if the shaft is appropriately widened, i.e. if $m$ is somewhat larger than $m_P$, without affecting the scalar spectral index, as seen in Eq. (\[nsN\]). Indeed, it is easy to show that is enough to boost the tensor signal up to BICEP2 values. For example, assuming and , Eqs. (\[r\]) and (\[nsN\]) give Table 1:

  $n$   $r$     $n_s$
  ----- ------- -------
  2     0.200   0.970
  4     0.199   0.967
  6     0.141   0.966
  8     0.098   0.965

[Table 1: Values of $r$ and $n_s$ for shaft inflation with and $n=2,4,6$ and 8.]{} Allowing for a running spectral index, the BICEP2 results suggest the blue contours shown in Fig. \[fig3\]. However, it is easy to show that $$\frac{{\rm d}n_s}{{\rm d}\ln k}= -\frac{2\left(\frac{n+1}{n+2}\right)}{\left(N+\frac{n+1}{n+2}\right)^2}\sim -\frac{2}{N^2},$$ which gives for , so the running is not substantial. An estimate for the required value of $M$ is obtained by enforcing the COBE bound on the curvature perturbation. Using Eqs. (\[V\]) and (\[fN\]) we find $$\sqrt{{{\cal P}}_\zeta}\!=\!\frac{1}{2\sqrt 3\pi}\frac{V^{3/2}}{m_P^3|V'|}\Rightarrow \!\left(\frac{M}{m_P}\right)^2\!\!=\!4\sqrt 3(n-1)\alpha^{-\frac{n}{n+2}} \pi\sqrt{{{\cal P}}_\zeta}\left[2(n\!-\!1)(n\!+\!2)\!\left(N+\frac{n+1}{n+2}\right) \right]^{-\frac{n+1}{n+2}}.
\label{M}$$ For , and {} and taking we get {}, which is close to the scale of grand unification, as expected. Provided $\phi$ can be arbitrarily large \[the vacuum density for large $\phi$ is constant and remains sub-Planckian, since and \], one can show that slow-roll inflation can last for a huge number of e-folds. However, far away from the shaft, the potential becomes so flat that the inflaton finds itself in the so-called quantum diffusion zone, leading to eternal inflation [@eternal]. The criterion is as follows. For eternal inflation to occur, the classical variation of the inflaton field $|\dot\phi|$ needs to become subdominant to the quantum variation of $\phi$, which is given by the Hawking temperature per Hubble time . Comparing the two, it is easy to show that $$|\dot\phi| \mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$>$}}{<}$~}} \frac{\delta\phi}{\delta t}\Leftrightarrow|V'| \mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$>$}}{<}$~}} \frac{3}{2\pi}H^3,$$ where we used the slow-roll equation of motion . In view of Eq. (\[V\]) and using the Friedmann equation , after some algebra, one can show $$|\dot\phi| \mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$>$}}{<}$~}} \frac{\delta\phi}{\delta t}\Leftrightarrow \frac{N}{N_*} \simeq\frac{N+\frac{n+1}{n+2}}{N_*+\frac{n+1}{n+2}} \mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$<$}}{>}$~}} \left(\sqrt{{{\cal P}}_\zeta}\right)^{-\frac{n+2}{n+1}}\sim 10^{4-6},$$ where we also used Eq. (\[M\]), we considered that , and with $N_*$ we have denoted the remaining e-folds when the cosmological scales leave the horizon, i.e. . Thus we see that, even though the multiverse may be undergoing eternal inflation, our region finds itself relatively close to the potential shaft, such that slow-roll takes over and the inflaton gradually moves into the shaft. The inflaton slow-rolls for a few million e-folds before the cosmological scales exit the horizon and after this.
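The analytic predictions of Eqs. (\[r\]), (\[nsN\]) and (\[M\]) are easy to evaluate numerically. A minimal sketch for $n=2$, with illustrative inputs $\alpha = 1$, $N = 60$ and the COBE normalisation $\sqrt{{\cal P}_\zeta} \approx 4.7\times 10^{-5}$ (these specific values are assumptions, not quoted from the letter; the reduced Planck mass is taken as $2.43\times 10^{18}$ GeV):

```python
import math

# Shaft inflation observables for n = 2 in reduced-Planck units (m_P = 1).
# Illustrative inputs: alpha = m/m_P = 1, N = 60 remaining e-folds,
# sqrt(P_zeta) ~ 4.7e-5 (COBE normalisation).
n, alpha, N = 2, 1.0, 60.0
sqrt_Pzeta = 4.7e-5

c = (n + 1) / (n + 2)  # the recurring combination (n+1)/(n+2)
ns = 1.0 - 2.0 * c / (N + c)                      # Eq. (nsN)
r = 32 * (n - 1) ** 2 * alpha ** (2 * n / (n + 2)) \
    * (2 * (n - 1) * (n + 2) * (N + c)) ** (-2 * c)  # Eq. (r)

# COBE estimate of the inflation scale M, Eq. (M):
M2 = 4 * math.sqrt(3) * (n - 1) * alpha ** (-n / (n + 2)) * math.pi \
    * sqrt_Pzeta * (2 * (n - 1) * (n + 2) * (N + c)) ** (-c)
M_GeV = math.sqrt(M2) * 2.43e18  # convert to GeV via the reduced Planck mass

print(f"n_s = {ns:.4f}, r = {r:.4f}, M ~ {M_GeV:.2e} GeV")
```

With these inputs the script gives $n_s \approx 0.975$, $r \approx 0.003$ and $M \sim 10^{16}$ GeV, consistent with the text's statements that the predictions sit near the Planck best fit and that $M$ is close to the grand unification scale.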
Eventually, inflation in our region ends and the inflaton oscillates at the bottom of the shaft, leading to (p)reheating and the hot big bang. Meanwhile, elsewhere in the multiverse, eternal inflation continues. One can imagine that there may be a large number of shafts puncturing the inflationary plateau (leading to different vacua, possibly with different values of $n$). Some regions of the multiverse are close to the shafts in such a way that eternal inflation is superseded by classical slow-roll, which attracts the system into the shaft in question. Our observable universe is such a case. In summary, we introduced and studied a new family of inflation models, which we called shaft inflation. The models correspond to the scalar potential given in Eq. (\[V\]) and are parametrised by . We obtained the models in the context of global supersymmetry, starting with a superpotential which interpolates from a generic monomial to an O’Raifeartaigh form for small to large values of the inflaton field respectively. However, shaft inflation can be obtained in different setups, as mentioned, for example, in the case . We showed that we obtain values for the spectral index $n_s$ and the tensor to scalar ratio $r$ that are in excellent agreement with the latest observations of the Planck satellite and BICEP2. [**Acknowledgements**]{} KD is supported (in part) by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant ST/J000418/1. [99]{} P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1303.5082 \[astro-ph.CO\]. A. Ashoorioon, K. Dimopoulos, M. M. Sheikh-Jabbari and G. Shiu, JCAP [**1402**]{} (2014) 025 \[arXiv:1306.4914 \[hep-th\]\]. J. Martin, C. Ringeval, R. Trotta and V. Vennin, JCAP [**1403**]{} (2014) 039 \[arXiv:1312.3529 \[astro-ph.CO\]\]. A. Linde, arXiv:1402.0526 \[hep-th\]. A. de la Macorra and S. Lola, Phys. Lett. B [**373**]{} (1996) 299 \[hep-ph/9511470\]. M. Fairbairn, L. Lopez Honorez and M. H. G. Tytgat, Phys. Rev.
D [**67**]{} (2003) 101302 \[hep-ph/0302160\]. P. A. R. Ade [*et al.*]{} \[BICEP2 Collaboration\], arXiv:1403.3985 \[astro-ph.CO\]. A. A. Starobinsky, Phys. Lett. B [**91**]{} (1980) 99; Sov. Astron. Lett. [**9**]{} (1983) 302. F. L. Bezrukov and M. Shaposhnikov, Phys. Lett. B [**659**]{} (2008) 703 \[arXiv:0710.3755 \[hep-th\]\]. A. D. Linde, Phys. Lett. B [**175**]{} (1986) 395; A. S. Goncharov, A. D. Linde and V. F. Mukhanov, Int. J. Mod. Phys. A [**2**]{} (1987) 561. [^1]: e-mail: [k.dimopoulos1@lancaster.ac.uk]{} [^2]: unless excited states are assumed instead of the usual Bunch-Davies vacuum [@ash]. [^3]: The form of the superpotential in Eq. (\[W0\]) is dictated by the requirement that it gives rise to the envisaged scenario. The simplifying relation reduces the parameters and can have a physical interpretation for specific values of , e.g. (see below). However, the physical meaning of Eq. (\[W\]) falls beyond the scope of this letter; in a sense, it can be thought of as the definition of shaft inflation.
--- abstract: 'The proton’s transverse polarization structure is examined in terms of the Lorentz-covariant Pauli-Lubanski vector in QCD. We find that there are leading, subleading, and next-to-subleading partonic contributions in the light-front system of coordinates. The subleading and next-to-subleading contributions are related to the leading one through Lorentz symmetry, and the leading contribution obeys a simple partonic angular momentum sum rule that gives a clear physical interpretation to a relation known previously.' author: - Xiangdong Ji - Xiaonu Xiong - Feng Yuan title: Transverse Polarization of the Nucleon in Parton Picture --- Introduction ============ The proton spin structure has been one of the main focuses of hadron physics research in the last two decades [@Boer:2011fh]. The major goal is to understand how the spin is decomposed into different components which can be interpreted as the contributions from its constituents, quarks and gluons. In the past, studies have mainly focused on the proton helicity in the set-up in which the proton travels in the $z$-direction, with its spin polarization along the same direction. In this case, the proton helicity is related to the $z$-component of the angular momentum (AM) operator $J_z$. In Ref. [@Jaffe:1989jz], Jaffe and Manohar constructed a gauge-dependent helicity sum rule by investigating the AM density tensor built from the basic degrees of freedom in quantum chromodynamics (QCD). A gauge-invariant decomposition of the proton helicity was advocated by one of the authors [@Ji:1996ek], which emphasizes the experimental accessibility of the individual contributions. The relevant generalized parton distributions (GPDs) can be measured through deeply virtual Compton scattering (DVCS) processes in lepton-nucleon collisions. Besides gauge invariance, it has also been shown that the helicity decomposition is frame independent [@Ji:1997pf].
In particular, it works for the spin decomposition in the rest frame. Then, by rotational invariance, it shall also work for the transverse spin. However, this is not so simple. Transverse AM does not commute with the boost operator; therefore, the partonic structure for the transverse polarization must be different from that of the helicity. So far, there have been confusing and conflicting statements on the transverse spin sum rule in the literature. In Ref. [@Bakker:2004ib], a transverse spin sum rule was proposed which includes the contribution from the quark transversity distributions. However, since the quark transversity is a chiral-odd object [@Jaffe:1991ra], the proposed sum rule is in direct contradiction with the chiral-even property of the nucleon spin and AM. Meanwhile, an impact parameter space description for the GPDs was used by Burkardt to study the transverse AM and spin sum rule [@Burkardt:2002hr]. A similar GPD spin sum rule was derived, where the discussions are restricted to zero residual momentum in the infinite momentum frame (IMF) and an additional contribution has to be included [@Burkardt:2005hp]. More recently, Leader proposed another transverse spin sum rule [@Leader:2011cr] which is different from the GPD spin sum rule, and differs from Burkardt’s derivation in the IMF as well. The current state of affairs calls for a thorough investigation of the sum rule for the transverse polarization, and this is the main goal of the present paper. In order to obtain a boost-invariant spin sum rule for a moving nucleon (along the $\hat z$ direction), we construct the polarization through the Lorentz-covariant Pauli-Lubanski vector. Various components of the AM density tensor are found to contribute to the transverse polarization. In light-front coordinates, the contributions correspond to leading (twist-two), subleading (twist-three), and next-to-subleading (twist-four) parton physics.
However, due to Lorentz symmetry, all twist-three and twist-four contributions are related to the leading one. Thus we establish that the sum rule derived in [@Ji:1997pf] is actually a frame-independent, leading-twist, partonic sum rule for transverse polarization. A brief summary of our results has appeared in [@Ji:2012sj]. Spin Structure of the Nucleon and Frame Dependence ================================================== In this section, we consider carefully what is meant by the spin structure of the nucleon and its frame dependence. From this discussion, it becomes clear how we should proceed with studying the partonic interpretation of the transverse polarization. It is easiest to discuss the spin structure in the nucleon’s rest frame, where $\vec{P}=0$ and the nucleon is polarized along a certain axis chosen as the $z$-direction. We have, $$J_z \left|\vec{P}=0, \frac{1}{2}\right\rangle = \frac{1}{2} \left|\vec{P}=0, \frac{1}{2}\right\rangle \ .$$ The angular momentum projection $J_z$ in QCD can be decomposed into a sum of gauge-invariant, local operators $$J_z= \sum_i J_z^i \ .$$ One can take expectation values of $J_z^i$ in the above state, and thereby obtain a sum rule or a decomposition $1/2=\sum_i \langle J_z^i\rangle$. The individual contributions $\langle J_z^i\rangle$ can be calculated in lattice QCD or in nucleon models without recourse to partons. Note that in general, the operator $J_z^i$ is not conserved and hence is renormalization scale-dependent. Nor does it obey the commutation relation of the full charge $J_z$. This is the price one has to pay in quantum field theory when discussing the spin structure, or the structure of any other conserved quantity, such as mass and momentum. To study the frame dependence of the spin structure, it is desirable to boost the nucleon to a finite momentum.
This is easily done in the $z$-direction because the boost operator $K_z$ commutes with $J_z$, $$[K_z, J_z]=0 \ .$$ Thus Eq.(1) is unchanged for a finite or infinite $P_z$. On the other hand, it is not so clear that $\langle J_z^i\rangle$ is independent of $P_z$. It was shown [@Ji:1997pf] that, so long as $J_z^i$ is defined from a piece of the AM density $M^{\mu\alpha\beta}_i$ with the same Lorentz transformation property as the whole tensor itself, the answer is affirmative. To study parton physics related to $\langle J_z^i\rangle$, one has to boost the nucleon to the IMF. As an alternative to such boosts, one can define the operators in light-front coordinates, where space and time undergo the special transformation $\xi^\pm = (\xi^0 \pm \xi^3)/\sqrt{2}$. In this case, the charge of a symmetry is defined from the + component of a current, rather than the 0 component. The parton picture of $\langle J_z^i \rangle$ emerges when the operators are interpreted in terms of parton degrees of freedom, in particular, when related to some parton distributions. In Ref. [@Ji:1996ek], a sum rule has been derived for $\langle J_z^{q,g}\rangle$ from the GPD’s $E^{q,g}$ and $H^{q, g}$, $$\langle J_z^{q,g}\rangle = \frac{1}{2} \int dx x [H^{q,g}(x,\eta=0,t=0) + E^{q,g}(x,\eta=0,t=0)] \ .$$ This result was derived from the matrix elements of the QCD energy momentum tensor $T^{\mu\nu}$, and has in principle no direct bearing on the partonic interpretation of the AM contributions themselves. Therefore, it is sometimes also called a relation instead of a sum rule, in the sense that it is not clear that the integrand represents the AM density of the partons. In particular, it has no immediate connection with the partonic contributions to the nucleon helicity. However, there have been some early attempts to correlate the above sum rule with a parton picture.
In [@Hoodbhoy:1998yb], the parton AM density $J(x)=(1/2)x(q(x) + E(x, 0, 0))$ was motivated from the generalization of the AM density tensor $M^{\mu\alpha\beta}$. The quark orbital angular momentum (OAM) contribution to the nucleon helicity, $L_q(x)$, has been identified as $J(x) - \Delta\Sigma(x)/2$ with explicit parton OAM operators, where $\Delta\Sigma(x)$ is the quark helicity density. In [@Burkardt:2002hr], it is found that the sum rule derived in [@Ji:1996ek] has a more natural connection with the parton contribution to the transverse spin. This observation closely follows from the derivation of the sum rule as a spin-flip nucleon matrix element, and points to the fact that the transverse spin has a more natural partonic interpretation than the longitudinal one. Transverse spin parton sum rules were also considered by Leader and collaborators [@Bakker:2004ib; @Leader:2011cr]. However, a full understanding of the parton structure of the transverse polarization turns out to be rather complicated because $$[J_{x, y}, K_z]\ne 0 \ .$$ The transverse AM operators do not commute with the boost operators; therefore, a parton picture for the transverse spin AM itself will depend on the momentum of the nucleon.
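The two commutation relations underlying this discussion, $[K_z, J_z]=0$ but $[J_{x,y}, K_z]\ne 0$, can be checked explicitly in the vector representation of the Lorentz group. A minimal sketch (not from the paper), using the standard generators $(J^{\mu\nu})^a{}_b = i(\eta^{\mu a}\delta^\nu_b - \eta^{\nu a}\delta^\mu_b)$:

```python
import numpy as np

# 4x4 Lorentz generators in the vector representation, metric (+,-,-,-).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def J(mu, nu):
    """Generator J^{mu nu}, (J^{mu nu})^a_b = i(eta^{mu a} d^nu_b - eta^{nu a} d^mu_b)."""
    M = np.zeros((4, 4), dtype=complex)
    for a in range(4):
        for b in range(4):
            M[a, b] = 1j * (eta[mu, a] * (nu == b) - eta[nu, a] * (mu == b))
    return M

Jz = J(1, 2)  # rotation about z
Jx = J(2, 3)  # rotation about x
Kz = J(0, 3)  # boost along z

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Kz, Jz), 0))  # True: helicity survives boosts along z
print(np.allclose(comm(Jx, Kz), 0))  # False: transverse AM mixes under z-boosts
```

This is exactly why the longitudinal spin decomposition is boost invariant while the transverse one requires the Pauli-Lubanski construction introduced below.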
To understand properly the spin of a relativistic particle in a way consistent with the special theory of relativity, one needs to start with the covariant spin four-vector $W_\mu$, the so-called Pauli-Lubanski vector, $$W_\mu=-\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}P^\nu J^{\rho\sigma}\ ,\label{pl}$$ where $J^{\rho\sigma}$ is the angular momentum tensor which, in light-front coordinates, can be calculated from the angular momentum density $M^{\mu\nu\lambda}$, $$J^{\rho\sigma}=\int d\xi^-d^2\xi_\perp M^{+\rho\sigma}(\xi) \ .$$ A nucleon state $|PS\rangle$ with momentum $P_\mu$ and polarization $S_\mu$ (which is usually normalized as $S^2=-M_N^2$) is an eigenstate of $W^\mu S_\mu$, $$(W^\mu \cdot S_\mu)\left|PS\right\rangle = \frac{1}{2} (-M_N^2) \left|PS\right\rangle \ .$$ Therefore, to study the transverse polarization, one has to start with the Pauli-Lubanski vector along the transverse direction, $$W^\perp=\epsilon^{-+\perp\sigma}\left(P^+J^{-\sigma}-P^-J^{+\sigma}\right) \ , \label{wt}$$ which involves not only the AM $J^{+\sigma}$, but also the boost operator $J^{-\sigma}$. Therefore, a frame-independent transverse polarization is achieved through the presence of the boost generator. Only in the rest frame of the nucleon does the boost operator drop out, so that $W^\perp$ reduces to the transverse AM. Therefore, a study of transverse polarization cannot avoid discussing the matrix element of the boost operator. Structure of Transverse Polarization from Energy-Momentum Tensor ================================================================ From the general discussions in the previous section, we know that to obtain a transverse polarization sum rule, we have to start from the Pauli-Lubanski vector. In this section, we will study the structure of transverse polarization from the matrix elements of the AM density and the energy-momentum tensor.
We assume the AM density $M^{\mu\nu\lambda}$ can be decomposed into a sum of different parts, $$M^{\mu\nu\lambda} = \sum_i M_i^{\mu\nu\lambda} \ .$$ The following discussion is applicable to an individual part, $M_i^{\mu\nu\lambda}$, $$M^{\mu\nu\lambda}_i(x)=x^\nu T^{\mu\lambda}_i-x^\lambda T^{\mu\nu}_i \ ,$$ which can be further defined from the quark and gluon contributions to the energy-momentum tensor in QCD, $$\begin{aligned} T^{\mu\nu}=T^{\mu\nu}_q+T^{\mu\nu}_g \ ,\label{tmu}\end{aligned}$$ where $$\begin{aligned} T^{\mu\nu}_q&=&\frac{1}{2}\left[\bar\psi\gamma^{(\mu}i\overrightarrow{D}^{\nu)}\psi+\bar\psi\gamma^{(\mu}i\overleftarrow{D}^{\nu)}\psi\right]\nonumber\\ T^{\mu\nu}_g&=&\frac{1}{4}F^2g^{\mu\nu}-F^{\mu\alpha}{F^{\nu}}_{\alpha} \ .\label{tmuqg}\end{aligned}$$ Thus we ultimately relate the expectation value of the Pauli-Lubanski operator in the transversely-polarized nucleon state to the matrix elements of the energy-momentum tensor. Following Ref. [@Jaffe:1989jz], we define the off-forward matrix element of $M^{\mu\nu\lambda}$ in the nucleon state $${\cal M}^{\mu\nu\lambda}_i(k)=\int d^4x e^{ik\cdot x} \langle P'S'|M^{\mu\nu\lambda}_i(x)|PS\rangle \ ,$$ where in the end we will take $P'=P$ and $S'=S$. After subtracting the total derivative, we obtain $$\begin{aligned} {\cal M}^{\mu\nu\lambda}_i(k)=-i(2\pi)^4\delta^{4}(k+P'-P)\left\{ \frac{\partial}{\partial k_\nu}\left[ \langle P'S|T^{\mu\lambda}_i(0)|PS\rangle\right]- \left(\nu\leftrightarrow \lambda\right) \right\}\ .\end{aligned}$$ General expressions for ${\cal M}^{\mu\nu\lambda}$ have been discussed in Ref. [@Jaffe:1989jz].
To complete the calculation, we need the parameterization of the matrix element of the energy-momentum tensor [@Ji:1996ek], $$\begin{aligned} \langle P'S|T^{\mu\nu}_i(0)|PS\rangle&=&\bar U(P') \left[A_i(\Delta^2)\gamma^{(\mu}\bar P^{\nu)} +B_i(\Delta^2)\frac{\bar P^{(\mu}i\sigma^{\nu)\alpha}\Delta_\alpha}{2M_N}\right.\nonumber\\ &&\left.+C_i(\Delta^2)\frac{\Delta^\mu\Delta^\nu-g^{\mu\nu}\Delta^2}{M_N} +\bar C_i(\Delta^2)M_Ng^{\mu\nu}\right]U(P ) \ , \label{energy}\end{aligned}$$ where $\bar P=(P+P')/2$, $\Delta=P'-P$, and $A$, $B$, $C$ and $\bar C$ are form factors. To calculate the first-order derivative, we expand the above result to linear order in $\Delta^\alpha$ [@Jaffe:1989jz]. We can immediately drop the contribution from the $C$ form factor because it is proportional to $\Delta^2$. As we shall see, the contribution from the $\bar C_i$ form factors is related to a twist-four contribution which cancels between quarks and gluons. Therefore, in the following discussions, we only keep the $A$ and $B$ form factors in the above equation. To further simplify this equation, we apply the Gordon identity to get $$\begin{aligned} \langle P'S|T^{\mu\nu}_i(0)|PS\rangle=\bar U(P') \left[A_i(\Delta^2)\frac{\bar P^\mu\bar P^{\nu}}{M_N} +\left(A_i(\Delta^2)+B_i(\Delta^2)\right)\frac{\bar P^{(\mu}i\sigma^{\nu)\alpha}\Delta_\alpha}{2M_N}\right]U(P ) \ .\label{energytensor}\end{aligned}$$ It is clear that the second term of Eq. (\[energytensor\]) depends on $\Delta^\alpha$ explicitly, so its contribution can be easily evaluated. On the other hand, the first term of Eq. (\[energytensor\]) is a little more involved. In most cases, it does not contain a term linear in $\Delta_\alpha$. This is the case if the nucleon is in the rest frame. It is also true for the longitudinal polarization in a moving frame (along the $\hat z$ direction) with $P_z\neq 0$.
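The Gordon identity invoked above, $\bar U(P')\gamma^\mu U(P) = \bar U(P')\left[(P+P')^\mu + i\sigma^{\mu\nu}\Delta_\nu\right]U(P)/2M_N$, can be verified numerically for explicit on-shell Dirac spinors. A minimal sketch (illustrative, not from the paper) in the Dirac basis, with $M_N = 1$ and momenta along $\hat z$:

```python
import numpy as np

# Gamma matrices in the Dirac basis, metric (+,-,-,-).
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g = [np.block([[I2, 0 * I2], [0 * I2, -I2]])] + \
    [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def u(p3, chi, m=1.0):
    """On-shell Dirac spinor for momentum (0, 0, p3), normalised to ubar u = 2m."""
    E = np.hypot(p3, m)  # sqrt(p3^2 + m^2)
    return np.concatenate([np.sqrt(E + m) * chi,
                           np.sqrt(E + m) * (p3 / (E + m)) * (sig[2] @ chi)])

m = 1.0
chi = np.array([1.0, 0.0], complex)          # spin-up two-spinor
uP, uPp = u(0.3, chi, m), u(0.7, chi, m)     # arbitrary illustrative momenta
P = np.array([np.hypot(0.3, m), 0, 0, 0.3])
Pp = np.array([np.hypot(0.7, m), 0, 0, 0.7])
q_lower = eta @ (Pp - P)                     # Delta with lowered index

ubar = uPp.conj() @ g[0]
for mu in range(4):
    lhs = ubar @ g[mu] @ uP
    # sigma^{mu nu} Delta_nu with sigma^{mu nu} = (i/2)[gamma^mu, gamma^nu]
    sigma_q = sum((0.5j) * (g[mu] @ g[nu] - g[nu] @ g[mu]) * q_lower[nu]
                  for nu in range(4))
    rhs = ubar @ (((P + Pp)[mu] * np.eye(4) + 1j * sigma_q) @ uP) / (2 * m)
    assert np.isclose(lhs, rhs)
print("Gordon identity verified componentwise")
```

The same decomposition is what turns the $\gamma^{(\mu}\bar P^{\nu)}$ structure of Eq. (\[energy\]) into the $\bar P^\mu \bar P^\nu$ and $\sigma^{\nu)\alpha}\Delta_\alpha$ terms of Eq. (\[energytensor\]).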
However, it does contribute a linear term for the transverse polarization with nonzero $P_z$, and its contribution depends on $P_z$. This indicates that the contribution of this term is not boost invariant. This was first realized in Ref. [@Bakker:2004ib]. A transverse polarization sum rule starts from the expression for the transverse component of the Pauli-Lubanski vector. In anticipation of studying the partonic structure, we use light-front coordinates. Therefore, we have, $$W^{\perp}_i=\epsilon^{-+\perp\sigma}\left(P^+J^{-\sigma}_i-P^-J^{+\sigma}_i\right) \ , \label{wt}$$ which involves ${\cal M}^{++\perp}_i$ and ${\cal M}^{+-\perp}_i$, $$\begin{aligned} M^{++\perp}_i(x)&=&x^+T^{+\perp}_i-x^\perp T^{++}_i \ ,\\ M^{+-\perp}_i(x)&=&x^-T^{+\perp}_i-x^\perp T^{+-}_i\ .\end{aligned}$$ From the above equations, we need the matrix elements of the energy-momentum tensors $T^{++}_i$, $T^{+\perp}_i$, and $T^{+-}_i$. Clearly, $T^{++}_i$ has a parton density interpretation as discussed in Ref. [@Ji:2012sj], whereas the other two involve twist-three and twist-four effects which do not. This indicates that a complete picture of the transverse polarization in partons is complicated. This situation is quite normal in light-front coordinates. For example, for any vector $V^\mu$, it is the plus component $V^+$ that has the simple partonic interpretation, whereas $V^\perp$ and $V^-$ are subleading and next-to-subleading in light-cone power counting. When discussing the magnitude of the vector, one may focus on the leading component only and use Lorentz symmetry to simplify the parton picture for the other components when possible. Indeed, as we shall see, in the case of the transverse Pauli-Lubanski vector, one can use the symmetry argument to relate the subleading contributions to the leading one and develop a simple parton picture.
To study the contributions from the different components, we first consider $$\begin{aligned} \langle P'S'|T^{++}_i(0)|PS\rangle&=&\bar U(P') \left[A_i(\Delta^2)\frac{\bar P^+\bar P^{+}}{M_N} +\left(A_i(\Delta^2)+B_i(\Delta^2)\right)\frac{\bar P^{+}i\sigma^{+\alpha}\Delta_\alpha}{2M_N}\right]U(P )\ ,\label{pp}\\ \langle P'S'|T^{+\perp}_i(0)|PS\rangle&=&\bar U(P') \left[ \left(A_i(\Delta^2)+B_i(\Delta^2)\right)\frac{\bar P^{(+}i\sigma^{\perp)\alpha}\Delta_\alpha}{2M_N}\right]U(P )\ , \\ \langle P'S'|T^{+-}_i(0)|PS\rangle&=&\bar U(P') \left[A_i(\Delta^2)\frac{\bar P^+\bar P^{-}}{M_N} +\left(A_i(\Delta^2)+B_i(\Delta^2)\right)\frac{\bar P^{(+}i\sigma^{-)\alpha}\Delta_\alpha}{2M_N}\right]U(P ) \ .\label{pm}\end{aligned}$$ It is easy to see that the first term of Eq. (\[pp\]) cancels the first term of Eq. (\[pm\]) in the contribution to $W_\perp$ in Eq. (\[wt\]). This cancellation allows one to see a simple boosting property of $W_\perp$ at different longitudinal momenta. In the following discussion, we will neglect both terms. We further notice that the second term of Eq. (\[pm\]) vanishes as well, because it is proportional to $P^+P^--P^-P^+$ due to the antisymmetric nature of $\sigma^{\mu\nu}$. A possible contribution from the $\bar C$ form factor cancels between quarks and gluons because $\bar C_q(0) +\bar C_g(0) = 0$. Therefore, there is no contribution to $W_{\perp i}$ from the energy-momentum tensor $T^{+-}_i$. We are left with the contributions from $T^{++}_i$ and $T^{+\perp}_i$. Expanding these results to linear order in $\Delta_\alpha$, we obtain, $$\begin{aligned} \langle P'S'|T^{++}_i(0)|PS\rangle&=&\left[\frac{A_i(0)+B_i(0)}{2}\right]\frac{2(\bar P^{+})^2}{M_N^2}\epsilon^{-+\alpha\beta}(i\Delta_\alpha) S_\beta\ ,\\ \langle P'S'|T^{+\perp}_i(0)|PS\rangle&=&\left[\frac{A_i(0)+B_i(0)}{2}\right]\frac{(\bar P^{+})^2}{M_N^2}\epsilon^{-\perp\alpha\beta}(i\Delta_\alpha) S_\beta\ ,\end{aligned}$$ where we have dropped the first term of Eq.
(\[pp\]) as we commented above. Applying the above expansion results, we obtain the angular momentum tensor, $$\begin{aligned} {\cal M}^{++\perp}_i&=&(2\pi)^4\delta^{(4)}(0)\left[\frac{A_i(0)+B_i(0)}{2}\frac{3}{2}\right]\frac{2(\bar P^{+})^2S^{\perp'}}{M_N^2} \ , \\ {\cal M}^{+-\perp}_i&=&-(2\pi)^4\delta^{(4)}(0)\left[\frac{A_i(0)+B_i(0)}{2}\frac{1}{2}\right]\frac{2\bar P^{+}\bar P^-S^{\perp'}}{M_N^2} \ ,\end{aligned}$$ where $S^{\perp'}=\epsilon^{-+\perp\alpha}S^\alpha$. Substituting these results into Eq. (\[wt\]), we find that $$W^{\perp}_i=\frac{A_i(0)+B_i(0)}{2}S^\perp \ .$$ Thus the contribution of $T^{+\perp}_i$ is crucial to obtain the complete result for the transverse polarization. To demonstrate this more clearly, we show separately the above contributions from $T^{++}_i$ and $T^{+\perp}_i$, $$\begin{aligned} W^{\perp}_i |_{T^{++}_i}&=&\frac{A_i(0)+B_i(0)}{4}S^\perp \ , \\ W^{\perp}_i |_{T^{+\perp}_i}&=&\frac{A_i(0)+B_i(0)}{4}S^\perp \ .\end{aligned}$$ Thus $T^{++}_i$ and $T^{+\perp}_i$ each contribute one half of the transverse polarization. This is a simple consequence of Lorentz symmetry and cannot be obtained without including the boost operator in the Pauli-Lubanski vector. The above result will be used to seek a partonic interpretation of the transverse spin sum rule in the next section. Partonic Sum Rule for Transverse Polarization ============================================= The discussion in the previous section allows one to derive a simple partonic sum rule for the transverse polarization. Although $W_\perp$ receives contributions from the leading $T^{++}$ and the subleading $T^{+\perp}$ and $T^{+-}$, the unwanted Lorentz structure from $T^{+-}$ cancels that of $T^{++}$ and drops out of the discussion. The contribution from $T^{+\perp}$ is then equal to that of $T^{++}$ by Lorentz symmetry. We notice that the partonic content of $T^{+\perp}$ is complicated, since it involves a twist-three parton correlation which we will not discuss in this paper.
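The vanishing of the second term of Eq. (\[pm\]) used above is an instance of an elementary identity: the full contraction of a symmetric tensor with an antisymmetric one is zero. A minimal numerical sketch of this identity (the matrices below are arbitrary stand-ins, not the physical tensors):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

S = 0.5 * (M + M.T)   # symmetric part, stand-in for the symmetrized momentum pair
A = 0.5 * (M - M.T)   # antisymmetric part, stand-in for sigma^{mu nu}

# Full contraction S^{mu nu} A_{mu nu} vanishes identically.
contraction = np.einsum('ij,ij->', S, A)
print(abs(contraction) < 1e-12)  # True
```

Each index pair contributes $S_{ij}A_{ij}+S_{ji}A_{ji}=S_{ij}A_{ij}-S_{ij}A_{ij}=0$, which is the same mechanism that kills the $P^+P^--P^-P^+$ structure above.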
However, as we demonstrated in the last section, the two terms contribute fixed, equal amounts to the transverse polarization; their contributions are tied together by Lorentz symmetry rather than being dynamically independent. Therefore, in this section, we will focus on the leading twist-two part of the $T^{++}$ contribution, which yields a frame-independent, simple parton picture of the transverse polarization. We first specialize the discussion of the previous section to the quark and gluon parts of the energy-momentum tensor, $T_q^{++}$ and $T_g^{++}$, which contribute to the AM density, $$\begin{aligned} {\cal M}_{q,g}^{++\perp}|_{T^{++}}&=&(2\pi)^4\delta^{(4)}(0)\left[\frac{A_{q,g}(0)+B_{q,g}(0)}{2}\right]\frac{2(\bar P^{+})^2S^{\perp'}}{M_N^2} \ ,\end{aligned}$$ where the form factors have been defined before. Their contribution to the transverse polarization sum rule is $$W^\perp_{q,g}|_{T^{++}}=\frac{A_{q,g}(0)+B_{q,g}(0)}{4}S^\perp \ . \label{tranpp}$$ The explicit expressions for the quark and gluon energy-momentum tensors $T^{++}_{q,g}$ are $$\begin{aligned} T^{++}_q&=&\frac{1}{2}\left[\bar \psi\gamma^+\overrightarrow{D}^+\psi+\bar\psi\overleftarrow{D}^+\gamma^+\psi\right]\ ,\\ T^{++}_g&=&F^{+i}F^{+i} \ ,\end{aligned}$$ where $i=1,2$. To explore the partonic picture for these contributions, we first calculate the parton momentum density when the nucleon is transversely polarized, $$\rho_q^{+} (x, \xi,S^\perp) = x\int \frac{d\lambda}{4\pi} e^{i\lambda x} \left\langle PS^\perp\left|\overline \psi\left(-\frac{\lambda n}{2},\xi_\perp\right)\gamma^+\psi\left(\frac{\lambda n}{2},\xi_\perp\right) \right|PS^\perp\right\rangle \ ,$$ where $\xi_\perp$ denotes the spatial transverse coordinates, and $n$ is an auxiliary light-cone vector with $n^2=0$ and $n\cdot P=1$. This function describes the longitudinal momentum distribution of quarks carrying momentum fraction $x$ at coordinate $\xi_\perp$ in the transverse plane.
Naively, because of translational symmetry in the transverse plane, $ \rho_q^+$ should have no dependence on $\xi_\perp$. However, there is a subtle term, distributional in the mathematical sense, which can only be revealed from the forward limit of an off-forward matrix element. Using the GPDs defined through $$\begin{aligned} &&\int \frac{d\lambda}{2\pi}e^{i\lambda x}\left\langle P'|\bar \psi\left(-\frac{\lambda n}{2}\right)\gamma^+\psi\left(\frac{\lambda n}{2}\right) |P\right\rangle \nonumber\\ &&~~ = H_q(x,\eta,\Delta)\bar U(P')\gamma^+U(P )+E_q(x,\eta,\Delta)\bar U(P')\frac{i\sigma^{+\alpha}\Delta_\alpha}{2M_N}U(P ) \ ,\end{aligned}$$ where $\eta$ is the skewness parameter, a transverse-polarization dependent term appears, $$\rho_q^{+} (x, \xi,S^\perp)/P^+= xH_q(x,0,0)+\left[\frac{xH_q(x,0,0)+xE_q(x,0,0)}{2}\right]\lim_{\Delta_\perp\rightarrow 0}\frac{S^{\perp'}}{M_N^2} \partial^{\perp_\xi} e^{i\xi_\perp\Delta_\perp} \ ,$$ where we have neglected a contribution from the scalar term in the Gordon identity. This term is canceled by a similar contribution from the twist-four operators, as we discussed before. In a sense, $\rho^+$ provides the joint distribution of partons with longitudinal momentum fraction $x$ and transverse coordinate $\xi_\perp$. Upon integrating over $\xi_\perp$, the second term drops out, and we recover the usual momentum distribution of the quarks.
On the other hand, if we integrate over $\xi$ with a weight $\xi^\perp$, the first term drops out, and the second term leads to the AM contribution from quarks with fixed longitudinal momentum $x$, $$\begin{aligned} {\cal M}^{++\perp}_q(x)|_{T^{++}} &=&P^+(2\pi)^2\delta^{(2)}(0) \int d^2\xi \xi^{\perp} \rho_q^{+}(x, \xi,S^\perp)\nonumber\\ & =& (2\pi)^4\delta^{(4)}(0)\frac{x}{2}\frac{H_q(x,0,0)+E_q(x,0,0)}{2} \frac{2(\bar P^{+})^2S^{\perp'}}{M_N^2} \ ,\end{aligned}$$ and the corresponding contribution to the transverse polarization is $$\begin{aligned} W^\perp_{q}(x)|_{T^{++}}&=&\frac{M_N^2}{2P^+(2\pi)^2\delta^{(2)}(0)}\int d^2\xi \xi^{\perp'} \rho_q^{+}(x, \xi,S^\perp)\nonumber\\ &=&S^\perp\frac{x}{2}\frac{H_q(x,0,0)+E_q(x,0,0)}{2}\ . \label{wtq}\end{aligned}$$ Therefore, taking into account the equal contribution from $T^{+\perp}$ required by Lorentz symmetry, we have shown that quarks with longitudinal momentum $x$ contribute $S_q^\perp(x) = (x/2)\left(H_q(x,0,0)+E_q(x,0,0)\right)$ to the transverse polarization. By integrating over $x$, we obtain the total contribution $$S_q^\perp = \frac{1}{2}\int dx\, x \left(H_q(x,0,0) + E_q(x,0,0)\right) \ ,$$ which is a partonic sum rule for the transverse polarization. Similarly, for the gluons, we define the longitudinal momentum density $$\rho_g^{+} (x, \xi_\perp,S^\perp) = \int \frac{d\lambda}{2\pi} e^{i\lambda x} \langle PS^\perp|F^{+i}(-\frac{\lambda n}{2},\xi_\perp)F^{+i}(\frac{\lambda n}{2},\xi_\perp)|PS^\perp\rangle \ ,$$ in terms of which the gluon parton with momentum fraction $x$ contributes to the transverse polarization density $$\begin{aligned} W^\perp_{g}(x)|_{T^{++}}&=&\frac{M_N^2}{2P^+(2\pi)^2\delta^{(2)}(0)}\int d^2\xi \xi^{\perp'} \rho_g^{+}(x, \xi,S^\perp)\nonumber\\ &=&S^\perp\frac{x}{2}\left(H_g(x,0,0)+E_g(x,0,0)\right)\ . \label{wtg}\end{aligned}$$ The total contribution is a partonic sum rule, $$S_g^\perp = \frac{1}{2}\int dx\, x \left(H_g(x,0,0) + E_g(x,0,0)\right) \ ,$$ which is just the GPD sum rule in Eq. (4). Eqs. (\[wtq\],\[wtg\]) are the main results of our paper.
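As a cross-check, the sum rule reduces to $(A_q(0)+B_q(0))/2$ once the second-moment relations $A_q(0)=\int dx\, x H_q(x,0,0)$ and $B_q(0)=\int dx\, x E_q(x,0,0)$ are imposed. The sketch below verifies this numerically with purely illustrative toy profiles for $H_q$ and $E_q$ (hypothetical shapes, not fits to data):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)

def integral(f):
    # simple trapezoidal rule on the grid x
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Illustrative toy forward-limit GPDs (hypothetical, for the check only):
H = 6.0 * x * (1.0 - x)           # normalized so that integral(H) = 1
E = 12.0 * x * (1.0 - x) ** 2     # arbitrary smooth profile

# Second moments play the role of the form factors at zero momentum transfer.
A = integral(x * H)   # A_q(0), equals 1/2 for this toy H
B = integral(x * E)   # B_q(0), equals 2/5 for this toy E

# Transverse polarization sum rule: S_perp = (1/2) \int dx x (H + E) = (A + B)/2
S_perp = 0.5 * integral(x * (H + E))
print(abs(S_perp - 0.5 * (A + B)) < 1e-12)  # True
```

The equality holds by linearity of the $x$-moment; the numerical check simply confirms that the sum rule is the GPD second moment in disguise.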
They are derived from the angular momentum density in QCD, using the Lorentz symmetry argument for the transverse polarization decomposition of the nucleon state. They provide an intuitive picture of the nucleon transverse polarization in terms of quarks and gluons. Comparing with Leader’s transverse spin sum rule derived recently [@Leader:2011cr], we find that his result is frame dependent. As we discussed before, this frame dependence arises from the non-commutativity of the transverse AM and the longitudinal boost. Therefore, a frame-independent parton picture exists not for the transverse AM but for the Pauli-Lubanski spin vector. Comparing with Burkardt’s result [@Burkardt:2005hp], our derivation dispenses with the wave-packet construction and is valid for any residual momentum of the nucleon in the IMF, not just the rest frame. Summary ======= In conclusion, we have examined the transverse spin structure of the nucleon through a detailed study of the angular momentum density tensor. By constructing the transverse polarization through the Lorentz-covariant Pauli-Lubanski vector, we derived a sum rule that satisfies boost invariance and is consistent with the GPD spin sum rule derived earlier. We find that the leading contribution to the transverse AM has a simple partonic AM density interpretation. We thank M. Burkardt and E. Leader for comments. This work was partially supported by the U. S. Department of Energy via grants DE-FG02-93ER-40762 and DE-AC02-05CH11231 and a grant (No. 11DZ2260700) from the Office of Science and Technology in Shanghai Municipal Government. [99]{} D. Boer, [*et al.*]{}, arXiv:1108.1713 \[nucl-th\]. R. L. Jaffe and A. Manohar, Nucl. Phys. B [**337**]{}, 509 (1990). X. Ji, Phys. Rev. Lett.  [**78**]{}, 610 (1997). X. Ji, Phys. Rev. D [**58**]{}, 056003 (1998). B. L. G. Bakker, E. Leader and T. L. Trueman, Phys. Rev. D [**70**]{}, 114001 (2004). R. L. Jaffe and X. -D. Ji, Nucl. Phys. B [**375**]{}, 527 (1992). M. Burkardt, Int. J.
Mod. Phys. A [**18**]{}, 173 (2003). M. Burkardt, Phys. Rev. D [**72**]{}, 094020 (2005). E. Leader, arXiv:1109.1230 \[hep-ph\]. X. Ji, X. Xiong and F. Yuan, arXiv:1202.2843 \[hep-ph\]. P. Hoodbhoy, X. -D. Ji and W. Lu, Phys. Rev. D [**59**]{}, 014013 (1999) \[hep-ph/9804337\].
--- abstract: 'We study low temperature electron transport in p-wave superconductor-insulator-normal metal junctions. In diffusive metals the p-wave component of the order parameter decays exponentially at distances larger than the mean free path $l$. At the superconductor-normal metal boundary, due to spin-orbit interaction, there is a triplet to singlet conversion of the superconducting order parameter. The singlet component survives at distances much larger than $l$ from the boundary. It is this component that controls the low temperature resistance of the junctions. As a result, the resistance of the system strongly depends on the angle between the insulating boundary and the ${\bf d}$-vector characterizing the spin structure of the triplet superconducting order parameter. We also analyze the spatial dependence of the electric potential in the presence of the current, and show that the electric field is suppressed in the insulating boundary as well as in the normal metal at distances of order of the coherence length away from the boundary. This is very different from the case of the normal metal-insulator-normal metal junctions, where the voltage drop takes place predominantly at the insulator.' author: - 'A. Keles' - 'A. V. Andreev' - 'B. Z. Spivak' title: 'Electron transport in p-wave superconductor-normal metal junctions. ' --- Introduction ============ Electron transport in superconducting systems is very different from that in normal metals. Roughly speaking, the characteristic size of wave packets which carry current in metals is of the order of the Fermi wave length $\hbar/p_{F}$, while their charge is equal to the electron charge $e$. Here $p_{F}$ is the Fermi momentum. On the other hand, the quasiparticle wave packets are coherent superpositions of electrons and holes. This results in a characteristic size of the wave packets which is much larger than $\hbar/p_{F}$. 
The charge of the packets depends on the energy and can be very different from the electron charge $e$. This has important consequences for the electronic transport properties of superconductor-insulator-normal metal junctions. Transport properties of s-wave superconductor-insulator-normal metal junctions have been the subject of intensive experimental and theoretical research for decades, see, for example, Refs. . In this case the Cooper pairs can be constructed from two single-particle wave functions related by time reversal. At low temperatures the characteristic size of wave packets which carry current in the metal near the boundary is of the order of the normal metal coherence length $L_{T}=\sqrt{D/T}$, which turns out to be much larger than the elastic mean free path $l$. Here $D$ is the diffusion coefficient, and $T$ is the temperature. One of the consequences of the large size of the wave packets is that, in the presence of a current through the junction, the drop of the gauge-invariant potential $\Phi$ is pushed to distances of order $L_{T}$ away from the boundary, which is much larger than both the thickness of the insulator and the elastic mean free path $l$. This is quite different from the case of normal metal-insulator-normal metal junctions, where most of the potential drop occurs at the insulator. In this article we develop a theory of electron transport in p-wave superconductor-insulator-normal metal junctions. The best-known example of a p-wave superfluid is superfluid $^{3}He$. One of the leading candidates for p-wave pairing in electronic systems is $Sr_{2}RuO_{4}$.
There are numerous pieces of experimental evidence that the superconducting state of this material has odd parity, breaks time reversal symmetry and is fully gapped.[@nelson; @kidwingira; @Xia; @Luke; @rmp_pwave; @maeno; @kapitulnik] One of the simplest forms of the order parameter which satisfies these requirements is the chiral p-wave state $\Delta({\bf p}) \sim p_{x}\pm ip_{y}$, which has been suggested in Ref. . It is a two-dimensional analog of superfluid $^{3}He-A$.[@AndersonMorel] Another interesting scenario for the order parameter was suggested in Ref. . Chirality of the pairing wave function leads to edge states and spontaneous surface currents. While the quasiparticle tunneling spectroscopy[@Kashiwaya; @Laube; @Liu] confirmed the existence of the subgap states, the experiments in Ref.  did not confirm the existence of the edge supercurrent. (See Ref.  for a discussion about the consistency of the chiral p-wave phase for $Sr_{2}RuO_{4}$.) We think that electron transport experiments may clarify the situation. ![ Schematic representation of the superconductor- insulator-normal metal junction. The vector ${\hat z}$ is along the c-axis of the crystal, and $\vartheta_\mathbf{d}$ denotes the angle between the spin vector $\mathbf{d}$ and $\hat z$. The dependence of the voltage inside the normal metal on the distance from the boundary may be measured by a scanning tunneling microscope (STM). []{data-label="fig:geometry"}](geometry){width="55.00000%"} In this article we consider a p-wave superconductor-insulator-normal metal junction in the geometry in which the insulating boundary ($xy$-plane) is perpendicular to the c-axis of the layered chiral p-wave superconductor, as shown in Fig. (\[fig:geometry\]).
Although for simplicity we take the order parameter in the superconductor in the form[@AndersonMorel; @mineev] $$\label{eq:delta} \hat{\Delta}({\bf n})=\Delta({\bf n})(\mathbf{d}\cdot \boldsymbol{\sigma})i \sigma_2, \quad \Delta({\bf n}) =\Delta_0e^{i\varphi_\mathbf{n}},$$ our results also apply to more complicated forms of the order parameter, such as, for example, that in Ref. . Here $\mathbf{n}$ is a unit vector in the xy-plane, which points along $\mathbf{p}$, and $\varphi_\mathbf{n}$ is the azimuthal angle characterizing its direction $\mathbf{n} =(\cos\varphi_\mathbf{n}, \sin \varphi_\mathbf{n})$. At temperatures well below the gap, tunneling of single electrons from the metal to the superconductor is forbidden. Thus, similar to the s-wave case, the resistance of the junction is determined by the tunneling of electron pairs. Coherent pair tunneling gives rise to coherence between electrons and holes inside the normal metal. Electron-hole coherence in the metal is characterized by the anomalous Green function. The crucial difference between the s-wave and the p-wave cases is the following. In the p-wave case, in the absence of spin-orbit interaction, only the p-wave component is induced inside the normal metal. The latter is exponentially suppressed at distances larger than $l$ away from the superconductor-normal metal boundary. As a result, in the diffusive regime the conductance of the junction is significantly suppressed. In the presence of the spin-orbit interaction the p-wave order parameter in the superconductor is partially converted to the s-wave component inside the normal metal. At low temperatures, the s-wave component propagates into the metal to large distances from the boundary. Consequently, it is this component that determines the low temperature resistance of the system.
We show below that Rashba-type spin-orbit coupling at the boundary between the normal metal and the p-wave superconductor leads to a strong dependence of the resistance on the direction of the vector ${\bf d}$, which characterizes the spin structure of the order parameter in Eq. (\[eq:delta\]). Since the spin orientation of the order parameter may be controlled by an external magnetic field,[@knight] our predictions may be tested in experiment. Qualitatively, this dependence may be understood as follows. In our geometry (with the $z$-axis parallel to the c-axis of the crystal) the $z$-component of the total (orbital plus spin) angular momentum, $J_z=L_z + s_z$, is conserved during tunneling even in the presence of spin-orbit interaction. Therefore the s-wave singlet proximity effect in the normal metal is produced only by the pairs with $J_z=0$ in the p-wave superconductor. Since in our geometry the $z$-component of the orbital angular momentum in a $p_x+i p_y$ superconductor is $L_z=+1$, we conclude that only the part of the condensate with $s_z=-1$ induces the s-wave proximity effect in the normal metal. This condensate fraction corresponds to the components of the vector $\mathbf{d}$ lying in the xy-plane. Kinetic scheme for description of electron transport in p-wave superconductor-normal metal junctions. {#sec:kinetic_equations} ===================================================================================================== The conventional description of the electronic transport in superconductors based on the Boltzmann kinetic equation is valid when all spatial scales in the problem, including the mean free path $l$, are larger than the characteristic size of the electron wave packets. At low temperatures, $L_{T}\gg l$, this condition is violated and this approach cannot be used for the description of the effects mentioned above. The set of equations describing the electronic transport in s-wave superconductors in this situation has been derived in Ref. .
Below we review a modification of this approach for the case where the superconducting part of the junction is a p-wave superconductor. The central object of this approach is the matrix Green function in the Keldysh space $$\label{eq:gorkov_functions} \check{G}({\bf x}_1; {\bf x}_2)=\left( \begin{array}{cc} \hat{G}^R & \hat{G}^K \\ 0 & \hat{G}^A \end{array}\right).$$ The retarded, advanced and Keldysh Green functions in this equation can be written in the following form $$\begin{aligned} \hat{G}_{\ell\ell'}^R(\mathbf{x}_1;\mathbf{x}_2)&=&-i\theta(t_1-t_2) \langle \{\psi_\ell(\mathbf{x}_1), \psi^\dagger_{\ell'}(\mathbf{x}_2)\} \rangle,\\ \hat{G}_{\ell\ell'}^A(\mathbf{x}_1;\mathbf{x}_2)&=\!&i\theta(t_2-t_1) \langle \{\psi_\ell(\mathbf{x}_1), \psi^\dagger_{\ell'}(\mathbf{x}_2)\} \rangle,\\ \hat{G}^K_{\ell\ell'}(\mathbf{x}_1;\mathbf{x}_2)&=&-i \langle [\psi_\ell(\mathbf{x}_1), \psi^\dagger_{\ell'}(\mathbf{x}_2)] \rangle.\end{aligned}$$ Here $\mathbf{x}=(\mathbf{r},t)$ denotes the space-time coordinate, and the indices $\ell,\ell'=1...4$ label the four components of the fermion operator in the Nambu/spin space; $\psi_1=\psi_\uparrow$, $\psi_2=\psi_\downarrow$, $\psi_3=\psi_\uparrow^\dagger$, $\psi_4=\psi_\downarrow^\dagger$. Finally, the anticommutator and the commutator of operators $A$ and $B$ are denoted by $\{A,B\}$ and $[A,B]$ respectively. 
Introducing the new variables, $\mathbf{x}=(\mathbf{r},t)=(\mathbf{x}_1+\mathbf{x}_2)/2$, and $\mathbf{x}'=(\mathbf{r}',t')=\mathbf{x}_1-\mathbf{x}_2$, we can define the quasiclassical Green function by Fourier transforming $\check{G}({\bf x}_1; {\bf x}_2)$ with respect to $\mathbf{x}'$ and integrating over $\xi_\mathbf{p}=\varepsilon_\mathbf{p}-E_F$ as $$\label{eq:quasiclassical_function} \check{g}(\mathbf{x},\mathbf{n},\epsilon)= \frac{i}{\pi}\int d\xi_\mathbf{p} \int d^4x' e^{i\epsilon t'-i\mathbf{p}\mathbf{r}'} \tau_3 \check{G}(\mathbf{x}_1;\mathbf{x}_2).$$ Here $E_F$ is the Fermi energy, $\varepsilon_\mathbf{p}$ is the electron energy spectrum, and $\mathbf{n}$ is a unit vector labeling a location on the Fermi surface (for example, for a spherical Fermi surface it can be chosen as $\mathbf{n}=\mathbf{p}/|\mathbf{p}|$), and $\tau_3$ is the third Pauli matrix. In this paper, we will denote the Pauli matrices in the Nambu space by $\tau_i$, and the Pauli matrices in spin space by $\sigma_i$. The Keldysh space structure of the Green functions will be indicated explicitly when necessary. The quasiclassical Green’s function (\[eq:quasiclassical\_function\]) satisfies the normalization condition $$\label{eq:norm} \check{g}\check{g}=1,$$ which can be spelled out in terms of components in the Keldysh space as, $$\begin{aligned} \hat{g}^{(R,A)}\hat{g}^{(R,A)}=1\label{eq:norm_E},\\ \hat{g}^R\hat{g}^K+\hat{g}^K\hat{g}^A=0.\label{eq:norm_NE}\end{aligned}$$ The normalization condition Eq. (\[eq:norm\_NE\]) is satisfied for any Keldysh function of the form $$\label{eq:keldysh} \hat{g}^K=\hat{g}^R\hat{h}-\hat{h}\hat{g}^A.$$ The matrix $\hat{h}$ may be parameterized as[@LarkinOvchinnikov] $$\label{eq:LO_parametrization} \hat{h}=f_0\hat{\tau}_0+f_1\hat{\tau}_3.$$ Here $f_0$ and $f_1$ are respectively the odd and even in $\epsilon$ parts of the distribution function (see Ref.  for an alternative treatment). In this paper we only consider stationary situations. 
In this case the Green functions depend on the energy $\epsilon$ but not on the time $t$. If $k_{F}l\gg 1$, the Gorkov equation for the Green function in Eq. (\[eq:gorkov\_functions\]) can be reduced to the quasi-classical Eilenberger equations for the Green functions defined in Eq. (\[eq:quasiclassical\_function\]) [@LarkinOvchinnikov] $$i\mathbf{v}_F\cdot\boldsymbol{\nabla}\check{g}+ \left[ \epsilon\check{\tau}_3-\check{\Delta}(\mathbf{n})-\check{\Sigma}, \check{g} \right]=0. \label{eq:EilenbergerEq}$$ Here $\check{\Sigma}$ is the self-energy associated with impurity scattering. In the Born approximation $\check{\Sigma}=-i\langle\check{g}\rangle/2\tau_e$, where $\langle \ldots \rangle$ denotes the average over the solid angle in momentum space and $\tau_e$ is the elastic mean free time. The only difference of Eq. (\[eq:EilenbergerEq\]) from the conventional s-wave superconductor case is in the form of the order parameter, Eq. (\[eq:delta\]). We neglect the electron-electron interactions in the normal metal. As a result, in our approximation the order parameter vanishes inside the normal metal. This yields the following equations for the retarded, advanced and Keldysh Green functions: $$\begin{aligned} i\mathbf{v}_F\cdot\boldsymbol{\nabla}\hat{g}^{R,A}+ \epsilon[\hat{\tau}_3,\hat{g}^{R,A}] &=&[\hat{\Sigma}^{R,A},\hat{g}^{R,A}], \\ \label{eq:keldysh2} i\mathbf{v}_F\cdot\boldsymbol{\nabla}\hat{g}^{K} + \epsilon[\tau_3,\hat{g}^{K}] &=&\hat{\Sigma}^R\hat{g}^K + \hat{\Sigma}^K\hat{g}^A \nonumber \\ && - \hat{g}^R\hat{\Sigma}^K -\hat{g}^K\hat{\Sigma}^A.\end{aligned}$$ Multiplying Eq.
(\[eq:keldysh2\]) with $\tau_3$ and $\tau_0$ and taking the trace, and using the fact that $\mathrm{Tr} (\hat{g}^R-\hat{g}^A)=0$, one obtains the following equations for $f_1$ and $f_0$ $$\begin{aligned} \label{eq:f_1_transport} \mathrm{Tr}\left[\hat{\beta}\right]\mathbf{v}_F\cdot\boldsymbol{\nabla} f_1 &=& -\frac{1}{2\tau_e}f_0\mathrm{Tr} \left( \langle \hat{\alpha}\rangle\hat{\alpha}- [\langle \hat{g}^R\rangle,\hat{g}^R]+ [\langle \hat{g}^A\rangle,\hat{g}^A] \right) +\frac{1}{2\tau_e}\mathrm{Tr} \left[ \langle \hat{\alpha} f_0\rangle\hat{\alpha} \right]\nonumber\\ &-&\frac{1}{2\tau_e}f_1\mathrm{Tr} \left( \langle \hat{\alpha}\rangle\hat\beta- [\langle \hat g^R\rangle,\hat g^R]\hat\tau_3+ \hat\tau_3[\langle \hat g^A\rangle,\hat g^A] \right) +\frac{1}{2\tau_e}\mathrm{Tr} \left[ \langle \hat\beta f_1\rangle\hat\alpha \right],\\ \label{eq:f_0_transport} \mathrm{Tr}\left[\hat\beta\right]\mathbf{v}_F\cdot\boldsymbol{\nabla} f_0 &=& -\frac{1}{2\tau_e}f_0\mathrm{Tr} \left( \langle \hat\tau_3\hat\beta\hat\tau_3\rangle\hat\alpha- [\langle \hat g^R\rangle,\hat g^R]\hat\tau_3+ \hat\tau_3[\langle \hat g^A\rangle,\hat g^A] \right) +\frac{1}{2\tau_e}\mathrm{Tr} \left[ \langle\hat \alpha f_0\rangle\hat\tau_3\hat\beta\hat\tau_3 \right]\nonumber\\ &-&\frac{1}{2\tau_e}f_1\mathrm{Tr} \left( \langle\hat g^R\rangle\hat\beta\hat\tau_3 -\hat\tau_3\hat\beta\langle\hat g^A\rangle- [\langle\hat g^R\rangle,\hat g^R]+ [\langle\hat g^A\rangle,\hat g^A] \right) +\frac{1}{2\tau_e}\mathrm{Tr} \left[ \langle\hat \beta f_1\rangle\hat\tau_3\hat\beta\hat\tau_3 \right].\end{aligned}$$ Here we defined $\hat \alpha= \hat g^R- \hat g^A$ and $\hat\beta=\hat g^R\hat\tau_3-\hat\tau_3\hat g^A$. 
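The statement below Eq. (\[eq:norm\]) that the ansatz (\[eq:keldysh\]) satisfies Eq. (\[eq:norm\_NE\]) for an arbitrary matrix $\hat h$ uses only $(\hat g^{R})^2=(\hat g^{A})^2=1$. A minimal numerical sketch with $2\times 2$ stand-in matrices (illustrative only, not actual Green functions):

```python
import numpy as np

rng = np.random.default_rng(1)

def involution(a, b):
    """Return a 2x2 matrix m with m @ m = identity (a reflection-like matrix)."""
    m = a * np.array([[1.0, 0.0], [0.0, -1.0]]) + b * np.array([[0.0, 1.0], [1.0, 0.0]])
    return m / np.sqrt(a**2 + b**2)   # normalized so that m @ m = I

gR = involution(0.3, 1.1)           # stand-in for g^R with (g^R)^2 = 1
gA = involution(-0.7, 0.4)          # stand-in for g^A with (g^A)^2 = 1
h = rng.standard_normal((2, 2))     # arbitrary "distribution function" matrix

gK = gR @ h - h @ gA                # the ansatz of Eq. (eq:keldysh)

# Check the normalization condition g^R g^K + g^K g^A = 0:
residual = gR @ gK + gK @ gA
print(np.allclose(residual, 0.0))   # True
```

Algebraically, $\hat g^R\hat g^K+\hat g^K\hat g^A = (\hat g^R)^2\hat h - \hat h(\hat g^A)^2 = \hat h - \hat h = 0$, the cross terms canceling pairwise.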
The gauge-invariant potential and the electric current can be expressed in terms of the quasiclassical Keldysh Green functions as $$\begin{aligned} \label{eq:Phi_def} \Phi(\mathbf{r}) &=&\frac{1}{4e}\int d\epsilon\int d^2\mathbf{n} \mathrm{Tr}\{\hat{g}^K(\mathbf{r},\mathbf{n},\epsilon)\}\\ J(\mathbf{r}) &=& -\frac{e\nu_0}{4}\int d\epsilon\int d^2\mathbf{n} \mathbf{v}_F\mathrm{Tr} \{\hat{\tau}_3\hat{g}^K(\mathbf{r},\mathbf{n},\epsilon)\}. \label{eq:J_def}\end{aligned}$$ Here the integral over $\mathbf{n}$ denotes averaging over the Fermi surface, $d^2 \mathbf{n}=d \Omega_\mathbf{n}/4\pi$. We discuss the boundary conditions for the quasiclassical transport equations (\[eq:EilenbergerEq\]) - (\[eq:f\_0\_transport\]) in Sec. \[sec:boundary\_conditions\]. Boundary conditions for p-wave superconductor-normal metal interface {#sec:boundary_conditions} -------------------------------------------------------------------- The p-wave superconductivity is destroyed by elastic scattering processes when $l< \xi_0$, where $\xi_0$ is the zero temperature coherence length in a clean superconductor. Therefore we consider the case where the p-wave superconductor is relatively clean, $l\gg \xi_0$. For the same reason the p-wave proximity effect is exponentially suppressed in the metal at distances larger than $l$ from the boundary. On the other hand, in a spatially inhomogeneous system in the presence of spin-orbit interaction the p- and s-wave components of the anomalous Green functions are mixed. At low temperatures, the s-wave component induced by spin-orbit coupling extends into the metal to distances much larger than $l$, and determines the low temperature transport properties of the junction. Therefore spin-orbit coupling plays a crucial role in low temperature electron transport in normal metal–p-wave superconductor junctions. Though our results have a general character, in this article we assume that a Rashba-type spin-orbit coupling is present only at the boundary.
The corresponding potential energy at the boundary may be modeled by the form $V= (u_0\sigma_0+u_1\hat{z}\times\mathbf{p}_\parallel\cdot\boldsymbol{\sigma})\delta(z)$, where $\mathbf{p}_\parallel$ is the component of the electron momentum parallel to the boundary, and $\hat{z}$ is the unit vector normal to the boundary. We assume that $u_1\ll u_0$, and consider a disorder-free boundary, so that $\mathbf{p}_\parallel$ is conserved. The boundary conditions for quasiclassical Green functions in superconductors were obtained in Refs. . In the case of a spin-active boundary [@millis] they may be expressed in terms of the $\mathbf{p}_\parallel$-dependent scattering matrix of the insulating barrier. The latter relates the spinor amplitudes of the outgoing ($\psi_o$) and incident ($\psi_i$) electron waves, $$\label{eq:S_matrix} \left( \begin{array}{c} \psi_o^S \\ \psi_o^N \\ \end{array} \right) =\left( \begin{array}{cc} S_{11} & S_{12} \\ S_{21} & S_{22} \\ \end{array} \right) \left( \begin{array}{c} \psi_i^S \\ \psi_i^N \\ \end{array} \right).$$ Here the superscripts $N$ and $S$ denote respectively the normal metal and the superconductor side of the barrier. The presence of spin-orbit interaction at the boundary results in a spin-dependent transmission amplitude $S_{12}$, which may be written in the form $$\begin{aligned} \label{eq:tunneling_so} S_{12}&=& t_0+ t_s \gamma (\varphi_\mathbf{n}), \\ \label{eq:gamma_n_def} \gamma(\varphi_\mathbf{n})&=& \cos \varphi_\mathbf{n} \sigma_y - \sin \varphi_\mathbf{n} \sigma_x .\end{aligned}$$ Here we introduced the azimuthal angle $\varphi_\mathbf{n}$ through $\mathbf{p}_\parallel=|\mathbf{p}_\parallel| (\hat x \cos \varphi_\mathbf{n} + \hat y \sin \varphi_\mathbf{n})$. The spin-dependent and spin-independent transmission amplitudes $t_s$ and $t_0$ are scalar functions of $|\mathbf{p}_\parallel|$.
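A convenient property of the spin structure in Eq. (\[eq:gamma\_n\_def\]) is that $\gamma(\varphi_\mathbf{n})^2=1$ for any angle, since $\sigma_x^2=\sigma_y^2=1$ while the cross terms cancel by anticommutation. A quick numerical confirmation (illustrative sketch only):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def gamma(phi):
    # gamma(phi) = cos(phi) sigma_y - sin(phi) sigma_x, as in Eq. (eq:gamma_n_def)
    return np.cos(phi) * sy - np.sin(phi) * sx

# gamma(phi)^2 = cos^2 + sin^2 = identity; the sigma_x sigma_y cross terms
# drop out because the Pauli matrices anticommute.
for phi in np.linspace(0.0, 2.0 * np.pi, 13):
    assert np.allclose(gamma(phi) @ gamma(phi), np.eye(2))
print("gamma(phi)^2 = 1 for all tested angles")
```

This unitarity-like property is what keeps the spin-flip channel in $S_{12}$ a pure rotation in spin space, with all the $\varphi_\mathbf{n}$ dependence in the direction of the rotation axis.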
To lowest order in the transmission amplitude, the boundary condition for the quasiclassical Green functions may be written as [@millis] $$\begin{aligned} \check{g}(\mathbf{r}^{N},\mathbf{n}^N_{{o}}) &=& -\frac{1}{2} \left[ \check S_{21} \left( \check{g}(\mathbf{r}^{S},\mathbf{n}^S_{{i}})-1 \right) \check{S}^\dagger_{21}, \check{g}(\mathbf{r}^{N},\mathbf{n}^N_{{o}}) \right] \nonumber\\ &&+ \check{S}_{22} \check{g}(\mathbf{r}^{N},\mathbf{n}^N_{{i}}) \check{S}^{-1}_{22},\label{eq:boundary-millis1}\\ \check{g}(\mathbf{r}^{S},\mathbf{n}^S_{{i}}) &=& -\frac{1}{2} \left[ \check{g}(\mathbf{r}^{S},\mathbf{n}^S_{{i}}), \check S_{21}^\dagger \left( \check{g}(\mathbf{r}^{N},\mathbf{n}^N_{{o}})-1 \right) \check{S}_{21} \right] \nonumber\\ &&+ \check{S}^{-1}_{11} \check{g}(\mathbf{r}^{S},\mathbf{n}^S_{{o}}) \check{S}_{11}\label{eq:boundary-millis2}.\end{aligned}$$ Here $\mathbf{n}_{i,o}^S$ and $\mathbf{n}_{i,o}^N$ are the unit vectors indicating positions on the Fermi surface in the superconductor ($S$) and the normal metal ($N$) for the incident ($i$) and outgoing ($o$) waves. By momentum conservation they correspond to the same $\mathbf{p}_\parallel$ and thus are characterized by the same azimuthal angle $\varphi_\mathbf{n}$. For simplicity we take the Fermi surface in the superconductor to be a corrugated cylinder with the symmetry axis along $\hat z$, and that in the normal metal to be a sphere. The Fermi surface points corresponding to the incident and reflected waves are illustrated in Fig. \[fig:momentum\_boundary\]. The coordinates $\mathbf{r}^N$ and $\mathbf{r}^S$ correspond respectively to the normal metal and the superconductor sides of the insulating boundary. For brevity the obvious $\epsilon$ dependence of the Green functions has been dropped. Finally, the matrices $\check S_{\alpha\beta}$ are defined following Ref. 
as $$\label{eq:S_hat_def} \check S_{\alpha\beta}= S_{\alpha\beta}(\mathbf{p}_\parallel)\frac{1+\tau_3}{2}+ S_{\beta\alpha}(-\mathbf{p}_\parallel)^{T}\frac{1-\tau_3}{2},$$ where $S_{\alpha\beta}$ is defined in Eq. (\[eq:tunneling\_so\]) and the superscript ${T}$ denotes the matrix transposition in the spin space. For weak tunneling we may approximate $\check{S}_{11}\approx\check{S}_{22}\approx 1$, and $$\check{S}_{12}=t_0\check{1}+t_s \check{\gamma}.$$ Here we introduced $$\begin{aligned} \label{eq:hat_gamma_def} \check{\gamma}= \left( \begin{array}{cc} \hat\gamma & 0 \\ 0 & \hat\gamma \\ \end{array} \right),\quad \hat{\gamma}= \left( \begin{array}{cc} \gamma(\varphi_\mathbf{n}) & 0 \\ 0 & -\gamma(\varphi_\mathbf{n})^{T} \\ \end{array} \right),\end{aligned}$$ with $\gamma(\varphi_\mathbf{n})$ defined in Eq. (\[eq:gamma\_n\_def\]). ![Fermi surface topologies of the superconductor (corrugated cylinder at left) and the normal metal (sphere at right). The vectors $\mathbf{n}_i$ and $\mathbf{n}_o$ correspond respectively to the incident and outgoing waves. The superscripts $N$ and $S$ denote the normal metal and the superconductor sides of the insulating barrier. The vectors $\mathbf{n}^N$ and $\mathbf{n}^S$ correspond to the same parallel momentum, as shown by the green lines. The momentum domain where tunneling is possible is bounded by the angles $\vartheta_0$ and $\vartheta_1$. These angles define the integration limits in Eqs. (\[eq:matrix\_current\_boundary\]) and (\[eq:t2\]). []{data-label="fig:momentum_boundary"}](boundary){width="50.00000%"} For the purpose of studying electron transport at low temperatures, $T\ll \Delta$, we only need the Green functions with energies $\epsilon$ well below the gap $\Delta$. The Green functions inside the superconductor are practically unaffected by tunneling. Therefore, the boundary condition for the normal metal Green function is given by Eq.
(\[eq:boundary-millis1\]), where the superconductor Green functions are replaced by their value in the bulk. Since the latter does not depend on $p_z$ we have $\check{g}(\mathbf{r}^S,\mathbf{n}_i^S)= \check{g}(\mathbf{r}^S,\mathbf{n}_o^S)\equiv \check{g}(\mathbf{r}^S,\mathbf{n}^S)$. It is useful to define symmetric and antisymmetric Green functions as [@zaitsev; @kuprianov] $$\label{eq:g_sym_antisym} \check{g}_{s,a}(\mathbf{r},\mathbf{n}) =\frac{1}{2}[\check{g}(\mathbf{r},\mathbf{n}_i) \pm\check{g}(\mathbf{r},\mathbf{n}_o)].$$ With this notation Eq. (\[eq:boundary-millis1\]) may be written as follows: $$\begin{aligned} \label{eq:spinboundary} \check{g}_a(\mathbf{r}^N,\mathbf{n}^N)&=& -\frac{t_0^2}{4} \left[ \check{g}(\mathbf{r}^S,\mathbf{n}^S), \check{g}_s(\mathbf{r}^N,\mathbf{n}^N) \right]\nonumber\\ && -\frac{t_0t_s}{4} \left[ \left\{ \check{\gamma}, \check{g}(\mathbf{r}^S,\mathbf{n}^S) \right\}, \check{g}_s(\mathbf{r}^N,\mathbf{n}^N) \right]\nonumber\\ && +\frac{t_0t_s}{2} \left[ \check{\gamma}, \check{g}_s(\mathbf{r}^N,\mathbf{n}^N) \right].\end{aligned}$$ Here, for simplicity, we assume that, owing to the weakness of the spin-orbit coupling, the spin-flip tunneling amplitude is smaller than the spin-conserving one, $t_s \ll t_0 \ll 1$. The first term in Eq. (\[eq:spinboundary\]) arises from the spin-conserving tunneling and coincides with that in Ref.  at small transparency. This term dominates the electron transport properties of the junction in the high temperature regime. The second term comes from the spin-orbit coupling. Although it is smaller than the first one, it generates the s-wave component of the proximity effect in the normal metal and thus determines the electron transport at low temperatures. Finally, the last term is odd in the parallel momentum. Therefore it vanishes upon averaging over the Fermi surface and does not contribute to electron transport in the diffusive regime. 
Kinetic scheme in the diffusive regime -------------------------------------- In the low temperature regime, $T \ll v_F/l$, the proximity effect extends to distances of order $L_T=\sqrt{D/T} \gg l $ into the normal metal (here $D$ is the electron diffusion constant). At such length scales the transport properties of the junction may be described in terms of the Usadel Green functions $\check{G}(\mathbf{r})$. The latter correspond to coincident coordinates of the electron operators in Eq. (\[eq:gorkov\_functions\]), $\mathbf{r}=\mathbf{r}'$, and may be expressed in terms of the Eilenberger Green functions (\[eq:quasiclassical\_function\]) by averaging them over the Fermi surface $$\begin{aligned} \label{eq:G_Usadel_def} \check{G}(\mathbf{r}) =\int d^2\mathbf{n}\, \check{g}(\mathbf{r},\mathbf{n}), \quad d^2\mathbf{n} =\frac{1}{4\pi}\, d\cos\vartheta_\mathbf{n} d\varphi_\mathbf{n},\end{aligned}$$ where the polar and azimuthal angles $\vartheta_\mathbf{n}$ and $\varphi_\mathbf{n}$ characterize the unit vector $\mathbf{n}=(n_x, n_y, n_z)= (\sin \vartheta_\mathbf{n} \cos \varphi_\mathbf{n}, \sin \vartheta_\mathbf{n} \sin \varphi_\mathbf{n}, \cos \vartheta_\mathbf{n})$. We neglect the spin-orbit interaction in the normal metal and assume that the electrons in the normal lead are not spin polarized. The triplet component of the anomalous Green function is exponentially suppressed at distances larger than $l$ from the boundary with the superconductor. The singlet component, on the other hand, survives even at distances much larger than $l$. Therefore it dominates the electron transport in the junction at low temperatures. Below we focus on the singlet component of the Usadel Green function, $\hat G (\mathbf{r})$, which is a $4\times 4$ matrix in the Keldysh and Nambu space. 
Its various components $\alpha=R, A, K$ in the Keldysh space have the following form $$\label{eq:G_singlet} \hat{G}^{\alpha}(\mathbf{r})= \left( \begin{array}{cc} G^{\alpha} & -iF^{\alpha} \\ i\tilde F^{\alpha} & -\tilde G^{\alpha} \end{array} \right).$$ The corresponding spin structure of the full $8\times 8$ Green function in Eq. (\[eq:G\_Usadel\_def\]) is given by $$\check{G}^{\alpha}(\mathbf{r})= \left( \begin{array}{cc} G^{\alpha} \sigma_0 & -iF^{\alpha} i\sigma_2\\ i\tilde F^{\alpha} i\sigma_2 & -\tilde G^{\alpha} \sigma_0 \end{array} \right).$$ At length scales greater than $l$ the singlet component of the Usadel Green function satisfies the differential equation $$\label{eq:usadel} D\boldsymbol{\nabla} \cdot \left[ \hat{G}(\mathbf{r}) \boldsymbol{\nabla}\hat{G}(\mathbf{r}) \right] +i\epsilon \left[ \hat{\tau}_3, \hat{G}(\mathbf{r}) \right]=0.$$ Expanding in the Keldysh space, this equation gives $$\begin{aligned} D\boldsymbol{\nabla} \cdot \left( \hat{G}^{(R,A)} \boldsymbol{\nabla}\hat{G}^{(R,A)} \right) +i\epsilon[\hat{\tau}_3,\hat{G}^{(R,A)}]=0, \label{eq:usadel_E}\\ D\boldsymbol{\nabla} \cdot \left( \hat{G}^R \boldsymbol{\nabla}\hat{G}^K+ \hat{G}^K \boldsymbol{\nabla}\hat{G}^A \right) +i\epsilon[\hat{\tau}_3,\hat{G}^K]=0. \label{eq:usadel_NE}\end{aligned}$$ The first equation (\[eq:usadel\_E\]) is the Usadel equation, which describes the equilibrium properties of the system. The second equation (\[eq:usadel\_NE\]), for the Keldysh component, describes the non-equilibrium properties. The Usadel Green function satisfies the normalization conditions (\[eq:norm\_E\]) and (\[eq:norm\_NE\]). The condition (\[eq:norm\_NE\]) is satisfied by any matrix of the form (\[eq:keldysh\]). In the normal metal the matrix $\hat h$ may be expressed in terms of the symmetric and antisymmetric distribution functions $f_0$ and $f_1$ using Eq. (\[eq:LO\_parametrization\]) [@LarkinOvchinnikov]. In a normal metal in contact with a single superconducting lead, Eq. 
(\[eq:usadel\_NE\]) can be used to obtain the following equations for the distribution functions by using Eqs. (\[eq:keldysh\]) and (\[eq:LO\_parametrization\]): $$\begin{aligned} \boldsymbol{\nabla} \cdot \left( \mathrm{Tr} \left[ 1-\hat{G}^R(\mathbf{r})\hat{G}^A(\mathbf{r}) \right] \boldsymbol{\nabla}f_0(\mathbf{r},\epsilon) \right)&=&0, \label{eq:even-distribution}\\ \boldsymbol{\nabla} \cdot \left( \mathrm{Tr} \left[ 1-\tau_3\hat{G}^R(\mathbf{r})\tau_3\hat{G}^A(\mathbf{r}) \right] \boldsymbol{\nabla}f_1(\mathbf{r},\epsilon) \right)&=&0. \label{eq:odd-distribution}\end{aligned}$$ The expressions for the density of states, the electrochemical potential, and the current density in terms of the Usadel Green functions are $$\begin{aligned} \label{eq:dos} \nu(\mathbf{r},\epsilon) &=&\nu_0\mathrm{Re}\left\{ G^R(\mathbf{r},\epsilon)\right\},\\ \Phi(\mathbf{r}) &=&\frac{1}{e\nu_0}\int d\epsilon \nu(\mathbf{r},\epsilon)f_1(\mathbf{r},\epsilon), \label{eq:potential1}\\ J(\mathbf{r}) &=&e\nu_0D \int d\epsilon \Pi(\mathbf{r},\epsilon)\boldsymbol{\nabla} f_1(\mathbf{r},\epsilon). \label{eq:current1}\end{aligned}$$ Here $\Pi(\mathbf{r},\epsilon) =1+|G^R(\mathbf{r},\epsilon)|^2+|F^R(\mathbf{r},\epsilon)|^2$, and $\nu_{0}= mp_F/\pi^2$ is the density of states of the normal metal in the absence of the proximity effect. Using Eq. (\[eq:norm\_E\]) one can write the retarded Usadel Green function in terms of the complex angles $\theta(\mathbf{r})$ and $\chi(\mathbf{r})$ as $$\label{eq:G^R_theta} \hat G^R(\mathbf{r})= \left( \begin{array}{cc} \cos\theta(\mathbf{r}) & -i \sin\theta(\mathbf{r}) e^{i \chi(\mathbf{r})} \\ i \sin\theta(\mathbf{r}) e^{-i \chi(\mathbf{r})} & -\cos\theta(\mathbf{r}) \\ \end{array} \right).$$ The corresponding parametrization for the advanced Green function can be obtained by using $\hat G^A(\mathbf{r})=-\tau_3\left[\hat G^R(\mathbf{r})\right]^\dagger\tau_3$. 
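As an elementary consistency check (an illustration added here, not part of the original analysis), the parametrization (\[eq:G\^R\_theta\]) satisfies the normalization condition $\big(\hat G^R\big)^2=1$ identically, for arbitrary complex angles $\theta$ and $\chi$. A minimal Python sketch:

```python
import cmath

def g_retarded(theta, chi):
    # Retarded Green function in the angular parametrization of Eq. (G^R_theta);
    # theta and chi may be complex.
    c, s = cmath.cos(theta), cmath.sin(theta)
    e = cmath.exp(1j * chi)
    return [[c, -1j * s * e],
            [1j * s / e, -c]]

def matmul2(a, b):
    # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g = g_retarded(0.7 - 0.4j, 1.3)   # arbitrary complex theta, real chi
g2 = matmul2(g, g)                # should equal the unit matrix
```

The identity follows from $\cos^2\theta+\sin^2\theta=1$, which holds for complex arguments as well.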
For the system of interest, where the normal metal is connected to a single superconductor, the phase $\chi(\mathbf{r})$ is independent of coordinates and is set by the phase of the order parameter in the superconductor. In this case ($\boldsymbol{\nabla}\chi(\mathbf{r})=0 $) the Usadel equation in (\[eq:usadel\_E\]) reduces to the following second-order differential equation for the complex function $\theta(\mathbf{r})$: $$\frac{D}{2}\boldsymbol{\nabla}^{2}\theta(\mathbf{r}) +i\epsilon\sin\theta(\mathbf{r})=0\label{eq:sinegordon},$$ which is the well-known sine-Gordon equation. The equations for the distribution functions in (\[eq:even-distribution\]) and (\[eq:odd-distribution\]) take the following forms in this parametrization: $$\begin{aligned} D\boldsymbol{\nabla} \cdot \left( \cos^2\theta_R(\mathbf{r})\boldsymbol{\nabla} f_0(\mathbf{r}) \right) &=& 0 \label{eq:odd-distribution-theta},\\ D\boldsymbol{\nabla} \cdot \left( \cosh^2\theta_{I}(\mathbf{r}) \boldsymbol{\nabla}f_1(\mathbf{r}) \right) &=& 0 \label{eq:even-distribution-theta}.\end{aligned}$$ Here we introduced the real and imaginary parts of $\theta(\mathbf{r})= \theta_R(\mathbf{r})+i\theta_I(\mathbf{r})$. Finally, using Eqs. (\[eq:potential1\]), (\[eq:current1\]), and (\[eq:G\^R\_theta\]) we get the following expressions for the electric current and potential $$\begin{aligned} J_n(\mathbf{r}) &=&eD\nu_0 \int d\epsilon\cosh^2\theta_I(\mathbf{r})\boldsymbol{\nabla} f_1(\mathbf{r}) \label{eq:current}\\ \Phi(\mathbf{r}) &=&\frac{1}{e} \int d\epsilon\cos\theta_R(\mathbf{r})\cosh\theta_I(\mathbf{r})f_1(\mathbf{r}) \label{eq:potential}.\end{aligned}$$ Below we will be interested only in effects linear in the external electric field, in which case $f_{0}=\tanh(\epsilon/2T)$ retains its equilibrium form. The equations (\[eq:usadel\_E\]-\[eq:usadel\_NE\]) or (\[eq:sinegordon\]-\[eq:even-distribution-theta\]) must be supplemented with the boundary conditions. In Sec. 
\[sec:Usadel\_bc\] we obtain such conditions for a boundary between the normal metal and the p-wave superconductor in the geometry of our device. ### Diffusive Boundary Conditions in the Vertical Geometry {#sec:Usadel_bc} The boundary conditions for the Usadel Green function $\hat G (\mathbf{r})$ may be found by solving the Eilenberger equations (\[eq:EilenbergerEq\]) with boundary conditions (\[eq:spinboundary\]) at distances of the order of the mean free path $l$ from the boundary. This can be done using the method of Ref. . A key observation is that the Eilenberger equations (in which one may set $\epsilon \to 0$ for distances less than the mean free path from the boundary) conserve the matrix current normal to the boundary, $$\label{eq:matrix_current} \check j (\mathbf{r}) =\int d^2\mathbf{n} \check{g}({\bf r},\mathbf{n})\mathbf{v}_F \cdot \hat z =v_F\int' d^2\mathbf{n} \check{g}_a({\bf r},\mathbf{n}) \mathbf{n} \cdot \hat z .$$ The prime in the second expression indicates that the integral is taken over half the Fermi surface, $\mathbf{n}\cdot \hat z \geq 0$. At weak tunneling the singlet component $\hat j$ of the matrix current at the boundary may be expressed in terms of the Usadel Green function $\hat G (\mathbf{r})$ as [@kuprianov] $$\label{eq:matrix_current_Usadel} \hat j(\mathbf{r}^N)= D\hat{G}(\mathbf{r})\hat z\cdot \boldsymbol\nabla\hat{G}(\mathbf{r})|_{\mathbf{r}=\mathbf{r}^N}.$$ On the other hand, the matrix current may be evaluated by multiplying Eq. (\[eq:spinboundary\]) with $v_F \mathbf{n}^N \cdot \hat z$ and integrating the result over half the Fermi surface, $\mathbf{n}^N\cdot \hat z \geq 0$. 
In doing so it is important to keep in mind that at weak tunneling the symmetric part of the Green function in the normal metal is independent of $\mathbf{n}^N$, $\check{g}_s(\mathbf{r}^N,\mathbf{n}^N)=\check{G}(\mathbf{r}^N)$, and that the superconductor Green function $\check{g}_s(\mathbf{r}^S,\mathbf{n}^S)$ may be replaced by its bulk value at $\epsilon=0$. The latter is given by $$\begin{aligned} \check{g}(\mathbf{n})=- \left[ \begin{array}{cc} 0 & e^{i(\varphi_\mathbf{n}+\chi_0)} \mathbf{d}\cdot\boldsymbol{\sigma}i \sigma_2\\ e^{-i(\varphi_\mathbf{n}+\chi_0)}i\sigma_2 \mathbf{d}^*\cdot\boldsymbol{\sigma} & 0 \end{array} \right]. \label{eq:greenfunction_p}\end{aligned}$$ Here we used Eq. (\[eq:delta\]). We consider unitary states, for which $\mathbf{d}\times\mathbf{d}^{*}=0$, and parameterize the vector $\mathbf{d}$ by an overall phase $\chi_0$ and the spherical angles $\vartheta_\mathbf{d}$ and $\varphi_\mathbf{d}$ as $$\label{eq:d_angles} \mathbf{d}^T =e^{i\chi_0} (\sin\vartheta_\mathbf{d}\cos\varphi_\mathbf{d}, \sin\vartheta_\mathbf{d} \sin\varphi_\mathbf{d}, \cos\vartheta_\mathbf{d}).$$ It is easy to see that only the second term in the right-hand side of Eq. (\[eq:spinboundary\]) contributes to the matrix current. The contributions of the other two terms vanish upon the integration over $\mathbf{n}$ because both $\check\gamma$ and the superconductor Green function $\check{g}(\mathbf{r}^S,\mathbf{n}^S)$ depend on the azimuthal angle $\varphi_\mathbf{n}$ as $e^{\pm i \varphi_\mathbf{n}}$, see Eqs. (\[eq:gamma\_n\_def\]), (\[eq:hat\_gamma\_def\]) and (\[eq:greenfunction\_p\]). We thus obtain $$\label{eq:matrix_current_boundary} \check j (\mathbf{r})= -\frac{ v_F}{4}\int_{\vartheta_0}^{\vartheta_1} \frac{d \cos \vartheta_\mathbf{n}}{2} t_st_0 \hat z\cdot\mathbf{n} \left[ \check{ \bar G}(\mathbf{r}^S) , \check{G}(\mathbf{r}^N) \right].$$ Here $v_F$ is the Fermi velocity in the normal metal, and the integration limits $\vartheta_0$ and $\vartheta_1$ define the domain where tunneling is possible. 
This domain corresponds to the projection of the corrugated cylindrical Fermi surface in the superconductor onto the Fermi surface in the metal, see Fig. \[fig:momentum\_boundary\]. Finally, $\check{\bar G}(\mathbf{r}^S)$ is given by $$\begin{aligned} \check{\bar G}(\mathbf{r}^S) &\equiv& \int \frac{d\varphi_\mathbf{p}^S}{2\pi} \left\{ \check{g}(\mathbf{r}^S,\mathbf{n}^S_i), \check{\gamma} \right\}\nonumber\\ &=& \left[ \begin{array}{cc} 0 & e^{i(\varphi_\mathbf{d}+\chi_0)}i\sigma_2 \\ e^{-i(\varphi_\mathbf{d}+\chi_0)}i\sigma_2 & 0 \end{array} \right]. \label{eq:eff_greenfunction}\end{aligned}$$ Comparing Eqs. (\[eq:matrix\_current\_Usadel\]) and (\[eq:matrix\_current\_boundary\]) we obtain the following boundary condition for the Usadel Green function, $$\label{eq:diffusivespinboundary2} D\check{G}(\mathbf{r}) \partial_z\check{G}(\mathbf{r})|_{\mathbf{r}=\mathbf{r}^N} =t \left[ \check{G}(\mathbf{r}^N), \check{\bar G}(\mathbf{r}^S) \right],$$ where $$\label{eq:t2} t=\frac{1}{4}|\sin\vartheta_\mathbf{d}| \int_{\vartheta_0}^{\vartheta_1}\frac{d\cos\vartheta_\mathbf{n}}{2}(t_s t_0 v_{F}\cos\vartheta_\mathbf{n}).$$ Note that the boundary condition in Eq. (\[eq:diffusivespinboundary2\]) has the same structure as that for a boundary between a normal metal and an s-wave superconductor. The reason is that only the s-wave component of the anomalous Green function survives in the normal metal at distances larger than $l$ from the boundary. The difference, however, is that in our case the effective barrier transparency $t$ in Eq. (\[eq:t2\]) depends on the spin-flip tunneling amplitude $t_s$ and on the vector $\mathbf{d}$ characterizing the spin orientation of the triplet order parameter. The phase of the effective s-wave anomalous Green function (\[eq:eff\_greenfunction\]), $\chi_0+ \varphi_\mathbf{d}$, also depends on the orientation of the spin vector $\mathbf{d}$ in the xy-plane. 
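The angular integral in Eq. (\[eq:t2\]) can be done in closed form, $\int_{\vartheta_0}^{\vartheta_1}\frac{d\cos\vartheta_\mathbf{n}}{2}\,\cos\vartheta_\mathbf{n} =\frac{1}{4}\left(\cos^2\vartheta_0-\cos^2\vartheta_1\right)$, which makes the dependence of the effective transparency on the orientation of $\mathbf{d}$ explicit. The following Python sketch (all parameter values are illustrative placeholders, chosen only to respect $t_s\ll t_0\ll 1$) shows that $t$ vanishes when $\mathbf{d}$ is parallel to the c-axis:

```python
import math

def effective_transparency(theta_d, t0, ts, v_f, th0, th1):
    # Eq. (t2): t = (1/4)|sin(theta_d)| * int d(cos th)/2 * ts*t0*v_F*cos(th),
    # with the angular integral evaluated analytically.
    angular = (math.cos(th0) ** 2 - math.cos(th1) ** 2) / 4.0
    return 0.25 * abs(math.sin(theta_d)) * ts * t0 * v_f * angular

# Illustrative (hypothetical) parameters: weak tunneling, ts << t0 << 1.
t0, ts, v_f = 0.1, 0.01, 1.0
th0, th1 = 0.2, 0.6          # tunneling cone (placeholder values)

t_parallel = effective_transparency(0.0, t0, ts, v_f, th0, th1)       # d || c-axis
t_perp = effective_transparency(math.pi / 2, t0, ts, v_f, th0, th1)   # d in the plane
```

The vanishing of $t$ at $\vartheta_\mathbf{d}=0$ is the microscopic origin of the orientation dependence discussed in the conclusions.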
The aforementioned analogy enables one to treat the proximity effect in normal metal-p-wave superconductor systems in the diffusive regime as a proximity effect in an effective s-wave superconductor problem, in which the phase of the s-wave order parameter and the barrier transparency depend on the spin orientation of the p-wave condensate. It is convenient to recast the boundary condition Eq. (\[eq:diffusivespinboundary2\]) in terms of the parametrization in Eq. (\[eq:G\^R\_theta\]). In our setup, see Fig. \[fig:geometry\], the phase $\chi(\mathbf{r})$ of the anomalous Usadel Green function (\[eq:G\^R\_theta\]) is uniform in space and equal to the phase of the effective s-wave order parameter, $\chi(\mathbf{r})=\varphi_\mathbf{d} + \chi_0$. The boundary condition for the angle $\theta(\mathbf{r}^N)$ becomes $$\label{eq:bc-theta} D\partial_z\theta(\mathbf{r})\big|_{\mathbf{r}=\mathbf{r}^N} =2t\cos \left[ \theta(\mathbf{r}^N) \right].$$ The Keldysh component of the boundary condition in Eq. (\[eq:diffusivespinboundary2\]) gives the following boundary condition for the even part of the distribution function: $$\begin{aligned} D\cosh^2\theta_I(\mathbf{r})\partial_z f_1(\mathbf{r}) |_{\mathbf{r}=\mathbf{r}^N} &=&2t\Gamma_\epsilon f_1(\mathbf{r}^N)\label{eq:bc-distribution}.\end{aligned}$$ Here we assumed that the distribution function vanishes inside the superconductor, $f_1(\mathbf{r}^S)=0$, and introduced the notation $$\label{eq:Gamma_epsilon_def} \Gamma_\epsilon =\cosh\theta_I(\mathbf{r}^N) \sin\theta_R(\mathbf{r}^N).$$ The set of equations (\[eq:sinegordon\]) and (\[eq:even-distribution-theta\]) along with the boundary conditions (\[eq:bc-theta\]) and (\[eq:bc-distribution\]) gives a description of electron transport in diffusive metal-p-wave superconductor systems. Below we apply these equations to our device geometry. 
Resistance of the p-wave superconductor-normal metal junction {#sec:solutions} ========================================================== We consider the geometry in which the superconductor fills the $z<0$ half space and the normal metal occupies the $z>0$ half space, see Fig. \[fig:geometry\]. At weak tunneling the Green function in the superconductor is practically unaffected by the presence of the tunneling barrier. On the other hand, the low energy properties of the normal metal are significantly affected by the proximity effect. The singlet Usadel Green function (\[eq:G\_singlet\]) in the normal metal is described by the set of equations (\[eq:G\^R\_theta\]), (\[eq:sinegordon\]), (\[eq:even-distribution-theta\]) with the boundary conditions (\[eq:bc-theta\]) and (\[eq:bc-distribution\]). The solution of Eq. (\[eq:sinegordon\]) satisfying the condition $\lim_{z\to \infty} \theta(z) = 0$ has the form $$\theta(\epsilon,z) =4\arctan \left[ \exp \left( \beta_\epsilon+(i-1)\frac{z}{L_\epsilon} \right) \right]. \label{eq:theta_solution}$$ Here $\beta_\epsilon$ is an energy-dependent integration constant. Its value is determined from the boundary condition in Eq. (\[eq:bc-theta\]), which gives $$\cosh\beta_\epsilon-\frac{2}{\cosh\beta_\epsilon} =(1-i)\frac{L_{t}}{L_\epsilon}, \label{eq:BC_cosh}$$ where $$L_{t}=\frac{D}{t},\quad L_\epsilon=\sqrt\frac{D}{\epsilon}.$$ The algebraic equation (\[eq:BC\_cosh\]) has multiple solutions for the integration constant $\beta_\epsilon$. The physical solution must satisfy the condition $\lim_{\epsilon \to 0}\theta(\epsilon ,z=0+)=\pi/2$, which gives $$\label{eq:exp_beta} e^{\beta_\epsilon}=\frac{\alpha+\sqrt{\alpha^2+8}}{2}- \frac{1}{2}\sqrt{(\alpha+\sqrt{\alpha^2+8})^2-4},$$ where we introduced the notation $\alpha=(1-i)L_t/L_\epsilon$. 
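The internal consistency of Eqs. (\[eq:theta\_solution\])-(\[eq:exp\_beta\]) is easy to verify numerically. The Python sketch below (added for illustration; lengths are measured in units of $L_\epsilon$, and the chosen value of $L_t/L_\epsilon$ is arbitrary) checks that $e^{\beta_\epsilon}$ of Eq. (\[eq:exp\_beta\]) obeys Eq. (\[eq:BC\_cosh\]), that $\theta(\epsilon\to 0, z=0^+)=\pi/2$ since $e^{\beta_\epsilon}\to\sqrt{2}-1$, and that Eq. (\[eq:theta\_solution\]) solves the sine-Gordon equation (\[eq:sinegordon\]):

```python
import cmath
from math import pi

def exp_beta(alpha):
    # Integration constant of Eq. (exp_beta); alpha = (1 - i) L_t / L_eps.
    u = alpha + cmath.sqrt(alpha * alpha + 8)
    return u / 2 - cmath.sqrt(u * u - 4) / 2

def theta(z, alpha):
    # Eq. (theta_solution), with z measured in units of L_eps.
    return 4 * cmath.atan(exp_beta(alpha) * cmath.exp((1j - 1) * z))

alpha = (1 - 1j) * 0.7                             # illustrative L_t/L_eps
cb = (exp_beta(alpha) + 1 / exp_beta(alpha)) / 2   # cosh(beta_eps)
bc_residual = cb - 2 / cb - alpha                  # Eq. (BC_cosh)

limit_residual = theta(0, 0) - pi / 2              # eps -> 0, z = 0+

# Sine-Gordon check: with z in units of L_eps, Eq. (sinegordon) reads
# theta''(z)/2 + i sin(theta) = 0; use a central finite difference.
h, z0 = 1e-4, 0.8
d2 = (theta(z0 + h, alpha) - 2 * theta(z0, alpha) + theta(z0 - h, alpha)) / h**2
sg_residual = d2 / 2 + 1j * cmath.sin(theta(z0, alpha))
```

All three residuals vanish to numerical accuracy, confirming that Eq. (\[eq:exp\_beta\]) selects the decaying kink solution.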
![(color online) Density of states in the normal metal as a function of the distance from the superconductor-normal metal boundary for different values of the ratio $L_t/L_\epsilon$: $L_t/L_\epsilon=0.01$ (blue), $L_t/L_\epsilon=0.5$ (green), and $L_t/L_\epsilon=1.2$ (red).[]{data-label="fig:dos_bulk"}](dos){width="50.00000%"} An important aspect of the solution Eq. (\[eq:theta\_solution\]) is that in the normal metal, at small values of $\epsilon$ and at small distances from the boundary, $\theta(z)\approx \pi/2$, which is the same value as in the bulk of the superconductor. In particular, it means that at small energies the density of states in the metal is strongly suppressed at distances smaller than $L_{\epsilon}$. The full spatial dependence of the density of states $\nu(\epsilon,z)$ may be obtained by substituting the solution (\[eq:theta\_solution\]), (\[eq:exp\_beta\]) of the Usadel equation into Eqs. (\[eq:dos\]), and (\[eq:G\^R\_theta\]). In Fig. \[fig:dos\_bulk\] we have plotted the result as a function of $z/L_\epsilon$ for different values of $L_{t}/L_\epsilon$. Note that the effective diffusion constant for the distribution function $f_1$ is determined by the imaginary part of $\theta$, see Eq. (\[eq:even-distribution-theta\]). From the solution (\[eq:theta\_solution\]) it follows that the imaginary part $\theta_{I}$ is close to zero both at $z\gg L_\epsilon$ and $z\ll L_\epsilon$, and has a maximum at $z\sim L_\epsilon$ whose value depends on $L_{\epsilon}/L_{t}$. Therefore the effective diffusion coefficient in Eq. (\[eq:even-distribution-theta\]) approaches its normal metal value at $z\gg L_\epsilon$ and $z\ll L_\epsilon$. In the intermediate region $z\sim L_\epsilon$ the diffusion coefficient exceeds the Drude value. 
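The suppression of the density of states discussed above follows directly from Eqs. (\[eq:dos\]), (\[eq:G\^R\_theta\]), (\[eq:theta\_solution\]) and (\[eq:exp\_beta\]). A minimal Python sketch (an added illustration; $z$ and $L_t$ are measured in units of $L_\epsilon$):

```python
import cmath

def exp_beta(alpha):
    # Eq. (exp_beta), alpha = (1 - i) L_t / L_eps
    u = alpha + cmath.sqrt(alpha * alpha + 8)
    return u / 2 - cmath.sqrt(u * u - 4) / 2

def dos(z, lt):
    # nu(eps, z)/nu_0 = Re cos(theta), from Eqs. (dos) and (G^R_theta),
    # with theta(eps, z) given by Eq. (theta_solution);
    # z and L_t are measured in units of L_eps.
    alpha = (1 - 1j) * lt
    th = 4 * cmath.atan(exp_beta(alpha) * cmath.exp((1j - 1) * z))
    return cmath.cos(th).real

near = dos(0.0, 0.01)    # strong suppression at the boundary, small L_t/L_eps
far = dos(10.0, 0.01)    # normal-metal value recovered far from the boundary
```

Near the boundary and at small $L_t/L_\epsilon$ the normalized density of states is close to zero, while at $z\gg L_\epsilon$ it returns to the normal-metal value $\nu_0$, as in Fig. \[fig:dos\_bulk\].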
The differential equation (\[eq:even-distribution-theta\]) for the non-equilibrium part of the distribution function has the first integral, which has the meaning of the conserved partial current density at a given energy $\epsilon$ $$\label{eq:partial_current} J_\epsilon\equiv eD\nu_0 \cosh^2\theta_I(z)\partial_zf_1(\epsilon, z).$$ The energy dependence of the partial current $J_\epsilon$ can be obtained by noticing that far away from the boundary the distribution function should have the form $$\label{eq:f_1_form} f_1 (\epsilon, z)= \frac{1}{\cosh^2\frac{\epsilon}{2T}} \frac{eJ_0}{2T \sigma_D} (z -z_0),$$ where $\sigma_D=e^2\nu_0 D$ is the Drude conductivity of the normal metal, and we introduced the current density, $$\label{eq:J_0} J_0=\int_{-\infty}^\infty d\epsilon J_\epsilon.$$ Substituting Eq. (\[eq:f\_1\_form\]) into Eq. (\[eq:partial\_current\]) we obtain the following expression for the partial current $$\label{eq:J_epsilon_result} J_\epsilon= \frac{J_0}{4T} \frac{1}{\cosh^2\frac{\epsilon}{2T}}.$$ Using Eqs. (\[eq:partial\_current\]) and (\[eq:J\_epsilon\_result\]) the solution of Eq. (\[eq:even-distribution-theta\]) that satisfies the boundary condition (\[eq:bc-distribution\]) and the asymptotic form (\[eq:f\_1\_form\]) at large distances may be written in the form $$f_1(\epsilon, z) =\frac{eJ_0}{ 2 T \sigma_D \cosh^2\frac{\epsilon}{2T} } \left[ \frac{L_t}{2\Gamma_\epsilon}+\int_{0}^z\frac{ dz'}{\cosh^2\theta_I(\epsilon, z')} \right].$$ Here $\theta_I(z')$ is given by Eqs. (\[eq:theta\_solution\]) and (\[eq:exp\_beta\]), and $\Gamma_\epsilon$ was defined in Eq. (\[eq:Gamma\_epsilon\_def\]). Substituting this result in Eq. 
(\[eq:potential\]) we get the following expression for the gauge invariant potential $$\begin{aligned} \Phi(z) &=&\frac{J_0}{\sigma_D}\int_0^\infty \frac{d\epsilon}{2T} \frac{\cos\theta_R(\epsilon,z) \cosh\theta_I(\epsilon, z)}{\cosh^2\frac{\epsilon}{2T}}\label{eq:phicapital} \nonumber\\ &&\times \left( \frac{L_{t}}{2\Gamma_\epsilon} +\int_0^z\frac{dz'}{\cosh^2\theta_I(\epsilon, z')} \right).\end{aligned}$$ In Fig. \[fig:potential\_phi\], we plotted the dependence of the gauge invariant potential on the dimensionless distance from the boundary, $z/L_T$, for different values of the dimensionless barrier transparency parameter, $L_{t}/L_T$. ![(color online) The gauge-invariant potential $\Phi$ (solid lines) and the compensating voltage $V_{stm}$ at the STM tip (dashed lines) are plotted as functions of the dimensionless distance $z/L_T$ from the boundary for different values of $L_t/L_T$: $L_t/L_T=0.01$ (blue), $L_t/L_T=1$ (green) and $L_t/L_T=5$ (red). The solid grey lines represent the large distance asymptotes of the gauge invariant potential. Their intercepts with the vertical axis for the three values of $L_t/L_T$ are marked by $\Phi_0$ in the corresponding color. The value of $\Phi_0$ defines the junction resistance $R_\infty$ in Eq. (\[eq:R\_2\_def\]). []{data-label="fig:potential_phi"}](potential){width="50.00000%"} One of the important features of transport through the junction is that at low temperatures the gauge invariant potential $\Phi(z)$ is significantly suppressed near the superconductor-normal metal boundary, and is a non-linear function of $z$. In particular, the voltage drop across the insulator, $\Phi(z=0)$, goes to zero in the low temperature limit. Because of the nontrivial spatial distribution of the electric field in the junction its resistive properties may be characterized in different ways. One measure of the resistance can be defined in terms of the voltage drop across the insulating barrier. 
We define the resistance of the insulating boundary per unit area as $$\label{eq:R_1_def} R_0=\frac{\Phi(z=+0)}{J_{0}}.$$ Using Eq. (\[eq:phicapital\]) one can express the boundary resistance $R_0$ per unit area in the form $$\label{eq:R_0_result} R_0=\frac{1}{e^{2}\nu_{0}t}\frac{L_{t}}{L_{T}}A\left(\frac{L_{t}}{L_{T}}\right),$$ where the dimensionless function $A\left(L_{t}/L_{T}\right)$ is defined by the following integral $$\begin{aligned} A\left(\frac{L_{t}}{L_{T}}\right) &=&\frac{L_T}{L_t}\int_0^\infty \frac{d\epsilon}{4T} \frac{\cot\theta_R(\epsilon,0)}{\cosh^2\frac{\epsilon}{2T}}. \label{eq:R_0}\end{aligned}$$ This function is plotted in Fig. \[fig:R\_0\]. In the low and high temperature limits this function tends to the constants $A(0)\approx 0.37$ and $A(\infty)\approx 0.53$. As a result, in both regimes the boundary resistance scales as $R_0 \propto \sqrt{T}$. ![Plot of the function $A(L_t/L_T)$ in Eqs. (\[eq:R\_0\_result\]), (\[eq:R\_0\]).[]{data-label="fig:R_0"}](r0_a){width="50.00000%"} Note that at low temperatures, $L_T \gg L_{t}$, the magnitude of the jump of $\Phi(z)$ at the insulator boundary approaches zero at $T\to 0$. This is very different from the resistance of normal metal-insulator-normal metal junctions, where in the presence of a current through the junction $R_{NIN}=1/e^{2}\nu_{0} \tilde{t}$, with $\tilde{t}\sim t_0^{2}$ the transmission coefficient of the insulator. Another measure of the junction resistance may be obtained by extrapolating the linear dependence of $\Phi(z)$ at large distances, $\Phi(z)= J_0 z/\sigma_D + \Phi_0$, to the location of the barrier, $z=0$. This is shown by grey solid lines in Fig. \[fig:potential\_phi\]. The value of the intercept with the vertical axis, $\Phi_0$, defines the total resistance per unit area of the junction $$\label{eq:R_2_def} R_\infty= \frac{\Phi_0}{J_0}.$$ Using Eq. 
(\[eq:phicapital\]) we obtain $$R_\infty = \frac{1}{e^2t\nu_0}B\left(\frac{L_t}{L_T}\right),$$ where the function $B(L_t/L_T)$ is given by the following integral $$B= \int_0^\infty\frac{d\epsilon}{2T}\frac{1}{\cosh^2\frac{\epsilon}{2T}} \left[ \frac{1}{2\Gamma_\epsilon}- \int_0^\infty \frac{dz'}{L_t} \tanh^2\theta_I(\epsilon,z') \right]. \label{eq:R_inf}$$ The first term in the brackets is positive and represents the contribution of the insulating boundary. The second term is negative. It describes the reduction of the resistance of the normal metal due to the proximity effect. The junction resistance $R_\infty$ is plotted in Fig. \[fig:R\_inf\] as a function of $L_t/L_T$. At relatively high temperatures, $L_t/L_T\gg 1$, the junction resistance $R_\infty$ is dominated by the contribution from the insulating boundary (first term in Eq. (\[eq:R\_inf\])). In this case $B(L_t/L_T)\approx 0.53\, L_t/L_T$, in agreement with the discussion below Eq. (\[eq:R\_0\]). In the low temperature regime, $L_t\ll L_T$, the junction resistance is dominated by the change in the resistance of the normal metal due to the proximity effect (second term in Eq. (\[eq:R\_inf\])) and becomes negative. In this case the junction resistance reduces to $$\label{eq:R_inf_low_T} R_\infty = - \frac{b}{e^2t\nu_0}\frac{L_T}{L_t},$$ where the constant $b$ is given by $$\begin{aligned} \label{eq:gamma_def} b&=& \int_0^\infty \frac{d\lambda}{2} \frac{\lambda^{-1/2}}{\cosh^2\frac{\lambda}{2}}\nonumber \\ &&\times\int_0^\infty d\zeta\tanh^2\left[4\, \mathrm{Im} \arctan\left( (\sqrt{2}-1)e^{(i-1)\zeta}\right)\right]\nonumber\\ &\approx&0.39.\end{aligned}$$ ![The junction resistance $R_\infty$ per unit area (in units of $1/e^2 \nu_0 t$) is plotted as a function of $L_t/L_T$. 
[]{data-label="fig:R_inf"}](r_inf){width="50.00000%"} Probing the spatial distribution of the gauge-invariant potential $\Phi (\mathbf{r})$ ------------------------------------------------------------------------------------- Let us now discuss the possibility of experimental observation of the suppression of $\Phi(z)$ near the junction’s boundary by using a scanning tunneling probe. We consider the setup illustrated in Fig. \[fig:geometry\]. The electron transport between the STM tip and the metal can be described with the aid of the tunneling Hamiltonian $$H_T=\sum_{\mathbf{k}\mathbf{p}} \left[ t_{\mathbf{kp}} c_\mathbf{k}^\dagger c_\mathbf{p}+t_{\mathbf{kp}}^* c_\mathbf{p}^\dagger c_\mathbf{k}\right].$$ Here $c^\dagger$ is an electron creation operator, $\mathbf{k}$ labels the states in the STM tip, and $\mathbf{p}$ labels the states in the normal metal. In the tunneling approximation the STM current can be written in the form $$\begin{aligned} \label{eq:voltage} I_{stm}(z) &=& \frac{g_n}{2e} \int_{-\infty}^{\infty} d\epsilon \cos\theta_R(\epsilon,z) \cosh\theta_I(\epsilon, z)\nonumber\\ &&\times\left[ f_1^{stm}(\epsilon)- f_1(\epsilon, z) \right],\end{aligned}$$ where $g_n$ is the conductance of the tunneling contact in the normal state. The nonequilibrium distribution function in the STM is given by $f_1^{stm}(\epsilon)=eV_{stm}/\left[2T\cosh^2(\epsilon/2T)\right]$, where $V_{stm}$ is the STM voltage measured relative to that in the superconductor. Using Eq. (\[eq:potential\]) we can rewrite Eq. (\[eq:voltage\]) in the form $$\label{eq:tunneling_current_short} I_{stm}(z)=g_n\Phi (z) - g_t(T, z)V_{stm},$$ where $$\label{eq:tunneling conductance} g_t(T, z)=g_n\int_{0}^{\infty} \frac{d\epsilon}{2T}\frac{\cos\theta_R(\epsilon,z) \cosh\theta_I(\epsilon, z)}{ \cosh^2\frac{\epsilon}{2T}}$$ is the conductance of the tunneling contact. 
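The dimensionless numbers entering the resistance formulas above can be reproduced independently. The Python sketch below (a numerical illustration; trapezoidal quadrature in the dimensionless variable $x=\epsilon/T$, so that $L_t/L_\epsilon=(L_t/L_T)\sqrt{x}$) evaluates the function $A(L_t/L_T)$ of Eq. (\[eq:R\_0\]) deep in its two limiting regimes, and the constant $b$ of Eq. (\[eq:gamma\_def\]); the $\lambda^{-1/2}$ singularity in the latter is removed by the substitution $\lambda=s^2$:

```python
import cmath, math

def exp_beta(alpha):
    # Eq. (exp_beta), alpha = (1 - i) L_t / L_eps
    u = alpha + cmath.sqrt(alpha * alpha + 8)
    return u / 2 - cmath.sqrt(u * u - 4) / 2

def A(r, n=4000, cutoff=40.0):
    # Eq. (R_0) in the variable x = eps/T; r = L_t/L_T.
    h = cutoff / n
    total = 0.0
    for i in range(1, n + 1):            # integrand vanishes at x = 0
        x = i * h
        alpha = (1 - 1j) * r * math.sqrt(x)
        theta_R = (4 * cmath.atan(exp_beta(alpha))).real
        w = 0.5 if i == n else 1.0
        total += w * h / (4 * math.tan(theta_R) * math.cosh(x / 2) ** 2)
    return total / r

def b_constant(n=4000):
    # Eq. (gamma_def); the two integrals factorize.
    a0 = math.sqrt(2) - 1                # low-temperature value of e^beta
    h = math.sqrt(40.0) / n              # lambda = s**2 removes the singularity
    i1 = sum((0.5 if i in (0, n) else 1.0) * h / math.cosh((i * h) ** 2 / 2) ** 2
             for i in range(n + 1))
    h = 25.0 / n
    i2 = sum((0.5 if i in (0, n) else 1.0) * h
             * math.tanh(4 * cmath.atan(a0 * cmath.exp((1j - 1) * i * h)).imag) ** 2
             for i in range(n + 1))
    return i1 * i2
```

With these conventions the quadrature yields values compatible with the limiting constants $A(0)\approx 0.37$ and $A(\infty)\approx 0.53$ and with $b\approx 0.39$ quoted in the text.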
In the case where the voltage $V_{stm}$ at the tip vanishes, the tunneling current through the STM contact is proportional to $\Phi(z)$, $$\label{Istm} I_{stm}(z)=g_n\Phi(z).$$ In particular, $I_{stm}(z)$ is significantly suppressed near the superconductor-normal metal boundary, reflecting the corresponding suppression of $\Phi(z)$. On the other hand, if $I_{stm}=0$, we get $$\label{Vstm} V_{stm}(z)=\frac{g_n}{g_t(T, z)}\, \Phi(z),$$ where $\Phi(z)$ is given by Eq. (\[eq:phicapital\]). The graph of $V_{stm}(z)$ is plotted in Fig. \[fig:potential\_phi\] by dashed lines for several temperatures. It is interesting to note that, in contrast to the gauge invariant potential, Eq. (\[Istm\]), the compensating STM voltage in Eq. (\[Vstm\]) does not exhibit the aforementioned suppression near the boundary at low temperatures, $L_T \gg L_t$. The slope $d V_{stm}(z)/dz $ remains practically the same as in the normal metal in the absence of the superconductor, both at $z\ll L_{T}$ and at $z\gg L_{T}$. The reason is that the conductance of the tunneling barrier between the STM and the metal, $g_t(T, z)$, reflects the suppression of the single particle density of states in the metal, as described by Eq. (\[eq:tunneling conductance\]). This nearly cancels the suppression of $\Phi(z)$ in Eq. (\[Vstm\]). Conclusions {#sec:conclusion} =========== We showed that the low temperature resistance of p-wave superconductor-diffusive normal metal junctions is controlled by the spin-orbit interaction. As a result, the junction resistance, the tunneling density of states in the metal, and other transport properties of the device exhibit a strong dependence on the angle between the vector ${\bf d}$ characterizing the spin part of the superconducting wave function, and the normal to the surface of the junction. In particular, the s-wave component of the proximity effect in the metal vanishes when ${\bf d}$ is parallel to the c-axis. 
The absence of the corresponding dependence of the Knight shift on the angle between ${\bf d}$ and the c-axis in $Sr_{2}RuO_{4}$ crystals is one of the problems in the interpretation of $Sr_{2}RuO_{4}$ as a conventional p-wave superconductor. This fact was attributed to the weakness of the spin-orbit interaction in $Sr_{2}RuO_{4}$.[@knight] We would like to point out that the resistance of the junction should be strongly dependent on the angle between ${\bf d}$ and ${\bf z}$ even in the case of weak spin-orbit interaction. Therefore the measurement of this effect could clarify the situation. Another consequence of the sensitivity of the proximity effect to the orientation of the condensate spin is that a current passing across such a junction leads to spin accumulation inside the p-wave superconductor (although inside the proximity region no spin accumulation occurs). We also would like to mention that the boundary conditions Eq. (\[eq:diffusivespinboundary2\]) can be used to describe the Josephson effect in junctions consisting of two p-wave superconductors separated by a diffusive normal metal. The structure of the boundary conditions (\[eq:diffusivespinboundary2\]) is similar to that for an s-wave superconductor-normal metal junction. Therefore the supercurrent for the p-wave case may be obtained from the conventional formulas for the s-wave case if we substitute the phase difference in the s-wave case with $\varphi_\mathbf{d}+\chi_0$, see Eqs. (\[eq:eff\_greenfunction\]) and (\[eq:d\_angles\]), and the transmission coefficient with $t$. An important consequence of the proximity effect near the superconductor-normal metal boundary is the suppression of the Hall effect in the metal near the superconducting boundary. 
Qualitatively, this suppression is related to the fact that, due to the proximity effect, at low energies the quasiparticle wave functions in the metal are a coherent superposition of electron and hole wave functions, and the effective charge of the quasiparticles approaches zero as $\epsilon\rightarrow 0$. The scheme for calculating the electronic transport presented above was derived to zeroth order in $\omega_{c}\tau$, where $\omega_c$ is the cyclotron frequency and $\tau$ is the elastic mean free time. In this approximation the electron wave functions near the Fermi surface are electron-hole symmetric, which yields a vanishing Hall effect. To describe the Hall effect one has to add to the expression for the current a term linear in $\omega_{c}\tau$,[@ZhouSpivak] $$\mathbf{J}_H\propto \omega_{c}\tau\, {\bf b}\times\int d\epsilon \cos\theta_{R}\cosh^3\theta_{I}\, \boldsymbol{\nabla} f_{1},$$ where ${\bf b}$ is the unit vector in the direction of the magnetic field. Since the magnitude of the proximity effect is controlled by $t$, which is proportional to $\sin\vartheta_\mathbf{d}$, the Hall conductance is expected to have a strong dependence on the orientation of the order parameter $\mathbf{d}$. Since the latter may be oriented by the external magnetic field, both the magnetoresistance of the junction and the Hall resistance are expected to be strongly anisotropic with respect to the orientation of the magnetic field. Finally, we note that our results hold for more general realizations of the $p_x+ip_y$ order parameter in superconductors with a complicated topology of the Fermi surface, such as the one proposed in Ref. . This work was supported by the US Department of Energy through grant DE-FG02-07ER46452. B. S. thanks the International Institute of Physics (Natal, Brazil) for hospitality during the completion of the paper. [99]{} , [Phys. Rev. B]{}, [**25**]{}, 4515 (1982). , [Phys. Rev. B]{}, [**49**]{}, [6847]{} (1994). , [Phys. Rev.
B]{}, [**53**]{}, [14496]{} (1996). , Phys. Rev. B, [**46**]{}, 12841 (1992). , [Rev. Mod. Phys.]{}, [**69**]{}, [731]{} (1997). , [Phys. Rev. B]{}, [**52**]{}, [4467]{} (1995). , Science, [**306**]{}, 1151 (2004). , Science, [**314**]{}, 1267 (2006). , Phys. Rev. Lett. [**97**]{}, 167002 (2006). G. M. Luke [*et al.*]{}, Nature [**394**]{}, 558 (1998). , [Rev. Mod. Phys.]{}, [**75**]{}, [657]{} (2003). , [J. Phys. Soc. Jpn.]{}, [**81**]{}, [011009]{} (2012). , [Phys. Rev. Lett.]{}, [**97**]{}, [167002]{} (2006). , [J. Phys.: Condens. Matter]{} [**7**]{}, L613 (1996). , Phys. Rev. [**123**]{}, 1911 (1961). , [Phys. Rev. Lett.]{}, [**105**]{}, [136401]{} (2010). S. Kashiwaya *et al.*, Phys. Rev. Lett. [**107**]{}, 077003 (2011). , Phys. Rev. Lett. [**84**]{}, 1595 (2000). , J. Low Temp. Phys. [**131**]{}, 1059 (2003). , Phys. Rev. B [**76**]{}, 014526 (2007). , [Rep. Prog. Phys.]{}, [**75**]{}, 042501 (2012). , *Introduction to Unconventional Superconductivity*, Amsterdam, Gordon and Breach Science Publishers (1999). , [Phys. Rev. Lett.]{}, [**93**]{}, [167004]{} (2004). , [Sov. Phys. JETP [**41**]{}, 960 (1975)]{}; [Sov. Phys. JETP [**46**]{}, 155 (1977).]{} , [J. Low Temp. Phys.]{} [**60**]{}, 29 (1985). A. V. Zaitsev, [Sov. Phys. JETP]{}, [**59**]{}, [1015]{} (1984). , [Phys. Rev. B]{}, [**38**]{}, [4504]{} (1988). M. Yu. Kuprianov and V. F. Lukichev, [Sov. Phys. JETP]{}, [**67**]{}, [1163]{} (1988). , [Phys. Rev. Lett.]{}, [**80**]{}, [3847]{} (1998).
--- abstract: 'For $p \in [1, \infty),$ we define and study full and reduced crossed products of algebras of operators on $\sigma$-finite $L^p$ spaces by isometric actions of second countable locally compact groups. We give universal properties for both crossed products. When the group is abelian, we prove the existence of a dual action on the full and reduced $L^p$ operator crossed products. When the group is discrete, we construct a conditional expectation to the original algebra which is faithful in a suitable sense. For a free action of a discrete group on a compact metric space $X,$ we identify all traces on the reduced $L^p$ operator crossed product, and if the action is also minimal we show that the reduced $L^p$ operator crossed product is simple. We prove that the full and reduced $L^p$ operator crossed products of an amenable $L^p$ operator algebra by a discrete amenable group are again amenable. We prove a Pimsner-Voiculescu exact sequence for the K-theory of reduced $L^p$ operator crossed products by ${{\mathbb{Z}}}.$ We show that the $L^p$ analogs ${\mathcal{O}}_d^p$ of the Cuntz algebras ${\mathcal{O}}_d$ are stably isomorphic to reduced $L^p$ operator crossed products of stabilized $L^p$ UHF algebras by ${\mathbb{Z}},$ and show that $K_0 \big( {\mathcal{O}}_d^p \big) \cong {{\mathbb{Z}}}/ (d - 1) {{\mathbb{Z}}}$ and $K_1 \big( {\mathcal{O}}_d^p \big) = 0.$' address: 'Department of Mathematics, University of Oregon, Eugene OR 97403-1222, USA.' author: - 'N. Christopher Phillips' date: 24 September 2013 title: 'Crossed products of $L^p$ operator algebras and the K-theory of Cuntz algebras on $L^p$ spaces' --- This paper is an initial investigation of full and reduced crossed products of algebras of operators on $L^p$ spaces by isometric actions of locally compact groups, for $p \in [1, \infty).$ The original motivation was the computation of the K-theory of the analogs of Cuntz algebras on $L^p$ spaces, introduced in [@PhLp1].
The result is the same as in the C\* case: $K_0 \big( {{\mathcal{O}}_{d}^{p}} \big) \cong {{\mathbb{Z}}}/ (d - 1) {{\mathbb{Z}}}$ and $K_1 \big( {{\mathcal{O}}_{d}^{p}} \big) = 0.$ The choice of material in this paper is largely dictated by what is needed for this calculation, but we carry the work out in the greatest generality not requiring significant extra work, and we present some results unrelated to this computation which can be obtained with little additional work. As steps toward the computation, we prove that on the reduced $L^p$ operator crossed product by a countable group, there is a conditional expectation to the original algebra which is faithful in a suitable sense, and we prove a Pimsner-Voiculescu exact sequence for the K-theory of reduced $L^p$ operator crossed products by ${{\mathbb{Z}}}.$ We also construct the dual action on the full and reduced $L^p$ operator crossed products by a second countable locally compact abelian group. The main unrelated results are as follows. Let $G$ be a countable discrete group. If $X$ is a free minimal compact metrizable $G$-space, then the reduced $L^p$ operator crossed product by the corresponding action on $C (X)$ is simple. If $X$ is a free compact metrizable $G$-space, not necessarily minimal, then the traces on the reduced $L^p$ operator crossed product are in one-to-one correspondence with the invariant Borel probability measures on $X.$ If $G$ is amenable and acts on an $L^p$ operator algebra which is amenable as a Banach algebra, then both the full and reduced $L^p$ operator crossed products are amenable Banach algebras. We also mention a separate result of Pooya and Hejazian [@PH]. Let $p \in (1, {\infty}),$ let $G$ be a Powers group, and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action of $G$ on an $L^p$ operator algebra $A$ such that $A$ is $G$-simple. Then the reduced $L^p$ operator crossed product of $A$ by $G$ is simple.
This is not true for $p = 1,$ as follows from Proposition \[P\_3217\_L1OpGpAlg\]. We originally hoped to compute $K_* \big( {{\mathcal{O}}_{d}^{p}} \big)$ directly, without developing the theory of $L^p$ operator crossed products. However, the methods of the original computation, in [@Cu2], do not seem to work. See the discussion at the beginning of Section \[Sec\_OpdCP\]. There are many interesting questions which we do not address. We exclude the case $p = {\infty}$ in most of the paper, since it seems to require special treatment. Whenever convenient, we restrict to discrete groups. We make no effort to decide when the full and reduced $L^p$ operator crossed products are the same, beyond the simple observation that they are at least isomorphic when the group is finite. We do not investigate the crossed product by the dual action (Takai duality). We do not try to give any general criteria for simplicity of crossed products; even for actions on spaces, our result is weaker than in the C\* case, where one only needs the action to be essentially free. There are many other interesting questions which we do not even mention. This paper is organized as follows. In Section \[Sec\_LpOpAlg\], we define $L^p$ operator algebras (closed subalgebras of $L (L^p (X, \mu))$), give some examples, and prove a few elementary facts. Section \[Sec\_GAlg\] contains the definition of (isometric) $G$-$L^p$ operator algebras, that is, $L^p$ operator algebras with (isometric) actions of the group $G,$ and their covariant representations. We give examples of $G$-$L^p$ operator algebras, and prove the existence of regular covariant representations. Full and reduced $L^p$ operator crossed products are introduced in Section \[Sec\_CP\]. Analogs of the usual universal properties for C\* crossed products are given. When the group is abelian, dual actions are defined on both the full and reduced $L^p$ operator crossed products.
Section \[Sec\_DiscCP\] contains the construction of the conditional expectation on a reduced $L^p$ operator crossed product by a discrete group. In Section \[Sec\_FpCX\], we prove the results mentioned above on $L^p$ operator crossed products by free actions on compact spaces, and on amenability of $L^p$ operator crossed products. Section \[Sec\_PV\] contains the proof of the analog of the Pimsner-Voiculescu exact sequence for the K-theory of reduced $L^p$ operator crossed products by ${{\mathbb{Z}}}.$ In Section \[Sec\_OpdCP\], we show how to realize the stabilization of the algebra ${{\mathcal{O}}_{d}^{p}}$ as the reduced $L^p$ operator crossed product of an action of ${{\mathbb{Z}}}$ on a stabilized $L^p$ UHF algebra, and we combine this realization with the Pimsner-Voiculescu exact sequence to compute $K_* \big( {{\mathcal{O}}_{d}^{p}} \big).$ In Section \[Sec\_Problems\], we state a few of the many problems left open in this paper. We warn the reader that many facts which are either automatic or well known for C\*-algebras are false, unknown, or require additional work for $L^p$ operator algebras. Integration of Banach space valued continuous functions with compact support will be taken to be as in Section 2.5 of [@DDW]. For more details, see Section 1.5 of [@Wl]. I am grateful to Guillermo Cortiñas, María Eugenia Rodríguez, and Sanaz Pooya for finding a number of misprints in earlier drafts of this paper. $L^p$ operator algebras {#Sec_LpOpAlg} ======================= Crossed products will be taken with respect to the category of norm closed subalgebras of algebras of the form ${L (L^p (X, \mu))},$ for a fixed value of $p.$ We call such algebras $L^p$ operator algebras. We will mostly restrict to nondegenerate algebras and ${\sigma}$-finite measure spaces. In this section, we present the basic definitions related to $L^p$ operator algebras and some examples.
\[D-LpOpAlg\] Let $A$ be a Banach algebra, and let $p \in [1, {\infty}].$ We say that $A$ is an [*$L^p$ operator algebra*]{} if there is a measure space ${(X, {\mathcal{B}}, \mu)}$ such that $A$ is isometrically isomorphic to a norm closed subalgebra of ${L (L^p (X, \mu))}.$ We do not assume that $A$ is unital. When $p = 2,$ an $L^p$ operator algebra is a Banach algebra which is isometrically isomorphic to a norm closed but not necessarily selfadjoint subalgebra of the bounded operators on some Hilbert space. (We do not know if there is a useful $L^p$ analog of a C\*-algebra.) Results such as the characterization in [@BRS] suggest that nonselfadjoint operator algebras are better behaved when matrix norms are included in the structure. In the $L^p$ situation, there is an obvious way to define matrix norms. In [@LM] there is a representation theorem for matrix normed operator algebras on the collection of quotients of subspaces of $L^p$ spaces, for a fixed value of $p.$ However, for $p \neq 2,$ this collection is much bigger than the collection of $L^p$ spaces, and therefore does not meet our needs. We do not need matrix norms for the purposes of this paper, essentially because the algebras we consider have a unique $L^p$ operator matrix normed structure. Therefore we do not pursue this idea here. We do make scattered remarks. For example, as discussed at the beginning of Section \[Sec\_CP\], the completely isometric theory of crossed products is essentially a special case of what we do here. When $p = {\infty},$ it may well be more appropriate to consider norm closed subalgebras of spaces of the form $L (C_0 (X))$ for locally compact Hausdorff spaces $X.$ Since we will eventually have to exclude $p = {\infty}$ anyway, we do not pursue this approach here. The algebra ${L (L^p (X, \mu))}$ is an obvious example of an $L^p$ operator algebra. Here are some others. 
\[E-Mpd\] Let $p \in [1, {\infty}],$ let $d \in {{\mathbb{Z}}_{> 0}},$ set $Z = \{ 0, 1, \ldots, d - 1 \},$ and let ${\lambda}$ be normalized counting measure on $Z,$ that is, ${\lambda}(S) = d^{-1} {{\mathrm{card}}}(S)$ for all $S {\subset}Z.$ Then $L (L^p ( Z, {\lambda}))$ is an $L^p$ operator algebra, algebraically isomorphic to the algebra $M_d$ of $d \times d$ complex matrices. We call it $M^p_d.$ We write its standard matrix units as $e_{j, k}$ for $j, k \in Z.$ Thus, $$e_{j, k} ({\chi}_{ \{ l \} } ) = \begin{cases} 0 & l \neq k \\ {\chi}_{ \{ j \} } & l = k. \end{cases}$$ We use $\{ 0, 1, \ldots, d - 1 \}$ rather than $\{ 1, 2, \ldots, d \},$ and normalized counting measure, for notational convenience in Section \[Sec\_OpdCP\]. \[R-UnNormalize\] Let the notation be as in Example \[E-Mpd\], and let $\nu = d \cdot {\lambda}$ be ordinary counting measure on $Z.$ Then there is an isometric isomorphism ${\varphi}$ from $M^p_d$ as defined there to $L (l^p (Z, \nu))$ which sends matrix units to matrix units. Indeed, let $u \in L \big( L^p (Z, \nu), \, L^p (Z, {\lambda}) \big)$ be the isometric isomorphism $u \xi = d^{1/p} \xi.$ Then set ${\varphi}(a) = u a u^{-1}$ for $a \in M^p_d.$ More generally, replacing the measure $\mu$ by a strictly positive multiple of $\mu$ does not change ${L (L^p (X, \mu))}.$ (See Lemma 2.11 of [@PhLp2b] for the formal statement.) The algebra $M^p_d$ plays a key role in [@PhLp1] and [@PhLp2a]. \[L\_3911\_UTr\] Let the notation be as in Example \[E-Mpd\], and set $$T = {{\mathrm{span}}}\big( \big\{ e_{j, k} \colon 0 \leq j \leq k \leq d - 1 \big\} \big),$$ the algebra of upper triangular matrices. Then $T$ is an $L^p$ operator algebra. Unlike the other examples we present here, for $p = 2$ we do not get a C\*-algebra. For any Banach space $E,$ we let $K (E)$ denote the algebra of compact operators on $E.$ \[E-CptOps\] Let $p \in [1, {\infty}],$ and let ${(X, {\mathcal{B}}, \mu)}$ be a measure space.
Then the algebra $K (L^p (X, \mu))$ of compact operators on $L^p (X, \mu)$ is an $L^p$ operator algebra. \[E-MatpS\] Let $S$ be a set, and let $p \in [1, {\infty}].$ For a finite subset $F {\subset}S,$ we define $M_F^p$ to be the set of all $a \in L (l^p (S))$ such that $a \xi = 0$ whenever $\xi |_F = 0$ and such that $a \xi \in l^p (F) {\subset}l^p (S)$ for all $\xi \in l^p (S).$ We then define $$M_S^p = \bigcup_{ {\mbox{$F {\subset}S$ finite}} } M_F^p,$$ and we define ${{\overline}{M}}_S^p$ to be the closure of $M_S^p$ in the operator norm on $L (l^p (S)).$ It follows from [Lemma \[L-MpIsAlg\]]{} below that ${{\overline}{M}}_S^p$ is an $L^p$ operator algebra which is contained in $K (l^p (S)).$ By Corollary \[C-UMnIsK\] below, ${{\overline}{M}}_S^p = K (l^p (S))$ for $p \in (1, {\infty}),$ but Example \[E-UMn1\] shows that this is not true for $p = 1.$ When $S = \{ 0, 1, \ldots, d - 1 \}$ or $S = \{ 1, 2, \ldots, d \},$ we recover the algebra $M^p_d$ of Example \[E-Mpd\], including its norm, by Remark \[R-UnNormalize\]. As in Example \[E-Mpd\], for $j, k \in S$ we let $e_{j, k} \in M^p_S {\subset}{{\overline}{M}}^p_S$ be the standard matrix unit, given by $$e_{j, k} ({\chi}_{ \{ l \} } ) = \begin{cases} 0 & l \neq k \\ {\chi}_{ \{ j \} } & l = k. \end{cases}$$ \[L-MpIsAlg\] Let $S$ be a set, and let $p \in [1, {\infty}).$ Then $M_S^p$ is a subalgebra of $L (l^p (S))$ and ${{\overline}{M}}_S^p$ is a closed subalgebra of $L (l^p (S))$ which is contained in $K (l^p (S)).$ It is easy to check that $M_S^p$ is a subalgebra of $L (l^p (S))$ and all its elements have finite rank. The rest of the statement follows. The following proposition and its corollary will not be formally used. They are included to show how ${{\overline}{M}}_S^p$ is related to the more usual algebra $K (l^p (S)).$ \[P-UMnDense\] Let $p \in [1, {\infty}).$ Let $S$ be a set, and let ${\mathcal{F}}$ be the collection of finite subsets of $S,$ ordered by inclusion. 
For $T \in {\mathcal{F}}$ let $e_T \in L (l^p (S))$ be multiplication by ${\chi}_T.$ Let $a \in K (l^p (S)).$ If $p \in (1, {\infty}),$ then $\lim_{T \in \mathcal{F}} \| e_T a e_T - a \| = 0,$ and if $p = 1$ then $\lim_{T \in \mathcal{F}} \| e_T a - a \| = 0.$ We begin with the easily checked observation that for all $T \in {\mathcal{F}},$ we have $$\label{Eq:NormeF} \| e_T \| \leq 1 {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}\| 1 - e_T \| \leq 1.$$ (We have equality unless $T = \varnothing$ or $T = S.$) Now let $a \in K (l^p (S))$ and let ${\varepsilon}> 0.$ We will find $T_0 \in {\mathcal{F}}$ such that for all $T \in {\mathcal{F}}$ containing $T_0,$ we have $\| e_T a - a \| < \tfrac{1}{3} {\varepsilon}.$ For $p \neq 1$ we will further find $T_1 \in {\mathcal{F}}$ (containing $T_0$) such that for all $T \in {\mathcal{F}}$ containing $T_1,$ we have $\| e_T a e_T - a \| < {\varepsilon}.$ Define $$B = {{\overline}{ \big\{ a \xi \colon {\mbox{$\xi \in l^p (S)$ and $\| \xi \|_p \leq 1$}} \big\} }} {\subset}l^p (S).$$ Then $B$ is compact. 
For $T \in {\mathcal{F}},$ define an open set $U_T {\subset}l^p (S)$ by $$U_T = \big\{ {\eta}\in l^p (S) \colon \| (1 - e_T) {\eta}\|_p < \tfrac{1}{6} {\varepsilon}\big\}.$$ If $T_1, T_2 \in {\mathcal{F}}$ satisfy $T_1 {\subset}T_2,$ then $U_{T_1} {\subset}U_{T_2},$ since, by (\[Eq:NormeF\]), for ${\eta}\in l^p (S)$ we have $$\| (1 - e_{T_2}) {\eta}\|_p = \| (1 - e_{T_2}) (1 - e_{T_1}) {\eta}\|_p \leq \| (1 - e_{T_1}) {\eta}\|_p.$$ Also, for all ${\eta}\in l^p (S),$ we have $\lim_{T \in \mathcal{F}} \| (1 - e_T) {\eta}\|_p = 0,$ so ${\eta}\in \bigcup_{T \in {\mathcal{F}} } U_T.$ Since $B$ is compact, there is $T_0 \in {\mathcal{F}}$ such that $B {\subset}U_{T_0}.$ We now claim that whenever $T \in {\mathcal{F}}$ satisfies $T_0 {\subset}T,$ then $\| e_T a - a \| < \tfrac{1}{3} {\varepsilon}.$ Indeed, if $\xi \in l^p (S)$ satisfies $\| \xi \|_p \leq 1,$ then $a \xi \in B,$ so $$\| (e_T a - a) \xi \|_p = \| (e_T - 1) a \xi \|_p < \tfrac{1}{6} {\varepsilon}.$$ Taking the supremum over all such $\xi$ gives $\| e_T a - a \| \leq \tfrac{1}{6} {\varepsilon}< \tfrac{1}{3} {\varepsilon}.$ This proves the claim. 
We now have the statement in the case $p = 1.$ So from now on assume $p \in (1, {\infty}).$ For $j \in S$ let ${\delta}_j$ denote the corresponding standard basis vector in $l^p (S).$ There are functionals ${\omega}_j$ in the dual space $l^p (S)'$ for $j \in T_0$ such that for all $\xi \in l^p (S)$ we have $$\label{Eq_3912_St} e_{T_0} a \xi = \sum_{j \in T_0} {\omega}_j (\xi) {\delta}_j.$$ Let $q \in (1, {\infty})$ be the conjugate exponent, that is, $\frac{1}{p} + \frac{1}{q} = 1.$ There are ${\eta}_j \in l^q (S)$ for $j \in T_0$ such that for all $\xi \in l^p (S)$ we have $${\omega}_j (\xi) = \sum_{l \in S} ({\eta}_j)_l \xi_l.$$ For $T \in {\mathcal{F}}$ let $f_T \in L (l^q (S))$ be multiplication by ${\chi}_T.$ There is $T_1 \in {\mathcal{F}}$ such that $T_0 {\subset}T_1$ and such that for $T \in {\mathcal{F}}$ with $T_1 {\subset}T,$ and $j \in T_0,$ we have $$\| f_T {\eta}_j - {\eta}_j \|_q < \frac{{\varepsilon}}{6 \cdot {{\mathrm{card}}}(T_0)}.$$ Since $\xi \mapsto {\omega}_j (e_T \xi)$ is the linear functional corresponding to $f_T {\eta}_j \in l^q (S),$ we get, for all $\xi \in l^p (S),$ all $j \in T_0,$ and all $T \in {\mathcal{F}}$ containing $T_1,$ $$| {\omega}_j (\xi) - {\omega}_j (e_T \xi) | \leq \| \xi \|_p \cdot \| {\eta}_j - f_T {\eta}_j \|_q \leq \frac{{\varepsilon}\| \xi \|_p}{6 \cdot {{\mathrm{card}}}(T_0)}.$$ Therefore, by (\[Eq\_3912\_St\]), $$\| e_{T_0} a e_T \xi - e_{T_0} a \xi \|_p \leq \sum_{j \in T_0} | {\omega}_j (\xi) - {\omega}_j (e_T \xi) | \leq \frac{{\varepsilon}\| \xi \|_p}{6}.$$ Taking the supremum over all such $\xi$ gives $\| e_{T_0} a e_T - e_{T_0} a \| \leq \tfrac{1}{6} {\varepsilon}< \tfrac{1}{3} {\varepsilon}.$ Therefore $$\begin{aligned} \| e_T a e_T - a \| & \leq \| e_T ( a - e_{T_0} a) e_T \| + \| e_{T_0} a e_T - e_{T_0} a \| + \| e_{T_0} a - a \| \\ & < \| e_T \| \cdot \| a - e_{T_0} a \| \cdot \| e_T \| + \tfrac{1}{3} {\varepsilon}+ \tfrac{1}{3} {\varepsilon}< \tfrac{1}{3} {\varepsilon}+ \tfrac{1}{3} {\varepsilon}+ 
\tfrac{1}{3} {\varepsilon}= {\varepsilon}.\end{aligned}$$ This completes the proof that $\lim_{T \in \mathcal{F}} \| e_T a e_T - a \| = 0.$ \[C-UMnIsK\] Let $p \in (1, {\infty}),$ and let $S$ be a set. Then ${{\overline}{M}}^p_S = K (l^p (S)).$ This is immediate from Proposition \[P-UMnDense\]. We also get the result that the finite rank operators are dense in $K (l^p (S))$ whenever $p \in [1, {\infty}).$ This is surely known. The case in which $S$ is countable is known in much greater generality, namely for every Banach space with a Schauder basis. It is not true that ${{\overline}{M}}^1_S = K (l^1 (S)),$ even when $S$ is countable. In fact, if $S$ is infinite, then ${{\overline}{M}}_S^1$ does not even contain all rank one operators in $L (l^1 (S)).$ \[E-UMn1\] Let $S$ be an infinite set, and fix $s_0 \in S.$ For $j \in S$ let ${\delta}_j$ denote the corresponding standard basis vector in $l^1 (S).$ Define $a \colon l^1 (S) \to l^1 (S)$ by $$a \xi = \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j \in S} }}}}} \xi_j \right) {\delta}_{s_0}.$$ Then one easily checks that $a$ is a rank one operator in $L (l^1 (S))$ and that $\| a \| = 1.$ We show that $a \not\in {{\overline}{M}}_S^1.$ Let $T {\subset}S$ be any finite set, and let $e_T$ be as in Proposition \[P-UMnDense\]. We first observe that $\| a - a e_T \| \geq 1.$ Indeed, there is $j \in S$ such that $j \not\in T,$ and taking $\xi = {\delta}_j$ gives $\| \xi \| = 1,$ $\| a \xi \| = 1,$ and $a e_T \xi = 0.$ We now claim that $b \in {{\overline}{M}}_S^1$ implies $\| a - b \| \geq 1.$ It suffices to prove this for $b \in M_S^1.$ Thus we may assume that there is a finite set $T {\subset}S$ such that $e_T b e_T = b.$ Since $b (1 - e_T) = 0,$ we get $$\| a - b \| = \| a - b \| \cdot \| 1 - e_T \| \geq \| (a - b) (1 - e_T) \| = \| a (1 - e_T) \| \geq 1,$$ as desired. 
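Example \[E-UMn1\] can be checked numerically. The sketch below is an illustration only (the truncation size `n` and the sets $T$ are our choices, with $s_0 = 0$); it uses the standard fact that the operator norm of a matrix acting on $l^1$ is the largest $l^1$ norm of one of its columns.

```python
import numpy as np

def norm_l1_to_l1(a):
    # Operator norm of a matrix acting on l^1:
    # the maximum l^1 norm of a column.
    return np.abs(a).sum(axis=0).max()

n = 50                # truncate S to {0, 1, ..., n - 1}
a = np.zeros((n, n))
a[0, :] = 1.0         # a(xi) = (sum_j xi_j) * delta_0: rank one, norm 1

assert norm_l1_to_l1(a) == 1.0

# Compressing a to a finite block T leaves it at distance exactly 1:
# every column of a - a e_T outside T still has l^1 norm 1.
for m in (1, 5, 25):  # T = {0, ..., m - 1}
    e_T = np.diag([1.0] * m + [0.0] * (n - m))
    assert norm_l1_to_l1(a - a @ e_T) == 1.0
```

For $p \in (1, {\infty}),$ by contrast, Proposition \[P-UMnDense\] shows that the truncations $e_T a e_T$ converge to $a$ in norm.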
\[E-pUHF\] Let $p \in [1, {\infty}).$ Let $P$ be the set of prime numbers, and let $N \colon P \to {{\mathbb{Z}}_{\geq 0}}\cup \{ {\infty}\}$ be a function such that $\sum_{t \in P} N (t) = {\infty}.$ (Such a function is a [*supernatural number*]{}.) Then the spatial $L^p$ UHF algebra $D$ of type $N,$ as in Definition 3.9(2) of [@PhLp2a] (and whose uniqueness is proved in Theorem 3.10 of [@PhLp2a]) is an $L^p$ operator algebra. Recall from Definition 3.5 and Theorem 3.10 of [@PhLp2a] that $D$ is characterized as follows. For any sequence $\big( r (0), \, r (1), \, r (2), \, \ldots \big)$ in ${{\mathbb{Z}}_{> 0}}$ with $r (0) = 1,$ $r (n) \mid r (n + 1)$ for $n \in {{\mathbb{Z}}_{\geq 0}},$ and such that for all $t \in P$ we have $$N (t) = \sup \big( \big\{ m \in {{\mathbb{Z}}_{\geq 0}}\colon {\mbox{there is $n \in {{\mathbb{Z}}_{\geq 0}}$ such that $t^m \mid r (n)$}} \big\} \big),$$ there are subalgebras $D_0 {\subset}D_1 {\subset}\cdots {\subset}D$ such that ${{\overline}{\bigcup_{n = 0}^{{\infty}} D_n}} = D$ and such that $D_n$ is isometrically isomorphic to $M^p_{r (n)}$ for all $n \in {{\mathbb{Z}}_{\geq 0}}.$ As a special case, for $d \in \{ 2, 3, \ldots \},$ we get the spatial $L^p$ UHF algebra of type $d^{\infty}$ by taking $N (t) = {\infty}$ for those primes $t$ which divide $d$ and $N (t) = 0$ otherwise. Then we can take $r (n) = d^n$ for $n \in {{\mathbb{Z}}_{\geq 0}}.$ Theorem 5.14 of [@PhLp2b] shows that for every $p \in [1, {\infty})$ and every supernatural number $N,$ there are uncountably many nonisomorphic nonspatial $L^p$ UHF algebras of type $N.$ \[E-Odp\] Let $p \in [1, {\infty}),$ and let $d \in {{\mathbb{Z}}_{> 0}}.$ Then the algebra ${{\mathcal{O}}_{d}^{p}}$ of Definition 8.8 of [@PhLp1] is an $L^p$ operator algebra. We recall its definition. For convenience in Section \[Sec\_OpdCP\], we use slightly different notation. 
We let $L_d$ denote the Leavitt algebra over ${{\mathbb{C}}},$ as in Definition 1.1 of [@PhLp1], with standard generators (note the change in labelling) $s_0, s_1, \ldots, s_{d - 1}, t_0, t_1, \ldots, t_{d - 1}$ satisfying the relations $$\label{Eq:Leavitt1} t_j s_j = 1 \,\,\,\,\,\,\,\,\,\,\,\, {\mbox{for $j \in \{ 0, 1, \ldots, d - 1 \},$}}$$ $$\label{Eq:Leavitt2} t_j s_k = 0 \,\,\,\,\,\,\,\,\,\,\,\, {\mbox{for $j, k \in \{ 0, 1, \ldots, d - 1 \}$ with $j \neq k,$}}$$ and $$\label{Eq:Leavitt3} \sum_{j = 0}^{d - 1} s_j t_j = 1.$$ Then ${{\mathcal{O}}_{d}^{p}}$ is the completion of $L_d$ in the norm coming from any spatial [representation]{} (in the sense of Definition 7.4(2) of [@PhLp1]) on a space $L^p (X, \mu)$ for a [${\sigma}$-finite measure space]{}  ${(X, {\mathcal{B}}, \mu)}.$ By Theorem 8.7 of [@PhLp1], all such [representation]{}[s]{} give the same norm on $L_d,$ so ${{\mathcal{O}}_{d}^{p}}$ is well defined. Since injective spatial [representation]{}[s]{} of $L_d$ exist (Lemma 7.5 of [@PhLp1]) we may, and do, regard $L_d$ as a subalgebra of ${{\mathcal{O}}_{d}^{p}}.$ \[E-CX\] Let $p \in [1, {\infty}],$ and let $X$ be a locally compact Hausdorff space. Then $C_0 (X),$ with the usual supremum norm, is an $L^p$ operator algebra. To see this, let $\mu$ be counting measure on $X.$ Then the map $f \mapsto m (f),$ sending $f \in C_0 (X)$ to the multiplication operator $(m (f) \xi) (x) = f (x) \xi (x)$ for $\xi \in L^p (X, \mu)$ and $x \in X,$ is an isometric bijection from $C_0 (X)$ to a norm closed subalgebra of ${L (L^p (X, \mu))}.$ If $X$ is compact metrizable (and in some other cases), we can find a finite Borel measure $\nu$ on $X$ such that $\nu (U) > 0$ for every nonempty open set $U {\subset}X.$ Then $C (X)$ is isometrically isomorphic to the corresponding algebra of multiplication operators on $L^p (X, \nu).$ If $X$ is compact metrizable and $p \neq {\infty},$ then $L^p (X, \nu)$ is separable. 
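On sequences indexed by the nonnegative integers, the maps $s_j \colon \delta_n \mapsto \delta_{d n + j},$ with $t_j$ the corresponding left inverses, satisfy the relations (\[Eq:Leavitt1\]), (\[Eq:Leavitt2\]), and (\[Eq:Leavitt3\]), since every $m \in {{\mathbb{Z}}_{\geq 0}}$ is uniquely of the form $d n + j$ with $0 \leq j \leq d - 1.$ The following finite-dimensional sketch is our own illustration (the names `s`, `t`, `d`, `N`, and `M` are not from [@PhLp1]): it truncates the basis to $\{ 0, 1, \ldots, N - 1 \}$ and verifies the relations on the block where no index overflows the truncation.

```python
import numpy as np

d, N = 3, 30   # d generators; basis indices 0, ..., N - 1
M = N // d     # block on which the truncated relations are exact

def s(j):
    # s_j sends the basis vector delta_n to delta_{d*n + j}.
    a = np.zeros((N, N))
    for n in range(N):
        if d * n + j < N:
            a[d * n + j, n] = 1.0
    return a

def t(j):
    # On the standard basis, t_j acts as the transpose of s_j.
    return s(j).T

I = np.eye(N)
for j in range(d):
    for k in range(d):
        expected = I if j == k else np.zeros((N, N))
        # t_j s_k = delta_{jk} * 1  (relations (Eq:Leavitt1), (Eq:Leavitt2))
        assert np.array_equal((t(j) @ s(k))[:M, :M], expected[:M, :M])

# sum_j s_j t_j = 1  (relation (Eq:Leavitt3)) holds on the whole truncation.
assert np.array_equal(sum(s(j) @ t(j) for j in range(d)), I)
```

The relation $t_j s_j = 1$ is exact only on the upper-left $M \times M$ block, since $s_j$ is cut off once $d n + j \geq N$; the relations $t_j s_k = 0$ for $j \neq k$ and $\sum_j s_j t_j = 1$ hold on the whole truncation.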
Our final example is the spatial $L^p$ operator tensor product of $L^p$ operator algebras. For this, we need to recall briefly the tensor product of operators on $L^p$ spaces. What we need is summarized in Theorem 2.16 of [@PhLp1], but is taken from Chapter 7 of [@DF]. Those sources assume the measures are ${\sigma}$-finite, but this hypothesis is not needed, by Section 1 of [@FIP]. When we need them, we will use the symbol $\otimes_{\mathrm{alg}}$ for algebraic (that is, not completed) tensor products of both Banach spaces and Banach algebras. \[R-LpTens\] For proofs of the following, see Theorem 2.16 of [@PhLp1], and remove the hypothesis of ${\sigma}$-finiteness there by using, in [@FIP], Theorem 1.1 and the discussion at the beginning of Section 1. For $p \in [1, {\infty})$ and for measure spaces ${(X, {\mathcal{B}}, \mu)}$ and ${(Y, {\mathcal{C}}, \nu)},$ there is an $L^p$ tensor product $L^p (X, \mu) \otimes_p L^p (Y, \nu)$ which can be canonically identified with $L^p (X \times Y, \, \mu \times \nu)$ via $(\xi \otimes {\eta}) (x, y) = \xi (x) {\eta}(y).$ Moreover, if $$(X_1, {{\mathcal{B}}}_1, \mu_1), \,\,\,\,\,\, (X_2, {{\mathcal{B}}}_2, \mu_2), \,\,\,\,\,\, (Y_1, {{\mathcal{C}}}_1, \nu_1), {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}(Y_2, {{\mathcal{C}}}_2, \nu_2)$$ are measure spaces, and $$a \in L \big( L^p (X_1, \mu_1), \, L^p (X_2, \mu_2) \big) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}b \in L \big( L^p (Y_1, \nu_1), \, L^p (Y_2, \nu_2) \big),$$ then there is a corresponding tensor product operator $$a \otimes b \in L \big( L^p (X_1 \times Y_1, \, \mu_1 \times \nu_1), \, L^p (X_2 \times Y_2, \, \mu_2 \times \nu_2) \big),$$ which has the expected properties: bilinearity, $(a_1 \otimes b_1) (a_2 \otimes b_2) = a_1 a_2 \otimes b_1 b_2,$ and $\| a \otimes b \| = \| a \| \cdot \| b \|.$ We exclude $p = {\infty}$ because usually $L^{{\infty}} (X \times Y, \, \mu \times \nu)$ is much bigger than the closure of the image of $L^{{\infty}} (X, \mu) \otimes_p
L^{{\infty}} (Y, \nu).$ \[E-LpTP\] Let $p \in [1, {\infty}),$ let $(X_1, {{\mathcal{B}}}_1, \mu_1)$ and $(X_2, {{\mathcal{B}}}_2, \mu_2)$ be [measure space]{}s, and let $A_1 {\subset}L ( L^p (X_1, \mu_1) )$ and $A_2 {\subset}L ( L^p (X_2, \mu_2) )$ be norm closed subalgebras. Define the algebra $$A_1 \otimes_p A_2 {\subset}L \big( L^p (X_1 \times X_2, \, \mu_1 \times \mu_2) \big)$$ to be the closed linear span of all $a_1 \otimes a_2$ (in the sense of Remark \[R-LpTens\]) for $a_1 \in A_1$ and $a_2 \in A_2,$ as in Definition 1.9 of [@PhLp2a]. Then $A_1 \otimes_p A_2$ is an $L^p$ operator algebra. Thus, for example, if $A {\subset}{L (L^p (X, \mu))}$ is an $L^p$ operator algebra, we can form an $L^p$ matrix algebra: let $d \in {{\mathbb{Z}}_{> 0}},$ let $Z$ and ${\lambda}$ be as in Example \[E-Mpd\], and get $$M^p_d \otimes_p A {\subset}L \big( L^p (Z \times X, \, {\lambda}\times \mu) \big).$$ Remark \[R-UnNormalize\] is easily adapted to show that we get the same result using counting measure instead of ${\lambda}.$ If $A_1$ and $A_2$ are $L^p$ operator algebras, we can choose [measure space]{}s $(X_1, {\mathcal{B}}_1, \mu_1)$ and $(X_2, {\mathcal{B}}_2, \mu_2),$ and isometric [representation]{}[s]{} (in the sense of [Definition \[D-LpRep\]]{} below) $$\pi_1 \colon A_1 \to L (L^p (X_1, \mu_1)) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}\pi_2 \colon A_2 \to L (L^p (X_2, \mu_2)).$$ Following Example \[E-LpTP\], we can then form the $L^p$ operator tensor product $\pi_1 (A_1) \otimes_p \pi_2 (A_2).$ In general, the resulting $L^p$ operator tensor product can depend on the choice of $\pi_1$ and $\pi_2,$ even when $p = 2.$ The following suggestion is due to Vern Paulsen. Choose operator spaces $E_1, E_2, F_1, F_2$ on Hilbert spaces such that $E_1$ is isometrically isomorphic to $F_1,$ such that $E_2$ is isometrically isomorphic to $F_2,$ but such that the spatial tensor products $E_1 \otimes E_2$ and $F_1 \otimes F_2$ are distinct. 
For example, let $H$ be the two dimensional Hilbert space $l^2 ( \{ 1, 2 \} ).$ Take $E_2$ and $F_2$ to be the column Hilbert space $H^{\text{c}}$ and the row Hilbert space $H^{\text{r}}$ associated to $H$ (1.2.23 of [@BM]). Take $E_1 = F_1 = M_2 = L ( H ).$ Then $M_2 (E_2) \cong M_2 \otimes E_2$ and $M_2 (F_2) \cong M_2 \otimes F_2$ by 1.5.2 of [@BM]. There are $x, y \in M_2,$ such as $x = e_{1, 1}$ and $y = e_{1, 2},$ for which $\| x^* x + y^* y \| \neq \| x x^* + y y^* \|.$ Using 1.2.5 and 1.2.24 of [@BM], we see that the map $M_2 \otimes E_2 \to M_2 \otimes F_2$ is not isometric. Define $$A_1 = \left\{ \left( \begin{matrix} {\lambda}_1 & a \\ 0 & {\lambda}_2 \end{matrix} \right) \colon {\mbox{$a \in E_1$ and ${\lambda}_1, {\lambda}_2 \in {{\mathbb{C}}}$}} \right\} {\subset}L (H \oplus H).$$ Similarly define $A_2,$ $B_1,$ and $B_2$ using $E_2,$ $F_1,$ and $F_2$ in place of $E_1.$ Then $A_1$ is isometrically isomorphic to $B_1$ and $A_2$ is isometrically isomorphic to $B_2,$ but the obvious map $A_1 \otimes_{\mathrm{alg}} A_2 \to B_1 \otimes_{\mathrm{alg}} B_2$ does not extend to an isometric isomorphism of these algebras on $(H \oplus H) \otimes (H \oplus H).$ It is an isomorphism by finite dimensionality, but using $l^2$ in place of $H$ will give an example for which the obvious map does not extend to an isomorphism of the closures. This can’t happen for [C\*-algebra]{}[s]{}, as pointed out by Narutaka Ozawa. An isometric [homomorphism]{} from a [C\*-algebra]{} $A$ to $L (H),$ for any Hilbert space $H,$ must be a \*-[homomorphism]{}, by Proposition A.5.8 of [@BM]. Because of the rigidity of contractive [representation]{}[s]{} of ${{\mathcal{O}}_{d}^{p}}$ (see the remark after [Lemma \[L-OdIsoMdOd\]]{}), it turns out that we do not need a general theory of $L^p$ operator tensor products. Therefore we do not develop such a theory here. \[L-MatpSMap\] Let $p \in [1, {\infty}).$ Let $S$ and $T$ be sets, and let $g \colon S \to T$ be an injective function. 
Let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}, and let $A {\subset}{L (L^p (X, \mu))}$ be a closed subalgebra. Then there is a unique isometric [homomorphism]{} ${\gamma}_{g, A} \colon {{\overline}{M}}^p_S \otimes_p A \to {{\overline}{M}}^p_T \otimes_p A$ such that ${\gamma}_{g, A} (e_{j, k} \otimes a) = e_{g (j), \, g (k) } \otimes a$ for all $j, k \in S$ and all $a \in A.$ Its range is ${{\overline}{M}}^p_{g (S)} \otimes_p A.$ The assignment $g \mapsto {\gamma}_g$ is functorial. We do not claim functoriality in $A,$ because the norm on ${{\overline}{M}}^p_S \otimes_p A$ in general depends on how $A$ is represented. The proof avoids possible issues with the product of $\mu$ and counting measure on $S$ by making serious use of it only on finite subsets of $S.$ Regard $M^p_S \otimes_{\mathrm{alg}} A$ and $M^p_T \otimes_{\mathrm{alg}} A$ as subalgebras of ${{\overline}{M}}^p_S \otimes_p A$ and ${{\overline}{M}}^p_T \otimes_p A,$ with the restricted norms. By definition, they are dense. There is an obvious algebra [homomorphism]{} ${\gamma}_{g, A}^{(0)} \colon M^p_S \otimes_{\mathrm{alg}} A \to M^p_T \otimes_{\mathrm{alg}} A$ such that ${\gamma}_{g, A}^{(0)} (e_{j, k} \otimes a) = e_{g (j), \, g (k)} \otimes a$ for all $j, k \in S$ and all $a \in A,$ which is functorial in $g.$ The closure of its range is obviously ${{\overline}{M}}^p_{g (S)} \otimes_p A.$ The proof is therefore completed by showing that ${\gamma}_{g, A}^{(0)}$ is isometric and extending by continuity. It suffices to show that, for any finite set $F {\subset}S,$ the restriction of ${\gamma}_{g, A}^{(0)}$ to $M^p_F \otimes_{\mathrm{alg}} A$ is isometric. Thus, we may as well assume that $S$ is finite. Let $\nu_S$ and $\nu_T$ be counting measure on $S$ and $T,$ with all subsets taken to be measurable, and equip $S \times X$ and $T \times X$ with the product ${\sigma}$-algebras and measures. Equip $g (S) \times X {\subset}T \times X$ with the restricted ${\sigma}$-algebra and measure.
We then have a bijection $h = g \times {{\mathrm{id}}}_X$ from $S \times X$ to $g (S) \times X$ which preserves measurable sets and the measure. The map $h$ induces an isometric bijection $${\varphi}\colon L \big( L^p (S \times X, \, \nu_S \times \mu) \big) \to L \big( L^p (g (S) \times X, \, \nu_T \times \mu) \big).$$ Let $f \in L \big( L^p (T \times X, \, \nu_T \times \mu) \big)$ be multiplication by the characteristic function of $g (S) \times X.$ Then there is an obvious isometric identification $$L \big( L^p (g (S) \times X, \, \nu_T \times \mu) \big) = f L \big( L^p (T \times X, \, \nu_T \times \mu) \big) f,$$ and thus an isometric inclusion $${\iota}\colon L \big( L^p (g (S) \times X, \, \nu_T \times \mu) \big) \to L \big( L^p (T \times X, \, \nu_T \times \mu) \big).$$ The restriction of ${\iota}\circ {\varphi}$ to $M^p_S \otimes_{\mathrm{alg}} A$ agrees with ${\gamma}_{g, A}^{(0)}.$ So ${\gamma}_{g, A}^{(0)}$ is isometric, as desired. There are many other examples of $L^p$ operator algebras. We will see a few later, namely $L^p$ operator crossed products. \[D-LpRep\] Let $p \in [1, {\infty}],$ and let $A$ be an $L^p$ operator algebra. 1. \[D\_3914\_LpRep\_Rep\] A [*representation*]{} of $A$ (on $L^p (X, \mu)$) is a [continuous]{}  [homomorphism]{}  $\pi \colon A \to {L (L^p (X, \mu))}$ for some measure space ${(X, {\mathcal{B}}, \mu)}.$ 2. \[D\_3914\_LpRep\_Contr\] The [representation]{} $\pi$ is said to be [*contractive*]{} if $\| \pi (a) \| \leq \| a \|$ for all $a \in A,$ and [*isometric*]{} if $\| \pi (a) \| = \| a \|$ for all $a \in A.$ 3. \[D\_3914\_LpRep\_Spb\] If $p \neq {\infty},$ we say that the [representation]{} $\pi \colon A \to {L (L^p (X, \mu))}$ is [*separable*]{} if $L^p (X, \mu)$ is separable, and that $A$ is [*separably representable*]{} if it has a separable isometric representation. 4.
\[D\_3914\_LpRep\_Sft\] We say that $\pi$ is [*$\sigma$-finite*]{} if $\mu$ is [${\sigma}$-finite]{}, and that $A$ is [*$\sigma$-finitely representable*]{} if it has a $\sigma$-finite isometric representation. 5. \[D\_3914\_LpRep\_Ndg\] We say that $\pi$ is [*nondegenerate*]{} if $$\pi (A) E = {{\mathrm{span}}}\big( \big\{ \pi (a) \xi \colon {\mbox{$a \in A$ and $\xi \in E$}} \big\} \big)$$ is dense in $E.$ We say that $A$ is [*nondegenerately (separably) representable*]{} if it has a nondegenerate (separable) isometric representation, and [*nondegenerately ${\sigma}$-finitely representable*]{} if it has a nondegenerate ${\sigma}$-finite isometric representation. We will only be interested in contractive [representation]{}[s]{}, but it seems potentially confusing to incorporate contractivity into the definition. \[R\_3916\_SepSft\] Let $p \in [1, {\infty}).$ The corollary to Theorem 3 in Section 15 of [@Lc] implies that any separable $L^p$ space is isometrically isomorphic to a $\sigma$-finite $L^p$ space. So separably (nondegenerately) representable implies $\sigma$-finitely (nondegenerately) representable. We do not require representations to be unital, even when the algebra is unital. But a nonunital representation of a unital algebra is necessarily degenerate. \[E\_3419\_Deg\] Use the notation of Example \[E-Mpd\]. Take $S = \{ 1, 2 \},$ so that the algebra there is ${M_{2}^{p}}.$ Take $A = {{\mathbb{C}}}\cdot e_{1, 2}.$ Then $A$ is an $L^p$ operator algebra which has a separable isometric representation. We claim that $A$ has no nondegenerate representation on any nonzero Banach space. Let $E$ be a nonzero Banach space, and let $\pi \colon A \to L (E)$ be a representation. 
Then $\pi (e_{1, 2})^2 = 0,$ so $\pi (e_{1, 2}) E {\subset}\ker ( \pi (e_{1, 2}) ).$ If $\pi (e_{1, 2}) \neq 0,$ then $\ker (\pi (e_{1, 2}))$ is a proper closed subspace of $E$ which contains $\pi (A) E,$ while if $\pi (e_{1, 2}) = 0$ then $\pi (A) E = 0.$ Lemma \[L-LpSum\] below is essentially the same as Lemma 3.5 of [@PhLp2b], except that we do not assume that our direct sums are countable. We first define a version of the disjoint union of measure spaces which is suitable for our purposes. \[D\_3914\_DjUnion\] Let $S$ be a set, and for $j \in S$ let $(X_j, {{\mathcal{B}}}_j, \mu_j)$ be a [measure space]{} with $\mu_j (X_j) > 0.$ The [*disjoint union*]{} $\coprod_{j \in S} (X_j, {{\mathcal{B}}}_j, \mu_j)$ is the [measure space]{} ${(X, {\mathcal{B}}, \mu)}$ defined as follows. Set $X = \coprod_{j \in S} X_j.$ Take ${{\mathcal{B}}}$ to be the collection of subsets $Y {\subset}X$ such that $Y \cap X_j \in {{\mathcal{B}}}_j$ for all $j \in S$ and such that $Y \cap X_j = {\varnothing}$ for all but countably many $j \in S$ or $Y \cap X_j = X_j$ for all but countably many $j \in S.$ Define a measure $\mu$ on $X$ by $\mu (Y) = \sum_{j \in S} \mu_j (Y \cap X_j)$ for $Y \in {{\mathcal{B}}}.$ \[L\_3914\_PrpDj\] Let the notation be as in Definition \[D\_3914\_DjUnion\]. Then: 1. \[D\_3914\_DjUnion\_IsM\] ${(X, {\mathcal{B}}, \mu)}= \coprod_{j \in S} (X_j, {{\mathcal{B}}}_j, \mu_j)$ is a [measure space]{}. 2. \[D\_3914\_DjUnion\_Supp\] If $Y {\subset}X$ satisfies $Y \cap X_j \neq {\varnothing}$ for uncountably many $j \in S,$ then $\mu (Y) = \infty.$ 3. \[D\_3914\_DjUnion\_Sep\] If $S$ is countable and $L^p (X_j, \mu_j)$ is separable for all $j \in S,$ then $L^p (X, \mu)$ is separable. 4. \[D\_3914\_DjUnion\_Sft\] If $S$ is countable and $\mu_j$ is [${\sigma}$-finite]{} for all $j \in S,$ then $\mu$ is [${\sigma}$-finite]{}. All parts are immediate. \[L-LpSum\] Let $p \in [1, {\infty}),$ and let $A$ be an $L^p$ operator algebra. Let $S$ be a set. 
For $j \in S$ let $(X_j, {{\mathcal{B}}}_j, \mu_j)$ be a [measure space]{} with $\mu_j (X_j) > 0$ and let $\pi_j \colon A \to L (L^p (X_j, \mu_j))$ be a contractive [representation]{}. Equip the algebraic direct sum $E_0 = \bigoplus_{j \in S} L^p (X_j, \mu_j)$ with the norm $$\big\| (\xi_j )_{j \in S} \big\| = \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j \in S} }}}}} \| \xi_j \|_p^p \right)^{1 / p},$$ and let $E$ be the completion of $E_0$ in this norm. Set ${(X, {\mathcal{B}}, \mu)}= \coprod_{j \in S} (X_j, {{\mathcal{B}}}_j, \mu_j).$ Then $E \cong L^p (X, \mu),$ and there is a unique contractive [representation]{} $\pi \colon A \to L (E)$ such that $$\pi (a) \big( (\xi_j )_{j \in S} \big) = \big( \pi_j (a) \xi_j \big)_{j \in S}$$ for $a \in A$ and $\xi_j \in L^p (X_j, \mu_j)$ for $j \in S.$ The proof is immediate. We can of course replace contractivity with $\| \pi_j \| \leq M$ (with $M$ independent of $j$) in the hypothesis and $\| \pi \| \leq M$ in the conclusion. \[D-LpDirectSum\] The [representation]{} $\pi$ of [Lemma \[L-LpSum\]]{} is called the [*$L^p$-direct sum*]{} of the [representation]{}[s]{} $\pi_j,$ and written $\pi = \bigoplus_{j \in S} \pi_j.$ (We suppress $p$ in the notation.) \[L\_3914\_SmNdg\] Let $A$ be an $L^p$ operator algebra. Then the $L^p$-direct sum of nondegenerate contractive [representation]{}[s]{} is nondegenerate. Let the notation be as in Lemma \[L-LpSum\]. It is enough to show that if $j_0 \in S$ and ${\eta}= ({\eta}_j )_{j \in S}$ is an element of the algebraic direct sum $E_0 = \bigoplus_{j \in S} L^p (X_j, \mu_j)$ such that ${\eta}_j = 0$ for $j \neq j_0,$ then ${\eta}\in {\overline{\pi (A) E_0}}.$ This is easily verified using elements $\xi \in E_0$ such that $\xi_j = 0$ for $j \neq j_0.$ We are interested in separably representable algebras for technical reasons which will become apparent in the example to which we want to apply the theory. (See Section \[Sec\_OpdCP\].)
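Returning briefly to Lemma \[L-LpSum\]: the contractivity of $\pi,$ whose proof is left as immediate, comes down to a one-line estimate. As a minimal sketch, for $a \in A$ and $(\xi_j)_{j \in S} \in E_0$ we have $$\big\| \pi (a) \big( (\xi_j)_{j \in S} \big) \big\|^p = \sum_{j \in S} \| \pi_j (a) \xi_j \|_p^p \leq \| a \|^p \sum_{j \in S} \| \xi_j \|_p^p = \| a \|^p \, \big\| (\xi_j)_{j \in S} \big\|^p,$$ so $\| \pi (a) \| \leq \| a \|$ on $E_0,$ and $\pi (a)$ extends to $E$ by continuity.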
\[P-SepImpSepRep\] Let $p \in [1, {\infty}),$ and let $A$ be a separable $L^p$ operator algebra. Then $A$ is separably representable. If $A$ is nondegenerately representable, then $A$ is separably nondegenerately representable. The proof is harder than for $p = 2$ because a closed subspace of $L^p (X, \mu)$ need not be isomorphic to any $L^p (Y, \nu).$ Also, we don’t have anything which plays the role of the adjoint, so invariant subspaces need not be reducing. This means that we can’t use the most obvious form of the method of decomposing a [representation]{} into cyclic [representation]{}[s]{}. Of course, there are nonseparable $L^p$ operator algebras which are separably representable, such as $L (l^p ({{\mathbb{Z}}_{> 0}})).$ By hypothesis, there is a [measure space]{}  ${(X, {\mathcal{B}}, \mu)}$ and an isometric [representation]{} ${\rho}\colon A \to {L (L^p (X, \mu))},$ nondegenerate if $A$ is nondegenerately representable. Recall that $L^p (X, \mu)$ is a complex Banach lattice, in the sense of Definition 3 in Section 1 of [@Lc] and Definition 1 in Section 3 of [@Lc]. The real part consists of the real valued functions in $L^p (X, \mu),$ the order is the pointwise almost everywhere order, and $\xi \vee {\eta}$ is defined by $(\xi \vee {\eta}) (x) = \max (\xi (x), {\eta}(x) )$ for $x \in X.$ We write ${\mathrm{Re}} (\xi)$ and ${\mathrm{Im}} (\xi)$ with the obvious meanings. We claim that for every separable subset $Q {\subset}L^p (X, \mu),$ there is a separable closed sublattice $F {\subset}L^p (X, \mu)$ which contains $Q$ and which is invariant in the sense that ${\rho}(a) F {\subset}F$ for all $a \in A.$ We prove the claim. Choose a countable dense subset $S {\subset}A.$ For any set $T {\subset}L^p (X, \mu),$ we define $G (T) {\subset}L^p (X, \mu)$ as follows. 
First, let $G_0 (T)$ consist of all functions ${\mathrm{Re}} (\xi),$ ${\mathrm{Im}} (\xi),$ and $| \xi |,$ for $\xi \in T$ or $\xi = {\rho}(b) {\eta}$ with ${\eta}\in T$ and $b \in S.$ Then take $$G_1 (T) = \big\{ \xi_1 \vee \xi_2 \vee \cdots \vee \xi_n \colon {\mbox{$n \in {{\mathbb{Z}}_{> 0}}$ and $\xi_1, \xi_2, \ldots, \xi_n \in G_0 (T)$}} \big\}.$$ Finally, take $G (T)$ to be the ${{\mathbb{Q}}}[i]$-linear span of $G_1 (T).$ Note that $T {\subset}G (T),$ that ${\rho}(b) \xi \in G (T)$ for $\xi \in T$ and $b \in S,$ and that if $T$ is countable then so is $G (T).$ Now let $E_0$ be a countable dense subset of $Q$ and, for $n \in {{\mathbb{Z}}_{\geq 0}},$ set $E_{n + 1} = G (E_n).$ Set $E = \bigcup_{n = 0}^{{\infty}} E_n$ and $F = {{\overline}{E}}.$ It is clear that $F$ is a closed subspace of $L^p (X, \mu)$ and that (by density of $S$ in $A$) we have ${\rho}(A) F {\subset}F.$ By continuity of the operations, if $\xi \in F$ then ${\mathrm{Re}} (\xi), \, {\mathrm{Im}} (\xi), \, | \xi | \in F,$ and if $\xi, {\eta}\in F$ are real then $\xi \vee {\eta}\in F.$ Therefore $F$ is a ${\rho}$-invariant closed sublattice of $L^p (X, \mu)$ which contains $Q.$ (It is clearly the smallest such sublattice.) It is separable because $E$ is countable. This proves the claim. We next claim that if ${\rho}$ is nondegenerate, then the separable invariant closed sublattice $F {\subset}L^p (X, \mu)$ in the claim above can be chosen to also be nondegenerate. To prove this, let $F_0$ be a sublattice as in the previous claim. 
We construct by induction separable invariant closed sublattices $F_m {\subset}L^p (X, \mu)$ such that $F_m {\subset}F_{m + 1}$ and $F_m {\subset}{{\overline}{{{\mathrm{span}}}}} ({\rho}(A) F_{m + 1})$ for all $m \in {{\mathbb{Z}}_{\geq 0}}.$ Suppose we have $F_m.$ Let $P_m$ be a countable dense subset of $F_m$ and for every $\xi \in P_m$ and $n \in {{\mathbb{Z}}_{> 0}},$ use nondegeneracy of ${\rho}$ to find $l_{\xi, n} \in {{\mathbb{Z}}_{> 0}},$ and ${\eta}_{\xi, n, j} \in L^p (X, \mu)$ and $a_{\xi, n, j} \in A$ for $j = 1, 2, \ldots, l_{\xi, n},$ such that $$\left\| \xi - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l_{\xi, n}} }}}}} {\rho}( a_{\xi, n, j} ) {\eta}_{\xi, n, j} \right\|_p < \frac{1}{n}.$$ Set $$Q_m = \big\{ {\eta}_{\xi, n, j} \colon {\mbox{$n \in {{\mathbb{Z}}_{> 0}},$ $\xi \in P_m,$ and $j = 1, 2, \ldots, l_{\xi, n}$}} \big\},$$ which is a countable subset of $L^p (X, \mu)$ such that $F_m {\subset}{{\overline}{{{\mathrm{span}}}}} ({\rho}(A) Q_{m}).$ The previous claim provides a separable invariant closed sublattice $F_{m + 1} {\subset}L^p (X, \mu)$ which contains $F_m \cup Q_m.$ Clearly $F_m {\subset}{{\overline}{{{\mathrm{span}}}}} ({\rho}(A) F_{m + 1}).$ This completes the induction. Set $F = {{\overline}{\bigcup_{m = 0}^{{\infty}} F_m}}.$ Then $F$ is a separable invariant closed sublattice of $L^p (X, \mu)$ which contains $Q$ and such that ${{\overline}{{{\mathrm{span}}}}} ({\rho}(A) F) = F.$ This proves the claim. We now claim that for every $a \in A$ and every ${\varepsilon}> 0,$ there is a [measure space]{}  ${(Y, {\mathcal{C}}, \nu)}$ such that $L^p (Y, \nu)$ is separable, and a contractive [representation]{} $\pi \colon A \to L ( L^p (Y, \nu) )$ such that $\| \pi (a) \| > \| a \| - {\varepsilon}.$ Moreover, if $A$ is nondegenerately representable, then $\pi$ may be chosen to be nondegenerate.
To prove the claim, choose $\xi \in L^p (X, \mu)$ such that $\| \xi \|_p = 1$ and $\| {\rho}(a) \xi \|_p > \| a \| - {\varepsilon}.$ The claims above provide a separable invariant closed sublattice $F {\subset}L^p (X, \mu)$ which contains $\xi,$ and which can be taken to satisfy ${{\overline}{{{\mathrm{span}}}}} ({\rho}(A) F) = F$ if ${\rho}$ is nondegenerate. Being a sublattice of $L^p (X, \mu),$ the space $F$ is clearly an abstract $L^p$-space in the sense of Definition 1 in Section 15 of [@Lc]. According to Theorem 3 in Section 15 of [@Lc], it follows that there is a [measure space]{}  ${(Y, {\mathcal{C}}, \nu)}$ such that $F$ is isometrically isomorphic to $L^p (Y, \nu).$ Thus, $\pi (a) = {\rho}(a) |_{F}$ defines a contractive [representation]{} on a separable $L^p$ space, which is nondegenerate if ${\rho}$ is nondegenerate. Since $\xi \in F,$ we have $\| \pi (a) \| > \| a \| - {\varepsilon}.$ This proves the claim. Now we prove the proposition. Let $S {\subset}A$ be a countable dense subset. For every $b \in S$ and $n \in {{\mathbb{Z}}_{> 0}},$ choose a contractive [representation]{} $\pi_{b, n} \colon A \to L \big( L^p (Y_{b, n}, \nu_{b, n}) \big)$ as in the previous claim with $a = b$ and ${\varepsilon}= \frac{1}{n}.$ Then let $\pi$ be the $L^p$ direct sum ([Definition \[D-LpDirectSum\]]{}) of the [representation]{}[s]{} $\pi_{b, n},$ which is a contractive [representation]{} of $A$ on a separable $L^p$ space. The [representation]{} $\pi$ satisfies $\| \pi (b) \| = \| b \|$ for all $b \in S,$ and hence for all $b \in A.$ If $A$ is nondegenerately representable, then $\pi$ is nondegenerate by Lemma \[L\_3914\_SmNdg\]. Group actions and covariant representations {#Sec_GAlg} =========================================== We take group actions to be as in Section 2 of [@PhLp2a]. \[D:Aut\] Let $B$ be a Banach algebra. 
An [*automorphism*]{} of $B$ is a [continuous]{}  linear bijection ${\varphi}\colon B \to B$ such that ${\varphi}(a b) = {\varphi}(a) {\varphi}(b)$ for all $a, b \in B.$ (Continuity of ${\varphi}^{-1}$ is automatic, by the Open Mapping Theorem.) We call ${\varphi}$ an [*isometric automorphism*]{} if in addition $\| {\varphi}(a) \| = \| a \|$ for all $a \in B.$ We let ${{\mathrm{Aut}}}(B)$ denote the set of automorphisms of $B.$ \[D:Action\] Let $A$ be a Banach algebra, and let $G$ be a topological group. An [*action of $G$ on $A$*]{} is a [homomorphism]{}  $g \mapsto {\alpha}_g$ from $G$ to ${{\mathrm{Aut}}}(A)$ such that for every $a \in A,$ the map $g \mapsto {\alpha}_g (a)$ is [continuous]{}  from $G$ to $A.$ We say that the action is [*isometric*]{} if each ${\alpha}_g$ is. If $p \in [1, {\infty}]$ and $A$ is an $L^p$ operator algebra, we call $(G, A, {\alpha})$ a [*$G$-$L^p$ operator algebra*]{}, and we call it an [*isometric $G$-$L^p$ operator algebra*]{} if in addition ${\alpha}$ is isometric. We say $(G, A, {\alpha})$ is [*separable*]{}, [*nondegenerately representable*]{}, [*${\sigma}$-finitely representable*]{}, etc., if $A$ is, in the sense of Definition \[D-LpRep\]. We will only construct crossed products by isometric actions of locally compact groups. To avoid measure theoretic technicalities (primarily related to product measures and Fubini’s Theorem), we mostly restrict to second countable groups and to algebras on [${\sigma}$-finite]{} $L^p$ spaces. Since integrable functions have [${\sigma}$-finite]{} supports, we expect that, with care, these restrictions can be avoided. However, we do not need the extra generality and therefore give the simpler proofs which work without it. \[R-DynSys\] If ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ is any action as in [Definition \[D:Action\]]{}, then $(G, A, {\alpha})$ is a Banach algebra dynamical system in the sense of Definition 2.10 of [@DDW]. 
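The homomorphism condition in Definition \[D:Action\] is worth checking once in the most basic case (a minimal sketch, not needed in the sequel): if $G$ acts on a locally compact Hausdorff space $X$ and $\alpha_g (f) (x) = f (g^{-1} x)$ for functions $f$ on $X,$ then $$\big( {\alpha}_g ({\alpha}_h (f)) \big) (x) = ({\alpha}_h (f)) (g^{-1} x) = f \big( h^{-1} (g^{-1} x) \big) = f \big( (g h)^{-1} x \big) = {\alpha}_{g h} (f) (x),$$ so ${\alpha}_g \circ {\alpha}_h = {\alpha}_{g h}.$ Moreover each ${\alpha}_g$ is isometric for the supremum norm, since $\sup_{x \in X} | f (g^{-1} x) | = \sup_{x \in X} | f (x) |.$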
The action needed for the computation of $K_* \big( {{\mathcal{O}}_{d}^{p}} \big)$ will be described in Notation \[N-DICrPrd\] below. Dual actions on $L^p$ crossed products by abelian groups will be constructed in Theorem \[T-DualpFull\] and Theorem \[T-DualpRed\] below. Meanwhile, we give several other examples. Except for the first, they will not be used in the rest of this paper, and the first one will not be used in the computation of the K-theory of ${{\mathcal{O}}_{d}^{p}}.$ \[E-GpOnSp\] Let $X$ be a locally compact Hausdorff space, and let $G$ be a locally compact group which acts continuously on $X.$ Then $C_0 (X)$ is an $L^p$ operator algebra by Example \[E-CX\], and the usual formula ${\alpha}_g (f) (x) = f (g^{-1} x)$ makes $(G, \, C_0 (X), \, {\alpha})$ an isometric $G$-$L^p$ operator algebra. As an important special case, a [homeomorphism]{}  $h \colon X \to X$ gives an isometric action of ${{\mathbb{Z}}}$ on $C_0 (X).$ \[E-MdIso\] Let $p \in [1, {\infty}) {\setminus}\{ 2 \},$ let $d \in {{\mathbb{Z}}_{> 0}},$ and let $G$ be the group of isometries in the algebra $M^p_d$ of Example \[E-Mpd\]. It follows from Theorem 6.9 and Lemma 6.15 of [@PhLp1] that $G$ consists of what we call complex permutation matrices, that is, all $u \in M_d$ such that each row and each column of $u$ contains exactly one nonzero entry, and such that all nonzero entries have absolute value $1.$ (These are the products of permutation matrices and diagonal matrices whose diagonal entries have absolute value $1.$) The group $G$ is compact, and its action on $M^p_d$ by conjugation makes $M^p_d$ an isometric $G$-$L^p$ operator algebra. \[E-QF\] Let $p \in [1, {\infty}) {\setminus}\{ 2 \},$ let $d \in {{\mathbb{Z}}_{> 0}},$ and let $G$ be as in Example \[E-MdIso\]. Let ${{\mathcal{O}}_{d}^{p}}$ be as in Example \[E-Odp\]. Then there is an isometric action ${\alpha}\colon G \to {{\mathrm{Aut}}}\big( {{\mathcal{O}}_{d}^{p}} \big)$ given as follows. 
Write an element $g \in G$ as $g = \sum_{k = 0}^{d - 1} {\lambda}_k e_{{\sigma}(k), \, k}$ for a permutation ${\sigma}$ of $\{ 0, 1, \ldots, d - 1 \}$ and complex numbers ${\lambda}_0, {\lambda}_1, \ldots, {\lambda}_{d - 1}$ of absolute value $1.$ Then ${\alpha}_g$ is determined by $${\alpha}_g (s_j) = {\lambda}_j s_{{\sigma}(j)} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\alpha}_g (t_j) = {{\overline}{{\lambda}_j}} t_{{\sigma}(j)}.$$ One easily checks that these formulas define an action of $G$ on $L_d.$ It is isometric for the norm on ${{\mathcal{O}}_{d}^{p}},$ because if $\pi \colon {{\mathcal{O}}_{d}^{p}} \to {L (L^p (X, \mu))}$ is a unital [representation]{} such that $\pi |_{L_d}$ is spatial in the sense of Definition 7.4(2) of [@PhLp1], then $\pi \circ {\alpha}_g |_{L_d}$ is easily seen to be spatial. Continuity of the action follows from continuity on the generators via a standard $\frac{{\varepsilon}}{3}$ argument. Of course, individual automorphisms ${\alpha}_g$ generate isometric actions of ${{\mathbb{Z}}}.$ Our definitions of crossed products follow the general framework of [@DDW]. The following definition is the analog in our situation (restricting to [representation]{}[s]{} on $L^p$ spaces) of Definition 2.12 of [@DDW]. \[D-CovRep\] Let $p \in [1, {\infty}],$ let $G$ be a topological group, and let $(G, A, {\alpha})$ be a $G$-$L^p$ operator algebra. Let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}. 
Then a [*covariant representation*]{} of $(G, A, {\alpha})$ on $L^p (X, \mu)$ is a pair $(v, \pi)$ consisting of a [representation]{} $g \mapsto v_g$ from $G$ to the invertible operators on $L^p (X, \mu)$ such that $g \mapsto v_g \xi$ is [continuous]{}  for all $\xi \in L^p (X, \mu),$ and a [representation]{} $\pi \colon A \to {L (L^p (X, \mu))},$ such that, in addition, the following covariance condition is satisfied: $\pi ({\alpha}_g (a)) = v_g \pi (a) v_g^{-1}$ for all $g \in G$ and $a \in A.$ A covariant representation $(v, \pi)$ of $(G, A, {\alpha})$ is [*contractive*]{} if $\| v_g \| \leq 1$ for all $g \in G$ and also $\pi$ is contractive. It is [*isometric*]{} if in addition $\pi$ is isometric. It is [*separable*]{}, [*$\sigma$-finite*]{}, or [*nondegenerate*]{} if $\pi$ has the corresponding property. If $(v, \pi)$ is contractive, then necessarily $v_g$ is an isometric bijection for all $g \in G.$ \[R-UnifBdd\] Let $p \in [1, {\infty}],$ let $G$ be a locally compact group, and let $(G, A, {\alpha})$ be an isometric $G$-$L^p$ operator algebra. Then the class of all contractive covariant representations is uniformly bounded in the sense of Definition 3.1 of [@DDW]. For isometric actions of locally compact groups, we show that covariant [representation]{}[s]{} exist by constructing regular covariant [representation]{}[s]{}. We need some preparation. \[R-CcGLp\] Let $p \in [1, {\infty}),$ let $G$ be a locally compact group with left Haar measure $\nu,$ and let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}. 
Following Remark \[R-LpTens\], we may identify the space $C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ of compactly supported [continuous function]{}s from $G$ to $L^p (X, \mu)$ with a dense subspace of $L^p (G \times X, \, \nu \times \mu).$ Indeed, using Remark \[R-LpTens\], this follows because the subspace contains all ${\eta}\otimes \xi$ with ${\eta}\in C_{\mathrm{c}} (G)$ and $\xi \in L^p (X, \mu),$ and $C_{\mathrm{c}} (G)$ is dense in $L^p (G, \nu).$ \[R-lpG\] Let $p \in [1, {\infty}),$ and let $G$ be a countable discrete group with counting measure $\nu.$ For any Banach space $E,$ let $l^p (G, E)$ be the Banach space of all functions $\xi \colon G \to E$ such that the norm $$\| \xi \|_p = \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{g \in G} }}}}} \| \xi (g) \|^p \right)^{ 1 / p}$$ is finite. If now ${(X, {\mathcal{B}}, \mu)}$ is a [measure space]{}  and $E = L^p (X, \mu),$ then for $\xi \in l^p (G, E)$ we get a function $T (\xi)$ on $G \times X,$ given by $T (\xi) (g, x) = \xi (g) (x).$ One easily checks that $T$ is an isometric isomorphism from $l^p \big( G, \, L^p (X, \mu) \big)$ to $L^p (G \times X, \, \nu \times \mu).$ We therefore identify these spaces. \[L-ExistRegRep\] Let $p \in [1, {\infty}),$ let $G$ be a locally compact group with left Haar measure $\nu,$ and let $(G, A, {\alpha})$ be an isometric $G$-$L^p$ operator algebra. Let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}, and let $\pi_0 \colon A \to {L (L^p (X, \mu))}$ be a contractive [representation]{}. Then: 1. \[ExistRegRep-Exv\] There exists a unique [representation]{} $v$ of $G$ on $L^p (G \times X, \, \nu \times \mu)$ such that $v_g (\xi) (h, x) = \xi (g^{-1} h, \, x)$ for $g, h \in G,$ $\xi \in L^p (G \times X, \, \nu \times \mu),$ and $x \in X.$ 2. \[ExistRegRep-VIsom\] The [representation]{} $v$ is isometric. 3. 
\[ExistRegRep-Expi\] There is a unique [representation]{} $\pi$ of $A$ on $L^p (G \times X, \, \nu \times \mu)$ such that for $$a \in A, \,\,\,\,\,\, h \in G, {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}\xi \in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big) {\subset}L^p (G \times X, \, \nu \times \mu)$$ (see Remark \[R-CcGLp\]), we have $$\label{Eq:RegRepFormula} \big( \pi (a) \xi \big) (h) = \pi_0 ( {\alpha}_{h}^{-1} (a)) \big( \xi (h) \big).$$ 4. \[ExistRegRep-PiContr\] The [representation]{} $\pi$ is contractive. 5. \[ExistRegRep-GDiscr\] Suppose $G$ is discrete and countable. For any $\xi \in L^p (G \times X, \, \nu \times \mu),$ represented, following Remark \[R-lpG\], as an element of $l^p (G, \, L^p (X, \mu)),$ the formula (\[Eq:RegRepFormula\]) holds. 6. \[ExistRegRep-Cov\] The pair $(v, \pi)$ is covariant. 7. \[ExistRegRep-Ndg\] If $\pi_0$ is nondegenerate then $\pi$ is nondegenerate. 8. \[ExistRegRep-Sft\] If $G$ is second countable and $\mu$ is [${\sigma}$-finite]{}, then $\nu \times \mu$ is [${\sigma}$-finite]{}. 9. \[ExistRegRep-Sep\] If $G$ is second countable and $L^p (X, \mu)$ is separable, then $L^p (G \times X, \, \nu \times \mu)$ is separable. Uniqueness of $v$ is clear. 
Now let $g \mapsto u_g$ be the usual left regular [representation]{} of $G$ on $L^p (G).$ Existence and contractivity of $v$ follow from the tensor product decomposition $$L^p (G \times X, \, \nu \times \mu) = L^p (G, \nu) \otimes_p L^p (X, \mu)$$ as in Remark \[R-LpTens\], which gives $v_g = u_g \otimes 1$ for $g \in G.$ We now consider $\pi.$ It is clear that for $a \in A,$ the formula (\[Eq:RegRepFormula\]) gives a well defined map $$T (a) \colon C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big) \to C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big).$$ Since $\pi_0$ and ${\alpha}_{h}^{-1}$ are contractive, one readily checks that $\| T (a) \xi \|_p \leq \| a \| \cdot \| \xi \|_p$ for $a \in A$ and $\xi \in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big).$ Moreover, $C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ is dense in $L^p (G \times X, \, \nu \times \mu)$ by Remark \[R-CcGLp\]. Therefore $T (a)$ has a unique extension to a linear operator $\pi (a)$ on $L^p (G \times X, \, \nu \times \mu)$ with $\| \pi (a) \| = \| T (a) \| \leq \| a \|.$ One checks that $\pi$ is a [representation]{} by considering its action on $C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big).$ This proves existence, uniqueness, and contractivity of $\pi.$ For part (\[ExistRegRep-GDiscr\]), one checks that when $G$ is discrete, the formula (\[Eq:RegRepFormula\]) gives a well defined bounded operator $S (a)$ on $L^p (G \times X, \, \nu \times \mu)$ which agrees with $T (a)$ on $C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big).$ Therefore $S (a) = \pi (a).$ Covariance can be checked by comparing values on elements of $C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big).$ We prove part (\[ExistRegRep-Ndg\]). 
It suffices to show that the linear span of the ranges of the maps $T (a),$ for $a \in A,$ is dense in $C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ in the norm on $L^p (G \times X, \, \nu \times \mu).$ So let $\xi \in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ and let ${\varepsilon}> 0.$ Choose a compact set $K {\subset}G$ such that ${{\mathrm{supp}}}(\xi) {\subset}{{\mathrm{int}}}(K).$ Set $${\varepsilon}_0 = \frac{{\varepsilon}}{3 \nu (K)^{1/p} + 1}.$$ Since $\pi_0$ is nondegenerate, for $g \in G$ there are $$l (g) \in {{\mathbb{Z}}_{> 0}}, \,\,\,\,\,\, {\eta}_{g, 1}, \, {\eta}_{g, 2}, \ldots, {\eta}_{g, l (g)} \in L^p (X, \mu), {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}a_{g, 1}, \, a_{g, 2}, \ldots, a_{g, l (g)} \in A$$ such that $$\left\| \xi (g) - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l (g)} }}}}} \pi_0 (a_{g, j}) {\eta}_{g, j} \right\|_p < {\varepsilon}_0.$$ Choose an open set $U (g) {\subset}G$ containing $g$ such that for $h \in U (g)$ we have $\| \xi (h) - \xi (g) \| < {\varepsilon}_0$ and $$\big\| (\pi_0 \circ {\alpha}_{h^{-1} g} ) (a_{g, j}) {\eta}_{g, j} - \pi_0 (a_{g, j}) {\eta}_{g, j} \big\|_p < \frac{{\varepsilon}_0}{l (g)}$$ for $j = 1, 2, \ldots, l (g).$ For any $h \in U (g),$ we then get $$\left\| \xi (h) - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l (g)} }}}}} \pi_0 \big( {\alpha}_h^{-1} ({\alpha}_g (a_{g, j})) \big) {\eta}_{g, j} \right\|_p < 3 {\varepsilon}_0.$$ Choose elements $g_1, g_2, \ldots, g_n \in G$ and [continuous function]{}[s]{} $f_1, f_2, \ldots, f_n \colon G \to [0, 1]$ with compact supports ${{\mathrm{supp}}}(f_k) {\subset}U (g_k) \cap {{\mathrm{int}}}(K)$ such that $\sum_{k = 1}^n f_k (g) = 1$ for all $g \in {{\mathrm{supp}}}(\xi)$ and such that $\sum_{k = 1}^n f_k (g) \in [0, 1]$ for all $g \in G.$ For $k = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, l (g_k),$ define ${\lambda}_{j, k} \in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ by ${\lambda}_{j, k} (g) = f_k (g) {\eta}_{g_k, j}$ and define $\xi_k \in
C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ by $\xi_k (g) = f_k (g) \xi (g).$ Then $\xi = \sum_{k = 1}^n \xi_k.$ Therefore $$\begin{aligned} & \left\| \xi - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{k = 1}^{n} }}}}} {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l (g_k)} }}}}} \pi ({\alpha}_{g_k} (a_{g_k, j})) {\lambda}_{j, k} \right\|_p^p \\ & \hspace*{3em} {\mbox{}} = \int_G \left\| {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{k = 1}^{n} }}}}} f_k (h) \left[ \xi (h) - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l (g_k)} }}}}} \pi_0 \big( {\alpha}_h^{-1} ({\alpha}_{g_k} (a_{g_k, j})) \big) {\eta}_{g_k, j} \right] \right\|^p \, d \nu (h) \\ & \hspace*{3em} {\mbox{}} \leq \int_G \left[ {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{k = 1}^{n} }}}}} f_k (h) \left\| \xi (h) - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l (g_k)} }}}}} \pi_0 \big( {\alpha}_h^{-1} ({\alpha}_{g_k} (a_{g_k, j})) \big) {\eta}_{g_k, j} \right\| \right]^p \, d \nu (h) \\ & \hspace*{3em} {\mbox{}} \leq \int_G \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{k = 1}^{n} }}}}} f_k (h) \cdot 3 {\varepsilon}_0 \right)^p \, d \nu (h) \leq 3^p {\varepsilon}_0^p \nu (K).\end{aligned}$$ So $$\left\| \xi - {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{k = 1}^{n} }}}}} {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 1}^{l (g_k)} }}}}} \pi ({\alpha}_{g_k} (a_{g_k, j})) {\lambda}_{j, k} \right\|_p \leq 3 {\varepsilon}_0 \nu (K)^{1/p} < {\varepsilon}.$$ This completes the proof that $\pi$ is nondegenerate. Part (\[ExistRegRep-Sft\]) and part (\[ExistRegRep-Sep\]) are well known. In [Lemma \[L-ExistRegRep\]]{}, we at least need $\sup_{g \in G} \| {\alpha}_g \|$ to be finite. Otherwise, the densely defined operators $T (a)$ in the proof might not be bounded. \[D-RegRep\] Let the notation be as in [Lemma \[L-ExistRegRep\]]{}.
The covariant [representation]{} $(v, \pi)$ is called the [*regular covariant representation*]{} of $(G, A, {\alpha})$ associated to $\pi_0.$ We refer to any [representation]{} obtained this way as a [*regular contractive covariant representation*]{}. We call it [*separable*]{}, [*${\sigma}$-finite*]{}, or [*nondegenerate*]{} if the original [representation]{} $\pi_0$ has the corresponding property. Crossed products of $L^p$ operator algebras {#Sec_CP} =========================================== We present a rudimentary theory of crossed products of algebras of bounded operators on $L^p$ spaces by isometric actions of countable discrete groups. Our intended purpose is the computation of the K-theory of Cuntz algebras on $L^p$ spaces, by realizing them as crossed products by actions of ${{\mathbb{Z}}}.$ For the initial part of the theory, however, there is no extra work in allowing an arbitrary second countable locally compact group, and we therefore work in that generality. As mentioned in the introduction to Section \[Sec\_LpOpAlg\], the completely isometric version of the theory may well be at least as important. One can copy everything done here in that context. For example, in Definition \[D-CrPrd\], one would use completely contractive [representation]{}[s]{} instead of contractive [representation]{}[s]{}. In fact, though, the resulting theory is essentially a special case of the theory we develop here and in the next section. Let ${{\overline}{M}}^p_{{{\mathbb{Z}}_{> 0}}}$ be as in Example \[E-MatpS\]. Let ${\eta}$ be counting measure on ${{\mathbb{Z}}_{> 0}}.$ Let the spatial $L^p$ operator tensor product be as in Example \[E-LpTP\]. Let $A {\subset}{L (L^p (X, \mu))}$ be a norm closed subalgebra. 
Then a [representation]{} $\pi \colon A \to L ( L^p (Y, \nu))$ is completely contractive [if and only if]{}  the map $${{\mathrm{id}}}_{{{\overline}{M}}^p_{{{\mathbb{Z}}_{> 0}}}} \otimes \pi \colon {{\overline}{M}}^p_{{{\mathbb{Z}}_{> 0}}} \otimes_{\mathrm{alg}} A \to L \big( L^p ({{\mathbb{Z}}_{> 0}}\times Y, \, {\eta}\times \nu) \big)$$ extends to a contractive [homomorphism]{}  defined on ${{\overline}{M}}^p_{{{\mathbb{Z}}_{> 0}}} \otimes_{p} A;$ an action of $G$ on $A$ is completely isometric [if and only if]{}  it induces an isometric action on ${{\overline}{M}}^p_{{{\mathbb{Z}}_{> 0}}} \otimes_{p} A;$ etc. So one can simply form the full or reduced crossed product, as constructed here, by the action on ${{\overline}{M}}^p_{{{\mathbb{Z}}_{> 0}}} \otimes_{p} A,$ and cut down by $e_{1, 1} \otimes 1.$ \[N-CcGA\] Let $A$ be a Banach algebra, let $G$ be a locally compact group with left Haar measure $\nu,$ and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an action of $G$ on $A.$ We let $C_{\mathrm{c}} (G, A, {\alpha})$ be the vector space of all compactly supported continuous functions from $G$ to $A,$ made into an algebra over ${{\mathbb{C}}}$ with the convolution product $$(a b) (g) = \int_G a (h) {\alpha}_h (b ( h^{-1} g)) \, d \nu (h)$$ for $a, b \in C_{\mathrm{c}} (G, A, {\alpha})$ and $g \in G,$ as at the beginning of Section 3 of [@DDW], or as at the beginning of Section 2.3 of [@Wl].
If $(v, \pi)$ is a covariant [representation]{} of $(G, A, {\alpha})$ on a Banach space $E$ in the general sense of Definition 2.12 of [@DDW], then we denote by $v \ltimes \pi$ the [representation]{} of $C_{\mathrm{c}} (G, A, {\alpha})$ defined in equation 3.2 of [@DDW] (called $\pi \rtimes v$ there), given by $$(v \ltimes \pi) (a) \xi = \int_G \pi (a (g)) v_g \xi \, d \nu (g)$$ for $a \in C_{\mathrm{c}} (G, A, {\alpha})$ and $\xi \in E.$ The integral is defined by duality, as in Theorem 2.17 and Proposition 2.19 of [@DDW]: for all ${\omega}$ in the dual space $E'$ of $E,$ we require that $$\label{Eq:DefOfInt} {\omega}\big( (v \ltimes \pi) (a) \xi \big) = \int_G {\omega}\big( \pi (a (g)) v_g \xi \big) \, d \nu (g).$$ In particular, we use the notation $v \ltimes \pi$ when $(v, \pi)$ is a covariant [representation]{} in the sense of [Definition \[D-CovRep\]]{}. \[R-RegRepCP\] Let $p \in [1, {\infty}),$ and let the notation be as in [Lemma \[L-ExistRegRep\]]{}. Assume that $\mu$ is [${\sigma}$-finite]{}. Then a computation shows that for $$\xi \in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big) \subset L^p (G \times X, \, \nu \times \mu)$$ (following Remark \[R-CcGLp\]) and for $a \in C_{\mathrm{c}} (G, A, {\alpha}),$ we have $$\big( (v \ltimes \pi) (a) \xi \big) (g) = \int_G \pi_0 \big( {\alpha}_{g}^{-1} (a (h) ) \big) \big( \xi (h^{- 1} g) \big) \, d \nu (h).$$ If $p \in (1, {\infty}),$ we claim that this integral is uniquely determined by the following requirement. Let ${\omega}\colon G \to L^p (X, \mu)'$ be an arbitrary [continuous function]{}  with compact support. 
Then ${\omega}$ determines a [continuous]{}  linear functional on $L^p (G \times X, \, \nu \times \mu),$ whose value on an element ${\eta}\in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big)$ is given by $\int_G {\omega}(g) ( {\eta}(g)) \, d \nu (g).$ The requirement is then that the value of this functional on $(v \ltimes \pi) (a) \xi$ be $$\begin{aligned} \label{Eq:WeakIntRegRep} & \int_G {\omega}(g) \big( \big[ (v \ltimes \pi) (a) \xi \big] (g) \big) \, d \nu (g) \\ & \hspace*{3em} {\mbox{}} = \int_{G \times G} {\omega}(g) \big( \pi_0 \big( {\alpha}_{g}^{-1} (a (h)) \big) \big( \xi (h^{- 1} g) \big) \big) \, d \nu (h) d \nu (g). \notag\end{aligned}$$ (To see that these values determine $(v \ltimes \pi) (a) \xi,$ let $q \in (1, {\infty})$ be the conjugate exponent, that is, $\frac{1}{p} + \frac{1}{q} = 1.$ Apply Remark \[R-CcGLp\] to $L^q (G \times X, \, \nu \times \mu).$ Conclude that the set of such linear functionals is dense in $L^p (G \times X, \, \nu \times \mu)'.$) At this point, we must make a choice: do we require covariant [representation]{}[s]{} to be nondegenerate? For C\* crossed products, both versions are used. For example, nondegeneracy is required in 7.6.5 of [@Pd1], but not in Lemma 2.27 of [@Wl]. Lemma 2.31 of [@Wl] shows that one gets the same crossed product either way. In our context, with no analog of selfadjointness, $L^p$ operator algebras need not even be nondegenerately representable. (See Example \[E\_3419\_Deg\].) Even if the algebra has an approximate identity, we have not determined whether one gets the same crossed product with the two choices. We require nondegeneracy because we want representations of unital algebras to be unital, and nondegeneracy forces this. \[D-CrPrd\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be an isometric $G$-$L^p$ operator algebra which is nondegenerately ${\sigma}$-finitely representable (Definition \[D-LpRep\](\[D\_3914\_LpRep\_Ndg\])).
We define two crossed products following Definition 3.2 of [@DDW] (justified in Lemma \[L\_3916\_Exist\] below), but with different choices of the family ${\mathcal{R}}$ of covariant [representation]{}[s]{} which appears there: 1. \[D-CrPrd-f\] We let $F^p (G, A, {\alpha})$ be the crossed product obtained by taking ${\mathcal{R}}$ to be the family ${\mathcal{R}}^p (G, A, {\alpha})$ of all nondegenerate ${\sigma}$-finite contractive covariant [representation]{}[s]{}. We call it the [*full $L^p$ operator crossed product*]{} of $(G, A, {\alpha}).$ 2. \[D-CrPrd-r\] We let $F^p_{\mathrm{r}} (G, A, {\alpha})$ be the crossed product obtained by taking ${\mathcal{R}}$ to be the family ${\mathcal{R}}^p_{\mathrm{r}} (G, A, {\alpha})$ of all regular covariant [representation]{}[s]{} coming from nondegenerate ${\sigma}$-finite contractive [representation]{}[s]{} of $A.$ We call it the [*reduced $L^p$ operator crossed product*]{} of $(G, A, {\alpha}).$ \[L\_3916\_Exist\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Then the crossed products $F^p (G, A, {\alpha})$ and $F^p_{\mathrm{r}} (G, A, {\alpha})$ of Definition \[D-CrPrd\] exist as in Definition 3.2 of [@DDW]. We must show that ${\mathcal{R}}^p (G, A, {\alpha})$ and ${\mathcal{R}}^p_{\mathrm{r}} (G, A, {\alpha})$ are uniformly bounded in the sense of Definition 3.1 of [@DDW], and are nonempty. Uniform boundedness is obvious. (In the notation of [@DDW], take $C = 1$ and $\nu (g) = 1$ for all $g \in G.$) Lemma \[L-ExistRegRep\] shows that ${\mathcal{R}}^p_{\mathrm{r}} (G, A, {\alpha}) \neq {\varnothing}.$ Then ${\mathcal{R}}^p (G, A, {\alpha}) \neq {\varnothing}$ because ${\mathcal{R}}^p_{\mathrm{r}} (G, A, {\alpha}) \subset {\mathcal{R}}^p (G, A, {\alpha}).$ Here is what happens for [C\*-algebra]{}[s]{}. 
We are grateful to David Blecher for pointing out the reference [@BM]. \[P-CompCStar\] Let $G$ be a locally compact group, let $A$ be a [C\*-algebra]{}, and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action. Then $A$ is nondegenerately ${\sigma}$-finitely representable and ${\alpha}_g$ is a \*-automorphism for all $g \in G.$ Moreover, if $G$ is second countable, then $F^2 (G, A, {\alpha}) = C^* (G, A, {\alpha})$ and $F^2_{\mathrm{r}} (G, A, {\alpha}) = C^*_{\mathrm{r}} (G, A, {\alpha}).$ To prove that $A$ is nondegenerately ${\sigma}$-finitely representable, we need to know that there are arbitrarily large Hilbert spaces of the form $L^2 (X, \mu)$ for ${\sigma}$-finite measures $\mu.$ For a sufficiently large set $S,$ take $X = [0, 1]^S$ and take $\mu$ to be the product of copies of Lebesgue measure. By Proposition A.5.8 of [@BM], every contractive representation of $A$ on a Hilbert space is necessarily a \*-[homomorphism]{}. It follows immediately that ${\alpha}_g$ is a \*-automorphism for all $g \in G,$ and, if $G$ is second countable, that $F^2_{\mathrm{r}} (G, A, {\alpha}) = C^*_{\mathrm{r}} (G, A, {\alpha}).$ To get $F^2 (G, A, {\alpha}) = C^* (G, A, {\alpha}),$ we also need the well known fact that every isometric bijection on a Hilbert space is unitary. The following two theorems state parts of the universal properties of our two crossed products. \[T-UPropFull\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) 1.
\[T-UPropFull-1\] There is a [homomorphism]{} $${\iota}\colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p (G, A, {\alpha})$$ such that whenever ${(X, {\mathcal{B}}, \mu)}$ is a ${\sigma}$-finite [measure space]{} and $(v, \pi)$ is a nondegenerate contractive covariant [representation]{} of $(G, A, {\alpha})$ on $L^p (X, \mu),$ there is a unique contractive [representation]{} ${\rho}_{v, \pi} \colon F^p (G, A, {\alpha}) \to {L (L^p (X, \mu))}$ such that ${\rho}_{v, \pi} \circ {\iota}= v \ltimes \pi.$ 2. \[T-UPropFull-2\] For every $a \in F^p (G, A, {\alpha}),$ we have $$\begin{aligned} \| a \| & = \sup \big( \big\{ \| {\rho}_{v, \pi} (a) \| \colon {\mbox{$(v, \pi)$ is a nondegenerate {${\sigma}$-finite}}} \\ & \hspace*{5em} {\mbox{contractive covariant {representation}{} of $(G, A, {\alpha})$}} \big\} \big).\end{aligned}$$ 3. \[T-UPropFull-3\] The [homomorphism]{}  ${\iota}$ of (\[T-UPropFull-1\]) has dense range. Let ${\mathcal{R}}^p (G, A, {\alpha})$ be as in Definition \[D-CrPrd\](\[D-CrPrd-f\]), and abbreviate it to ${\mathcal{R}}.$ The map we call ${\iota}$ comes from Definition 3.2 of [@DDW], in which the crossed product is defined to be a certain Hausdorff completion of $C_{\mathrm{c}} (G, A, {\alpha}),$ and Lemma \[L\_3916\_Exist\]. (This map is called $q^{\mathcal{R}}$ in the discussion after Remark 3.3 of [@DDW].) By construction, it has dense range. Existence of ${\rho}_{v, \pi}$ follows from the discussion after Remark 3.3 of [@DDW]. Uniqueness is clear from the fact that ${\iota}$ has dense range. Part (\[T-UPropFull-2\]) follows from Proposition 3.4 of [@DDW]. \[T-UPropRed\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be an isometric $G$-$L^p$ operator algebra which is nondegenerately ${\sigma}$-finitely representable (Definition \[D:Action\] and Definition \[D-LpRep\]). 1. 
\[T-UPropRed-1\] There is a [homomorphism]{} $${\iota}_{\mathrm{r}} \colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$$ such that whenever ${(X, {\mathcal{B}}, \mu)}$ is a ${\sigma}$-finite [measure space]{} and $(v, \pi)$ is a contractive regular covariant [representation]{} of $(G, A, {\alpha})$ on $L^p (X, \mu),$ there is a unique contractive [representation]{} ${\rho}_{v, \pi} \colon F^p_{\mathrm{r}} (G, A, {\alpha}) \to {L (L^p (X, \mu))}$ such that ${\rho}_{v, \pi} \circ {\iota}_{\mathrm{r}} = v \ltimes \pi.$ 2. \[T-UPropRed-2\] For every $a \in F^p_{\mathrm{r}} (G, A, {\alpha}),$ we have $$\begin{aligned} \| a \| & = \sup \big( \big\{ \| {\rho}_{v, \pi} (a) \| \colon {\mbox{$(v, \pi)$ is a nondegenerate {${\sigma}$-finite}}} \\ & \hspace*{4em} {\mbox{contractive regular covariant {representation}{} of $(G, A, {\alpha})$}} \big\} \big).\end{aligned}$$ 3. \[T-UPropRed-3\] The [homomorphism]{}  ${\iota}_{\mathrm{r}}$ of (\[T-UPropRed-1\]) has dense range. The proof is essentially the same as that of Theorem \[T-UPropFull\]. \[D-IndRpn\] We refer to the [representation]{}[s]{} ${\rho}_{v, \pi}$ in Theorem \[T-UPropFull\] and Theorem \[T-UPropRed\] as the [*representations induced by the covariant representation $(v, \pi).$*]{} When no confusion can arise, we also denote them by $v \ltimes \pi.$ If $(v, \pi)$ is a regular covariant representation, we refer to $v \ltimes \pi,$ as a [representation]{} of any of $C_{\mathrm{c}} (G, A, {\alpha}),$ $F^p (G, A, {\alpha}),$ and $F^p_{\mathrm{r}} (G, A, {\alpha}),$ as a [*regular representation*]{} of the appropriate algebra. A [*contractive regular representation*]{} is a regular representation for which the original [representation]{} of $A$ is contractive.
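In a finite model, the twisted convolution product of Notation \[N-CcGA\] can be checked numerically. The following sketch is a hypothetical illustration (the choices $G = {{\mathbb{Z}}}/2,$ $A = {{\mathbb{C}}}^2$ with the pointwise product, and the coordinate-swap action are ours, not taken from the text); it verifies that the twisted convolution, with counting measure as Haar measure, is associative:

```python
import numpy as np

# Hypothetical finite model of Notation [N-CcGA]: G = Z/2 acting on
# A = C^2 (pointwise product) by swapping the two coordinates.
# An element of C_c(G, A, alpha) is stored as a 2 x 2 array a,
# with row g holding a(g) in A.

n = 2

def alpha(g, x):
    # alpha_g swaps the coordinates of x when g is odd; this is an
    # algebra automorphism for the pointwise product on A = C^2.
    return x[::-1] if g % 2 == 1 else x

def conv(a, b):
    # Twisted convolution: (ab)(g) = sum_h a(h) alpha_h(b(h^{-1} g)),
    # with counting measure as Haar measure on the finite group.
    out = np.zeros_like(a)
    for g in range(n):
        for h in range(n):
            out[g] += a[h] * alpha(h, b[(g - h) % n])
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((n, 2))
b = rng.standard_normal((n, 2))
c = rng.standard_normal((n, 2))

# Associativity of the twisted convolution product:
assert np.allclose(conv(conv(a, b), c), conv(a, conv(b, c)))
```

The associativity computation uses exactly the fact that each ${\alpha}_h$ is an algebra automorphism of $A.$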
It is clear that separability and ${\sigma}$-finiteness of $v \ltimes \pi$ are the same as the corresponding properties of $\pi.$ If $(v, \pi)$ is a regular covariant representation, derived from a [representation]{} $\pi_0$ of $A,$ then, by Lemma \[L-ExistRegRep\], they are the same as the corresponding properties for $\pi_0.$ \[L\_3916\_IntIsNdg\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) 1. \[L\_3916\_IntIsNdg\_Crr\] Let $(v, \pi)$ be a covariant representation of $(G, A, {\alpha})$ in which $\pi$ is nondegenerate. Then $v \ltimes \pi$ is nondegenerate. 2. \[L\_3916\_IntIsNdg\_Reg\] Let $\pi_0$ be a nondegenerate [representation]{} of $A,$ and let $(v, \pi)$ be the associated regular covariant representation of $(G, A, {\alpha}).$ If $\pi_0$ is nondegenerate, then $v \ltimes \pi$ is nondegenerate. Part (\[L\_3916\_IntIsNdg\_Crr\]) follows from Proposition 5.5(i) of [@DDW]. Part (\[L\_3916\_IntIsNdg\_Reg\]) follows from part (\[L\_3916\_IntIsNdg\_Crr\]) and Lemma \[L-ExistRegRep\](\[ExistRegRep-Ndg\]). Under additional hypotheses, there is a converse to Lemma \[L\_3916\_IntIsNdg\](\[L\_3916\_IntIsNdg\_Crr\]); see Proposition 5.5(i) of [@DDW]. There is also, again under additional hypotheses, an analog of the statement for [C\*-algebra]{}[s]{} that every nondegenerate [representation]{} of $F^p (G, A, {\alpha})$ comes from a nondegenerate covariant [representation]{} of $(G, A, {\alpha}),$ and a related statement for $F^p_{\mathrm{r}} (G, A, {\alpha}).$ See Theorem 8.1 and Theorem 9.1 of [@DDW]. We do not reformulate these results for our situation here. We also find it convenient to use the $L^1$ crossed product. (As far as we know, in general this crossed product need not be the same as either of the $L^1$ operator crossed products. 
However, for the special case $A = {{\mathbb{C}}},$ it agrees with both the $L^1$ operator crossed products. See Proposition \[P\_3217\_L1OpGpAlg\] below.) \[N-L1CrPrd\] Let $A$ be a Banach algebra, let $G$ be a locally compact group with left Haar measure $\nu,$ and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action of $G$ on $A.$ Then $L^1 (G, A, {\alpha})$ denotes the convolution algebra consisting of all $L^1$ functions from $G$ to $A.$ The space $L^1 (G, A, {\alpha})$ is as defined in the discussion on page 630 of [@Sw2], taking ${\sigma}(g) = 1$ for all $g \in G.$ The proof that it is a Banach algebra under convolution follows from the estimates in the proof of Theorem 2.2.6 of [@Sw2]. Alternatively, see Appendix B of [@Wl], and note that everything there not involving the \* operation goes through for an isometric action of $G$ on a Banach algebra. The important point is that $L^1 (G, A, {\alpha})$ is the completion of $C_{\mathrm{c}} (G, A, {\alpha})$ in the norm $\| a \|_1 = \int_G \| a (g) \| \, d \nu (g).$ A third source, with a detailed proof, is Proposition IV.1.5 of [@Jn2]. (See Lemma IV.1.1 and Definition IV.1.4 of [@Jn2] for the notation. Actions are assumed isometric; see Definition I.2.2 of [@Jn1].) The following proposition and its corollary will not be used. They are included to show one way in which $L^p$ operator crossed products are well behaved. \[P-RegCcInj\] Let $p \in [1, {\infty}),$ let $G$ be a locally compact group with left Haar measure $\nu,$ and let $(G, A, {\alpha})$ be an isometric $G$-$L^p$ operator algebra. Let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}, let $\pi_0 \colon A \to {L (L^p (X, \mu))}$ be an injective [representation]{}, and let $(v, \pi)$ be the associated regular [representation]{}, as in [Definition \[D-RegRep\]]{}. Then the map $v \ltimes \pi \colon C_{\mathrm{c}} (G, A, {\alpha}) \to L \big( L^p ( G \times X, \, \nu \times \mu) \big)$ of Notation \[N-CcGA\] is injective. 
The proof is essentially the same as that of Lemma 2.26 of [@Wl]. (However, we avoid the initial translation simplification there, since we don’t need the properties of the translation map for anything else.) Thus, let $a \in C_{\mathrm{c}} (G, A, {\alpha})$ be nonzero. Choose $g_0 \in G$ such that $a (g_0) \neq 0.$ Then $\big( \pi_0 \circ {\alpha}_{g_0}^{-1} \big) (a (g_0) ) \neq 0.$ Choose $\xi_0 \in L^p (X, \mu)$ and ${\omega}_0 \in L^p (X, \mu)'$ such that $${\omega}_0 \big( \big( \pi_0 \circ {\alpha}_{g_0}^{-1} \big) (a (g_0) ) \xi_0 \big) \neq 0.$$ Define a [continuous function]{} $c \colon G \times G \to {{\mathbb{C}}}$ by $$c (g, h) = {\omega}_0 \big( \big( \pi_0 \circ {\alpha}_{g}^{-1} \big) (a (h) ) \xi_0 \big)$$ for $g, h \in G.$ Then $c (g_0, g_0) \neq 0.$ Therefore there is an open set $U {\subset}G$ such that $g_0 \in U$ and such that for all $g, h \in U,$ we have $| c (g, h) - c (g_0, g_0) | < \tfrac{1}{2} | c (g_0, g_0) |.$ Choose an open set $W {\subset}G$ such that $1 \in W$ and $W^2 {\subset}U^{-1} g_0.$ Choose $f \in C_{\mathrm{c}} (G)$ such that $f \geq 0,$ ${{\mathrm{supp}}}(f) {\subset}W,$ and $f (1) > 0.$ Let $\xi \in L^p (G \times X, \, \nu \times \mu)$ be given by the function $g \mapsto f (g) \xi_0$ from $G$ to $L^p (X, \mu).$ Let ${\omega}\in C_{\mathrm{c}} (G, L^p (X, \mu)' )$ be the function ${\omega}(g) = f (g^{-1} g_0) {\omega}_0$ for $g \in G,$ and identify ${\omega}$ with an element of $L^p (G \times X, \, \nu \times \mu)'$ as in Remark \[R-RegRepCP\]. Set $${\lambda}= \int_{G \times G} f (g^{-1} g_0) f (h^{-1} g) \, d \nu (h) d \nu (g).$$ Two changes of variables, first from $g$ to $g_0 g,$ then from $h$ to $g_0 h,$ show that $${\lambda}= \int_{G \times G} f (g^{-1}) f (h^{-1} g) \, d \nu (h) d \nu (g),$$ which is clearly strictly positive. 
Suppose $g, h \in G$ and $f (g^{-1} g_0) f (h^{-1} g) \neq 0.$ Then $g^{-1} g_0 \in W$ and $h^{-1} g \in W.$ So $$g^{-1} g_0 = g^{-1} g_0 \cdot 1 \in W^2 {\subset}U^{-1} g_0 {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}h^{-1} g_0 = h^{-1} g \cdot g^{-1} g_0 \in W^2 {\subset}U^{-1} g_0.$$ It follows that $g, h \in U,$ so $| c (g, h) - c (g_0, g_0) | < \tfrac{1}{2} | c (g_0, g_0) |.$ Using this result at the second step, and the formula (\[Eq:WeakIntRegRep\]) in Remark \[R-RegRepCP\] at the first step, we get $$\begin{aligned} & \big| {\omega}\big( (v \ltimes \pi) (a) \xi \big) - {\lambda}c (g_0, g_0) \big| \\ & \hspace*{3em} {\mbox{}} \leq \int_{G \times G} f (g^{-1} g_0) f (h^{-1} g) | c (g, h) - c (g_0, g_0) | \, d \nu (h) d \nu (g) \leq \frac{{\lambda}| c (g_0, g_0) |}{2}.\end{aligned}$$ Since ${\lambda}> 0$ and $c (g_0, g_0) \neq 0,$ we deduce that $(v \ltimes \pi) (a) \neq 0.$ \[C-CcRedInj\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Then the map ${\iota}\colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p (G, A, {\alpha})$ of Theorem \[T-UPropFull\](\[T-UPropFull-1\]) and the map ${\iota}_{\mathrm{r}} \colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$ of Theorem \[T-UPropRed\](\[T-UPropRed-1\]) are both injective. This is immediate from Proposition \[P-RegCcInj\] and Theorem \[T-UPropFull\](\[T-UPropFull-2\]) (for ${\iota}$) and Theorem \[T-UPropRed\](\[T-UPropRed-2\]) (for ${\iota}_{\mathrm{r}}$). \[L-CompOfCrPrd\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) 
Let ${\iota}\colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p (G, A, {\alpha})$ be as in Theorem \[T-UPropFull\](\[T-UPropFull-1\]) and let ${\iota}_{\mathrm{r}} \colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$ be as in Theorem \[T-UPropRed\](\[T-UPropRed-1\]). Then there are unique [continuous]{}  homomorphisms $${\kappa}\colon L^1 (G, A, {\alpha}) \to F^p (G, A, {\alpha}) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\kappa}_{\mathrm{r}} \colon F^p (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$$ such that ${\kappa}|_{C_{\mathrm{c}} (G, A, {\alpha})} = {\iota}$ and ${\kappa}_{\mathrm{r}} \circ {\iota}= {\iota}_{\mathrm{r}}.$ Moreover, ${\kappa}$ and ${\kappa}_{\mathrm{r}}$ are contractive and have dense ranges. Once everything else is done, the statements about uniqueness and dense ranges follow from Theorem \[T-UPropFull\](\[T-UPropFull-3\]), Theorem \[T-UPropRed\](\[T-UPropRed-3\]), and density of $C_{\mathrm{c}} (G, A, {\alpha})$ in $L^1 (G, A, {\alpha}).$ The remaining assertions about ${\kappa}_{\mathrm{r}}$ are immediate from Theorem \[T-UPropFull\](\[T-UPropFull-2\]) and Theorem \[T-UPropRed\](\[T-UPropRed-2\]), because every contractive regular covariant [representation]{} is a contractive covariant [representation]{}. It remains to prove that ${\kappa}$ exists and is contractive. By Theorem \[T-UPropFull\](\[T-UPropFull-2\]), it suffices to show that if ${(X, {\mathcal{B}}, \mu)}$ is a [measure space]{}, $(v, \pi)$ is a contractive covariant [representation]{} of $(G, A, {\alpha})$ on $L^p (X, \mu),$ and $a \in C_{\mathrm{c}} (G, A, {\alpha}),$ then $\| (v \ltimes \pi) (a) \| \leq \| a \|_1.$ We let $\xi \in L^p (X, \mu)$ and ${\omega}\in L^p (X, \mu)',$ and use (\[Eq:DefOfInt\]) in Notation \[N-CcGA\]. 
Since $\pi$ and $v$ are contractive, we get $$\big| {\omega}\big( (v \ltimes \pi) (a) \xi \big) \big| \leq \int_G \big| {\omega}\big( \pi (a (g)) v_g \xi \big) \big| \, d \nu (g) \leq \| {\omega}\| \cdot \| a \|_1 \cdot \| \xi \|.$$ The required estimate follows. When $p = 2$ and $A$ is a [C\*-algebra]{}, the map ${\kappa}_{\mathrm{r}} \colon C^* (G, A, {\alpha}) \to C^*_{\mathrm{r}} (G, A, {\alpha})$ is always surjective (this follows from Theorem 1.5.7 of [@Pd1]), and is an isometric isomorphism if $G$ is amenable (Theorem 7.7.7 of [@Pd1]). Moreover, for $A = {{\mathbb{C}}},$ if ${\kappa}_{\mathrm{r}}$ is an isomorphism, then $G$ is amenable. (See Theorem 7.3.9 of [@Pd1].) We have not investigated whether any of these facts carries over to $L^p$ operator crossed products for $p \neq 2,$ except for some very special cases. We will see in Remark \[R-FGpCP\] below that if $G$ is finite then ${\kappa}_{\mathrm{r}}$ is bijective. Also, the case $p = 1$ is apparently special, at least when considering the $L^p$ operator group algebras, which we denote by $F^p (G)$ and $F^p_{\mathrm{r}} (G).$ \[P\_3217\_L1OpGpAlg\] Let $G$ be any second countable locally compact group. Then the maps $${\kappa}\colon L^1 (G) \to F^1 (G) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\kappa}_{\mathrm{r}} \colon F^1 (G) \to F^1_{\mathrm{r}} (G)$$ of Lemma \[L-CompOfCrPrd\] are isometric isomorphisms of Banach algebras. By Lemma \[L-CompOfCrPrd\], it suffices to prove that $\| ({\kappa}_{\mathrm{r}} \circ {\kappa}) (a) \| \geq \| a \|$ for all $a \in L^1 (G).$ For this, by Theorem \[T-UPropRed\](\[T-UPropRed-2\]) it suffices to find one [representation]{} $\pi$ of ${{\mathbb{C}}}$ on an $L^1$ space $L^1 (X, \mu)$ such that, if $(v, \pi)$ is the associated regular [representation]{}, as in [Definition \[D-RegRep\]]{}, then the map $v \ltimes \pi$ has the property that $\| (v \ltimes \pi) (a) \| \geq \| a \|.$ We take $X$ to be a one point space and $\mu$ to be counting measure. 
Then $v \ltimes \pi \colon L^1 (G) \to L ( L^1 (G))$ sends $a \in L^1 (G)$ to left convolution by $a.$ This operator has norm $\| a \|$ because $L^1 (G)$ has an approximate identity consisting of elements of norm $1.$ In particular, the map ${\kappa}_{\mathrm{r}} \colon F^1 (G) \to F^1_{\mathrm{r}} (G)$ is an isomorphism regardless of whether $G$ is amenable. In the rest of this section, we define the dual action on an $L^p$ operator crossed product (both full and reduced) when the group is abelian. We use the dual action for the case $G = {{\mathbb{Z}}}$ in Section \[Sec\_PV\] to obtain properties of a smooth subalgebra of the crossed product. \[D-DualCc\] Let $A$ be a Banach algebra, let $G$ be a locally compact abelian group, and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an action of $G$ on $A.$ For ${\tau}\in {\widehat{G}},$ we define $${\widehat{{\alpha}}}_{{\tau}} \colon C_{\mathrm{c}} (G, A, {\alpha}) \to C_{\mathrm{c}} (G, A, {\alpha})$$ by $${\widehat{{\alpha}}}_{{\tau}} (a) (g) = {{\overline}{{\tau}(g)}} a (g)$$ for $a \in C_{\mathrm{c}} (G, A, {\alpha})$ and $g \in G.$ The map ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}}$ is called the [*dual action of ${\widehat{G}}$ on $C_{\mathrm{c}} (G, A, {\alpha})$*]{}. The choice ${{\overline}{{\tau}(g)}}$ rather than ${\tau}(g)$ seems to be more common. It agrees with the convention in [@Tki] (see the beginning of Section 3 there) and [@Wl] (see the beginning of Section 7.1 there), but disagrees with the convention in [@Pd1] (see Proposition 7.8.3 there). \[L-DualIsAct\] In the situation of [Definition \[D-DualCc\]]{}, the map ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}}$ is a [homomorphism]{}  from ${\widehat{G}}$ to the group of algebraic automorphisms of $C_{\mathrm{c}} (G, A, {\alpha}).$ This is a well known computation. 
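For $G = {{\mathbb{Z}}}/n$ with $A = {{\mathbb{C}}}$ and the trivial action, the computation behind Lemma \[L-DualIsAct\] can be run directly. The following hypothetical numerical sketch (the group ${{\mathbb{Z}}}/5$ and random coefficients are our choices) uses the characters ${\tau}_k (g) = e^{2 \pi i k g / n}$ and checks that each ${\widehat{{\alpha}}}_{{\tau}_k}$ is multiplicative for the convolution product:

```python
import numpy as np

# Hypothetical finite model: G = Z/n, A = C, trivial action, so that
# C_c(G, A, alpha) is the convolution algebra of Z/n.  The characters
# of Z/n are tau_k(g) = exp(2 pi i k g / n).

n = 5
idx = np.arange(n)

def conv(a, b):
    # (ab)(g) = sum_h a(h) b(h^{-1} g) = sum_h a(h) b(g - h) on Z/n.
    return np.array([sum(a[h] * b[(g - h) % n] for h in range(n))
                     for g in range(n)])

def dual(k, a):
    # Dual action of Definition [D-DualCc]:
    # (alpha-hat_tau a)(g) = conj(tau_k(g)) a(g).
    return np.conj(np.exp(2j * np.pi * k * idx / n)) * a

rng = np.random.default_rng(1)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Each alpha-hat_tau is multiplicative, since tau(h) tau(h^{-1} g) = tau(g):
for k in range(n):
    assert np.allclose(dual(k, conv(a, b)), conv(dual(k, a), dual(k, b)))
```

The key identity, visible in the loop invariant, is ${\tau}(h) {\tau}(h^{-1} g) = {\tau}(g),$ which is all the "well known computation" uses.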
\[T-DualL1\] Let $A$ be a Banach algebra, let $G$ be a locally compact abelian group, and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action of $G$ on $A.$ Then there exists a unique continuous isometric action, also called ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}},$ of ${\widehat{G}}$ on $L^1 (G, A, {\alpha})$ such that the inclusion of $C_{\mathrm{c}} (G, A, {\alpha})$ in $L^1 (G, A, {\alpha})$ is equivariant. When ${\alpha}$ is isometric, it is easy to see that for all $a \in C_{\mathrm{c}} (G, A, {\alpha})$ we have $\| {\widehat{{\alpha}}}_{{\tau}} (a) \|_1 = \| a \|_1.$ Therefore ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}}$ extends uniquely to a group [homomorphism]{}  from ${\widehat{G}}$ to the isometric automorphisms of $L^1 (G, A, {\alpha}).$ One checks that if $a \in C_{\mathrm{c}} (G, A, {\alpha})$ then ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}} (a)$ is [continuous]{}  in $\| \cdot \|_1.$ Since ${\widehat{{\alpha}}}_{{\tau}}$ is isometric for all ${\tau}\in {\widehat{G}},$ a standard $\frac{{\varepsilon}}{3}$ argument shows that ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}} (a)$ is [continuous]{}  for all $a \in L^1 (G, A, {\alpha}).$ \[T-DualpFull\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact abelian group, and let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Then there exists a unique continuous isometric action, also called ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}},$ of ${\widehat{G}}$ on $F^p (G, A, {\alpha})$ such that the inclusion ${\iota}\colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p (G, A, {\alpha})$ is equivariant.
For a Banach space $E,$ a [representation]{} $v$ of $G$ on $E,$ and ${\tau}\in {\widehat{G}},$ we define a [representation]{} ${\tau}v \colon G \to L (E)$ by $({\tau}v)_g = {\tau}(g) v_g$ for $g \in G.$ If $(v, \pi)$ is a nondegenerate [${\sigma}$-finite]{} contractive covariant [representation]{} of $(G, A, {\alpha}),$ then one easily checks that $({\tau}v, \, \pi)$ is too. Since ${{\overline}{{\tau}}} ( {\tau}v) = v,$ the map $(v, \pi) \to ({{\overline}{{\tau}}} v, \, \pi)$ is a bijection on the collection of nondegenerate [${\sigma}$-finite]{} contractive covariant [representation]{}[s]{} of $(G, A, {\alpha})$ on spaces of the form $L^p (X, \mu).$ One checks that $$({{\overline}{{\tau}}} v \ltimes \pi) (a) = (v \ltimes \pi) \big( {\widehat{{\alpha}}}_{{\tau}} (a) \big)$$ for all $a \in C_{\mathrm{c}} (G, A, {\alpha}).$ It now follows from Theorem \[T-UPropFull\](\[T-UPropFull-2\]) that $\| {\iota}\big( {\widehat{{\alpha}}}_{{\tau}} (a) \big) \| = \| {\iota}(a) \|$ for all $a \in C_{\mathrm{c}} (G, A, {\alpha}).$ By Theorem \[T-UPropFull\](\[T-UPropFull-3\]), ${\widehat{{\alpha}}}_{{\tau}}$ extends to an isometric endomorphism of $F^p (G, A, {\alpha}).$ These endomorphisms are easily seen to be an action of ${\widehat{G}}$ on $F^p (G, A, {\alpha})$; in particular, they are automorphisms. Since ${\kappa}\colon L^1 (G, A, {\alpha}) \to F^p (G, A, {\alpha})$ is contractive, it follows from Theorem \[T-DualL1\] that for $a \in L^1 (G, A, {\alpha}),$ the map ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}} ( {\iota}(a)) \in F^p (G, A, {\alpha})$ is [continuous]{}. Since ${\widehat{{\alpha}}}_{{\tau}}$ is isometric for all ${\tau}\in {\widehat{G}},$ continuity of ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}} (b)$ for all $b \in F^p (G, A, {\alpha})$ follows from density of the range of ${\iota}$ by a standard $\frac{{\varepsilon}}{3}$ argument. For the dual action on the reduced crossed product, we need a lemma. 
\[L-RedOneRep\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Then there exists a [${\sigma}$-finite measure space]{} ${(X, {\mathcal{B}}, \mu)}$ and a nondegenerate isometric [representation]{} $\pi_0 \colon A \to {L (L^p (X, \mu))}$ such that, with $(v, \pi)$ being the associated regular covariant [representation]{}, the [representation]{} $v \ltimes \pi$ is nondegenerate and isometric on $F^p_{\mathrm{r}} (G, A, {\alpha}).$ Since $G$ is second countable and $A$ is separable, it is easily checked that $F^p_{\mathrm{r}} (G, A, {\alpha})$ is separable. Let $S {\subset}F^p_{\mathrm{r}} (G, A, {\alpha})$ be a countable dense subset. For ${\varepsilon}> 0$ and $b \in F^p_{\mathrm{r}} (G, A, {\alpha}),$ use Theorem \[T-UPropRed\](\[T-UPropRed-2\]) to choose a [${\sigma}$-finite measure space]{} $(X_{b, {\varepsilon}}, \, {{\mathcal{B}}}_{b, {\varepsilon}}, \, \mu_{b, {\varepsilon}})$ and a nondegenerate contractive [representation]{} $$\pi_0^{b, {\varepsilon}} \colon A \to L \big( L^p (X_{b, {\varepsilon}}, \, \mu_{b, {\varepsilon}} ) \big)$$ such that, with $\big( v^{b, {\varepsilon}}, \, \pi^{b, {\varepsilon}} \big)$ being the associated regular covariant [representation]{}, we have $$\big\| \big( v^{b, {\varepsilon}} \ltimes \pi^{b, {\varepsilon}} \big) (b) \big\| > \| b \| - {\varepsilon}.$$ Further choose a [${\sigma}$-finite measure space]{} ${(X, {\mathcal{B}}, \mu)}$ and a nondegenerate isometric [representation]{} ${\sigma}_0 \colon A \to {L (L^p (X, \mu))},$ and let $(u, {\sigma})$ be the associated regular covariant [representation]{}. 
Let $\pi_0$ be the $L^p$ direct sum (Definition \[D-LpDirectSum\]) of ${\sigma}_0$ and all $\pi_0^{b, {\varepsilon}}$ for ${\varepsilon}\in \left\{ 1, \frac{1}{2}, \frac{1}{3}, \ldots \right\}$ and $b \in S.$ Then $\pi_0$ is nondegenerate (by Lemma \[L\_3914\_SmNdg\]), isometric, and [${\sigma}$-finite]{} (being a countable direct sum of [${\sigma}$-finite]{} [representation]{}[s]{}). Also, if $(v, \pi)$ is the associated regular covariant [representation]{}, then $v \ltimes \pi$ is the $L^p$ direct sum of $u \ltimes {\sigma}$ and all $v^{b, {\varepsilon}} \ltimes \pi^{b, {\varepsilon}}$ for ${\varepsilon}\in \left\{ 1, \frac{1}{2}, \frac{1}{3}, \ldots \right\}$ and $b \in S.$ These representations are all nondegenerate by Lemma \[L\_3916\_IntIsNdg\](\[L\_3916\_IntIsNdg\_Reg\]). Therefore $v \ltimes \pi$ is nondegenerate (by Lemma \[L\_3914\_SmNdg\]), and is clearly isometric. \[T-DualpRed\] Let $p \in [1, {\infty}),$ let $G$ be a second countable locally compact abelian group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Then there exists a unique continuous isometric action, also called ${\tau}\mapsto {\widehat{{\alpha}}}_{{\tau}},$ of ${\widehat{G}}$ on $F^p_{\mathrm{r}} (G, A, {\alpha})$ such that the inclusion ${\iota}_{\mathrm{r}} \colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$ is equivariant. Let $$\pi_0 \colon A \to {L (L^p (X, \mu))}, \,\,\,\,\,\,\, \pi \colon A \to L \big( L^p (G \times X, \, \nu \times \mu) \big),$$ and $$v \colon G \to L \big( L^p (G \times X, \, \nu \times \mu) \big)$$ be as in [Lemma \[L-RedOneRep\]]{}. 
For ${\tau}\in {\widehat{G}},$ define $$w_{{\tau}} \in L \big( L^p (G \times X, \, \nu \times \mu) \big)$$ by $$(w_{{\tau}} \xi) (g, x) = {{\overline}{{\tau}(g) }} \xi (g, x)$$ for $\xi \in L^p (G \times X, \, \nu \times \mu),$ $g \in G,$ and $x \in X.$ Then $w_{{\tau}}$ is an isometric bijection because it is multiplication by a function with absolute value $1.$ It is easy to check that $w_{{\tau}}$ commutes with $\pi (a)$ for all ${\tau}\in {\widehat{G}}$ and $a \in A,$ and that $w_{{\tau}} v_g w_{{\tau}}^{-1} = {{\overline}{{\tau}(g) }} v_g$ for all ${\tau}\in {\widehat{G}}$ and $g \in G.$ It follows from the definition of $v \ltimes \pi$ that $$w_{{\tau}} ((v \ltimes \pi) \circ {\iota}_{\mathrm{r}}) (b) w_{{\tau}}^{-1} = ((v \ltimes \pi) \circ {\iota}_{\mathrm{r}}) \big( {\widehat{{\alpha}}}_{{\tau}} (b) \big)$$ for all ${\tau}\in {\widehat{G}}$ and $b \in C_{\mathrm{c}} (G, A, {\alpha}).$ Clearly $a \mapsto w_{{\tau}} a w_{{\tau}}^{-1}$ defines an isometric action of ${\widehat{G}}$ (with the discrete topology) on $F^p_{\mathrm{r}} (G, A, {\alpha}).$ The proof of continuity in the usual topology of ${\widehat{G}}$ is the same as in the proof of Theorem \[T-DualL1\]. Reduced $L^p$ operator crossed products by countable discrete groups {#Sec_DiscCP} ==================================================================== In this section, we assume that the group $G$ is discrete and, starting with Lemma \[L:FGpCP\], countable. The Haar measure $\nu$ on $G$ will always be counting measure. We will require that $A$ be separable, since we use Lemma \[L-RedOneRep\]. We define a Banach conditional expectation from the reduced $L^p$ operator crossed product $F^p_{\mathrm{r}} (G, A, {\alpha})$ to $A,$ and use it to define coordinates of elements of $F^p_{\mathrm{r}} (G, A, {\alpha}),$ in a manner similar to what is done for reduced crossed product [C\*-algebra]{}[s]{}. 
We begin by writing the formula for a regular [representation]{} of $C_{\mathrm{c}} (G, A, {\alpha})$ in a more convenient way. \[N-Sum\] Let $A$ be a Banach algebra, let $G$ be a discrete group, and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an action of $G$ on $A.$ Let ${\widetilde{A}}$ be the unitization of $A$ (we do not add a new identity if $A$ is already unital), and identify $C_{\mathrm{c}} (G)$ with a subspace of $C_{\mathrm{c}} \big( G, {\widetilde{A}}, {\alpha}\big)$ in the obvious way. We let $u_g$ be the characteristic function of $\{ g \},$ regarded as an element of $C_{\mathrm{c}} \big( G, {\widetilde{A}}, {\alpha}\big).$ We can then write an element $a \in C_{\mathrm{c}} (G, A, {\alpha})$ as a finite sum $a = \sum_{g \in G} a_g u_g$ (using $a_g$ rather than $a (g)$). \[N\_3920\_sgtg\] Let $G$ be a countable set with counting measure $\nu,$ and let ${(X, {\mathcal{B}}, \mu)}$ be a measure space. For $g \in G$ define $$s_g \colon L^p (X, \mu) \to L^p (G \times X, \, \nu \times \mu) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}t_g \colon L^p (G \times X, \, \nu \times \mu) \to L^p (X, \mu)$$ as follows. Set $$s_g ({\eta}) (h) = \begin{cases} {\eta}& h = g \\ 0 & h \neq g \end{cases}$$ for ${\eta}\in L^p (X, \mu)$ and $h \in G.$ For $\xi \in L^p (G \times X, \, \nu \times \mu),$ define $t_g (\xi)$ to be the composition of $\xi$ with the map $x \mapsto (g, x)$ from $X$ to $G \times X.$ \[L\_3920\_sgsp\] Let the notation be as in Notation \[N\_3920\_sgtg\]. Then $s_g$ is a spatial isometry in the sense of Definition 6.4 of [@PhLp1], $t_g$ is its reverse in the sense of Definition 6.13 of [@PhLp1], $t_g s_g = 1,$ and $s_g t_g$ is multiplication by ${\chi}_{ \{ g \} \times X}.$ Everything is immediate. \[L-StructRR\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be an isometric $G$-$L^p$ operator algebra. 
Let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}, and let $\pi_0 \colon A \to {L (L^p (X, \mu))}$ be a contractive [representation]{}. Let $(v, \pi)$ be the associated regular covariant [representation]{} as in [Definition \[D-RegRep\]]{}. For $g \in G,$ let $s_g$ and $t_g$ be as in Notation \[N\_3920\_sgtg\]. Then: 1. \[L-StructRR-1\] For $a \in C_{\mathrm{c}} (G, A, {\alpha}),$ $\xi \in C_{\mathrm{c}} \big( G, \, L^p (X, \mu) \big),$ and $h \in G,$ we have $$\big( (v \ltimes \pi) (a) \xi \big) (h) = \sum_{g \in G} \pi_0 \big( {\alpha}_h^{-1} (a_g) \big) \big( \xi (g^{-1} h ) \big).$$ 2. \[L-StructRR-2b\] If $c \in L \big( L^p (G \times X, \, \nu \times \mu) \big)$ satisfies $t_h c s_k = 0$ for all $h, k \in G,$ then $c = 0.$ 3. \[L-StructRR-3\] For $a \in C_{\mathrm{c}} (G, A, {\alpha})$ and $h, k \in G,$ we have $$t_h (v \ltimes \pi) (a) s_k = \pi_0 \big( {\alpha}_h^{-1} ( a_{h k^{-1} }) \big).$$ These statements are all calculations. \[L:FGpCP\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Let $\| \cdot \|$ be the norm on $F^p (G, A, {\alpha}),$ restricted to $C_{\mathrm{c}} (G, A, {\alpha})$; let $\| \cdot \|_{\mathrm{r}}$ be the norm on $F^p_{\mathrm{r}} (G, A, {\alpha}),$ restricted to $C_{\mathrm{c}} (G, A, {\alpha})$; and let $\| \cdot \|_{{\infty}}$ be the supremum norm. Then for every $a \in C_{\mathrm{c}} (G, A, {\alpha}),$ we have $\| a \|_{{\infty}} \leq \| a \|_{\mathrm{r}} \leq \| a \| \leq \| a \|_1.$ The middle and last parts of this inequality follow from [Lemma \[L-CompOfCrPrd\]]{}. We prove the first inequality. Let $a \in C_{\mathrm{c}} (G, A, {\alpha}).$ Write $a = \sum_{g \in G} a_g u_g,$ as in Notation \[N-Sum\]. Let $g \in G.$ Choose an isometric [representation]{} $\pi_0 \colon A \to {L (L^p (X, \mu))}$ as in [Lemma \[L-RedOneRep\]]{}. 
Following Notation \[N\_3920\_sgtg\], use [Lemma \[L-StructRR\]]{}(\[L-StructRR-3\]) at the second step to get $$\| a_g \| = \| \pi_0 (a_g) \| = \| t_1 (v \ltimes \pi) (a) s_{g^{-1}} \| \leq \| (v \ltimes \pi) (a) \| \leq \| a \|_{\mathrm{r}}.$$ This completes the proof. \[R-Ident\] Lemma \[L:FGpCP\] implies that the map $a \mapsto a u_1,$ from $A$ to $F^p_{\mathrm{r}} (G, A, {\alpha}),$ is isometric. We routinely identify $A$ with its image in $F^p_{\mathrm{r}} (G, A, {\alpha})$ under this map, thus treating it as a subalgebra of $F^p_{\mathrm{r}} (G, A, {\alpha}).$ We do the same with the full [crossed product]{}  $F^p (G, A, {\alpha}).$ \[R-FGpCP\] Adopt the notation of [Lemma \[L:FGpCP\]]{}, and assume that $G$ is finite. Then $\| \cdot \|_1$ is equivalent to $\| \cdot \|_{{\infty}},$ and $C_{\mathrm{c}} (G, A, {\alpha})$ is complete in both these norms. It follows that the map ${\kappa}_{\mathrm{r}} \colon F^p (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$ of [Lemma \[L-CompOfCrPrd\]]{} is bijective, and also that the maps $${\iota}\colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p (G, A, {\alpha}) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\iota}_{\mathrm{r}} \colon C_{\mathrm{c}} (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$$ are bijective. Unlike in the C\* situation, it does not immediately follow that ${\kappa}_{\mathrm{r}}$ is isometric, and we have not tried to determine whether it is. \[P:CondExpt\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) 
Then for each $g \in G,$ there is a linear map $E_g \colon F^p_{\mathrm{r}} (G, A, {\alpha}) \to A$ with $\| E_g \| \leq 1$ such that if $$a = \sum_{g \in G} a_g u_g \in C_{\mathrm{c}} (G, A, {\alpha}),$$ then $E_g (a) = a_g.$ Moreover, for every [representation]{} $\pi_0$ of $A$ as in [Lemma \[L-StructRR\]]{}, with $v$ and $\pi$ as in [Lemma \[L-StructRR\]]{} and $s_g$ and $t_g$ as in Notation \[N\_3920\_sgtg\], we have $$t_h (v \ltimes \pi) (a) s_k = \pi_0 \big( {\alpha}_h^{-1} (E_{h k^{-1}} (a)) \big)$$ for all $h, k \in G.$ The first part is immediate from the first inequality in Lemma \[L:FGpCP\]. The last statement follows from [Lemma \[L-StructRR\]]{}(\[L-StructRR-3\]) by continuity. It follows that coefficients $a_g$ of elements $a \in F^p_{\mathrm{r}} (G, A, {\alpha})$ make sense. One does not get convergence of $\sum_{g \in G} a_g u_g,$ since this already fails for [C\*-algebra]{}[s]{} and $p = 2$ is allowed in the proposition. However, we can prove that, as in the C\* case, $a$ is uniquely determined by the coefficients $a_g.$ \[P:Faithful\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Let the maps $E_g \colon F^p_{\mathrm{r}} (G, A, {\alpha}) \to A$ be as in Proposition \[P:CondExpt\]. Then: 1. \[P:Faithful:G\] If $a \in F^p_{\mathrm{r}} (G, A, {\alpha})$ and $E_g (a) = 0$ for all $g \in G,$ then $a = 0.$ 2. \[P:Faithful:Inj\] If ${(X, {\mathcal{B}}, \mu)}$ is a [${\sigma}$-finite measure space]{} and $\pi_0 \colon A \to {L (L^p (X, \mu))}$ is a nondegenerate contractive representation such that the $L^p$ direct sum (Definition \[D-LpDirectSum\]) $\bigoplus_{g \in G} \pi_0 \circ {\alpha}_g$ is injective, then the regular representation ${\sigma}$ of $F^p_{\mathrm{r}} (G, A, {\alpha})$ associated to $\pi_0$ is injective. 
In [C\*-algebra]{}[s]{}, if $a \in C^*_{\mathrm{r}} (G, A, {\alpha})$ and $E_1 (a^* a) = 0,$ then $a = 0.$ For $p \neq 2,$ we don’t have adjoints, and we do not know a sensible version of this statement. Also, in part (\[P:Faithful:Inj\]), we do not know whether the regular representation of $F^p_{\mathrm{r}} (G, A, {\alpha})$ is isometric, not even if $\pi_0$ itself is assumed isometric. We prove (\[P:Faithful:G\]). Let ${(X, {\mathcal{B}}, \mu)}$ be a [${\sigma}$-finite measure space]{}, let $\pi_0 \colon A \to {L (L^p (X, \mu))}$ be a nondegenerate contractive representation, and let $(v, \pi)$ be the associated regular covariant [representation]{}. Let $s_g$ and $t_g$ be as in Notation \[N\_3920\_sgtg\]. If $a \in F^p_{\mathrm{r}} (G, A, {\alpha})$ satisfies $E_g (a) = 0$ for all $g \in G,$ then $t_h (v \ltimes \pi) (a) s_k = 0$ for all $h, k \in G,$ whence $(v \ltimes \pi) (a) = 0$ by [Lemma \[L-StructRR\]]{}(\[L-StructRR-2b\]). Since $\pi_0$ is arbitrary, it follows that $a = 0.$ This proves (\[P:Faithful:G\]). For (\[P:Faithful:Inj\]), suppose $a \in F^p_{\mathrm{r}} (G, A, {\alpha})$ and ${\sigma}(a) = 0.$ Fix $l \in G.$ Taking $h = g^{-1}$ and $k = l^{-1} g^{-1}$ in Proposition \[P:CondExpt\], we get $(\pi_0 \circ {\alpha}_g) (E_l (a)) = 0$ for all $g \in G.$ So $E_l (a) = 0.$ This is true for all $l \in G,$ so $a = 0.$ \[C-L1Inj\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) With notation as in [Lemma \[L-CompOfCrPrd\]]{}, the map ${\kappa}_{\mathrm{r}} \circ {\kappa}\colon L^1 (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$ is injective. This is immediate from Proposition \[P:Faithful\](\[P:Faithful:G\]). 
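In the simplest case, with $G = {{\mathbb{Z}}}/n$ acting trivially on $A = {{\mathbb{C}}}$ and $X$ a single point, the regular representation of $\sum_{g \in G} a_g u_g$ is a circulant matrix, and the coordinates $E_g$ are literally matrix entries. The following Python sketch (purely illustrative; the names `regular_rep` and `E` are ours, not from the text) checks the entry formula of Lemma \[L-StructRR\](\[L-StructRR-3\]) and that the coordinates determine the element, as in Proposition \[P:Faithful\](\[P:Faithful:G\]).

```python
n = 3  # G = Z/n acting trivially on A = C; X a single point

def regular_rep(a):
    # Matrix of (v x pi)(sum_g a_g u_g) acting on l^p(Z/n):
    # ((v x pi)(a) xi)(h) = sum_g a_g xi(g^{-1} h), so the (h, k) entry is a_{h k^{-1}}.
    return [[a[(h - k) % n] for k in range(n)] for h in range(n)]

def E(M, g):
    # E_g reads off a_g as the matrix entry t_g (v x pi)(a) s_1 = M[g][0].
    return M[g % n][0]

a = [2.0, -1.0, 0.5]          # coefficients a_0, a_1, a_2
M = regular_rep(a)

# Entry formula of Lemma [L-StructRR](3): M[h][k] = a_{h k^{-1}}.
for h in range(n):
    for k in range(n):
        assert M[h][k] == a[(h - k) % n]

# The coordinates E_g recover the element, as in Proposition [P:Faithful](1).
assert [E(M, g) for g in range(n)] == a
```

Here picking out the $(h, k)$ entry plays the role of $t_h (v \ltimes \pi) (a) s_k$ from Notation \[N\_3920\_sgtg\].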
\[D:StdCond\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) The map $E_1 \colon F^p_{\mathrm{r}} (G, A, {\alpha}) \to A$ of Proposition \[P:CondExpt\], determined by $$E_1 \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{g \in G} }}}}} a_g u_g \right) = a_1$$ when $\sum_{g \in G} a_g u_g \in C_{\mathrm{c}} (G, A, {\alpha}),$ is called the [*standard conditional expectation from $F^p_{\mathrm{r}} (G, A, {\alpha})$ to $A$*]{}, and is denoted by $E.$ \[P-EIsCondExpt\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Identify $A$ with its image in $F^p_{\mathrm{r}} (G, A, {\alpha}),$ as in Remark \[R-Ident\]. Then the map $E \colon F^p_{\mathrm{r}} (G, A, {\alpha}) \to A$ is a Banach conditional expectation in the sense of Definition 2.1 of [@PhLp2a]. We already know that $E$ is linear and $\| E \| = 1.$ It is immediate that $E (a) = a$ for $a \in A.$ It remains to verify $E (a b c) = a E (b) c$ for $a, c \in A$ and $b \in F^p_{\mathrm{r}} (G, A, {\alpha}).$ A calculation shows that this is true when $b \in C_{\mathrm{c}} (G, A, {\alpha}),$ and the general case follows by continuity. We will also need the following technical lemma concerning right and left multiplication by $u_g.$ It is easy when $A$ is unital, since then $u_g$ is in the crossed product and $\| u_g \| = 1$ by [Lemma \[L:FGpCP\]]{}. However, we need it in the nonunital case. Essentially, it says that $u_g$ defines isometric elements of the multiplier algebras of $F^p (G, A, {\alpha})$ and $F^p_{\mathrm{r}} (G, A, {\alpha}).$ Since we only need these particular multipliers, we do not discuss the general theory of multiplier algebras. 
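The bimodule identity $E (a b c) = a E (b) c$ of Proposition \[P-EIsCondExpt\] can be verified by hand in a finite model. The following Python sketch (illustrative only; `mult`, `embed`, and `E` are our names) takes $X = G = {{\mathbb{Z}}}/n$ acting on itself by translation and stores elements of $C_{\mathrm{c}} (G, C (X))$ as coefficient families $(a_g)_{g \in G}.$

```python
n = 3  # X = G = Z/n, with G acting on X by translation

def mult(a, b):
    # Twisted convolution in C_c(G, C(X)): (ab)_g(x) = sum_h a_h(x) * b_{h^{-1} g}(x - h),
    # using alpha_h(f)(x) = f(h^{-1} x) = f(x - h).
    c = {g: [0.0] * n for g in range(n)}
    for g in range(n):
        for h in range(n):
            for x in range(n):
                c[g][x] += a[h][x] * b[(g - h) % n][(x - h) % n]
    return c

def embed(f):
    # View f in C(X) as f u_1 inside the crossed product (Remark [R-Ident]).
    return {g: (list(f) if g == 0 else [0.0] * n) for g in range(n)}

def E(a):
    # Standard conditional expectation: keep only the coefficient of u_1.
    return a[0]

f, q = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0]
b = {0: [1.0, 0.0, 2.0], 1: [3.0, 1.0, 0.0], 2: [0.0, 4.0, 1.0]}

lhs = E(mult(embed(f), mult(b, embed(q))))       # E(f b q)
rhs = [f[x] * E(b)[x] * q[x] for x in range(n)]  # f E(b) q, computed pointwise
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```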
\[L-NormOfMultU\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete group, and let $(G, A, {\alpha})$ be a separable nondegenerately representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Then for every $g \in G$ there are unique isometric maps $$L_g, R_g \colon F^p (G, A, {\alpha}) \to F^p (G, A, {\alpha}) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}L_g, R_g \colon F^p_{\mathrm{r}} (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$$ given by $L_g (a) = u_g a$ and $R_g (a) = a u_g$ for $a \in F^p (G, A, {\alpha})$ and $a \in F^p_{\mathrm{r}} (G, A, {\alpha}).$ That is, for $$a = \sum_{h \in G} a_h u_h \in C_{\mathrm{c}} (G, A, {\alpha}),$$ we have $$L_g ({\iota}(a)) = {\iota}\left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{h \in G} }}}}} {\alpha}_g (a_h) u_{g h} \right) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}R_g ({\iota}(a)) = {\iota}\left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{h \in G} }}}}} a_h u_{h g} \right),$$ and similarly with ${\iota}_{\mathrm{r}}$ in place of ${\iota}.$ Moreover, for $g \in G$ and $a, b \in F^p (G, A, {\alpha})$ or $a, b \in F^p_{\mathrm{r}} (G, A, {\alpha}),$ we have $$L_g (a b) = L_g (a) b, \,\,\,\,\,\, R_g (a b) = a R_g (b), {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}a L_g (b) = R_g (a) b.$$ By Theorem \[T-UPropFull\](\[T-UPropFull-3\]) and Theorem \[T-UPropRed\](\[T-UPropRed-3\]), it suffices to work with elements of $C_{\mathrm{c}} (G, A, {\alpha}).$ In particular, uniqueness is clear. 
Let $(v, \pi)$ be any nondegenerate ${\sigma}$-finite contractive covariant [representation]{} of $(G, A, {\alpha}),$ and let $a \in C_{\mathrm{c}} (G, A, {\alpha}).$ Then we have $$( v \ltimes \pi) \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{h \in G} }}}}} {\alpha}_g (a_h) u_{g h} \right) = \sum_{h \in G} \pi ( {\alpha}_g (a_h) ) v_{g} v_{h} = v_{g} \sum_{h \in G} \pi ( a_h) v_{h} = v_g ( v \ltimes \pi) (a)$$ and $$( v \ltimes \pi) \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{h \in G} }}}}} a_h u_{h g} \right) = \sum_{h \in G} \pi ( a_h ) v_{h} v_{g} = ( v \ltimes \pi) (a) v_g.$$ Since $v_g$ is an isometry, it follows that $$\left\| ( v \ltimes \pi) \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{h \in G} }}}}} {\alpha}_g (a_h) u_{g h} \right) \right\| = \| ( v \ltimes \pi) (a) \|$$ and $$\left\| ( v \ltimes \pi) \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{h \in G} }}}}} a_h u_{h g} \right) \right\| = \| ( v \ltimes \pi) (a) \|.$$ By Theorem \[T-UPropFull\](\[T-UPropFull-2\]), taking the supremum over all nondegenerate ${\sigma}$-finite contractive covariant [representation]{}[s]{} of $(G, A, {\alpha})$ gives the result for the full crossed product. By Theorem \[T-UPropRed\](\[T-UPropRed-2\]), restricting to nondegenerate ${\sigma}$-finite contractive regular covariant [representation]{}[s]{} gives the result for the reduced crossed product. \[N-Multug\] Let the notation be as in [Lemma \[L-NormOfMultU\]]{}. We write $u_g a$ for $L_g (a)$ and $a u_g$ for $R_g (a).$ This notation is consistent with Notation \[N-Sum\]. Structure theorems for reduced $L^p$ operator crossed products by discrete groups {#Sec_FpCX} ================================================================================= In this section, we prove three results about the structure of reduced $L^p$ operator crossed products by discrete groups. 
None of them will be used in the computation of the K-theory of ${{\mathcal{O}}_{d}^{p}},$ but they are fairly easy to prove using known techniques, and it therefore seems appropriate to include them. The results are as follows. First, crossed products by isometric actions of countable discrete amenable groups on unital nondegenerately ${\sigma}$-finitely representable $L^p$ operator algebras preserve Banach algebra amenability (in the sense of [@Rnd]). Second, if a countable discrete group $G$ acts freely and minimally on a compact metrizable space $X,$ then $F^p_{\mathrm{r}} (G, \, C (X))$ is simple. Third, if a countable discrete group $G$ acts freely on a compact metrizable space $X,$ then the traces on $F^p_{\mathrm{r}} (G, \, C (X))$ all come from $G$-invariant measures on $X.$ Besides these results, it is proved in [@PH] that, for $p \neq 1,$ the reduced $L^p$ operator crossed product by a $G$-simple isometric action of a Powers group $G$ is simple. The definition of an amenable Banach algebra is given in Definition 2.1.9 of [@Rnd]; see Theorem 2.2.4 of [@Rnd] for two standard equivalent conditions. The amenability result has essentially the same proof as the one already used for [C\*-algebra]{}[s]{}. (See Theorem 1 of [@Rs] or Proposition IV.4.3 of [@Jn2].) We state the result on $L^1 (G, A, {\alpha})$ from [@Jn2] for completeness. \[P\_3917\_L1Amen\] Let $A$ be an amenable unital Banach algebra, let $G$ be an amenable discrete group, and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action of $G$ on $A.$ Then $L^1 (G, A, {\alpha})$ is an amenable Banach algebra. This is Proposition IV.4.2 of [@Jn2]. See Lemma IV.1.1 and Definition IV.1.4 of [@Jn2] for the notation. Actions in [@Jn2] are assumed isometric; see Definition I.2.2 of [@Jn1]. 
\[T-CPAmen\] Let $p \in [1, {\infty}),$ let $G$ be a countable discrete amenable group, let $A$ be a unital $L^p$ operator algebra which is nondegenerately ${\sigma}$-finitely representable (Definition \[D-LpRep\](\[D\_3914\_LpRep\_Ndg\])), and let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action. Suppose that $A$ is amenable as a Banach algebra. Then $F^p (G, A, {\alpha})$ and $F^p_{\mathrm{r}} (G, A, {\alpha})$ are amenable as Banach algebras. We follow the proof of Proposition IV.4.3 of [@Jn2]. (The paper [@Rs] does not have Proposition \[P\_3917\_L1Amen\] as an intermediate step.) [Lemma \[L-CompOfCrPrd\]]{} provides contractive homomorphisms $${\kappa}\colon L^1 (G, A, {\alpha}) \to F^p (G, A, {\alpha}) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\kappa}_{\mathrm{r}} \colon F^p (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$$ which have dense ranges. So amenability of $F^p (G, A, {\alpha})$ follows from Proposition 2.3.1 of [@Rnd] and Proposition \[P\_3917\_L1Amen\]. Amenability of $F^p_{\mathrm{r}} (G, A, {\alpha})$ then follows from amenability of $F^p (G, A, {\alpha})$ and Proposition 2.3.1 of [@Rnd]. Now we consider the results on free actions. We need a lemma, which is based on the main part of the proof of Lemma VIII.3.7 of [@Dv]. We simplify and generalize it in several ways. In particular, we do not need the Rokhlin Lemma, and the proof works for free minimal actions of arbitrary discrete groups. \[L-RmvOneElt\] Let $G$ be a discrete group, let $X$ be a free compact $G$-space, and let $F \subset G \setminus \{ 1 \}$ be finite. 
Then there exist $n \in {{\mathbb{Z}}_{> 0}}$ and $s_1, s_2, \ldots, s_n \in C (X)$ such that $| s_k (x) | = 1$ for $k = 1, 2, \ldots, n$ and all $x \in X,$ and such that for all $x \in X$ and $g \in F,$ we have $$\frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} = 0.$$ For use only within this proof, we define a pair $(F, U),$ consisting of a finite subset $F \subset G \setminus \{ 1 \}$ and an open subset $U \subset X,$ to be inessential if there exist $n \in {{\mathbb{Z}}_{> 0}}$ and $s_1, s_2, \ldots, s_n \in C (X)$ such that $| s_k (x) | = 1$ for $k = 1, 2, \ldots, n$ and all $x \in X,$ and such that for all $x \in U$ and $g \in F,$ we have $$\frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} = 0.$$ Thus, we have to prove that $(F, X)$ is inessential for every finite subset $F \subset G \setminus \{ 1 \}.$ We claim the following: 1. \[L-Iness-1\] For every $x \in X$ and every $g \in G {\setminus}\{ 1 \},$ there exists an open set $U \subset X$ with $x \in U$ such that $( \{ g \}, U )$ is inessential. 2. \[L-Iness-2\] If $F \subset G \setminus \{ 1 \}$ is finite and $U, V {\subset}X$ are open, and if $(F, U)$ and $(F, V)$ are both inessential, then so is $(F, \, U \cup V).$ 3. 
\[L-Iness-3\] If $E, F \subset G \setminus \{ 1 \}$ are finite and $U {\subset}X$ is open, and if $(E, U)$ and $(F, U)$ are both inessential, then so is $(E \cup F, \, U).$ To prove (\[L-Iness-1\]), choose an open set $U \subset X$ with $x \in U$ such that ${\overline{U}} \cap g^{-1} {\overline{U}} = \varnothing.$ Take $n = 2,$ and take $s_1$ to be the constant function $1.$ Choose a [continuous function]{}  $r \colon X \to {{\mathbb{R}}}$ such that $r (x) = 0$ for $x \in {\overline{U}}$ and $r (x) = \pi$ for $x \in g^{-1} {\overline{U}}.$ Set $s_2 (x) = \exp (i r (x))$ for $x \in X.$ For $x \in U,$ we have $$\frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} = \frac{1}{2} \big[ 1 \cdot 1 + 1 \cdot (-1) \big] = 0.$$ Thus $( \{ g \}, U )$ is inessential. The proofs of (\[L-Iness-2\]) and (\[L-Iness-3\]) are both based on the following calculation. Let $$m, n \in {{\mathbb{Z}}_{> 0}}, \,\,\,\,\,\, r_1, r_2, \ldots, r_m, s_1, s_2, \ldots, s_n \in C (X), \,\,\,\,\,\, x \in X, {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}g \in G.$$ Then $$\begin{aligned} \label{Eq:RmvProd} \lefteqn{ \frac{1}{m n} \sum_{j = 1}^m \sum_{k = 1}^n (r_j s_k) (x) {\overline{(r_j s_k) (g^{-1} x) }} } \\ & \hspace*{3em} = \left( \frac{1}{m} \sum_{j = 1}^m r_j (x) {\overline{r_j (g^{-1} x) }} \right) \left( \frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} \right). 
\notag\end{aligned}$$ For (\[L-Iness-2\]), choose $m, n \in {{\mathbb{Z}}_{> 0}}$ and [continuous function]{}s $$r_1, r_2, \ldots, r_m, s_1, s_2, \ldots, s_n \colon X \to S^1$$ such that for every $g \in F,$ we have $$\label{Eq:L-RmvUnion-Eq} {\mbox{${\displaystyle{ \frac{1}{m} \sum_{j = 1}^m r_j (x) {\overline{r_j (g^{-1} x) }} = 0}}$ for $x \in U$}} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\mbox{${\displaystyle{ \frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} = 0}}$ for $x \in V$}}.$$ The functions $r_j s_k$ are [continuous function]{}s from $X$ to $S^1,$ and (\[Eq:RmvProd\]) implies that for every $g \in F$ and $x \in U \cup V,$ we have $$\frac{1}{m n} \sum_{j = 1}^m \sum_{k = 1}^n (r_j s_k) (x) {\overline{(r_j s_k) (g^{-1} x) }} = 0.$$ Similarly, for (\[L-Iness-3\]) choose $m, n \in {{\mathbb{Z}}_{> 0}}$ and [continuous function]{}s $$r_1, r_2, \ldots, r_m, s_1, s_2, \ldots, s_n \colon X \to S^1$$ such that for every $x \in U,$ we have $${\mbox{${\displaystyle{ \frac{1}{m} \sum_{j = 1}^m r_j (x) {\overline{r_j (g^{-1} x) }} = 0}}$ for $g \in E$}} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\mbox{${\displaystyle{ \frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} = 0}}$ for $g \in F$}}.$$ The claim again follows from (\[Eq:RmvProd\]). We now claim that $( \{ g \}, X )$ is inessential for all $g \in G {\setminus}\{ 1 \}.$ Use compactness of $X$ and (\[L-Iness-1\]) to find $n$ and open sets $U_1, U_2, \ldots, U_n \subset X$ such that $( \{ g \}, U_k)$ is inessential for $k = 1, 2, \ldots, n$ and such that $\bigcup_{k = 1}^n U_k = X.$ Then apply (\[L-Iness-2\]) a total of $n - 1$ times. This proves the claim. For an arbitrary finite subset $F \subset G \setminus \{ 1 \},$ we use this claim and apply (\[L-Iness-3\]) repeatedly to see that $(F, X)$ is inessential, as desired. We use the following analog of conventional notation for transformation group [C\*-algebra]{}[s]{}. 
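A concrete instance of Lemma \[L-RmvOneElt\]: take $X = G = {{\mathbb{Z}}}/n$ acting on itself by translation (a free action), and let the $s_k$ be the characters of ${{\mathbb{Z}}}/n$; the averages then vanish for every $g \neq 1$ by orthogonality of characters. A numerical check (Python, illustrative only):

```python
import cmath

n = 5  # X = G = Z/n, acting on itself by translation (a free action)

def s(k):
    # Unimodular continuous functions on X: the characters s_k(x) = exp(2 pi i k x / n).
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

# (1/n) sum_k s_k(x) * conj(s_k(g^{-1} x)) = (1/n) sum_k exp(2 pi i k g / n),
# which vanishes for every x in X and every g != 1 by orthogonality of characters.
for g in range(1, n):
    for x in range(n):
        avg = sum(s(k)(x) * s(k)((x - g) % n).conjugate() for k in range(n)) / n
        assert abs(avg) < 1e-12
```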
\[N-TransfGpLpAlg\] Let $X$ be a locally compact metrizable space, and let $G$ be a second countable locally compact group which acts on $X.$ Let ${\alpha}\colon G \to {{\mathrm{Aut}}}(C (X))$ be the action of Example \[E-GpOnSp\]. We abbreviate $F^p (G, A, {\alpha})$ to $F^p (G, X)$ and $F^p_{\mathrm{r}} (G, A, {\alpha})$ to $F^p_{\mathrm{r}} (G, X).$ If the action of $G$ on $X$ is called $h,$ we write $F^p (G, X, h)$ and $F^p_{\mathrm{r}} (G, X, h).$ \[P-RmvEst\] Let $G$ be a countable discrete group, let $X$ be a free compact $G$-space, and let $E \colon F^p_{\mathrm{r}} (G, X) \to C (X)$ be the standard conditional expectation (Definition \[D:StdCond\]), viewed as a map $F^p_{\mathrm{r}} (G, X) \to F^p_{\mathrm{r}} (G, X)$ via Remark \[R-Ident\]. Then for every $a \in F^p_{\mathrm{r}} (G, X)$ and ${\varepsilon}> 0,$ there exist $n \in {{\mathbb{Z}}_{> 0}}$ and $s_1, s_2, \ldots, s_n \in C (X)$ such that $| s_k (x) | = 1$ for $k = 1, 2, \ldots, n$ and all $x \in X,$ and such that $$\left\| E (a) - \frac{1}{n} \sum_{k = 1}^n s_k a {\overline{s_k}} \right\| < {\varepsilon}.$$ Let ${\alpha}\colon G \to {{\mathrm{Aut}}}(C (X))$ be the action of Example \[E-GpOnSp\]. Also, for $g \in G$ let $u_g \in F^p_{\mathrm{r}} (G, X)$ be as in Notation \[N-Sum\]. 
Choose a finite set $F \subset G$ and elements $b_g \in C (X)$ for $g \in F$ such that, with $b = \sum_{g \in F} b_g u_g,$ we have $\left\| a - b \right\| < \tfrac{1}{2} {\varepsilon}.$ [Without loss of generality]{} $1 \in F.$ By Lemma \[L-RmvOneElt\], there exist $n \in {{\mathbb{Z}}_{> 0}}$ and $s_1, s_2, \ldots, s_n \in C (X)$ such that $| s_k (x) | = 1$ for $k = 1, 2, \ldots, n$ and all $x \in X,$ and such that for all $x \in X$ and $g \in F \setminus \{ 1 \},$ we have $$\label{Eq_3922_Star} \frac{1}{n} \sum_{k = 1}^n s_k (x) {\overline{s_k (g^{-1} x) }} = 0.$$ Define $P \colon F^p_{\mathrm{r}} (G, X) \to F^p_{\mathrm{r}} (G, X)$ by $$P (c) = \frac{1}{n} \sum_{k = 1}^n s_k c {\overline{s_k}}$$ for $c \in F^p_{\mathrm{r}} (G, X).$ We have to show that $\| E (a) - P (a) \| < {\varepsilon}.$ Since $\| s_k \| = \| {\overline{s_k}} \| = 1$ for all $k,$ we have $\| P \| \leq 1.$ Therefore $$\begin{aligned} \| E (a) - P (a) \| & \leq \| E (a) - E (b) \| + \| E (b) - P (b) \| + \| P (b) - P (a) \| \\ & < \tfrac{1}{2} {\varepsilon}+ \| E (b) - P (b) \| + \tfrac{1}{2} {\varepsilon}= \| E (b) - P (b) \| + {\varepsilon}.\end{aligned}$$ So it suffices to prove that $P (b) = E (b).$ Let $g \in F \setminus \{ 1 \}.$ Then, using (\[Eq\_3922\_Star\]) and the definition of ${\alpha}_g,$ $$P (b_g u_g) = \frac{1}{n} \sum_{k = 1}^n s_k b_g u_g {\overline{s_k}} = b_g \left( \frac{1}{n} \sum_{k = 1}^n s_k {\alpha}_g ({\overline{s_k}}) \right) u_g = 0.$$ Also, $$P (b_1 u_1) = b_1 \cdot \frac{1}{n} \sum_{k = 1}^n s_k {\overline{s_k}} = b_1 = E (b).$$ Thus, $P (b) = E (b),$ as desired. \[T-FreeMinSimp\] Let $G$ be a countable discrete group, and let $X$ be a free minimal compact metrizable $G$-space. Then $F^p_{\mathrm{r}} (G, X)$ is simple. Let $I \subset F^p_{\mathrm{r}} (G, X)$ be a proper closed ideal. We first claim that $I \cap C (X) = \{ 0 \}.$ If not, let $f \in I \cap C (X)$ be nonzero. Choose a nonempty open set $U \subset X$ on which $f$ does not vanish. 
By minimality, we have $\bigcup_{g \in G} g U = X.$ Since $X$ is compact, there is a finite set $S \subset G$ such that $\bigcup_{g \in S} g U = X.$ Define $b \in C (X)$ by $$b (x) = \sum_{g \in S} f (g^{-1} x) {\overline{ f (g^{-1} x) }}$$ for $x \in X.$ Then $b (x) > 0$ for all $x \in X,$ so $b$ is invertible. For $g \in G$ let $u_g \in F^p_{\mathrm{r}} (G, X)$ be as in Notation \[N-Sum\]. Then $b = \sum_{g \in S} u_g f {\overline{f}} u_g^{-1} \in I.$ So $I$ contains an invertible element, contradicting the assumption that $I$ is proper. This proves the claim. Let $E \colon F^p_{\mathrm{r}} (G, X) \to C (X)$ be the standard conditional expectation (Definition \[D:StdCond\]), viewed as a map $F^p_{\mathrm{r}} (G, X) \to F^p_{\mathrm{r}} (G, X)$ (following Remark \[R-Ident\]). We claim that $E (a) = 0$ for all $a \in I.$ It suffices to show that $E (a) \in I.$ To prove this, let ${\varepsilon}> 0.$ Use Proposition \[P-RmvEst\] to choose $n \in {{\mathbb{Z}}_{> 0}}$ and $s_1, s_2, \ldots, s_n \in C (X)$ such that the element $b = \frac{1}{n} \sum_{k = 1}^n s_k a {\overline{s_k}}$ satisfies $\| E (a) - b \| < {\varepsilon}.$ Clearly $b \in I.$ Since ${\varepsilon}> 0$ is arbitrary, this implies that $E (a) \in {\overline{I}} = I.$ The claim is proved. Now let $a \in I.$ For all $g \in G,$ we have $a u_g^{-1} \in I,$ so $E (a u_g^{-1}) = 0.$ In the notation of Proposition \[P:CondExpt\], this means that $E_g (a) = 0$ for all $g \in G.$ Proposition \[P:Faithful\](\[P:Faithful:G\]) now implies that $a = 0.$ We can use the same methods to identify all the normalized traces on $F^p_{\mathrm{r}} (G, X).$ \[D-NTrace\] Let $A$ be a unital Banach algebra. Then a [*normalized trace*]{} on $A$ is a linear functional satisfying the following three conditions: 1. \[D-NTrace-1\] ${\tau}(1) = 1.$ 2. \[D-NTrace-2\] $\| {\tau}\| = 1.$ 3. 
\[D-NTrace-3\] ${\tau}(b a) = {\tau}(a b)$ for all $a, b \in A.$ When $A$ is a unital [C\*-algebra]{}, the normalized traces are exactly the tracial states. Our result requires that the action be free, but not necessarily minimal. The main point is contained in the following proposition. The proof follows the proof of Corollary VIII.3.8 of [@Dv]. \[P-TracesOnSubalg\] Let $G$ be a countable discrete group, let $X$ be a free compact metrizable $G$-space, and let $A \subset F^p_{\mathrm{r}} (G, X)$ be a subalgebra such that $C (X) \subset A.$ Let $E \colon F^p_{\mathrm{r}} (G, X) \to C (X)$ be the standard conditional expectation (Definition \[D:StdCond\]). Let ${\tau}\colon A \to {{\mathbb{C}}}$ be a normalized trace (Definition \[D-NTrace\]). Then there exists a unique Borel probability measure $\mu$ on $X$ such that for all $a \in A$ we have $${\tau}(a) = \int_X E (a) \, d \mu.$$ We prove that ${\tau}= ({\tau}|_{C (X)} ) \circ E.$ The statement then follows by applying the Riesz Representation Theorem to ${\tau}|_{C (X)}.$ Let $a \in A$ and let ${\varepsilon}> 0.$ We prove that $| {\tau}(a) - {\tau}(E (a)) | < {\varepsilon}.$ Use Proposition \[P-RmvEst\] to choose $n \in {{\mathbb{Z}}_{> 0}}$ and $s_1, s_2, \ldots, s_n \in C (X)$ such that $| s_k (x) | = 1$ for $k = 1, 2, \ldots, n$ and all $x \in X,$ and such that $\left\| E (a) - \frac{1}{n} \sum_{k = 1}^n s_k a {\overline{s_k}} \right\| < {\varepsilon}.$ Since $s_1, s_2, \ldots, s_n, {\overline{s_1}}, {\overline{s_2}}, \ldots, {\overline{s_n}} \in A,$ we have ${\tau}(s_k a {\overline{s_k}} ) = {\tau}(a)$ for $k = 1, 2, \ldots, n.$ Therefore $$\big| {\tau}(a) - {\tau}(E (a)) \big| = \left| {\tau}\left( \frac{1}{n} \sum_{k = 1}^n s_k a {\overline{s_k}} \right) - {\tau}(E (a)) \right| \leq \left\| E (a) - \frac{1}{n} \sum_{k = 1}^n s_k a {\overline{s_k}} \right\| < {\varepsilon}.$$ This completes the proof. \[T-11202Traces\] Let $G$ be a countable discrete group, and let $X$ be a free compact metrizable $G$-space. 
Let $E \colon F^p_{\mathrm{r}} (G, X) \to C (X)$ be the standard conditional expectation (Definition \[D:StdCond\]). For a $G$-invariant Borel probability measure $\mu$ on $X,$ define a linear functional ${\tau}_{\mu}$ on $F^p_{\mathrm{r}} (G, X)$ by $${\tau}_{\mu} (a) = \int_X E (a) \, d \mu$$ for all $a \in F^p_{\mathrm{r}} (G, X).$ Then $\mu \mapsto {\tau}_{\mu}$ is an affine bijection from the $G$-invariant Borel probability measures on $X$ to the normalized traces on $F^p_{\mathrm{r}} (G, X)$ (Definition \[D-NTrace\]). Its inverse sends ${\tau}$ to the measure obtained from the functional ${\tau}|_{C (X)}$ via the Riesz Representation Theorem. It is easy to check that if $\mu$ is a $G$-invariant Borel probability measure on $X,$ then ${\tau}_{\mu}$ is a normalized trace on $F^p_{\mathrm{r}} (G, X).$ Clearly ${\tau}_{\mu} (f) = \int_X f \, d \mu$ for $f \in C (X).$ This implies that $\mu \mapsto {\tau}_{\mu}$ is injective and that the description of its inverse is correct on the range of this map. It remains only to prove that $\mu \mapsto {\tau}_{\mu}$ is surjective. Let ${\tau}$ be a normalized trace on $F^p_{\mathrm{r}} (G, X).$ Proposition \[P-TracesOnSubalg\] provides a Borel probability measure $\mu$ on $X$ such that ${\tau}(a) = \int_X E (a) \, d \mu$ for all $a \in F^p_{\mathrm{r}} (G, X).$ For $g \in G$ and $f \in C (X),$ using the fact that ${\tau}$ is a trace at the second step, we have $$\int_X f (g^{-1} x) \, d \mu (x) = {\tau}\big( u_g f u_g^{-1} \big) = {\tau}(f) = \int_X f \, d \mu.$$ Uniqueness in the Riesz Representation Theorem now implies that $\mu$ is $G$-invariant. This completes the proof. 
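As a sanity check on Theorem \[T-11202Traces\], take $X = G = {{\mathbb{Z}}}/n$ acting by translation, with $\mu$ the uniform (invariant) probability measure. The following Python sketch (illustrative; `mult` and `tau` are our names) works with coefficient families in $C_{\mathrm{c}} (G, C (X))$ and confirms numerically that ${\tau}_{\mu} (a) = \frac{1}{n} \sum_{x} E (a) (x)$ satisfies the trace identity.

```python
from random import random, seed

n = 4  # X = G = Z/n acting by translation; mu the uniform invariant measure

def mult(a, b):
    # Twisted convolution in C_c(G, C(X)): (ab)_g(x) = sum_h a_h(x) * b_{h^{-1} g}(x - h).
    c = {g: [0.0] * n for g in range(n)}
    for g in range(n):
        for h in range(n):
            for x in range(n):
                c[g][x] += a[h][x] * b[(g - h) % n][(x - h) % n]
    return c

def tau(a):
    # tau_mu(a) = integral of E(a) = a_1 against the uniform measure on X.
    return sum(a[0]) / n

seed(0)
a = {g: [random() for _ in range(n)] for g in range(n)}
b = {g: [random() for _ in range(n)] for g in range(n)}

# The trace identity tau(ab) = tau(ba), which characterizes normalized traces,
# holds because the uniform measure is invariant under translation.
assert abs(tau(mult(a, b)) - tau(mult(b, a))) < 1e-12
```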
The K-theory of direct limits and crossed products by ${{\mathbb{Z}}}$ {#Sec_PV} ====================================================================== We give two general results on K-theory that are needed for the computation of $K_* \big( {{\mathcal{O}}_{d}^{p}} \big).$ The first is the K-theory of direct limits of Banach algebras with contractive homomorphisms, and the second is the K-theory of a reduced $L^p$ operator crossed product by ${{\mathbb{Z}}}.$ For the main definitions and theorems related to the K-theory of Banach algebras (in fact, “local Banach algebras”), we refer to Sections 5, 8, and 9 of [@Bl3]. We start with direct limits. Without some condition on norms, we do not expect direct limits to exist in general. For example, the limit which describes $\| {\varphi}_i (a) \|$ in Proposition \[P-DLim\] might be infinite, or fail to exist because the net has more than one limit point. \[P-DLim\] Let $I$ be a directed set, and let $\big( (A_i)_{i \in I}, \, ({\varphi}_{j, i})_{i \leq j} \big)$ be a direct system of Banach algebras with contractive homomorphisms. Then the direct limit $A = {\varinjlim}A_i$ exists in the category of Banach algebras and contractive homomorphisms. If the maps to the direct limit are called ${\varphi}_i \colon A_i \to A,$ then $\bigcup_{i \in I} {\varphi}_i (A_i)$ is a dense subalgebra of $A$ and for all $i \in I$ and all $a \in A_i,$ we have $\| {\varphi}_i (a) \| = \lim_{j \geq i} \| {\varphi}_{j, i} (a) \|.$ The proof is essentially the same as the proof of Proposition 2.5.1 of [@Ph0], where the statement is proved for the category of [C\*-algebra]{}[s]{} equipped with actions of a fixed group $G.$ Also see Section 3.3 of [@Bl3], where a weaker boundedness condition on the maps is used, but where the universal property of the direct limit is not addressed. If the $A_i$ in Proposition \[P-DLim\] are $L^p$ operator algebras, we do not know whether it follows that ${\varinjlim}A_i$ is an $L^p$ operator algebra. 
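As a toy illustration of the norm formula in Proposition \[P-DLim\], take $A_i = {{\mathbb{C}}}$ for $i \in {{\mathbb{Z}}_{> 0}},$ with the submultiplicative norms $\| z \|_i = (1 + 2^{-i}) | z |$ and with the identity maps ${\varphi}_{j, i} (z) = z$ for $i \leq j,$ which are contractive since $\| z \|_j \leq \| z \|_i$ for $j \geq i.$ Then for every $i \in {{\mathbb{Z}}_{> 0}}$ and $z \in {{\mathbb{C}}}$ we get $$\| {\varphi}_i (z) \| = \lim_{j \geq i} \| {\varphi}_{j, i} (z) \| = \lim_{j \geq i} (1 + 2^{-j}) | z | = | z |,$$ so the norm on the direct limit can be strictly smaller than every one of the norms $\| \cdot \|_i.$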
\[C-ClUIsDLim\] Let $A$ be a Banach algebra, let $I$ be a directed set, and let $(A_i)_{i \in I}$ be a family of closed subalgebras of $A$ such that $A_i {\subset}A_j$ for $i \leq j,$ and such that ${{\overline}{\bigcup_{i \in I} A_i}} = A.$ For $i \leq j,$ let ${\varphi}_{j, i} \colon A_i \to A_j$ be the inclusion. Then the canonical map from ${\varinjlim}A_i$ to $A$ is an isometric isomorphism. The formula for $\| {\varphi}_i (a) \|$ in Proposition \[P-DLim\] implies that the map ${\varinjlim}A_i \to A$ is isometric. Therefore density of $\bigcup_{i \in I} A_i$ in $A$ implies surjectivity. For a Banach algebra $A$ and a locally compact Hausdorff space $X,$ we let $C_0 (X, A)$ be the Banach algebra of all [continuous function]{}[s]{} $b \colon X \to A$ such that $x \mapsto \| b (x) \|$ vanishes at infinity on $X,$ with pointwise multiplication and the supremum norm. If ${\varphi}\colon A \to B$ is a [continuous]{} [homomorphism]{} of Banach algebras, let $C_0 (X, {\varphi}) \colon C_0 (X, A) \to C_0 (X, B)$ be the [continuous]{} [homomorphism]{} determined by $C_0 (X, {\varphi}) (b) (x) = {\varphi}(b (x))$ for $b \in C_0 (X, A)$ and $x \in X.$ The following lemma is well known, but we do not know a reference. The case we care about is $X = {{\mathbb{R}}},$ but the proof in this case is no simpler. \[L\_3918\_LimC0\] Let $X$ be a locally compact Hausdorff space. Then $A \mapsto C_0 (X, A)$ is a functor from the category of Banach algebras with contractive homomorphisms to itself which preserves direct limits. The only nonobvious part is that $C_0 (X, {-})$ preserves direct limits. So let $I$ be a directed set, and let $\big( (A_i)_{i \in I}, \, ({\varphi}_{j, i})_{i \leq j} \big)$ be a direct system of Banach algebras with contractive homomorphisms. Set $A = {\varinjlim}A_i,$ and for $i \in I$ let ${\varphi}_i \colon A_i \to A$ be the map to the direct limit. 
For $i \leq j,$ set ${\psi}_{j, i} = C_0 (X, {\varphi}_{j, i}) \colon C_0 (X, A_i) \to C_0 (X, A_j),$ and for $i \in I$ set ${\psi}_i = C_0 (X, {\varphi}_i) \colon C_0 (X, A_i) \to C_0 (X, A).$ We must show that the algebra $C_0 (X, A)$ and the maps ${\psi}_i$ satisfy the universal property of the direct limit. We first claim that for every $b \in C_0 (X, A)$ and every ${\varepsilon}> 0$ there are $i \in I$ and $c \in C_0 (X, A_i)$ such that $\| {\psi}_i (c) - b \| < {\varepsilon}.$ Choose a compact subset $K {\subset}X$ such that $\| b (x) \| < \frac{{\varepsilon}}{5}$ for all $x \in X {\setminus}K.$ Choose a finite cover $(U_1, U_2, \ldots, U_n)$ of $K$ consisting of nonempty open sets $U_m {\subset}X$ such that $\| b (x) - b (y) \| < \frac{{\varepsilon}}{5}$ for $m = 1, 2, \ldots, n$ and $x, y \in U_m,$ and choose [continuous function]{}[s]{} $f_1, f_2, \ldots, f_n \colon X \to [0, 1]$ with compact supports ${{\mathrm{supp}}}(f_m) {\subset}U_m$ such that $\sum_{m = 1}^n f_m (x) = 1$ for all $x \in K$ and such that $\sum_{m = 1}^n f_m (x) \in [0, 1]$ for all $x \in X.$ For $m = 1, 2, \ldots, n,$ choose any $x_m \in U_m,$ and then choose $i (m) \in I$ and $a_m \in A_{i (m)}$ such that $\| {\varphi}_{i (m)} (a_m) - b (x_m) \| < \frac{{\varepsilon}}{5}.$ Define $c_m \in C_0 (X, A_{i (m)})$ by $c_m (x) = f_m (x) a_m$ for $x \in X.$ Choose $i \in I$ such that $i \geq i (m)$ for $m = 1, 2, \ldots, n.$ Set $c = \sum_{m = 1}^{n} {\psi}_{i, i (m)} (c_m).$ We estimate $\| {\psi}_i (c) - b \|.$ If $x \not\in \bigcup_{m = 1}^n {{\mathrm{supp}}}(f_m),$ then $\| b (x) \| < \frac{{\varepsilon}}{5}$ and $c (x) = 0,$ so $\| {\varphi}_i (c (x)) - b (x) \| < \frac{{\varepsilon}}{5}.$ Otherwise, let $J$ be the set of $m \in \{ 1, 2, \ldots, n \}$ such that $x \in U_m.$ If $x \not\in K,$ then for all $m \in J$ we have $$\| b (x_m) \| \leq \| b (x) \| + \| b (x) - b (x_m) \| < \frac{{\varepsilon}}{5} + \frac{{\varepsilon}}{5} = \frac{2 {\varepsilon}}{5}.$$ So $$\begin{aligned} \|
{\varphi}_i (c (x)) \| & \leq \sum_{m \in J} f_m (x) \| {\varphi}_{i (m)} (a_m) \| \\ & \leq \sum_{m \in J} f_m (x) \left(\| b (x_m) \| + \frac{{\varepsilon}}{5} \right) \leq \sum_{m \in J} f_m (x) \left( \frac{2 {\varepsilon}}{5} + \frac{{\varepsilon}}{5} \right) \leq \frac{3 {\varepsilon}}{5},\end{aligned}$$ using ${\varphi}_i \circ {\varphi}_{i, i (m)} = {\varphi}_{i (m)}$ at the first step. Since $\| b (x) \| < \frac{{\varepsilon}}{5},$ we get $\| {\varphi}_i (c (x)) - b (x) \| < \frac{4 {\varepsilon}}{5}.$ Finally, for $x \in K,$ we have $\| {\varphi}_{i (m)} (a_m) - b (x) \| \leq \| {\varphi}_{i (m)} (a_m) - b (x_m) \| + \| b (x_m) - b (x) \| < \frac{2 {\varepsilon}}{5}$ for all $m \in J,$ so $$\| {\varphi}_i (c (x)) - b (x) \| \leq \sum_{m \in J} f_m (x) \| {\varphi}_{i (m)} (a_m) - b (x) \| \leq \sum_{m \in J} f_m (x) \cdot \frac{2 {\varepsilon}}{5} = \frac{2 {\varepsilon}}{5}.$$ Therefore $\| {\psi}_i (c) - b \| \leq \frac{4 {\varepsilon}}{5} < {\varepsilon}.$ The claim is proved. Next, we claim that if $i \in I$ and $b \in C_0 (X, A_i),$ then $\lim_{j \geq i} \| {\psi}_{j, i} (b) \| = \| {\psi}_i (b) \|.$ To see this, let ${\varepsilon}> 0.$ Choose a compact subset $K {\subset}X$ such that $\| b (x) \| < \frac{{\varepsilon}}{3}$ for all $x \in X {\setminus}K.$ Choose a finite cover $(U_1, U_2, \ldots, U_n)$ of $K$ consisting of nonempty open sets $U_m {\subset}X$ such that $\| b (x) - b (y) \| < \frac{{\varepsilon}}{3}$ for $m = 1, 2, \ldots, n$ and $x, y \in U_m.$ For $m = 1, 2, \ldots, n,$ choose $x_m \in U_m$ and choose $i (m) \in I$ with $i (m) \geq i$ such that $\| {\varphi}_{i (m), \, i} (b (x_m)) \| < \| {\varphi}_i (b (x_m)) \| + \frac{{\varepsilon}}{3}.$ Choose $j \in I$ such that $j \geq i (m)$ for $m = 1, 2, \ldots, n.$ Now let $x \in X.$ If there is $m$ such that $x \in U_m,$ then $$\begin{aligned} \| {\varphi}_{j, i} (b (x)) \| & \leq \| {\varphi}_{j, i} ( b (x) - b (x_m) ) \| + \big\| {\varphi}_{j, i (m)} \big( {\varphi}_{i (m), \, i} (b (x_m)) \big) \big\| \\ & < \frac{{\varepsilon}}{3} + \| {\varphi}_i (b (x_m)) \| + \frac{{\varepsilon}}{3} \leq \| {\psi}_i (b) \| + \frac{2 {\varepsilon}}{3}.\end{aligned}$$ Otherwise, $x \not\in K,$ so $\| b (x) \| < \frac{{\varepsilon}}{3},$ whence $\| {\varphi}_{j, i} (b (x)) \| < \frac{{\varepsilon}}{3}.$ It follows that $\| {\psi}_{j, i} (b) \| \leq \| {\psi}_i (b) \| + \frac{2
{\varepsilon}}{3} < \| {\psi}_i (b) \| + {\varepsilon}.$ The claim is proved. We now prove the universal property. Let $D$ be a Banach algebra, and let $({\gamma}_i)_{i \in I}$ be a family of contractive homomorphisms ${\gamma}_i \colon C_0 (X, A_i) \to D$ such that ${\gamma}_j \circ {\psi}_{j, i} = {\gamma}_i$ whenever $i, j \in I$ satisfy $i \leq j.$ We need a contractive [homomorphism]{} ${\gamma}\colon C_0 (X, A) \to D$ such that ${\gamma}\circ {\psi}_i = {\gamma}_i$ for all $i \in I.$ The first claim implies that $\bigcup_{i \in I} {\psi}_i ( C_0 (X, A_i) )$ is dense in $C_0 (X, A),$ so ${\gamma}$ is unique if it exists. Let $b \in \bigcup_{i \in I} {\psi}_i ( C_0 (X, A_i) ).$ We claim that whenever $i_1, i_2 \in I,$ and $b_1 \in C_0 (X, A_{i_1})$ and $b_2 \in C_0 (X, A_{i_2})$ satisfy ${\psi}_{i_1} (b_1) = b$ and ${\psi}_{i_2} (b_2) = b,$ then ${\gamma}_{i_1} (b_1) = {\gamma}_{i_2} (b_2).$ To prove this, let ${\varepsilon}> 0.$ Choose $j \in I$ such that $j \geq i_1$ and $j \geq i_2.$ Then ${\psi}_j ( {\psi}_{j, i_1} (b_1) - {\psi}_{j, i_2} (b_2) ) = 0.$ Use the second claim to choose $k \geq j$ such that $\big\| {\psi}_{k, j} ( {\psi}_{j, i_1} (b_1) - {\psi}_{j, i_2} (b_2) ) \big\| < {\varepsilon}.$ Then $$\| {\gamma}_{i_1} (b_1) - {\gamma}_{i_2} (b_2) \| = \big\| {\gamma}_{k} \big( {\psi}_{k, i_1} (b_1) - {\psi}_{k, i_2} (b_2) \big) \big\| \leq \big\| {\psi}_{k, j} \big( {\psi}_{j, i_1} (b_1) - {\psi}_{j, i_2} (b_2) \big) \big\| < {\varepsilon}.$$ Since ${\varepsilon}> 0$ is arbitrary, the claim follows. There is therefore a well defined map ${\beta}\colon \bigcup_{i \in I} {\psi}_i ( C_0 (X, A_i) ) \to D$ such that ${\beta}\circ {\psi}_i = {\gamma}_i$ for all $i \in I.$ It is easy to check that ${\beta}$ is an algebra [homomorphism]{}. 
We next claim that for all $b \in \bigcup_{i \in I} {\psi}_i ( C_0 (X, A_i) ),$ we have $\| {\beta}(b) \| \leq \| b \|.$ This will complete the proof, since we can take ${\gamma}$ to be the extension of ${\beta}$ to $C_0 (X, A)$ by continuity. Let ${\varepsilon}> 0.$ Choose $i \in I$ and $c \in C_0 (X, A_i)$ such that ${\psi}_i (c) = b.$ Use the second claim to choose $j \in I$ such that $j \geq i$ and $\| {\psi}_{j, i} (c) \| < \| b \| + {\varepsilon}.$ Then $$\| {\beta}(b) \| = \| {\gamma}_j ({\psi}_{j, i} (c)) \| \leq \| {\psi}_{j, i} (c) \| < \| b \| + {\varepsilon}.$$ This completes the proof. \[P-KThDLim\] Let $I$ be a directed set, and let $\big( (A_i)_{i \in I}, \, ({\varphi}_{j, i})_{i \leq j} \big)$ be a direct system of Banach algebras with contractive homomorphisms. Then the induced map ${\varinjlim}K_* (A_i) \to K_* \big( {\varinjlim}A_i \big)$ is an isomorphism. It is shown in 5.2.4 of [@Bl3] that the result holds for the Murray-von Neumann semigroups of idempotents over the algebra in place of $K_0.$ It is easy to see that the result then also holds for the Grothendieck groups of these semigroups. The result for $K_0$ follows from that for the Murray-von Neumann semigroups by unitizing. (See the proof of Proposition 2.5.4 of [@Ph0]. One can also copy the proof of this proposition, substituting Lemma 2.5.8 of [@Ph0] for the use of continuous functional calculus.) To prove the result for $K_1,$ use the result for $K_0,$ the natural isomorphism $K_1 (B) \cong K_0 (C_0 ({{\mathbb{R}}}, B))$ for any Banach algebra $B$ (Theorem 8.2.2 of [@Bl3]), and Lemma \[L\_3918\_LimC0\].
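As a sample application of Theorem \[P-KThDLim\] (a standard computation, anticipating the next proposition): if $A_n$ is isometrically isomorphic to $M^p_{2^n}$ for $n \in {{\mathbb{Z}}_{\geq 0}},$ with unital connecting maps such as $a \mapsto {\operatorname{diag}} (a, a),$ then each connecting map is multiplication by $2$ on $K_0 (A_n) \cong {{\mathbb{Z}}},$ so $$K_0 \big( {\varinjlim}A_n \big) \cong {\varinjlim}\big( {{\mathbb{Z}}}\xrightarrow{2} {{\mathbb{Z}}}\xrightarrow{2} \cdots \big) \cong {{\mathbb{Z}}}\big[ \tfrac{1}{2} \big] {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}K_1 \big( {\varinjlim}A_n \big) = 0.$$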
\[P-KStdUHF\] Let $P$ be the set of prime numbers, enumerated as $\{ p_1, p_2, \ldots \}.$ Let $N \colon P \to {{\mathbb{Z}}_{\geq 0}}\cup \{ {\infty}\}$ be any function such that $\sum_{t \in P} N (t) = {\infty},$ and for $n \in {{\mathbb{Z}}_{\geq 0}}$ define $$r (n) = p_1^{\min (N (p_1), n)} p_2^{\min (N (p_2), n)} \cdots p_n^{\min (N (p_n), n)}.$$ Let $p \in [1, {\infty}).$ Let $D$ be a spatial $L^p$ UHF algebra of type $N,$ as in Example \[E-pUHF\]. Then $K_1 (D) = 0$ and $K_0 (D) \cong \bigcup_{n = 1}^{{\infty}} r (n)^{-1} {{\mathbb{Z}}}{\subset}{{\mathbb{Q}}},$ via an isomorphism which sends the class $[1_D]$ of the idempotent $1_D$ to $1 \in {{\mathbb{Q}}}.$ Theorem 3.10 and Definition 3.5 of [@PhLp2a] imply that there are subalgebras $D_0 {\subset}D_1 {\subset}\cdots {\subset}D$ such that ${{\overline}{\bigcup_{n = 0}^{{\infty}} D_n}} = D$ and such that $D_n$ is isometrically isomorphic to $M^p_{r (n)}$ for all $n \in {{\mathbb{Z}}_{\geq 0}}.$ Since $M^p_{r (n)}$ is $M_{r (n)}$ with an equivalent norm, its K-theory is the same as for $M_{r (n)}.$ Thus, we have isomorphisms ${\eta}_n \colon K_0 (D_n) \to r (n)^{-1} {{\mathbb{Z}}},$ with ${\eta}_n ([1]) = 1,$ and we have $K_1 (D_n) = 0,$ just as in the [C\*-algebra]{} case. Apply Corollary \[C-ClUIsDLim\] and Theorem \[P-KThDLim\] to get the result. \[L-KthyTPM\] Let $p \in [1, {\infty}),$ let ${(X, {\mathcal{B}}, \mu)}$ be a [measure space]{}, and let $A {\subset}{L (L^p (X, \mu))}$ be a norm closed subalgebra. Let $S$ be a set, let ${{\overline}{M}}^p_S$ be as in Example \[E-MatpS\], and let $s_0 \in S.$ Let ${{\overline}{M}}^p_S \otimes_p A$ be as in Example \[E-LpTP\]. Let ${\varepsilon}\colon A \to {{\overline}{M}}^p_S \otimes_p A$ be the [homomorphism]{} defined by ${\varepsilon}(a) = e_{s_0, s_0} \otimes a$ for $a \in A.$ Then ${\varepsilon}$ is an isomorphism on K-theory. Let ${\mathcal{F}}$ be the collection of finite subsets of $S$ which contain $s_0,$ ordered by inclusion.
For $E \in {\mathcal{F}}$ let $M_E^p {\subset}{{\overline}{M}}_S^p$ be as in Example \[E-MatpS\]. If $F \in {\mathcal{F}}$ contains $E,$ then Lemma \[L-MatpSMap\] gives isometric maps ${\varepsilon}_{F, E} \colon M_E^p \otimes_p A \to M^p_F \otimes_p A$ and ${\varepsilon}_E \colon M_E^p \otimes_p A \to {{\overline}{M}}^p_S \otimes_p A,$ coming from the inclusions. Clearly ${\varepsilon}_F \circ {\varepsilon}_{F, E} = {\varepsilon}_E.$ Also, by the construction of ${{\overline}{M}}^p_S$ and density of elementary tensors in ${{\overline}{M}}^p_S \otimes_p A,$ the union of the ranges of the maps ${\varepsilon}_F$ is dense. Therefore we can make the identification ${{\overline}{M}}^p_S \otimes_p A = {\varinjlim}_{F \in {\mathcal{F}} } M^p_F \otimes_p A,$ using the maps ${\varepsilon}_F$ for $F \in {\mathcal{F}}.$ For $E, F \in {\mathcal{F}}$ with $E {\subset}F,$ we have $M^p_E \otimes_p A = M^p_E \otimes_{\mathrm{alg}} A$ and $M^p_F \otimes_p A = M^p_F \otimes_{\mathrm{alg}} A.$ The map ${\varepsilon}_{F, E}$ is therefore clearly an isomorphism on K-theory. It now follows from Theorem \[P-KThDLim\] that $({\varepsilon}_F)_*$ is an isomorphism for all $F \in {\mathcal{F}}.$ Take $F = \{ s_0 \}$ to get the result. We now give an analog of the Pimsner-Voiculescu exact sequence [@PV] for reduced $L^p$ operator crossed products by ${{\mathbb{Z}}}.$ Rather than repeating the proof of [@PV] (or any of a number of later proofs), we show that the smooth crossed product, for which the result is already known [@PS], has the same K-theory as the reduced $L^p$ operator crossed product. Therefore we start with smooth crossed products. The following definition is a special case of a much more general definition. (See Definition 2.1.0 of [@Sw2].) 
\[D-SmoothCrPrd\] Let $A$ be a Banach algebra, and let ${\alpha}$ be an isometric automorphism of $A.$ We use the same notation for the obvious isometric action ${\alpha}\colon {{\mathbb{Z}}}\to {{\mathrm{Aut}}}(A).$ We define the [*smooth crossed product*]{} ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha})$ to be the vector space of all functions $a \in L^1 ({{\mathbb{Z}}}, A, {\alpha})$ such that for every $r \in {{\mathbb{Z}}_{\geq 0}}$ the number $$\| a \|_{r, 1} = \sum_{n \in {{\mathbb{Z}}}} (1 + | n | )^r \| a_n \|$$ is finite. We equip it with the topology given by the seminorms $\| \cdot \|_{r, 1},$ and with the (convolution) multiplication inherited from $L^1 ({{\mathbb{Z}}}, A, {\alpha}).$ Let ${\kappa}_{{\infty}} \colon {\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}) \to L^1 ({{\mathbb{Z}}}, A, {\alpha})$ be the inclusion. \[P-SIsFrechet\] The space ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha})$ of [Definition \[D-SmoothCrPrd\]]{} is a locally $m$-convex Fréchet algebra, and ${\kappa}_{{\infty}}$ is [continuous]{}. Since ${{\mathbb{Z}}}$ is a discrete group, this algebra is the same as the one called $L_1^{{\sigma}} ({{\mathbb{Z}}}, A)$ in Equation (2.1.3) in [@Sw2], with ${\sigma}(n) = 1 + | n |$ for $n \in {{\mathbb{Z}}}.$ The conclusion now follows from the second part of Theorem 3.1.7 of [@Sw2]. (The condition there, that ${\sigma}_{-}$ bound ${{\mathrm{Ad}}},$ is trivial. See the discussion after Definition 1.3.9 of [@Sw2].) \[L-SchwInj\] Let $p \in [1, {\infty}),$ let $A$ be an $L^p$ operator algebra, and let ${\alpha}\in {{\mathrm{Aut}}}(A)$ be isometric. With notation as in [Lemma \[L-CompOfCrPrd\]]{} and [Definition \[D-SmoothCrPrd\]]{}, the map $${\kappa}_{\mathrm{r}} \circ {\kappa}\circ {\kappa}_{{\infty}} \colon {\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}) \to F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$$ is injective and has dense range. Injectivity is immediate from injectivity of ${\kappa}_{{\infty}}$ and Corollary \[C-L1Inj\].
Density of the range follows from Theorem \[T-UPropRed\](\[T-UPropRed-3\]). \[N-ScwhSubalg\] Let $p \in [1, {\infty}),$ let $A$ be an $L^p$ operator algebra, and let ${\alpha}\in {{\mathrm{Aut}}}(A)$ be isometric. Using [Lemma \[L-SchwInj\]]{}, from now on we identify ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha})$ with the corresponding subalgebra of $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}).$ The following definition is again a very special case of something much more general. See the Appendix in [@Sw2], especially Theorem A.2. \[D-SmoothElts\] Let $B$ be a Banach algebra, and let ${\beta}\colon S^1 \to {{\mathrm{Aut}}}(B)$ be an action of the circle $S^1$ on $B.$ We say that $b \in B$ is [*smooth*]{} (or ${\beta}$-smooth if ${\beta}$ must be specified) if the function ${\lambda}\mapsto {\beta}_{\exp (i {\lambda})} (b)$ is a $C^{\infty}$ function from ${{\mathbb{R}}}$ to $B.$ \[T-SmoothIffSchw\] Let $p \in [1, {\infty}),$ let $A$ be a separable $L^p$ operator algebra, and let ${\alpha}\in {{\mathrm{Aut}}}(A)$ be isometric. Let ${\widehat{{\alpha}}} \colon S^1 \to {{\mathrm{Aut}}}\big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}) \big)$ be the dual action of Theorem \[T-DualpRed\]. Then $b \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ is ${\widehat{{\alpha}}}$-smooth [if and only if]{}  $b \in {\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}).$ We first prove that smooth elements are in ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}).$ For $n \in {{\mathbb{Z}}},$ let $E_n \colon F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}) \to A$ be as in Proposition \[P:CondExpt\]. 
It follows from continuity of $E_n$ and the definition of ${\widehat{{\alpha}}}$ (see Definition \[D-DualCc\] for the formula) that $$\label{Eq:EnOnDual} E_n \big( {\widehat{{\alpha}}}_{\exp (i {\lambda})} (b) \big) = \exp (- i n {\lambda}) E_n (b)$$ for all $b \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ and $n \in {{\mathbb{Z}}}.$ We claim that if $b \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ and the function ${\lambda}\mapsto {\widehat{{\alpha}}}_{\exp (i {\lambda})} (b)$ is differentiable, with derivative $f \colon {{\mathbb{R}}}\to F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}),$ then $$E_n (f ({\lambda})) = - i n \exp (- i n {\lambda}) E_n (b)$$ for all $n \in {{\mathbb{Z}}}.$ To prove the claim, we first observe that $f$ is determined by the condition $$\lim_{h \to 0} \frac{\big\| {\widehat{{\alpha}}}_{\exp (i ({\lambda}+ h))} (b) - {\widehat{{\alpha}}}_{\exp (i {\lambda})} (b) - h f ({\lambda}) \big\|}{| h |} = 0.$$ Apply $E_n,$ and use (\[Eq:EnOnDual\]) and boundedness of $E_n$ to get $$\lim_{h \to 0} \frac{\big\| \big[ \exp (- i n ({\lambda}+ h)) - \exp (- i n {\lambda}) \big] E_n (b) - h E_n (f ({\lambda})) \big\|}{| h |} = 0.$$ The claim follows. The claim implies that if the function ${\lambda}\mapsto {\widehat{{\alpha}}}_{\exp (i {\lambda})} (b)$ is differentiable, with derivative $f \colon {{\mathbb{R}}}\to F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}),$ then there is $c \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ (namely $f (0)$) such that $E_n (c) = - i n E_n (b)$ for all $n \in {{\mathbb{Z}}}.$ Moreover, from (\[Eq:EnOnDual\]) and Proposition \[P:Faithful\](\[P:Faithful:G\]) we get $f ({\lambda}) = {\widehat{{\alpha}}}_{\exp (i {\lambda})} (c)$ for all ${\lambda}\in {{\mathbb{R}}}.$ Now suppose $b \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ is smooth. 
Then, by induction, for every $r \in {{\mathbb{Z}}_{\geq 0}},$ there is $b_r \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ such that $E_n (b_r) = (- i n)^r E_n (b)$ for all $n \in {{\mathbb{Z}}}.$ Since $\sup_{n \in {{\mathbb{Z}}}} \| E_n (b_r) \| \leq \| b_r \|,$ it follows that for every $r \in {{\mathbb{Z}}_{\geq 0}}$ we have $$\sup_{n \in {{\mathbb{Z}}}} | n |^r \| E_n (b) \| < {\infty}.$$ So for every $r \in {{\mathbb{Z}}_{\geq 0}}$ we have, using $(1 + | n | )^{r + 2} \leq 2^{r + 2} \big( 1 + | n |^{r + 2} \big),$ $$\begin{aligned} \sum_{n \in {{\mathbb{Z}}}} (1 + | n | )^r \| E_n (b) \| & \leq \sum_{n \in {{\mathbb{Z}}}} \frac{2^{r + 2} \big( 1 + | n |^{r + 2} \big) \| E_n (b) \|}{ (1 + | n | )^2} \\ & \leq 2^{r + 2} \left( \sup_{n \in {{\mathbb{Z}}}} \| E_n (b) \| + \sup_{n \in {{\mathbb{Z}}}} | n |^{r + 2} \| E_n (b) \| \right) \sum_{n \in {{\mathbb{Z}}}} \frac{1}{(1 + | n | )^2} \\ & < {\infty}.\end{aligned}$$ It follows that $b \in {\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}).$ Now let $b \in {\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}).$ Since $\sum_{n \in {{\mathbb{Z}}}} \| b_n \| < {\infty},$ we can write $b = \sum_{n \in {{\mathbb{Z}}}} b_n u_n,$ with absolute convergence in $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}).$ For ${\lambda}\in {{\mathbb{R}}},$ set $g ({\lambda}) = {\widehat{{\alpha}}}_{\exp (i {\lambda})} (b).$ Then $$g ({\lambda}) = \sum_{n \in {{\mathbb{Z}}}} \exp (- i n {\lambda}) b_n u_n.$$ It is easy to prove by induction on $r$ that the function $${\lambda}\mapsto \sum_{n \in {{\mathbb{Z}}}} (- i n)^r \exp (- i n {\lambda}) b_n u_n$$ is well defined (the series converges absolutely in $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$), takes values in ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}),$ and is equal to $g^{(r)} ({\lambda}).$ Thus $b$ is smooth.
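For a concrete instance of Theorem \[T-SmoothIffSchw\], take $A = {{\mathbb{C}}}$ and ${\alpha}= {\operatorname{id}}.$ The element $b = \sum_{n \in {{\mathbb{Z}}}} 2^{- | n |} u_n$ satisfies $$\| b \|_{r, 1} = \sum_{n \in {{\mathbb{Z}}}} (1 + | n |)^r \, 2^{- | n |} < {\infty}$$ for every $r \in {{\mathbb{Z}}_{\geq 0}},$ so $b \in {\mathcal{S}} ({{\mathbb{Z}}}, {{\mathbb{C}}}, {\operatorname{id}})$ and $b$ is ${\widehat{{\alpha}}}$-smooth. On the other hand, the element $c = \sum_{n \in {{\mathbb{Z}}}} (1 + | n |)^{- 2} u_n$ lies in $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, {{\mathbb{C}}}, {\operatorname{id}})$ (the series converges absolutely), but $\| c \|_{1, 1} = \sum_{n \in {{\mathbb{Z}}}} (1 + | n |)^{- 1} = {\infty},$ so $c$ is not smooth.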
Recall (see the second paragraph of Definition 1.1 of [@Sw0]) that if $B$ is a unital Banach algebra and $C$ is a dense subalgebra, then $C$ is spectrally invariant in $B$ if whenever $c \in C$ has an inverse $b \in B,$ then $b \in C.$ In the nonunital case, we ask that the unitization of $C$ be spectrally invariant in the unitization of $B.$ (It is shown in Lemma 1.2 of [@Sw0] that, under good conditions, $C$ is spectrally invariant in $B$ [if and only if]{}  $C$ is closed under holomorphic functional calculus in $B.$) \[C-SpecInv\] Let $p \in [1, {\infty}),$ let $A$ be an $L^p$ operator algebra, and let ${\alpha}\in {{\mathrm{Aut}}}(A)$ be isometric. Then ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha})$ is spectrally invariant in $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}).$ It is easy to check that if $b \in F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ is invertible and smooth, then $b^{-1}$ is smooth. Apply Theorem \[T-SmoothIffSchw\]. In fact, Theorem 2.2 of [@Sw1] shows that ${\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha})$ is strongly spectrally invariant in $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha})$ in the sense of Definition 1.2 of [@Sw1]. For a locally $m$-convex Fréchet algebra $A,$ we let $RK_* (A)$ denote the representable K-theory of $A,$ as defined in [@Ph1]. \[C-KThyIso\] Let $p \in [1, {\infty}),$ let $A$ be an $L^p$ operator algebra, and let ${\alpha}\in {{\mathrm{Aut}}}(A)$ be isometric. Let ${\kappa}_{{\infty}}$ be as in Definition \[D-SmoothCrPrd\]. Then $$({\kappa}_{{\infty}})_* \colon RK_* ( {\mathcal{S}} ({{\mathbb{Z}}}, A, {\alpha}) ) \to RK_* ( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, {\alpha}) ) $$ is an isomorphism. This is true for any [continuous]{}  inclusion of a spectrally invariant locally $m$-convex Fréchet algebra, by Lemma 1.1.9(1) of [@PS]. \[T-PV\] Let $p \in [1, {\infty}),$ let $A$ be an $L^p$ operator algebra, and let ${\alpha}\in {{\mathrm{Aut}}}(A)$ be isometric.
Then we have the following natural six term exact sequence in $K$-theory, in which ${\varepsilon}\colon A \to F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, \alpha)$ is the inclusion of Remark \[R-Ident\]: $$\label{Eq:PVLp} \begin{CD} K_{0} (A) @>{{\operatorname{id}} - (\alpha^{-1})_{*}}>> K_{0} (A) @>{{\varepsilon}_{*}}>> K_{0} \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, \alpha) \big) \\ @A{\partial}AA @. @VV{\partial}V \\ K_{1} \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, A, \alpha) \big) @<{{\varepsilon}_{*}}<< K_{1} (A) @<{{\operatorname{id}} - (\alpha^{-1})_{*}}<< K_{1}(A). \\ \end{CD}$$ The action satisfies the hypotheses of Theorem 2.6 of [@PS]. Therefore, with ${\varepsilon}_{{\infty}} \colon A \to {\mathcal{S}} ({{\mathbb{Z}}}, A, \alpha)$ being the inclusion analogous to that of Remark \[R-Ident\], we get the following natural six term exact sequence: $$\begin{CD} RK_{0} (A) @>{{\operatorname{id}} - (\alpha^{-1})_{*}}>> RK_{0} (A) @>{({\varepsilon}_{{\infty}})_{*}}>> RK_{0} \big( {\mathcal{S}} ({{\mathbb{Z}}}, A, \alpha) \big) \\ @A{\partial}AA @. @VV{\partial}V \\ RK_{1} \big( {\mathcal{S}} ({{\mathbb{Z}}}, A, \alpha) \big) @<{({\varepsilon}_{{\infty}})_{*}}<< RK_{1} (A) @<{{\operatorname{id}} - (\alpha^{-1})_{*}}<< RK_{1}(A). \\ \end{CD}$$ Now apply Corollary \[C-KThyIso\] and use ${\varepsilon}= {\kappa}_{{\infty}} \circ {\varepsilon}_{{\infty}},$ getting (\[Eq:PVLp\]) except with $RK_*$ in place of $K_*.$ Since the algebras in (\[Eq:PVLp\]) are all Banach algebras, Corollary 7.8 of [@Ph1] allows us to replace $RK_*$ with $K_*.$ We expect that there is also an analog of the Connes isomorphism theorem for reduced $L^p$ operator crossed products by isometric actions of ${{\mathbb{R}}},$ but we have not investigated this question. The analog for smooth crossed products is Theorem 1.2.7 of [@PS].
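As a consistency check on (\[Eq:PVLp\]), take $A = {{\mathbb{C}}}$ and ${\alpha}= {\operatorname{id}},$ so that $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, {{\mathbb{C}}}, {\operatorname{id}})$ is the reduced $L^p$ operator crossed product of ${{\mathbb{C}}}$ by the trivial action of ${{\mathbb{Z}}}.$ Then ${\operatorname{id}} - ({\alpha}^{-1})_* = 0$ on $K_0 ({{\mathbb{C}}}) \cong {{\mathbb{Z}}}$ and on $K_1 ({{\mathbb{C}}}) = 0,$ and exactness in (\[Eq:PVLp\]) gives $$K_0 \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, {{\mathbb{C}}}, {\operatorname{id}}) \big) \cong {{\mathbb{Z}}}{\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}K_1 \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, {{\mathbb{C}}}, {\operatorname{id}}) \big) \cong {{\mathbb{Z}}},$$ the same answer as for $C (S^1) = C^* ({{\mathbb{Z}}})$ in the [C\*-algebra]{} case.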
Realizing ${{\mathcal{O}}_{d}^{p}}$ as a crossed product {#Sec_OpdCP} ======================================================== In this section, we show that ${{\mathcal{O}}_{d}^{p}}$ is stably isomorphic (in a suitable sense) to a reduced $L^p$ operator crossed product by an isometric action of ${{\mathbb{Z}}},$ in a manner analogous to the C\* case (Section 2.1 of [@Cu1]). We use this isomorphism to compute $K_* \big( {{\mathcal{O}}_{d}^{p}} \big).$ The methods of the original computation, in [@Cu2], do not seem to work. We have only partial information about the extent to which the description of the K-theory of purely infinite simple [C\*-algebra]{}[s]{}, as in Section 1 of [@Cu2], carries over to purely infinite simple Banach algebras, as in Definition 5.1 of [@PhLp2a]. See Corollary 5.15 of [@PhLp2a] and Question \[Q\_3216\_K1Inv0\]. This, however, is not the main difficulty, since this description seems not to be essential for the rest of the argument of [@Cu2]. Rather, the problem occurs in the proof of Proposition 2.2 of [@Cu2]. The map $u \mapsto {\lambda}_u$ in Proposition 2.1 of [@Cu2], from unitaries in ${\mathcal{O}}_d$ to endomorphisms of ${\mathcal{O}}_d,$ seems to have an analog in the $L^p$ situation only when $u$ is an isometric bijection ($\| u \| \leq 1$ and $\| u^{-1} \| \leq 1$). If $\| u \| > 1,$ we have potential trouble with the norm of ${\lambda}_u (t_j^n)$ as $n \to {\infty},$ and if $\| u^{-1} \| > 1,$ we have potential trouble with the norm of ${\lambda}_u (s_j^n)$ as $n \to {\infty}.$ However, for $p \neq 2$ the isometries in $M_r ({{\mathbb{C}}})$ are necessarily spatial. (This is essentially Lamperti’s Theorem [@Lp]; see Theorem 6.9 and Lemma 6.15 of [@PhLp1].) The isometry group is therefore not connected. So we do not get the homotopy required in the proof of Proposition 2.2 of [@Cu2]. We give careful details in this section because it is often necessary to prove explicitly that homomorphisms are isometric.
In particular, we want it to be clear that we have provided such a proof everywhere that one is needed. The proof of stable isomorphism involves a number of objects, including an algebra with notation for some of its elements, an action of ${{\mathbb{Z}}}$ on this algebra, and a number of homomorphisms. We introduce notation for these as we proceed through the construction, stating the properties as lemmas. Once established, notation implicitly stays in effect for the rest of this section. We represent the spatial $L^p$ UHF algebra of type $d^{\infty}$ (Example \[E-pUHF\]) following Example 3.8 of [@PhLp2a], taking $X_n$ there to be a $d$-point space with normalized counting measure. It is convenient here to use somewhat different notation. \[N-dinfty\] Fix $d \in \{ 2, 3, \ldots \}$ and $p \in [1, {\infty}) {\setminus}\{ 2 \}.$ (Here, and in the rest of the notation for this section, we mostly suppress the dependence on both $d$ and $p.$) We describe a representation of the spatial $L^p$ UHF algebra of type $d^{\infty}$ (Example \[E-pUHF\]). Define $$Z = \{ 0, 1, \ldots, d - 1 \} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}X_0 = Z^{{{\mathbb{Z}}_{> 0}}} = \prod_{n = 1}^{{\infty}} Z.$$ Equip $Z$ with the discrete topology and normalized counting measure ${\lambda},$ as in Example \[E-Mpd\].
Write ${\lambda}^n$ for the product of $n$ copies of ${\lambda}$ on $Z^n.$ Equip $X_0$ with the product topology and the infinite product measure, which we call $\mu_0.$ For $r \in {{\mathbb{Z}}_{> 0}},$ we further let $X_r = \prod_{n = r + 1}^{{\infty}} Z,$ with the product topology and infinite product measure, now called $\mu_r,$ giving $X_0 = Z^r \times X_r.$ Following Remark \[R-LpTens\], we can then identify $$\label{Eq:LprTLpXr} L^p (X_0, \mu_0) = L^p ( Z^r, {\lambda}^r) \otimes_p L^p (X_r, \mu_r),$$ and we can further decompose the first factor in various ways, for example as $$L^p ( Z^r, {\lambda}^r) = L^p ( Z^{r - 1}, \, {\lambda}^{r - 1}) \otimes_p L^p ( Z, {\lambda}) = L^p ( Z, {\lambda})^{\otimes r}.$$ For $r \in {{\mathbb{Z}}_{\geq 0}},$ define $$1_{> r} = 1_{L^p (X_r, \mu_r)} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}1_{\leq r} = 1_{L^p ( Z^r, {\lambda}^r)},$$ so that $1_{> 0} = 1_{\leq r} \otimes 1_{> r}.$ (These were called $1_{ {\mathbb{N}}_{> r}}$ and $1_{ {\mathbb{N}}_{\leq r}}$ in Example 3.8 of [@PhLp2a].) Take $M^p_d = L (L^p ( Z, {\lambda}))$ as in Example \[E-Mpd\]. For $r \in {{\mathbb{Z}}_{> 0}}$ let $D_r^{(0)}$ be the set of all operators of the form $$\label{Eq:1TaT1} 1_{\leq r - 1} \otimes a \otimes 1_{> r} \in L \big( L^p ( Z^{r - 1}, \, {\lambda}^{r - 1}) \otimes_p L^p (Z, {\lambda}) \otimes_p L^p (X_r, \mu_r) \big) = L \big( L^p ( X_0, \mu_0) \big)$$ with $a \in M^p_d,$ and define ${\varphi}_r^{(0)} \colon M^p_d \to L \big( L^p ( X_0, \mu_0) \big)$ by ${\varphi}_r ^{(0)}(a) = 1_{\leq r - 1} \otimes a \otimes 1_{> r}$ as in (\[Eq:1TaT1\]). Let $D_r$ be the subalgebra of $L \big( L^p ( X_0, \mu_0) \big)$ generated by $\bigcup_{k = 1}^r D_k^{(0)},$ and define ${\varphi}_r \colon L \big( L^p (Z^r, {\lambda}^r) \big) \to L \big( L^p ( X_0, \mu_0) \big)$ by ${\varphi}_r (a) = a \otimes 1_{> r}$ according to the tensor factorization in (\[Eq:LprTLpXr\]). 
Now set $D = {{\overline}{ \bigcup_{r = 1}^{{\infty}} D_r }}.$ \[L-DINisRight\] The objects defined in Notation \[N-dinfty\] have the following properties: 1. \[L-DINisRight-1\] For every $r \in {{\mathbb{Z}}_{> 0}},$ the map ${\varphi}_r^{(0)}$ is an isometric bijection from $M^p_d$ to $D_r^{(0)}.$ 2. \[L-DINisRight-1b\] For every $r \in {{\mathbb{Z}}_{> 0}},$ the map ${\varphi}_r$ is an isometric bijection from $L \big( L^p (Z^r, {\lambda}^r) \big)$ to $D_r.$ 3. \[L-DINisRight-2\] We have $D_1 {\subset}D_2 {\subset}\cdots.$ 4. \[L-DINisRight-3\] The algebra $D$ is isometrically isomorphic to the spatial $L^p$ UHF algebra of type $d^{\infty},$ as in Example \[E-pUHF\]. 5. \[L-DINisRight-5\] The algebra $D$ is separable. 6. \[L-DINisRight-4\] For $r \in {{\mathbb{Z}}_{> 0}}$ and $c_1, c_2, \ldots, c_r \in M^p_d,$ we have $${\varphi}_1^{(0)} (c_1) {\varphi}_2^{(0)} (c_2) \cdots {\varphi}_r^{(0)} (c_r) = c_1 \otimes c_2 \otimes \cdots \otimes c_r \otimes 1_{> r}$$ with respect to the tensor factorization $$L^p (X_0, \mu_0) = L^p (Z, {\lambda})^{\otimes r} \otimes_p L^p (X_r, \mu_r).$$ For (\[L-DINisRight-1\]), for all $a \in L \big( L^p (Z, {\lambda}) \big)$ we have $\| 1_{\leq r - 1} \otimes a \otimes 1_{> r} \| = \| a \|$ by Remark \[R-LpTens\]. The proof of (\[L-DINisRight-1b\]) is the same. Part (\[L-DINisRight-2\]) is obvious. Given parts (\[L-DINisRight-1b\]) and (\[L-DINisRight-2\]), part (\[L-DINisRight-3\]) follows immediately from Definition 3.5 and Theorem 3.10 of [@PhLp2a]. Part (\[L-DINisRight-5\]) follows from the fact that the $D_n$ are [finite dimensional]{}  and their union is dense in $D.$ Part (\[L-DINisRight-4\]) is clear. \[N-ActionOnD\] We follow Notation \[N-dinfty\]. For a set $S,$ let $\nu_S$ be counting measure on $S.$ (Thus ${\lambda}= d^{-1} \nu_Z.$) Define $X = {{\mathbb{Z}}_{\geq 0}}\times X_0,$ and equip $X$ with the measure $\mu = \nu_{{{\mathbb{Z}}_{\geq 0}}} \times \mu_0$ and the product topology. 
Define a function $h \colon X \to X$ as follows. For $m \in {{\mathbb{Z}}_{\geq 0}}$ and $k_1, k_2, \ldots \in Z,$ write $m = d m_0 + k_0$ with $m_0 \in {{\mathbb{Z}}_{\geq 0}}$ and $k_0 \in Z,$ and then set $$h (m, k_1, k_2, \ldots) = (m_0, k_0, k_1, k_2, \ldots ).$$ Define (justification in [Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-2\]) below) $v \in {L (L^p (X, \mu))}$ by $$(v \xi) (x) = d^{1 / p} \xi (h^{-1} (x))$$ for $\xi \in L^p (X, \mu)$ and $x \in X.$ \[N-MatUnit\] Recall the algebras ${{\overline}{M}}_{S}^p$ of Example \[E-MatpS\], and the matrix unit notation from there and from Example \[E-Mpd\]. Thus, the standard matrix units for $M^p_d$ are $( e_{j, k} )_{j, k = 0}^{d - 1}.$ Define $B {\subset}{L (L^p (X, \mu))}$ by $B = {{\overline}{M}}_{{{\mathbb{Z}}_{\geq 0}}}^p \otimes_p D$ as in Example \[E-LpTP\]. For $a \in {{\overline}{M}}_{{{\mathbb{Z}}_{\geq 0}}}^p,$ we have the element $$a \otimes 1_{> 0} \in L \big( L^p ( {{\mathbb{Z}}_{\geq 0}}\times X_0, \, \nu_{{{\mathbb{Z}}_{\geq 0}}} \times \mu_0 ) \big) = {L (L^p (X, \mu))}$$ as in Remark \[R-LpTens\], but recalling that $1_{L^p (X_0, \mu_0)} = 1_{> 0}$ (Notation \[N-dinfty\]). Similarly, for $b \in D$ we get $1 \otimes b \in {L (L^p (X, \mu))}.$ For $c \in M^p_d,$ we write $1 \otimes {\varphi}_r^{(0)} (c)$ as $$1 \otimes {\varphi}_r^{(0)} (c) = 1 \otimes 1_{\leq r - 1} \otimes c \otimes 1_{> r}.$$ In the last expression, $c$ is really in position $r + 1$: the first tensor factor is $1_{l^p ({{\mathbb{Z}}_{\geq 0}})},$ and $1_{\leq r - 1}$ is the tensor product of $r - 1$ copies of $1_{M^p_d}.$ We use similar notation for products as in [Lemma \[L-DINisRight\]]{}(\[L-DINisRight-4\]). For $n \in {{\mathbb{Z}}_{\geq 0}},$ we further define $f_n \in B$ by $$f_n = \sum_{m = 0}^{d^n - 1} e_{m, m} \otimes 1_{> 0}.$$ \[L-ActionIsRight\] The objects defined in Notation \[N-ActionOnD\] and Notation \[N-MatUnit\] have the following properties: 1. \[L-ActionIsRight-0\] The algebra $B$ is separable. 2.
\[L-ActionIsRight-1\] The map $h$ is a [homeomorphism]{}, with inverse given by $$h^{-1} (m, k_1, k_2, k_3, \ldots) = (d m + k_1, \, k_2, \, k_3, \, \ldots )$$ for $m \in {{\mathbb{Z}}_{\geq 0}}$ and $k_1, k_2, \ldots \in Z.$ 3. \[L-ActionIsRight-2\] $v$ is a well defined bijective isometry in ${L (L^p (X, \mu))}$ which is spatial in the sense of Definition 6.4 of [@PhLp1]. 4. \[L-ActionIsRight-3\] $v B v^{-1} = B.$ 5. \[L-ActionIsRight-4\] For $$j, k \in {{\mathbb{Z}}_{\geq 0}}, \,\,\,\,\, r \in {{\mathbb{Z}}_{> 0}}, {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}l_1, m_1, l_2, m_2, \ldots, l_r, m_r \in Z,$$ we have $$\begin{aligned} \lefteqn{v^{-1} \big( e_{j, k} \otimes e_{l_{1}, m_{1}} \otimes e_{l_{2}, m_{2}} \otimes \cdots \otimes e_{l_{r}, m_{r}} \otimes 1_{> r} \big) v} \\ & \hspace*{2em} \mbox{} = e_{d j + l_1, \, d k + m_1} \otimes e_{l_{2}, m_{2}} \otimes e_{l_{3}, m_{3}} \otimes \cdots \otimes e_{l_{r}, m_{r}} \otimes 1_{> r - 1}.\end{aligned}$$ If we write $j = d j_0 + l_0$ and $k = d k_0 + m_0$ with $j_0, k_0 \in {{\mathbb{Z}}_{\geq 0}}$ and $l_0, m_0 \in Z,$ then (allowing now $r = 0$) $$\begin{aligned} \lefteqn{v \big( e_{j, k} \otimes e_{l_{1}, m_{1}} \otimes e_{l_{2}, m_{2}} \otimes \cdots \otimes e_{l_{r}, m_{r}} \otimes 1_{> r} \big) v^{-1}} \\ & \hspace*{2em} \mbox{} = e_{j_0, k_0} \otimes e_{l_{0}, m_{0}} \otimes e_{l_{1}, m_{1}} \otimes \cdots \otimes e_{l_{r}, m_{r}} \otimes 1_{> r + 1}.\end{aligned}$$ 6. \[L-ActionIsRight-5a\] For $n \in {{\mathbb{Z}}_{\geq 0}},$ the element $f_n$ is an idempotent. 7. \[L-ActionIsRight-5b\] For $n \in {{\mathbb{Z}}_{\geq 0}},$ we have $f_{n + 1} f_n = f_{n} f_{n + 1} = f_n.$ 8. \[L-ActionIsRight-5\] For $n \in {{\mathbb{Z}}_{\geq 0}},$ we have $v^{-1} f_n v = f_{n + 1}.$ 9. \[L-ActionIsRight-6\] $\| v b v^{-1} \| = \| b \|$ for all $b \in B.$ 10. 
\[L-ActionIsRight-7\] We have $f_0 B f_0 \subset f_1 B f_1 \subset f_2 B f_2 \subset \cdots$ and $B = {{\overline}{\bigcup_{n = 0}^{{\infty}} f_n B f_n}}.$ Part (\[L-ActionIsRight-0\]) follows from separability of $M^p_{{{\mathbb{Z}}_{\geq 0}}}$ (which is clear from its definition) and of $D$ ([Lemma \[L-DINisRight\]]{}(\[L-DINisRight-5\])). Part (\[L-ActionIsRight-1\]) is clear. We prove part (\[L-ActionIsRight-2\]). Given part (\[L-ActionIsRight-1\]), and since the formula for $v$ gives $v ({\chi}_E) = d^{1 / p} {\chi}_{h (E)},$ all we need to check is that if $E {\subset}X$ is measurable, then $\mu (h (E)) = d^{-1} \mu (E).$ This is true for all sets of the form $$E = \{ m \} \times \{ ( k_1, k_2, \ldots, k_r ) \} \times X_r$$ for $m \in {{\mathbb{Z}}_{\geq 0}},$ $r \in {{\mathbb{Z}}_{> 0}},$ and $k_1, k_2, \ldots, k_r \in Z.$ The characteristic functions of sets of this type are [continuous]{}. Moreover, for every $g \in C_{\mathrm{c}} (X),$ there is a compact set $K {\subset}X$ and a sequence of continuous functions $g_n,$ each a linear combination of functions ${\chi}_E$ with $E$ as above and $E {\subset}K,$ such that $g_n \to g$ uniformly. The statement now follows from the Riesz Representation Theorem. Part (\[L-ActionIsRight-4\]) is a computation. Part (\[L-ActionIsRight-3\]) follows from part (\[L-ActionIsRight-4\]), because the elements considered span a dense subspace of $B.$ Parts (\[L-ActionIsRight-5a\]) and (\[L-ActionIsRight-5b\]) are immediate. Part (\[L-ActionIsRight-5\]) also follows from part (\[L-ActionIsRight-4\]). To prove part (\[L-ActionIsRight-6\]), use the fact that $\| v \| = \| v^{-1} \| = 1,$ which is a consequence of part (\[L-ActionIsRight-2\]). We prove (\[L-ActionIsRight-7\]). For $n \in {{\mathbb{Z}}_{\geq 0}},$ the inclusion $f_n B f_n \subset f_{n + 1} B f_{n + 1}$ follows from (\[L-ActionIsRight-5b\]).
Now define $z_n \in M^p_{{{\mathbb{Z}}_{\geq 0}}}$ by $z_n = \sum_{m = 0}^{d^n - 1} e_{m, m}$ for $n \in {{\mathbb{Z}}_{\geq 0}},$ so that $f_n = z_n \otimes 1_{> 0}.$ Then it follows directly from the definition in Example \[E-MatpS\] that $${{\overline}{M}}^p_{{{\mathbb{Z}}_{\geq 0}}} = {{\overline}{\bigcup_{n = 0}^{{\infty}} z_n M^p_{{{\mathbb{Z}}_{\geq 0}}} z_n}}.$$ The desired conclusion now follows from the density in $B$ of the linear span of the elementary tensors. \[N-DICrPrd\] We follow Notation \[N-dinfty\], Notation \[N-ActionOnD\], and Notation \[N-MatUnit\]. We define ${\beta}\in {{\mathrm{Aut}}}(B)$ by ${\beta}(b) = v b v^{-1}$ for all $b \in B.$ Since ${\beta}$ is an isometric automorphism of $B$ (by [Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-3\]) and [Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-6\])), we may define $A = F^p ({{\mathbb{Z}}}, B, {\beta}).$ We further identify an element $b \in B$ with $b u_0 \in F^p ({{\mathbb{Z}}}, B, {\beta}),$ as in Remark \[R-Ident\]. (Since $B$ is not unital, we do not have $u_n \in F^p ({{\mathbb{Z}}}, B, {\beta}).$) In particular, the elements $f_n$ of Notation \[N-MatUnit\] are considered to be in $A.$ For $a \in A$ and $n \in {{\mathbb{Z}}},$ we write $u_n a$ for $L_n (a)$ and $a u_n$ for $R_n (a),$ as in Notation \[N-Multug\]. We let $v$ also stand for the isometric [representation]{} of ${{\mathbb{Z}}}$ on $L^p (X, \mu)$ given by $n \mapsto v^n,$ and we define (justification in [Lemma \[L-CrPrdRight\]]{} below) $\pi = v \ltimes {{\mathrm{id}}}_B \colon F^p ({{\mathbb{Z}}}, B, {\beta}) \to {L (L^p (X, \mu))}$ to be the representation associated, as in Theorem \[T-UPropFull\](\[T-UPropFull-1\]), with the covariant [representation]{} $(v, {{\mathrm{id}}}_B)$ of $({{\mathbb{Z}}}, B, {\beta}).$ \[L-CrPrdRight\] The objects defined in Notation \[N-DICrPrd\] have the following properties: 1. \[L-CrPrdRight-5\] The algebra $A$ is separable. 2. 
\[L-CrPrdRight-1\] The pair $(v, {{\mathrm{id}}}_B)$ is a contractive covariant [representation]{} of $({{\mathbb{Z}}}, B, {\beta}).$ 3. \[L-CrPrdRight-2\] The [homomorphism]{}  $\pi$ exists and is contractive. 4. \[L-CrPrdRight\_uv\] $u_{-1} b u_1 = v^{-1} b v$ and $u_1 b u_{-1} = v b v^{-1}$ for all $b \in B.$ 5. \[L-CrPrdRight-4\] We have $f_0 A f_0 \subset f_1 A f_1 \subset f_2 A f_2 \subset \cdots$ and $A = {{\overline}{\bigcup_{n = 0}^{{\infty}} f_n A f_n}}.$ Part (\[L-CrPrdRight-5\]) follows from separability of $B$ ([Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-0\])) and the fact that the linear span of all $b u_n,$ with $b \in B$ and $n \in {{\mathbb{Z}}},$ is dense in $A$ (by Theorem \[T-UPropFull\](\[T-UPropFull-3\])). Part (\[L-CrPrdRight-1\]) is immediate from the definitions. Part (\[L-CrPrdRight-2\]) follows from part (\[L-CrPrdRight-1\]) and Theorem \[T-UPropFull\](\[T-UPropFull-1\]). Part (\[L-CrPrdRight\_uv\]) is the definition of the product in $F^p ({{\mathbb{Z}}}, B, {\beta}).$ We prove (\[L-CrPrdRight-4\]). For $n \in {{\mathbb{Z}}_{\geq 0}},$ the inclusion $f_n A f_n \subset f_{n + 1} A f_{n + 1}$ follows from [Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-5b\]). For the density statement, it suffices to show that for $b \in B$ and $k \in {{\mathbb{Z}}},$ we have $b u_k \in {{\overline}{\bigcup_{n = 0}^{{\infty}} f_n A f_n}}.$ Let ${\varepsilon}> 0.$ Choose $m \in {{\mathbb{Z}}_{\geq 0}}$ and $c \in f_m B f_m$ such that $\| c - b \| < {\varepsilon}.$ [Without loss of generality]{} $m \geq - k.$ Then $u_k^{-1} f_m u_k = f_{m + k}$ by (\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\]). Take $n = \max (m, m + k).$ Then, using Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5b\]) at the third step, $$c u_k = f_m c f_m u_k = f_m c u_k f_{m + k} = f_n f_m c u_k f_{m + k} f_n \in f_n A f_n,$$ and $\| c u_k - b u_k \| < {\varepsilon}.$ \[N-Omega\] Adopt the notation of Example \[E-Odp\]. 
Let ${\omega}\colon M_d^p \to L_d$ be the unital [homomorphism]{}  determined by ${\omega}(e_{j, k}) = s_j t_k$ for $j, k \in \{ 0, 1, \ldots, d - 1 \}.$ The following proposition is an easy consequence of Theorems 7.2 and 7.7 of [@PhLp1] when $p \neq 1,$ but the case $p = 1$ is not explicitly in [@PhLp1]. \[P\_3922\_Spt\] Let $p \in [1, {\infty}) {\setminus}\{ 2 \}$ and let $d \in \{ 2, 3, \ldots \}.$ Let ${(Y, {\mathcal{C}}, \nu)}$ be a [${\sigma}$-finite measure space]{}. Let ${\rho}\colon L_d \to L ( L^p (Y, \nu) )$ be a unital [homomorphism]{}. Suppose that for $j = 0, 1, \ldots, d - 1,$ we have $\| {\rho}(s_j) \| \leq 1$ and $\| {\rho}(t_j) \| \leq 1.$ Suppose further that, with ${\omega}$ as in Notation \[N-Omega\], the [homomorphism]{} ${\rho}\circ {\omega}\colon M_d^p \to L ( L^p (Y, \nu) )$ is contractive. Then ${\rho}$ is spatial in the sense of Definition 7.4(2) of [@PhLp1]. The implication from (4) to (5) in Theorem 7.2 of [@PhLp1] provides a measurable partition $Y = \coprod_{j = 0}^{d - 1} Y_j$ such that ${\rho}(s_j t_j)$ is multiplication by ${\chi}_{Y_j}$ for $j = 0, 1, \ldots, d - 1.$ Therefore ${\rho}$ is disjoint in the sense of Definition 7.4(1) of [@PhLp1]. For $j = 0, 1, \ldots, d - 1,$ the relation $t_j s_j = 1$ and the bounds $\| {\rho}(s_j) \| \leq 1$ and $\| {\rho}(t_j) \| \leq 1$ imply that ${\rho}(s_j)$ is an isometry. Lemma 7.12 of [@PhLp1] now implies that ${\rho}$ is spatial. We will need a related nonunital result. \[L-UniqLd\] Let the hypotheses and notation be as in Proposition \[P\_3922\_Spt\], except that we require that ${\rho}$ be nonzero but not necessarily unital. Further assume that $L^p (Y, \nu)$ is separable. 
Then ${\rho}$ extends uniquely to an isometric injective [homomorphism]{} ${{\overline}{{\rho}}} \colon {{\mathcal{O}}_{d}^{p}} \to L \big( L^p (Y, \nu) \big).$ It is clear that $e = {\rho}(1)$ is an idempotent in $L \big( L^p (Y, \nu) \big).$ Set $E = {\mathrm{ran}} (e).$ The hypotheses imply that $\| e \| = 1.$ It follows from Theorem 3 in Section 17 of [@Lc] that there is a [measure space]{}  $(Y_0, {\mathcal{C}}_0, \nu_0)$ such that $E$ is isometrically isomorphic to $L^p (Y_0, \nu_0).$ Since $L^p (Y, \nu)$ is separable, so is $E,$ and therefore we may take $\nu_0$ to be [${\sigma}$-finite]{}. (See the corollary to Theorem 3 in Section 15 of [@Lc].) The corestriction ${\rho}_0 \colon L_d \to L (E)$ is now a unital [representation]{} of $L_d$ on $L^p (Y_0, \nu_0)$ which satisfies the hypotheses of Proposition \[P\_3922\_Spt\]. So ${\rho}_0$ is spatial. Now apply Theorem 8.7 of [@PhLp1]. \[L-OdIsoMdOd\] Let ${(Y, {\mathcal{C}}, \nu)}$ be a [measure space]{} such that $L^p (Y, \nu)$ is separable, and let ${\rho}\colon {{\mathcal{O}}_{d}^{p}} \to L \big( L^p (Y, \nu) \big)$ be an isometric [homomorphism]{}  (not necessarily unital). Let $(T, {\mathcal{D}}, {\eta})$ be a [measure space]{} such that $L^p (T, {\eta})$ is separable, and let ${\gamma}\colon M^p_d \to L ( L^p (T, {\eta}) )$ be an isometric [homomorphism]{}  (not necessarily unital).
Then there is an isometric isomorphism ${\psi}\colon {{\mathcal{O}}_{d}^{p}} \to {\gamma}( M^p_d ) \otimes_p {\rho}\big( {{\mathcal{O}}_{d}^{p}} \big)$ such that for $j = 0, 1, \ldots, d - 1,$ we have $${\psi}(s_j) = \sum_{l = 0}^{d - 1} {\gamma}(e_{j, l}) \otimes {\rho}(s_l) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\psi}(t_j) = \sum_{l = 0}^{d - 1} {\gamma}(e_{l, j}) \otimes {\rho}(t_l).$$ Moreover, for $j, k \in \{ 0, 1, \ldots, d - 1 \}$ and $a \in {{\mathcal{O}}_{d}^{p}},$ we have ${\psi}(s_j a t_k) = {\gamma}(e_{j, k}) \otimes {\rho}(a).$ In particular, the $L^p$ operator analog of the C\* minimal tensor product of $M^p_d$ and ${{\mathcal{O}}_{d}^{p}}$ does not depend on how these algebras are represented, at least if we restrict to separable $L^p$ spaces. Recall that $p \in [1, {\infty}) {\setminus}\{ 2 \}.$ The hypotheses imply that both ${\rho}(1)$ and ${\gamma}(1)$ are idempotents of norm $1.$ As in the proof of [Lemma \[L-UniqLd\]]{}, we can use Theorem 3 in Section 17 of [@Lc] to see that ${\rho}(1) L^p (Y, \nu)$ and ${\gamma}(1) L^p (T, {\eta})$ are isometrically isomorphic to $L^p$ spaces of [${\sigma}$-finite measure space]{}s. Taking the corestrictions of ${\rho}$ and ${\gamma},$ we thus reduce to the case in which both ${\rho}$ and ${\gamma}$ are unital. It now follows from Proposition \[P\_3922\_Spt\] that ${\rho}|_{L_d}$ is spatial, and from Theorem 7.2 of [@PhLp1] that ${\gamma}$ is spatial. In particular, we can write $Y = \coprod_{j = 0}^{d - 1} Y_j$ in such a way that for $j = 0, 1, \ldots, d - 1,$ the operator ${\rho}(s_j)$ is a spatial isometry with domain support $Y,$ range support $Y_j,$ and reverse ${\rho}(t_j)$ (in the sense of Definition 6.13 of [@PhLp1]).
Also, we can write $T = \coprod_{j = 0}^{d - 1} T_j$ in such a way that for $k, l \in \{ 0, 1, \ldots, d - 1 \},$ the operator ${\gamma}(e_{k, l})$ is a spatial partial isometry with domain support $T_l,$ range support $T_k,$ and reverse ${\gamma}(e_{l, k}).$ For $j = 0, 1, \ldots, d - 1,$ set $$v_j = \sum_{l = 0}^{d - 1} {\gamma}(e_{j, l}) \otimes {\rho}(s_l) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}w_j = \sum_{l = 0}^{d - 1} {\gamma}(e_{l, j}) \otimes {\rho}(t_l).$$ One easily checks that the elements $v_j,$ playing the role of $s_j,$ and $w_j,$ playing the role of $t_j,$ satisfy the relations (\[Eq:Leavitt1\]), (\[Eq:Leavitt2\]), and (\[Eq:Leavitt3\]). Therefore there is a [homomorphism]{} ${\varphi}\colon L_d \to {\gamma}( M^p_d ) \otimes_p {\rho}\big( {{\mathcal{O}}_{d}^{p}} \big)$ such that ${\varphi}(s_j) = v_j$ and ${\varphi}(t_j) = w_j$ for $j = 0, 1, \ldots, d - 1.$ Lemma 6.20 of [@PhLp1] implies that ${\gamma}(e_{k, l}) \otimes {\rho}(s_j)$ is a spatial partial isometry with domain support $T_l \times Y,$ range support $T_k \times Y_j,$ and reverse ${\gamma}(e_{l, k}) \otimes {\rho}(t_j).$ Now Lemma 3.8 of [@PhLp2b] implies that for $j = 0, 1, \ldots, d - 1,$ the operator $v_j$ is a spatial isometry with reverse $w_j.$ That is, ${\varphi}$ is a spatial [representation]{} in the sense of Definition 7.4(2) of [@PhLp1]. 
It now follows from Theorem 8.7 of [@PhLp1] that ${\varphi}$ extends to an isometric [homomorphism]{} ${\psi}\colon {{\mathcal{O}}_{d}^{p}} \to {\gamma}( M^p_d ) \otimes_p {\rho}\big( {{\mathcal{O}}_{d}^{p}} \big).$ We now prove the formula ${\psi}(s_j a t_k) = {\gamma}(e_{j, k}) \otimes {\rho}(a).$ A calculation shows that ${\psi}(s_j t_k) = {\gamma}(e_{j, k}) \otimes {\rho}(1)$ for $j, k \in \{ 0, 1, \ldots, d - 1 \}.$ Now let $j, k, r \in \{ 0, 1, \ldots, d - 1 \}.$ Then $${\psi}(s_j s_r t_k) = {\psi}(s_j) {\psi}(s_r t_k) = \sum_{l = 0}^{d - 1} [ {\gamma}(e_{j, l}) \otimes {\rho}(s_l)] [ {\gamma}(e_{r, k}) \otimes {\rho}(1)] = {\gamma}(e_{j, k}) \otimes {\rho}(s_r)$$ and $${\psi}(s_j t_r t_k) = {\psi}(s_j t_r) {\psi}(t_k) = \sum_{l = 0}^{d - 1} [ {\gamma}(e_{j, r}) \otimes {\rho}(1)] [ {\gamma}(e_{l, k}) \otimes {\rho}(t_l)] = {\gamma}(e_{j, k}) \otimes {\rho}(t_r).$$ For $j \in \{ 0, 1, \ldots, d - 1 \},$ the map $a \mapsto s_j a t_j$ is a (nonunital) continuous endomorphism of ${{\mathcal{O}}_{d}^{p}}.$ Since $s_0, s_1, \ldots, s_{d - 1}, t_0, t_1, \ldots, t_{d - 1}$ generate ${{\mathcal{O}}_{d}^{p}}$ as a Banach algebra, we conclude that ${\psi}(s_j a t_j) = {\gamma}(e_{j, j}) \otimes {\rho}(a)$ for $j = 0, 1, \ldots, d - 1$ and $a \in {{\mathcal{O}}_{d}^{p}}.$ For $k = 0, 1, \ldots, d - 1,$ we then get $${\psi}(s_j a t_k) = {\psi}(s_j a t_j) {\psi}(s_j t_k) = [{\gamma}(e_{j, j}) \otimes {\rho}(a)] [{\gamma}(e_{j, k}) \otimes {\rho}(1)] = {\gamma}(e_{j, k}) \otimes {\rho}(a),$$ as desired. It remains to prove that ${\psi}$ is surjective. The previous paragraph implies that the range of ${\psi}$ contains ${\gamma}(e_{j, k}) \otimes {\rho}(a)$ for all $j, k \in \{ 0, 1, \ldots, d - 1 \}$ and $a \in {{\mathcal{O}}_{d}^{p}}.$ It follows that ${\psi}$ has dense range. Since ${\psi}$ is isometric, it is surjective.
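The relations manipulated in this proof, $t_k s_j = \delta_{j k} \cdot 1,$ $\sum_{j} s_j t_j = 1,$ and the matrix-unit formula $e_{j, k} = s_j t_k,$ can be checked numerically in a finite truncation of the standard spatial representation on $\ell^p$ in which $s_j$ acts by $e_n \mapsto e_{d n + j}$ and $t_j$ is its reverse. The sketch below is an illustration only: the truncation is not an actual representation of ${{\mathcal{O}}_{d}^{p}},$ and the names `S`, `T` are ad hoc. Truncating to $N = d M$ coordinates makes the sum relation exact on the whole truncation and $t_k s_j = \delta_{j k}$ exact on the first $M$ coordinates.

```python
import numpy as np

d, M = 3, 27          # d generators; truncate l^p(Z_{>=0}) to N = d*M coordinates
N = d * M

def S(j):
    """Truncation of the spatial isometry s_j: e_n -> e_{d n + j}."""
    s = np.zeros((N, N))
    for n in range(M):        # d*n + j < N exactly when n < M
        s[d * n + j, n] = 1.0
    return s

def T(j):
    """Reverse of s_j: e_m -> e_{(m-j)/d} when m = j (mod d), else 0."""
    return S(j).T             # for these 0/1 matrices the reverse is the transpose

I = np.eye(N)

# sum_j s_j t_j = 1: exact on the whole truncation, since every n < d*M
# has the form n = d*q + r with q < M.
assert np.array_equal(sum(S(j) @ T(j) for j in range(d)), I)

# t_k s_j = delta_{jk} * 1: exact on the first M coordinates.
for j in range(d):
    for k in range(d):
        prod = (T(k) @ S(j))[:M, :M]
        expected = np.eye(M) if j == k else np.zeros((M, M))
        assert np.array_equal(prod, expected)

# Matrix units e_{j,k} = s_j t_k satisfy e_{j,k} e_{l,m} = delta_{kl} e_{j,m}.
E = {(j, k): S(j) @ T(k) for j in range(d) for k in range(d)}
assert np.array_equal(E[(0, 1)] @ E[(1, 2)], E[(0, 2)])
assert not (E[(0, 1)] @ E[(2, 0)]).any()
```

The boundary defect of the truncation is confined to the last $N - M$ columns of $t_k s_j,$ which is why the assertions restrict to the first $M$ coordinates where needed.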
\[L-MapToCorner\] There exists an isometric isomorphism ${\sigma}\colon {{\mathcal{O}}_{d}^{p}} \to f_0 A f_0$ such that for $j = 0, 1, \ldots, d - 1$ we have (recalling Notation \[N-dinfty\], Notation \[N-ActionOnD\], and Notation \[N-DICrPrd\]) $$\label{Eq:DfnOfSm} {\sigma}(s_j) = u_1 ( e_{j, 0} \otimes 1_{> 0} ) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\sigma}(t_j) = ( e_{0, j} \otimes 1_{> 0} ) u_{-1}.$$ We first check that the elements in (\[Eq:DfnOfSm\]) are in $f_0 A f_0.$ Using $u_1^{-1} f_0 u_1 = f_1$ (which follows from Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\])) at the first step, and the definitions of $f_0$ and $f_1$ at the second step, for $j = 0, 1, \ldots, d - 1$ we get $$f_0 u_1 ( e_{j, 0} \otimes 1_{> 0} ) f_0 = u_1 f_1 ( e_{j, 0} \otimes 1_{> 0} ) f_0 = u_1 ( e_{j, 0} \otimes 1_{> 0} ).$$ Therefore $u_1 ( e_{j, 0} \otimes 1_{> 0} ) \in f_0 A f_0.$ The proof that $( e_{0, j} \otimes 1_{> 0} ) u_{-1} \in f_0 A f_0$ is similar. We next check that the elements in (\[Eq:DfnOfSm\]) give a unital [homomorphism]{}  ${\tau}\colon L_d \to f_0 A f_0.$ This follows from the calculation, for $j, k \in \{ 0, 1, \ldots, d - 1 \},$ $$( e_{0, j} \otimes 1_{> 0} ) u_{-1} u_1 ( e_{k, 0} \otimes 1_{> 0} ) = e_{0, j} e_{k, 0} \otimes 1_{> 0} = \begin{cases} 0 & j \neq k \\ f_0 & j = k, \end{cases}$$ and from the calculation (using Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\]) at the last step) $$\sum_{j = 0}^{d - 1} u_1 ( e_{j, 0} \otimes 1_{> 0} ) ( e_{0, j} \otimes 1_{> 0} ) u_{-1} = u_1 \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{j = 0}^{d - 1} }}}}} e_{j, j} \otimes 1_{> 0} \right) u_{-1} = u_1 f_1 u_{-1} = f_0.$$ Next, we claim that ${\tau}\circ {\omega}\colon M^p_d \to f_0 A f_0$ is contractive.
To see this, first use Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-4\]) at the last step to check that, for $j, k \in \{ 0, 1, \ldots, d - 1 \},$ we have $$\begin{aligned} ({\tau}\circ {\omega}) (e_{j, k}) & = {\tau}(s_j t_k) = u_1 ( e_{j, 0} \otimes 1_{> 0} ) ( e_{0, k} \otimes 1_{> 0} ) u_{-1} \\ & = u_1 ( e_{j, k} \otimes 1_{> 0} ) u_{-1} = e_{0, 0} \otimes e_{j, k} \otimes 1_{> 1}.\end{aligned}$$ Therefore $({\tau}\circ {\omega}) (a) = e_{0, 0} \otimes a \otimes 1_{> 1}$ for all $a \in M^p_d.$ By Remark \[R-LpTens\], the [homomorphism]{}  ${\tau}\circ {\omega}$ is isometric from $M^p_d$ to $B.$ Since the inclusion of $B$ in $A$ is contractive (by the inequality $\| a \| \leq \| a \|_1$ in [Lemma \[L:FGpCP\]]{}), the claim follows. The inequality $\| a \| \leq \| a \|_1$ and contractivity of multiplication by $u_1$ and $u_{-1}$ (Lemma \[L-NormOfMultU\]) imply that $\| {\tau}(s_j) \| \leq 1$ and $\| {\tau}(t_j) \| \leq 1$ for $j = 0, 1, \ldots, d - 1.$ Since $A$ is separably representable (using [Lemma \[L-CrPrdRight\]]{}(\[L-CrPrdRight-5\]) and Proposition \[P-SepImpSepRep\]), we may apply [Lemma \[L-UniqLd\]]{} to conclude that ${\tau}$ extends to an injective isometric [homomorphism]{} ${\sigma}\colon {{\mathcal{O}}_{d}^{p}} \to f_0 A f_0.$ It remains only to prove that ${\sigma}$ is surjective, and for this it suffices to prove that its range ${{\mathrm{ran}}}({\sigma})$ is dense in $f_0 A f_0.$ By Theorem \[T-UPropFull\](\[T-UPropFull-3\]), it is enough to show that $$\label{Eq:ForDRan} f_0 \big( e_{l, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) u_n f_0 \in {\mathrm{ran}} ({\sigma})$$ whenever $r \in {{\mathbb{Z}}_{> 0}},$ $n \in {{\mathbb{Z}}},$ $j, k \in \{ 0, 1, \ldots, d - 1 \},$ and $l, m \in {{\mathbb{Z}}_{\geq 0}}.$ We first claim that (\[Eq:ForDRan\]) holds when $n = 0.$ In this case, the expression is zero unless $l = m = 0.$ For $r \in {{\mathbb{Z}}_{> 0}}$ and $j, k \in \{ 0, 1, 
\ldots, d - 1 \},$ we use Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-4\]) to get $$\begin{aligned} \lefteqn{ {\sigma}(s_0) \big[ e_{0, 0} \otimes e_{0, 0} \otimes \cdots \otimes e_{0, 0} \otimes e_{j, k} \otimes 1_{> r} \big] {\sigma}(t_0)} \\ & \hspace*{5em} \mbox{} = e_{0, 0} \otimes e_{0, 0} \otimes \cdots \otimes e_{0, 0} \otimes e_{0, 0} \otimes e_{j, k} \otimes 1_{> r + 1}\end{aligned}$$ (shifting the tensor factor $e_{j, k}$ one space to the right). Therefore ${\mathrm{ran}} ({\sigma})$ contains $$e_{0, 0} \otimes e_{j, k} \otimes 1_{> 1} = {\sigma}(s_j t_k), \,\,\,\, e_{0, 0} \otimes e_{0, 0} \otimes e_{j, k} \otimes 1_{> 2}, \,\,\,\, e_{0, 0} \otimes e_{0, 0} \otimes e_{0, 0} \otimes e_{j, k} \otimes 1_{> 3}, \,\,\,\, \ldots.$$ The closed subalgebra that these elements generate is $\{ e_{0, 0} \otimes a \colon a \in D \},$ and the claim for $n = 0$ follows. We next claim that for $n \in {{\mathbb{Z}}_{> 0}}$ we have $u_n f_0 = f_0 (u_1 f_0)^n$ and $f_0 u_{- n} = (f_0 u_{-1})^n f_0.$ The proof is by induction on $n,$ and the case $n = 0$ is trivial. For the induction step for the first, use Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\]) at the third step, and [Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-5b\]) at the last step, to get $$f_0 (u_1 f_0)^{n + 1} = u_n f_0 u_1 f_0 = u_{n + 1} (u_1^{-1} f_0 u_1) f_0 = u_{n + 1} f_1 f_0 = u_{n + 1} f_0.$$ Similarly, $$(f_0 u_{-1})^{n + 1} f_0 = f_0 u_{-1} f_0 u_{-n} = f_0 (u_1^{-1} f_0 u_1) u_{- n - 1} = f_0 f_1 u_{- n - 1} = f_0 u_{- n - 1}.$$ This proves the claim. 
Now let $n \in {{\mathbb{Z}}_{> 0}}.$ Then $$f_0 \big( e_{l, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) u_n f_0 = \big[ f_0 \big( e_{l, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) f_0 \big] \cdot (u_1 f_0)^n.$$ The first factor is in ${{\mathrm{ran}}}({\sigma})$ by the case already done, and $(u_1 f_0)^n = {\sigma}(s_0)^n \in {{\mathrm{ran}}}({\sigma}),$ so (\[Eq:ForDRan\]) holds. Also, using Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\]), $$f_0 \big( e_{l, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) u_{-n} f_0 = f_0 \big( e_{l, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) f_n u_{-n}.$$ If $l \neq 0$ or $m \geq d^n,$ this expression is zero, hence in ${{\mathrm{ran}}}({\sigma}).$ So we only need to consider $f_0 \big( e_{0, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) u_{-n} f_0$ with $m < d^n.$ Write $m = \sum_{i = 0}^{n - 1} q_i d^i$ with $q_0, q_1, \ldots, q_{n - 1} \in \{ 0, 1, \ldots, d - 1 \}.$ Then, using Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-4\]) repeatedly at the first step, and $f_0 u_{- n} = (f_0 u_{-1})^n f_0$ at the second step, $$\begin{aligned} & f_0 \big( e_{0, m} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r} \big) u_{-n} f_0 \\ & \hspace*{3em} {\mbox{}} = f_0 u_{-n} \big( e_{0, 0} \otimes e_{0, q_{n - 1}} \otimes e_{0, q_{n - 2}} \otimes \cdots \otimes e_{0, q_0} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r + n} \big) f_0 \\ & \hspace*{3em} {\mbox{}} = {\sigma}(t_0)^n \cdot \big[ f_0 \big( e_{0, 0} \otimes e_{0, q_{n - 1}} \otimes e_{0, q_{n - 2}} \otimes \cdots \otimes e_{0, q_0} \otimes 1_{\leq r - 1} \otimes e_{j, k} \otimes 1_{> r + n} \big) f_0 \big].\end{aligned}$$ The case $n = 0$ of (\[Eq:ForDRan\]), which we have already proved, and the fact that ${{\mathrm{ran}}}({\sigma})$ is an algebra, show that this expression is
in ${{\mathrm{ran}}}({\sigma}).$ Thus (\[Eq:ForDRan\]) holds for all $j, k, l, m, r, n.$ This completes the proof of surjectivity. \[N-DIForIntTw\] Fix a [${\sigma}$-finite measure space]{}  ${(Y, {\mathcal{C}}, \nu)}$ such that $L^p (Y, \nu)$ is separable, and a unital isometric [homomorphism]{} ${\alpha}_0 \colon {{\mathcal{O}}_{d}^{p}} \to L (L^p (Y, \nu))$ whose restriction to $L_d$ is spatial. For $n \in {{\mathbb{Z}}_{> 0}},$ define $${\alpha}_n \colon (M_d)^{\otimes n} \otimes_{\mathrm{alg}} {{\mathcal{O}}_{d}^{p}} \to L \big( L^p (Z^n \times Y, \, {\lambda}^n \times \nu) \big)$$ to be the tensor product of $n$ copies of the standard isomorphism $M_d \to L (L^p (Z, {\lambda}))$ with ${\alpha}_0.$ Make $(M_d)^{\otimes n} \otimes_{\mathrm{alg}} {{\mathcal{O}}_{d}^{p}}$ into an $L^p$ operator algebra by defining $\| a \| = \| {\alpha}_n (a) \|$ for $a \in (M_d)^{\otimes n} \otimes_{\mathrm{alg}} {{\mathcal{O}}_{d}^{p}},$ and write $(M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}}$ for $(M_d)^{\otimes n} \otimes_{\mathrm{alg}} {{\mathcal{O}}_{d}^{p}}$ equipped with this norm. Let ${\psi}_0 \colon {{\mathcal{O}}_{d}^{p}} \to M^p_d \otimes_p {{\mathcal{O}}_{d}^{p}}$ be the map ${\psi}$ of [Lemma \[L-OdIsoMdOd\]]{}. Set ${\eta}_0 = {{\mathrm{id}}}_{{{\mathcal{O}}_{d}^{p}}}.$ Let ${\sigma}_0 \colon {{\mathcal{O}}_{d}^{p}} \to A$ be the map ${\sigma}$ of [Lemma \[L-MapToCorner\]]{}, followed by the inclusion of $f_0 A f_0$ in $A.$ Define ${\varepsilon}_0 \colon {{\mathcal{O}}_{d}^{p}} \to M^p_d \otimes_p {{\mathcal{O}}_{d}^{p}}$ by ${\varepsilon}_0 (a) = e_{0, 0} \otimes a$ for $a \in {{\mathcal{O}}_{d}^{p}}.$ For $m \in {{\mathbb{Z}}},$ we adapt standard notation by writing ${{\mathrm{Ad}}}(u_m)$ for the automorphism of $A$ given by $a \mapsto u_m a u_{- m}$ (even though $u_m$ is not in $A$).
Now, for $n \in {{\mathbb{Z}}_{> 0}},$ inductively define (justifications in [Lemma \[L-DIIntTw\]]{} below) $${\psi}_{n} = {{\mathrm{id}}}_{M^p_d} \otimes_p {\psi}_{n - 1} \colon (M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}} \to (M^p_d)^{\otimes (n + 1)} \otimes_p {{\mathcal{O}}_{d}^{p}},$$ $${\eta}_{n} = {\psi}_{n - 1} \circ {\eta}_{n - 1} \colon {{\mathcal{O}}_{d}^{p}} \to (M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}},$$ $${\sigma}_{n} = {{\mathrm{Ad}}}(u_{-1}) \circ {\sigma}_{n - 1} \circ {\psi}_{n - 1}^{-1} \colon (M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}} \to A,$$ and $${\varepsilon}_{n} = {\psi}_{n} \circ {\varepsilon}_{n - 1} \circ {\psi}_{n - 1}^{-1} \colon (M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}} \to (M^p_d)^{\otimes (n + 1)} \otimes_p {{\mathcal{O}}_{d}^{p}}.$$ \[L-DIIntTw\] For every $m \in {{\mathbb{Z}}},$ the map ${{\mathrm{Ad}}}(u_m)$ in Notation \[N-DIForIntTw\] is a well defined isometric automorphism of $A.$ Moreover, for all $n \in {{\mathbb{Z}}_{\geq 0}},$ the maps defined in Notation \[N-DIForIntTw\] have the following properties: 1. \[L-DIIntTw-1\] ${\psi}_n$ is an isometric isomorphism. 2. \[L-DIIntTw-3\] ${\eta}_{n + 1} = ( {{\mathrm{id}}}_{M^p_d} \otimes {\eta}_{n} ) \circ {\psi}_0.$ 3. \[L-DIIntTw-2\] ${\eta}_{n + 1}$ is an isometric isomorphism. 4. \[L-DIIntTw-4\] ${\sigma}_n$ is isometric and ${\operatorname{ran}} ({\sigma}_n) = f_n A f_n.$ 5. \[L-DIIntTw-5\] ${\varepsilon}_n$ is an isometric [homomorphism]{}. 6. \[L-DIIntTw-6\] ${\sigma}_{n + 1} \circ {\varepsilon}_{n} = {\sigma}_{n}.$ 7. \[L-DIIntTw-7\] For all $a \in (M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}},$ we have ${\varepsilon}_n (a) = e_{0, 0} \otimes a.$ The part about ${{\mathrm{Ad}}}(u_m)$ follows from Lemma \[L-NormOfMultU\]. 
We prove the remaining statements simultaneously by induction on $n.$ For $n = 0,$ part (\[L-DIIntTw-1\]) is [Lemma \[L-OdIsoMdOd\]]{}, part (\[L-DIIntTw-4\]) is [Lemma \[L-MapToCorner\]]{}, part (\[L-DIIntTw-5\]) follows from Remark \[R-LpTens\], and part (\[L-DIIntTw-7\]) is the definition of ${\varepsilon}_0.$ For parts (\[L-DIIntTw-3\]) and (\[L-DIIntTw-2\]), we use ${\eta}_0 = {{\mathrm{id}}}_{{{\mathcal{O}}_{d}^{p}}}$ to get $${\eta}_1 = {\psi}_0 \circ {\eta}_0 = {\psi}_0 = ( {{\mathrm{id}}}_{M^p_d} \otimes {\eta}_{0} ) \circ {\psi}_0,$$ as desired for (\[L-DIIntTw-3\]). Also, ${\eta}_1$ is isometric since ${\psi}_0$ is. We now prove part (\[L-DIIntTw-6\]). By Lemma 2.11 of [@PhLp1], it suffices to prove that $({\sigma}_1 \circ {\varepsilon}_0) (s_j) = {\sigma}_0 (s_j)$ for $j = 0, 1, \ldots, d - 1.$ Fix $j \in \{ 0, 1, \ldots, d - 1 \}.$ We have ${\psi}_0 (s_0 s_j t_0) = e_{0, 0} \otimes s_j$ by [Lemma \[L-OdIsoMdOd\]]{}, so $({\psi}_0^{-1} \circ {\varepsilon}_0) (s_j) = s_0 s_j t_0.$ Therefore, using the formula for ${\sigma}_0$ at the second step, and Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\]) at the third step, $$\begin{aligned} ({\sigma}_1 \circ {\varepsilon}_0) (s_j) & = \big( {{\mathrm{Ad}}}(u_{-1}) \circ {\sigma}_0 \big) (s_0 s_j t_0) = (e_{0, 0} \otimes 1_{> 0}) u_1 (e_{j, 0} \otimes 1_{> 0}) \\ & = u_1 \left( {{{{\textstyle{ {{{\displaystyle{\sum}}}}_{l = 0}^{d - 1} }}}}} e_{l, l} \otimes 1_{> 0} \right) (e_{j, 0} \otimes 1_{> 0}) \\ & = u_1 (e_{j, 0} \otimes 1_{> 0}) = {\sigma}_0 (s_j).\end{aligned}$$ This completes the proof of the case $n = 0.$ Now let $n \in {{\mathbb{Z}}_{> 0}},$ and assume that all parts are known for $n - 1.$ The map ${\psi}_{n}$ is bijective because ${\psi}_{n - 1}$ is bijective and $M^p_d$ is [finite dimensional]{}. The map ${\eta}_{n + 1}$ is bijective because ${\psi}_{n}$ and ${\eta}_{n}$ are.
Now, to prove part (\[L-DIIntTw-3\]), we use the induction hypothesis at the second step to get $${\eta}_{n + 1} = {\psi}_{n} \circ {\eta}_n = \big( {{\mathrm{id}}}_{M^p_d} \otimes_p {\psi}_{n - 1} \big) \circ \big( {{\mathrm{id}}}_{M^p_d} \otimes {\eta}_{n - 1} \big) \circ {\psi}_0 = \big( {{\mathrm{id}}}_{M^p_d} \otimes {\eta}_{n} \big) \circ {\psi}_0,$$ as desired. Now we prove that ${\eta}_{n + 1}$ is isometric, which will finish the proof of part (\[L-DIIntTw-2\]). We know that ${\eta}_{n}$ is isometric. Therefore ${\alpha}_{n} \circ {\eta}_{n}$ is isometric. Proposition \[P\_3922\_Spt\] implies that $({\alpha}_{n} \circ {\eta}_{n}) |_{L_d}$ is spatial. For $j = 0, 1, \ldots, d - 1,$ one checks that $$\begin{aligned} ({\alpha}_{n + 1} \circ {\eta}_{n + 1}) (s_j) & = \big[ {\alpha}_{n + 1} \circ ({{\mathrm{id}}}_{M^p_d} \otimes {\eta}_{n}) \circ {\psi}_0 \big] (s_j) \\ & = \sum_{l = 0}^{d - 1} e_{j, l} \otimes ({\alpha}_{n} \circ {\eta}_{n}) (s_l) \in L \big( L^p (Z^{n + 1} \times Y, \, {\lambda}^{n + 1} \times \nu) \big).\end{aligned}$$ This operator is a sum of spatial partial isometries with disjoint domain supports and disjoint range supports. By Lemma 3.8 of [@PhLp2b], it is a spatial partial isometry whose reverse is the sum of the reverses of the summands, that is, $$\sum_{l = 0}^{d - 1} e_{l, j} \otimes ({\alpha}_{n} \circ {\eta}_{n}) (t_l) = \big[ {\alpha}_{n + 1} \circ ({{\mathrm{id}}}_{M^p_d} \otimes {\eta}_{n}) \circ {\psi}_0 \big] (t_j) = ({\alpha}_{n + 1} \circ {\eta}_{n + 1}) (t_j).$$ Thus, $({\alpha}_{n + 1} \circ {\eta}_{n + 1}) |_{L_d}$ is a spatial representation of $L_d.$ Theorem 8.7 of [@PhLp1] now implies that ${\alpha}_{n + 1} \circ {\eta}_{n + 1}$ is isometric. Therefore ${\eta}_{n + 1}$ is isometric, as desired. It is now clear that ${\psi}_{n} = {\eta}_{n + 1} \circ {\eta}_{n}^{-1}$ is isometric, which finishes part (\[L-DIIntTw-1\]). 
The maps ${\sigma}_{n}$ and ${\varepsilon}_{n}$ are isometric, since they are compositions of isometric maps. The statement about the range of ${\sigma}_{n}$ follows from the induction hypothesis and $u_{-1} f_{n - 1} u_1 = f_{n}$ (Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-5\])). We now prove part (\[L-DIIntTw-6\]). Using the induction hypothesis at the third step, we get $$\begin{aligned} {\sigma}_{n + 1} \circ {\varepsilon}_{n} & = \big( {{\mathrm{Ad}}}(u_{-1}) \circ {\sigma}_n \circ {\psi}_n^{-1} \big) \circ \big( {\psi}_{n} \circ {\varepsilon}_{n - 1} \circ {\psi}_{n - 1}^{-1} \big) \\ & = {{\mathrm{Ad}}}(u_{-1}) \circ {\sigma}_n \circ {\varepsilon}_{n - 1} \circ {\psi}_{n - 1}^{-1} = {{\mathrm{Ad}}}(u_{-1}) \circ {\sigma}_{n - 1} \circ {\psi}_{n - 1}^{-1} = {\sigma}_{n}.\end{aligned}$$ It remains to prove part (\[L-DIIntTw-7\]). We compute: $$\begin{aligned} {\varepsilon}_{n} (a) & = \big( {\psi}_{n} \circ {\varepsilon}_{n - 1} \circ {\psi}_{n - 1}^{-1} \big) (a) = \big( ( {{\mathrm{id}}}_{M^p_d} \otimes_p {\psi}_{n - 1} ) \circ {\varepsilon}_{n - 1} \circ {\psi}_{n - 1}^{-1} \big) (a) \\ & = \big( {{\mathrm{id}}}_{M^p_d} \otimes_p {\psi}_{n - 1} \big) \big( e_{0, 0} \otimes {\psi}_{n - 1}^{-1} (a) \big) = e_{0, 0} \otimes a,\end{aligned}$$ as desired. \[C\_3924\_ContrIsIso\] Let ${(X, {\mathcal{B}}, \mu)}$ be a [${\sigma}$-finite measure space]{} such that $L^p (X, \mu)$ is separable. Let ${\rho}\colon A \to {L (L^p (X, \mu))}$ be a nonzero but not necessarily unital contractive [homomorphism]{}. Then ${\rho}$ is isometric. 
We first claim that ${\rho}(f_n) \neq 0$ for all $n \in {{\mathbb{Z}}_{\geq 0}}.$ From Lemma \[L-CrPrdRight\](\[L-CrPrdRight\_uv\]) and Lemma \[L-ActionIsRight\](\[L-ActionIsRight-6\]), we get $f_n = (u_{-n} f_0) f_0 (f_0 u_n)$ and $f_0 = (u_n f_n) f_n (f_n u_{-n}).$ Since $u_{-n} f_0,$ $f_0 u_n,$ $u_n f_n,$ and $f_n u_{-n}$ are all in $A,$ we see that if ${\rho}(f_n) = 0$ for some $n \in {{\mathbb{Z}}_{\geq 0}},$ then ${\rho}(f_0) = 0,$ and then ${\rho}(f_n) = 0$ for all $n \in {{\mathbb{Z}}_{\geq 0}}.$ Lemma \[L-CrPrdRight\](\[L-CrPrdRight-4\]) and continuity of ${\rho}$ then imply that ${\rho}= 0.$ For all $n \in {{\mathbb{Z}}_{\geq 0}},$ it follows that ${\rho}|_{f_n A f_n}$ is a nonzero contractive [homomorphism]{}. Lemma \[L-DIIntTw\](\[L-DIIntTw-2\]) implies that $f_n A f_n$ is isometrically isomorphic to ${{\mathcal{O}}_{d}^{p}}.$ So ${\rho}|_{f_n A f_n}$ is isometric by [Lemma \[L-UniqLd\]]{}. Since $n$ is arbitrary, ${\rho}$ is isometric by Lemma \[L-CrPrdRight\](\[L-CrPrdRight-4\]). \[C-FullToRed\] The map ${\kappa}_{\mathrm{r}} \colon A = F^p ({{\mathbb{Z}}}, B, {\beta}) \to F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta})$ is an isometric bijection. By [Lemma \[L-CompOfCrPrd\]]{}, this map is contractive and has dense range. The algebra $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta})$ is clearly separable, so separably representable by Proposition \[P-SepImpSepRep\]. Apply Corollary \[C\_3924\_ContrIsIso\]. Although we won’t need this, it also follows (using Lemma \[L-CrPrdRight\](\[L-CrPrdRight-2\])) that the [representation]{} $\pi$ of Notation \[N-DICrPrd\] is isometric. 
\[T\_3924\_Iso\] There is an isometric isomorphism ${\gamma}\colon {{\overline}{M}}^p_{{{\mathbb{Z}}_{\geq 0}}} \otimes_{p} {{\mathcal{O}}_{d}^{p}} \to A$ such that ${\gamma}(e_{0, 0} \otimes 1) = f_0$ and ${\gamma}(e_{0, 0} \otimes a) = {\sigma}_0 (a)$ for all $a \in {{\mathcal{O}}_{d}^{p}}.$ For $n \in {{\mathbb{Z}}_{\geq 0}},$ set $T_n = \{ 0, 1, \ldots, d^n - 1 \} {\subset}{{\mathbb{Z}}_{\geq 0}}.$ Define $g_n \colon Z^n \to T_n$ by $$g_n (j_1, j_2, \ldots, j_n) = \sum_{l = 1}^n j_l d^{n - l}$$ for $j_1, j_2, \ldots, j_n \in {{\mathbb{Z}}}.$ Lemma \[L-MatpSMap\] provides isometric homomorphisms $${\gamma}_{g_n, {{\mathcal{O}}_{d}^{p}}} \colon M^p_{Z^n} \otimes_{p} {{\mathcal{O}}_{d}^{p}} \to M^p_{T_n} \otimes_{p} {{\mathcal{O}}_{d}^{p}} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}\xi_n \colon M^p_{T_n} \otimes_{p} {{\mathcal{O}}_{d}^{p}} \to M^p_{T_{n + 1}} \otimes_{p} {{\mathcal{O}}_{d}^{p}},$$ the first being an isomorphism and the second coming from the inclusion of $T_n$ in $T_{n + 1}.$ There is an isometric isomorphism ${\iota}_n \colon (M^p_d)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}} \to M^p_{Z^n} \otimes_{p} {{\mathcal{O}}_{d}^{p}}$ which comes from the identification of $L^p (Z^n, {\lambda}^n)$ with $L^p (Z, {\lambda})^{\otimes n},$ such that for $a \in {{\mathcal{O}}_{d}^{p}}$ and $j_1, j_2, \ldots, j_n, k_1, k_2, \ldots, k_n \in {{\mathbb{Z}}},$ we have $${\iota}_n \big( e_{(j_1, j_2, \ldots, j_n), \, (k_1, k_2, \ldots, k_n)} \otimes a \big) = e_{j_1, k_1} \otimes e_{j_2, k_2} \otimes \cdots \otimes e_{j_n, k_n} \otimes a.$$ Set ${\gamma}_n = {\gamma}_{g_n, {{\mathcal{O}}_{d}^{p}}} \circ {\iota}_n.$ One checks easily that $\xi_n \circ {\gamma}_n = {\gamma}_{n + 1} \circ {\varepsilon}_n.$ Using Lemma \[L-DIIntTw\](\[L-DIIntTw-4\]) and Lemma \[L-DIIntTw\](\[L-DIIntTw-6\]) for the bottom part, we now get a commutative diagram, in which the maps in the bottom row are the inclusions, all maps are isometric, and all vertical maps are bijective: 
$$\xymatrix{ M_{T_0}^p \otimes_p {{\mathcal{O}}_{d}^{p}} \ar[r]^{\xi_0} & M_{T_1}^p \otimes_p {{\mathcal{O}}_{d}^{p}} \ar[r]^{\xi_1} & M_{T_2}^p \otimes_p {{\mathcal{O}}_{d}^{p}} \ar[r]^{\xi_2} & \cdots \\ {{\mathcal{O}}_{d}^{p}} \ar[r]^{{\varepsilon}_0} \ar[d]_{{\sigma}_0} \ar[u]^{{\gamma}_0} & M_d^p \otimes_p {{\mathcal{O}}_{d}^{p}} \ar[r]^{{\varepsilon}_1} \ar[d]_{{\sigma}_1} \ar[u]^{{\gamma}_1} & (M_d^p)^{\otimes 2} \otimes_p {{\mathcal{O}}_{d}^{p}} \ar[r]^{{\varepsilon}_2} \ar[d]_{{\sigma}_2} \ar[u]^{{\gamma}_2} & \cdots \\ f_0 A f_0 \ar[r] & f_1 A f_1 \ar[r] & f_2 A f_2 \ar[r] & \cdots. }$$ Therefore the vertical maps induce isometric isomorphisms of the direct limits. By Corollary \[C-ClUIsDLim\] and Example \[E-MatpS\], the direct limit of the top row is ${{\overline}{M}}^p_{{{\mathbb{Z}}_{\geq 0}}} \otimes_{p} {{\mathcal{O}}_{d}^{p}}.$ By Corollary \[C-ClUIsDLim\] and Lemma \[L-CrPrdRight\](\[L-CrPrdRight-4\]), the direct limit of the bottom row is $A.$ Thus we get isometric isomorphisms $${\gamma}_{{\infty}} \colon {\varinjlim}_n (M_d^p)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}} \to {{\overline}{M}}^p_{{{\mathbb{Z}}_{\geq 0}}} \otimes_{p} {{\mathcal{O}}_{d}^{p}} {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\sigma}_{{\infty}} \colon {\varinjlim}_n (M_d^p)^{\otimes n} \otimes_p {{\mathcal{O}}_{d}^{p}} \to A.$$ Set ${\gamma}= {\sigma}_{{\infty}} \circ {\gamma}_{{\infty}}^{-1}.$ \[C-KThyA\] The map ${\kappa}_{\mathrm{r}} \circ {\sigma}_0 \colon {{\mathcal{O}}_{d}^{p}} \to F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta})$ is an isomorphism on K-theory such that $({\kappa}_{\mathrm{r}} \circ {\sigma}_0)_* ([1]) = [{\kappa}_{\mathrm{r}} (f_0)].$ It follows from Theorem \[T\_3924\_Iso\] and [Lemma \[L-KthyTPM\]]{} that ${\sigma}_0 \colon {{\mathcal{O}}_{d}^{p}} \to A$ is an isomorphism on K-theory, and clearly $({\sigma}_0)_* ([1]) = [f_0].$ Apply Corollary \[C-FullToRed\]. 
\[T-KThyOpd\] Let $p \in [1, {\infty})$ and let $d \in \{ 2, 3, \ldots \}.$ Then $K_1 \big( {{\mathcal{O}}_{d}^{p}} \big) = 0$ and there is an isomorphism $K_0 \big( {{\mathcal{O}}_{d}^{p}} \big) \to {{\mathbb{Z}}}/ (d - 1) {{\mathbb{Z}}}$ which sends $[1] \in K_0 \big( {{\mathcal{O}}_{d}^{p}} \big)$ to the standard generator $1 + (d - 1) {{\mathbb{Z}}}.$ It follows from [Lemma \[L-KthyTPM\]]{} and Theorem \[P-KStdUHF\] that $K_1 (B) = 0$ and that there is an isomorphism ${\eta}\colon {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big] \to K_0 (B)$ such that ${\eta}(1) = [f_0].$ Therefore ${\eta}(d) = [f_1].$ We have ${\beta}(f_1) = f_0$ by [Lemma \[L-ActionIsRight\]]{}(\[L-ActionIsRight-5\]), so $({\beta}^{-1})_*$ is multiplication by $d.$ The exact sequence (\[Eq:PVLp\]) of Theorem \[T-PV\] thus becomes $$0 \longrightarrow K_1 \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta}) \big) \longrightarrow {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big] \xrightarrow{\,\, 1 - d \,\,} {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big] \longrightarrow K_0 \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta}) \big) \longrightarrow 0.$$ Therefore $K_1 \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta}) \big) = 0$ and there is an isomorphism from ${{\mathbb{Z}}}\big[ \tfrac{1}{d} \big] / (d - 1) {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big]$ to $K_0 \big( F^p_{\mathrm{r}} ({{\mathbb{Z}}}, B, {\beta}) \big)$ which sends $1$ to $[f_0].$ Now apply Corollary \[C-KThyA\] and use the fact that the map ${{\mathbb{Z}}}\to {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big]$ induces an isomorphism $${{\mathbb{Z}}}/ (d - 1) {{\mathbb{Z}}}\to {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big] / (d - 1) {{\mathbb{Z}}}\big[ \tfrac{1}{d} \big].$$ This completes the proof. Some open problems {#Sec_Problems} ================== There are many open problems suggested by the general theory. We single out just a few. 
\[Q-FullVsRed\] Let $p \in [1, {\infty}) {\setminus}\{ 2 \},$ let $(G, A, {\alpha})$ be a nondegenerately ${\sigma}$-finitely representable isometric $G$-$L^p$ operator algebra. (See Definition \[D:Action\] and Definition \[D-LpRep\].) Is the map ${\kappa}_{\mathrm{r}} \colon F^p (G, A, {\alpha}) \to F^p_{\mathrm{r}} (G, A, {\alpha})$ of [Lemma \[L-CompOfCrPrd\]]{} necessarily surjective? If $G$ is amenable, is this map necessarily injective? Surjective? Isometric? (In any of these questions, does it help to assume that $G$ is discrete, $G = {{\mathbb{Z}}},$ or $A = {{\mathbb{C}}}$?) If $G$ is finite, does it follow that ${\kappa}_{\mathrm{r}}$ is isometric? (In Remark \[R-FGpCP\], we showed that ${\kappa}_{\mathrm{r}}$ is bijective, but not that it is isometric.) Positive results may well hold only in fairly special circumstances. \[Q-SimplMinHme\] Let $X$ be a [compact metric space]{}, and let $h \colon X \to X$ be a [minimal homeomorphism]{}. Define ${\alpha}\in {{\mathrm{Aut}}}(C (X))$ by ${\alpha}(f) = f \circ h^{-1}$ for $f \in C (X).$ As in Notation \[N-TransfGpLpAlg\], abbreviate $F^p ({{\mathbb{Z}}}, \, C (X), \, {\alpha})$ to $F^p ({{\mathbb{Z}}}, X, h)$ and $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, \, C (X), \, {\alpha})$ to $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h).$ Is $F^p ({{\mathbb{Z}}}, X, h)$ simple? (The algebra $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ is simple by Theorem \[T-FreeMinSimp\], and we do not know whether it is different from $F^p ({{\mathbb{Z}}}, X, h).$) Can there ever be a nonzero [continuous]{}  [homomorphism]{} from $F^{p_1} ({{\mathbb{Z}}}, X_1, h_1)$ to $F^{p_2} ({{\mathbb{Z}}}, X_2, h_2)$ or to $F^{p_2}_{\mathrm{r}} ({{\mathbb{Z}}}, X_2, h_2)$ with $p_1 \neq p_2$ and $h_1$ and $h_2$ both minimal? \[Q-InfoOnMin\] Let $h \colon X \to X,$ $F^p ({{\mathbb{Z}}}, X, h),$ and $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ be as in Question \[Q-SimplMinHme\]. 
What information about $h$ can one recover from the isomorphism class or isometric isomorphism class of $F^p ({{\mathbb{Z}}}, X, h)$ and $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$? \[Q\_3217\_Tsr\] Let $h \colon X \to X,$ $F^p ({{\mathbb{Z}}}, X, h),$ and $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ be as in Question \[Q-SimplMinHme\]. Suppose $X$ is the Cantor set. Does it follow that the invertible elements of $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ are dense? That is, does $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ have stable rank one? This is true for $p = 2,$ by [@Pt2]. If $X = S^1$ and $h$ is an irrational rotation, does it follow that the invertible elements of $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ are dense? This is true for $p = 2,$ by [@Pt1]. In the case $p = 2,$ stable rank one holds much more generally. For $p = 2,$ both the special examples in Question \[Q\_3217\_Tsr\] also have real rank zero. A unital C\*-algebra has real rank zero [if and only if]{} it is an exchange ring, by Theorem 7.2 of [@AGOP], and the definition of an exchange ring (see the beginning of Section 1 of [@AGOP]) makes sense for general unital rings. So it seems reasonable to ask the following: \[Q\_3218\_CPExch\] In the examples of Question \[Q\_3217\_Tsr\], is $F^p_{\mathrm{r}} ({{\mathbb{Z}}}, X, h)$ an exchange ring? \[Q\_3216\_K1Inv0\] Let $A$ be a purely infinite simple unital Banach algebra. Let ${{\mathrm{inv}}}(A)$ denote its invertible group, and let ${{\mathrm{inv}}}_0 (A)$ denote the connected component of ${{\mathrm{inv}}}(A)$ which contains $1.$ Does it follow that the map ${{\mathrm{inv}}}(A) / {{\mathrm{inv}}}_0 (A) \to K_1 (A)$ is an isomorphism? The corresponding result for $K_0$ is true, by an argument in the proof of Corollary 5.15 of [@PhLp2a]. 
\[Q-Takai\] Let $p \in [1, {\infty}).$ Let ${\alpha}\colon G \to {{\mathrm{Aut}}}(A)$ be an isometric action of a second countable locally compact abelian group on a separable nondegenerately representable $L^p$ operator algebra. In Theorem \[T-DualpFull\] and Theorem \[T-DualpRed\], we have constructed dual actions $${\widehat{{\alpha}}} \colon {\widehat{G}} \to {{\mathrm{Aut}}}\big( F^p (G, A, {\alpha}) \big) {\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,}{\widehat{{\alpha}}} \colon {\widehat{G}} \to {{\mathrm{Aut}}}\big( F^p_{\mathrm{r}} (G, A, {\alpha}) \big).$$ Is there an analog of Takai duality [@Tki] for the crossed products by these actions? [33]{} P. Ara, K. R.  Goodearl, K. C.  O’Meara, and E. Pardo, [*Separative cancellation for projective modules over exchange rings*]{}, Israel J.  Math.  [**105**]{}(1998), 105–137. B. Blackadar, [*K-Theory for Operator Algebras*]{}, 2nd ed., MSRI Publication Series [**5**]{}, Cambridge University Press, Cambridge, New York, Melbourne, 1998. D. P.  Blecher and C. Le Merdy, [*Operator Algebras and their Modules—an Operator Space Approach*]{}, London Mathematical Society Monographs, New Series, no. 30. Oxford Science Publications. The Clarendon Press, Oxford University Press, Oxford, 2004. D. P.  Blecher, Z.-J.  Ruan, and A. M. Sinclair, [*A characterization of operator algebras*]{}, J. Funct.  Anal.  [**89**]{}(1990), 188–201. J. Cuntz, [*Simple C\*-algebras generated by isometries*]{}, Commun.  Math.  Phys.  [**57**]{}(1977), 173–185. J. Cuntz, [*K-theory for certain C\*-algebras*]{}, Ann.  Math.  [**113**]{}(1981), 181–197. K. R.  Davidson, [*C\*-Algebras by Example*]{}, Fields Institute Monographs no. 6, Amer.  Math.  Soc., Providence RI, 1996. A. Defant and K. Floret, [*Tensor Norms and Operator Ideals*]{}, North-Holland Mathematics Studies no. 176, North-Holland Publishing Co., Amsterdam, 1993. S. Dirksen, M. de Jeu, and M. Wortel, [*Crossed products of Banach algebras. I*]{}, Dissertationes Math., to appear.
(arXiv:1104.5151v2 \[math.OA\].) T. Figiel, T. Iwaniec, and A. Pe[ł]{}czyński, [*Computing norms and critical exponents of some operators in $L^p$-spaces*]{}, Studia Math.  [**79**]{}(1984), 227–274. K. K.  Jensen, [*Foundations of an equivariant cohomology theory for Banach algebras. I*]{}, Adv.  Math.  [**117**]{}(1996), 52–146. K. K.  Jensen, [*Foundations of an equivariant cohomology theory for Banach algebras. II*]{}, Adv.  Math.  [**147**]{}(1999), 173–259. H. E.  Lacey, [*The Isometric Theory of Classical Banach Spaces*]{}, Grundlehren der mathematischen Wissenschaften no. 208, Springer-Verlag, Berlin, Heidelberg, New York, 1974. J. Lamperti, [*On the isometries of certain function-spaces*]{}, Pacific J.  Math.  [**8**]{}(1958), 459–466. C. Le Merdy, [*Representation of a quotient of a subalgebra of $B (X)$*]{}, Math.  Proc.  Cambridge Philos.  Soc.  [**119**]{}(1996), 83–90. G. K.  Pedersen, [*C\*-Algebras and their Automorphism Groups*]{}, Academic Press, London, New York, San Francisco, 1979. N. C.  Phillips, [*Equivariant K-Theory and Freeness of Group Actions on C\*-Algebras*]{}, Springer-Verlag Lecture Notes in Math.  no. 1274, Springer-Verlag, Berlin, Heidelberg, New York, London, Paris, Tokyo, 1987. N. C.  Phillips, [*K-theory for Fréchet algebras*]{}, International J.  Math.  [**2**]{}(1991), 77–129. N. C.  Phillips, [*Analogs of Cuntz algebras on $L^p$ spaces*]{}, preprint (arXiv: 1201.4196 \[math.FA\]). N. C.  Phillips, [*Simplicity of UHF and Cuntz algebras on $L^p$ spaces*]{}, preprint (arXiv: 1309.0115 \[math.FA\]). N. C.  Phillips, [*Isomorphism, nonisomorphism, and amenability of $L^p$ UHF algebras*]{}, preprint (arXiv: 1309.3694v2 \[math.FA\]). N. C.  Phillips and L. B.  Schweitzer, [*Representable K-theory of smooth crossed products by ${\mathbb{R}}$ and ${\mathbb{Z}}$*]{}, Trans.  Amer.  Math.  Soc.  [**344**]{}(1994), 173–201. M. Pimsner and D. 
Voiculescu, [*Exact sequences for K-groups and Ext-groups of certain cross-product C\*-algebras*]{}, J. Operator Theory [**4**]{}(1980), 93–118. S. Pooya and S. Hejazian, [*Simplicity of reduced $L^p$ operator crossed products with Powers groups*]{}, in preparation. I. F.  Putnam, [*The invertible elements are dense in the irrational rotation C\*-algebras*]{}, J. reine angew.  Math.  [**410**]{}(1990), 160–166. I. F.  Putnam, [*On the topological stable rank of certain transformation group C\*-algebras*]{}, Ergod.  Th.  Dynam.  Sys.  [**10**]{}(1990), 197–207. J. Rosenberg, [*Amenability of crossed products of C\*-algebras*]{}, Comm.  Math.  Phys.  [**57**]{}(1977), 187–191. V. Runde, [*Lectures on Amenability*]{}, Springer-Verlag Lecture Notes in Math.  no. 1774, Springer-Verlag, Berlin, 2002. L. B.  Schweitzer, [*A short proof that $M_n (A)$ is local if $A$ is local and Fréchet*]{}, International J.  Math.  [**3**]{}(1992), 581–589. L. B.  Schweitzer, [*Spectral invariance of dense subalgebras of operator algebras*]{}, International J.  Math.  [**4**]{}(1993), 289–317. L. B.  Schweitzer, [*Dense $m$-convex Fréchet subalgebras of operator algebra crossed products by Lie groups*]{}, International J.  Math.  [**4**]{}(1993), 601–673. H. Takai, [*On a duality for crossed products of C\*-algebras*]{}, J. Funct.  Anal.  [**19**]{}(1975), 25–39. D. P.  Williams, [*Crossed Products of C\*-Algebras*]{}, Mathematical Surveys and Monographs no. 134, American Mathematical Society, Providence RI, 2007.
--- abstract: 'For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution in persistent erasure channels. We first derive the decoding-delay-dependent expressions of the users’ and overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we design two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. We then extend our study to the limited feedback scenario. Simulation results show that our new algorithms achieve both a lower mean completion time and a lower mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.' author: - bibliography: - 'references.bib' title: Decoding Delay Controlled Reduction of Completion Time in Instantly Decodable Network Coding --- Instantly decodable network coding; minimum completion time; decoding delay. Introduction {#sec:intro} ============ *Network Coding (NC)* gained much attention in the past decade after its first introduction in the seminal paper [@850663].
In the last lustrum, an important subclass of network coding, namely Instantly Decodable Network Coding (IDNC), has been an intensive subject of research [@ref2; @6655395; @letterarxiv; @ref3; @6766433; @ref4; @xiao1; @ref5; @6725590; @ref6; @6120247; @ref8; @xiao2; @6620795; @ref17; @ref18; @arg2; @refsameh; @5753573; @refahmed; @refjournal] thanks to its several benefits, such as the use of simple binary XOR to encode/decode packets, no buffer requirement, and fast progressive decoding of packets, which is much more favorable in many applications (e.g., roadside-to-vehicle safety messages, satellite networks, and IPTV) than the long buffering time needed in other NC approaches before decoding. For as long as research on IDNC has existed, two main metrics have been considered in the literature as measures of its quality, namely the *completion time* [@ref4] and the *decoding delay* [@ref2]. The former measures how fast the sender can complete the delivery/recovery of requested packets whereas the latter measures how far the sender is from being able to serve all the unsatisfied receivers in each and every transmission. For quite some time, these two metrics were considered for optimization separately in many works. Though both proved to be NP-hard parameters to minimize, many heuristics have been developed to solve them in many scenarios [@ref4; @ref2; @ref3; @ref5; @refjournal; @refsameh; @refahmed], but again separately. In fact, it can be easily inferred from [@ref4] and [@ref2] that the policies derived so far to optimize one usually degrade the other. It was not until very recently that a work aimed to derive a policy that can balance these two metrics and achieve an intermediate performance for both of them [@nada].
Nonetheless, there does not exist, to the best of our knowledge, any work that aims to explore how these two metrics can be controlled together in order to achieve an even better performance than the currently best known solutions. For instance, every time an unsatisfied user receives a coded packet that is not targeting it, its decoding delay increases and so does its individual completion time. Although this fact was noted for erasure-less transmissions in [@nada], it was used to strike a balance in performance between both metrics and not to investigate whether a smart control of such decoding delay effects would further reduce the overall completion time compared to its current best achievable performance. In wireless networks, packet loss occurs due to many phenomena related to the mobility and the propagation environment, and these losses are seen as packet erasures at higher communication layers [@1208721]. This erasure nature of the links affects the delivery of meaningful data and thus affects the ability of users to synchronously decode the information flow. As a consequence, a better use of the channel and network does not translate into an effectively better throughput at higher communication layers [@1208721]. Considerable research has been dedicated to understanding the different delay aspects in NC. In our previous work [@confarxiv], we assumed a prompt and perfect reception of the feedback. This assumption is not realistic in practice because of the impairments in the feedback channel. In this paper, we extend the work in [@confarxiv] by studying the completion time reduction problem of G-IDNC in the persistent erasure channel (PEC) model on both the forward and backward channels, and in the presence of lossy and intermittent feedback. In particular, we aim to design a new completion time reduction algorithm through decoding delay control in persistent erasure channels.
We first derive more general expressions of the individual and overall completion times over erasure channels as a function of the users’ decoding delays. Since finding the optimal schedule of coded packets to minimize the overall completion time is NP-hard [@arg1], we design two greedy heuristics that aim to minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission. In the first heuristic, we show that this process can be done by partitioning the *IDNC graph* into layers in descending order of user completion time criticality before each transmission. The coding combination for this transmission is then designed by going through these layers sequentially, in descending order, and selecting the combination that minimizes the probability of any decoding delay increments within each layer. This is done while maintaining the instant decodability constraint of the overall coding combination for the targeted users in the more critical layer(s). In the second heuristic, we use a binary optimization algorithm with a multi-layer objective function. We then extend our study to the limited feedback environment. Finally, we compare through simulations the performance of our designed algorithms to the best known completion time and decoding delay reduction algorithms. The rest of this paper is organized as follows: In [Section \[sec:sys\]]{}, we present our system model, and in [Section \[sec:ch\]]{} we present the channel and feedback model. The problem formulation using the decoding delay is presented in [Section \[sec:formulation\]]{}. An analytic formulation of the suboptimal solution at each transmission is provided in [Section \[sec:comp\]]{}. In [Section \[sec:alg\]]{}, algorithms to solve the former problem are presented. [Section \[sec:ext\]]{} presents the extension of the study to the limited feedback scenario.
Before concluding in [Section \[sec:conclusion\]]{}, simulation results are illustrated in [Section \[sec:sim\]]{}. System Model and Parameters {#sec:sys} =========================== The model we consider in this paper consists of a wireless sender that is required to deliver a frame (denoted by $\mathcal{N}$) of $N$ source packets to a set (denoted by $\mathcal{M}$) of $M$ users. Each user is interested in receiving all $N$ packets of $\mathcal{N}$. In an *initial phase*, the sender transmits the $N$ packets of the frame uncoded. Each user listens to all transmitted packets and feeds back to the sender an acknowledgement for each successfully received packet. After the *initial phase*, two sets of packets are attributed to each user $i$ at the sender: - The *Has* set (denoted by $\mathcal{H}_i(t)$) is defined as the set of packets successfully received by user $i$. - The *Wants* set (denoted by $\mathcal{W}_i(t)$) is defined as the set of packets that are lost by user $i$. In other words, we have $\mathcal{W}_i = \mathcal{N} \setminus \mathcal{H}_i$. The BS saves the information obtained after the transmission at time $(t-1)$ in a *feedback matrix* (FM) $\mathbf{F}(t) = [f_{ij}(t)],~ \forall~ i \in \mathcal{M},~ \forall~j \in \mathcal{N},~ \forall~t > 0$ such that: $$\begin{aligned} f_{ij}(t) = \begin{cases} 0 \hspace{0.9 cm}& \text{if } j \in \mathcal{H}_i(t) \\ 1 \hspace{0.9 cm}& \text{if } j \in \mathcal{W}_i(t). \end{cases}\end{aligned}$$ For ease of notation, we will assume that the time index $t$ denotes the transmission number within the recovery phase and thus $t=0$ refers to its beginning. Therefore, the sets $\mathcal{H}_i(0), \mathcal{W}_i(0)$ and $\mathcal{U}_i(0)$ refer to the sets at the beginning of the recovery phase (i.e. the sets obtained after the initial transmissions). After this initial transmission, the recovery phase starts at time $t=1$. In this phase, the BS uses binary XOR to encode the source packets to be sent.
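As a concrete illustration, the feedback matrix above can be built directly from the users' Has sets. The following is a minimal Python sketch under the stated model; the function and variable names (`build_feedback_matrix`, `has_sets`) are ours, not from the paper.

```python
# Minimal sketch of the feedback matrix F(t) defined above, assuming
# each user's Has set is stored as a Python set of packet indices.
# Names here are illustrative, not taken from the paper.

def build_feedback_matrix(has_sets, n_packets):
    """Return F with f_ij = 0 if packet j is in H_i, and 1 if j is in W_i."""
    return [[0 if j in has_i else 1 for j in range(n_packets)]
            for has_i in has_sets]

# Example: M = 3 users, a frame of N = 4 packets; W_i = N \ H_i.
has_sets = [{0, 1, 2}, {1, 3}, {0, 2, 3}]
F = build_feedback_matrix(has_sets, 4)   # F[0] == [0, 0, 0, 1]
```

Row $i$ of $F$ is exactly the indicator vector of $\mathcal{W}_i$, which is all the BS needs to choose a coding combination.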
The packet combination is chosen using the information available in the FM and the expected erasure patterns of the links. In this phase, a transmitted coded packet can be one of the following three options for each user $i$: - *Non-innovative:* A packet is non-innovative for user $i$ if it does not bring new useful information. In other words, all the source packets encoded in it were successfully received and decoded previously. - *Instantly Decodable:* A packet is instantly decodable for user $i$ if it contains exactly one source packet that the user does not have so far. In other words, it contains *only one packet* from $\mathcal{W}_i$. - *Non-Instantly Decodable:* A packet is non-instantly decodable for user $i$ if it contains more than one source packet that the user does not have so far. In other words, it contains at least two packets from $\mathcal{W}_i$. After the *initial phase*, the *recovery phase* begins. In this phase, the sender exploits the diversity in received packets at the different users to transmit network coded combinations of the source packets. After each transmission, users update the sender whenever they receive the coded packet and decode missing source packets from it. This process is repeated until all users complete the reception of all frame packets. We define, like in [@letterarxiv], the users targeted by a coded packet (or a transmission) as the users for whom the BS intended the packet combination when encoding it. Given a schedule $S$ of coded packets transmitted by the sender, we define the individual completion time, overall completion time and the decoding delay, like in [@refsameh; @ref2; @ref5; @ref6; @nada], as follows: The individual completion time $\mathcal{S}_i(S)$ of user $i$ is the number of recovery transmissions required until this user obtains all its requested packets.
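The three packet types above depend only on how many of the XOR-ed source packets lie in a user's Wants set. The following hedged Python sketch models a coded packet as the set of source-packet indices combined in it; the function name `classify` and the example values are illustrative assumptions, not notation from the paper.

```python
# Sketch of the packet classification above: a coded packet is the set
# of source-packet indices XORed into it, and its type for user i is
# decided by |packet ∩ W_i|. Names are our own.

def classify(coded_packet, wants):
    missing = len(coded_packet & wants)
    if missing == 0:
        return "non-innovative"          # user already has every piece
    if missing == 1:
        return "instantly decodable"     # XOR with known packets recovers it
    return "non-instantly decodable"     # at least two unknowns remain

wants = {2, 5}                           # W_i for some user i
kind_a = classify({0, 1}, wants)         # "non-innovative"
kind_b = classify({1, 2}, wants)         # "instantly decodable"
kind_c = classify({2, 5}, wants)         # "non-instantly decodable"
```

The instant-decodability check is what the coding heuristics of this paper must preserve for every targeted user.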
The overall completion time $\mathcal{S}(S)$ of a frame is the number of recovery transmissions required until all users obtain all their requested packets. It is easy to infer that $\mathcal{S}(S) = \max_{i\in\mathcal{M}} \mathcal{S}_i(S)$. At any recovery phase transmission at time $t$, a user $i$ with a non-empty Wants set experiences a one unit increase of decoding delay if it successfully receives a packet that is either non-innovative or non-instantly decodable. Consequently, the decoding delay $D_i(S)$ experienced by user $i$ given a schedule $S$ is the number of coded packets received by $i$ before its individual completion that are non-innovative or non-instantly decodable. Forward Channel Model {#sec:ch} ===================== Following the same model as [@ref3; @ref17; @refjournal], the forward channel (from the BS to the users) is modeled as a Gilbert-Elliott channel (GEC) [@4607216]. The good and bad states are denoted by G and B, respectively. In the original version of the Gilbert-Elliott channel [@BLTJ:BLTJ955], the good state was assumed to be error free and the bad state always in error (i.e. $p=0$ and $q=1$ in [Figure \[fig:GEC\]]{}). This zero error probability in the good state allowed a closed-form computation of the capacity [@45284]. This model was then extended to incorporate a nonzero error probability in the good state. This formulation allows the modeling of multiple fading scenarios [@4607216]. In this paper, this extended channel form will be used (see [Figure \[fig:GEC\]]{}). ![The two state Gilbert-Elliott channel.[]{data-label="fig:GEC"}](./channel.pdf "fig:"){width="1\linewidth"}\ Due to the Markovian nature of the erasures, the channel has memory that depends on the transition probabilities between the good and bad states. Let $C_i^f$ denote the state of the forward channel of user $i$.
For each user $i$, the transition probabilities, from time $t-1$ to $t$, are: $$\begin{aligned} &\mathds{P} (C_i^f(t) = G | C_i^f(t-1) = B) = g_i^f \nonumber \\ &\mathds{P} (C_i^f(t) = B | C_i^f(t-1) = B) = 1-g_i^f \nonumber \\ &\mathds{P} (C_i^f(t) = B | C_i^f(t-1) = G) = b_i^f \nonumber \\ &\mathds{P} (C_i^f(t) = G | C_i^f(t-1) = G) = 1-b_i^f,\end{aligned}$$ where the superscript $f$ indicates the transmission (forward) channel. The use of a superscript ($f$ herein) will be motivated in [Section \[sec:ext\]]{} when studying the limited feedback scenario. The probabilities, for user $i$, for a packet to be erased in the good and bad state are respectively $p_i^f$ and $q_i^f$. Since this Markov chain is time-homogeneous (the process can be described by a single, time-independent matrix), we express the probabilities to be in the Good or Bad state (steady-state probabilities) for both the transmission and feedback channel as: $$\begin{aligned} &\mathcal{P}_{G_i^f} = \mathds{P} (C_i^f = G) = \cfrac{g_i^f}{g_i^f+b_i^f} \nonumber \\ &\mathcal{P}_{B_i^f} = \mathds{P} (C_i^f = B) = \cfrac{b_i^f}{g_i^f+b_i^f}.\end{aligned}$$ We define the memory factor as an indicator of the correlation between the states at different times. A high value of this factor means that the channel is likely to stay in the same state during the following transmissions. In contrast, a small value indicates that the state of the channel changes in an independent manner. For each user $i$, the memory factor of the forward channel $\mu_i$ can be formulated as: $$\begin{aligned} \mu_i = 1 - g_i^f - b_i^f. \end{aligned}$$ The persistent erasure channel is more likely to stay in the same state during the next transmission than to switch states. Therefore, we have $0 \leq \mu_i \leq 1,~ \forall~i \in \mathcal{M}$. Let $\mu = \cfrac{\sum\limits_{i \in \mathcal{M}} \mu_i }{ M}$ be the average memory for the forward link.
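To make the channel model concrete, the following Python sketch draws erasure patterns from the two-state Gilbert-Elliott chain above, with $g$ the B-to-G transition probability, $b$ the G-to-B transition probability, and erasure probabilities $p$ (good state) and $q$ (bad state); the function names are our own assumptions, not from the paper.

```python
import random

# Monte-Carlo sketch of one user's Gilbert-Elliott forward channel.
# g = P(B -> G), b = P(G -> B); p and q are the erasure probabilities
# in the good and bad states. All names here are illustrative.

def memory_factor(g, b):
    """mu_i = 1 - g_i^f - b_i^f; persistence means 0 <= mu_i <= 1."""
    return 1.0 - g - b

def simulate_erasures(n, g, b, p, q, seed=0):
    """Return n erasure indicators (True = packet erased) for one user."""
    rng = random.Random(seed)
    # Start from the steady state: P(G) = g / (g + b).
    state = "G" if rng.random() < g / (g + b) else "B"
    out = []
    for _ in range(n):
        out.append(rng.random() < (p if state == "G" else q))
        if state == "G":
            state = "B" if rng.random() < b else "G"
        else:
            state = "G" if rng.random() < g else "B"
    return out
```

For instance, with $g = b = 0.1$ the memory factor is $0.8$, and over a long run the empirical erasure rate approaches the steady-state mixture $(g p + b q)/(g + b)$ of the two per-state erasure probabilities.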
Memoryless channels are a special case of this persistent erasure channel, obtained by setting the memory factor of each user to $0$, i.e. by setting $\mu=0$. We assume that each user is seen through a channel that is independent of those of all the other users, and thus no correlation exists between the different channels. The state transition probabilities of all the users are known by the BS and used when choosing the packets to be encoded. Problem Formulation using Decoding-Delay-Dependent Expressions {#sec:formulation} ============================================================== Let $d_i(\kappa(t))$ be the decoding delay increase for user $i$, at time $t$, after the transmission $\kappa(t)$. Define $D_i(t)$ as the total decoding delay experienced by user $i$ until the transmission at time $t$ (i.e. $D_i(t) = \sum\limits_{n=1}^t d_i(\kappa(n))$). Let $\mathcal{C}_i(S)$ denote the completion time for user $i$ given a certain schedule $S$ of coded packets. In other words, $\mathcal{C}_i$ is the total number of transmissions required, since the beginning of the *recovery phase*, for user $i$ to successfully receive all its missing packets. The completion time for the whole session, denoted by $\mathcal{C}$, is the time required, from the beginning of the *recovery phase*, to serve all users. Therefore, $\mathcal{C}$ is governed by the largest individual completion time. In other words, we have: $$\begin{aligned} \mathcal{C} = \underset{i \in \mathcal{M}}{\text{max }} \mathcal{C}_i.\end{aligned}$$ The following theorem introduces a decoding-delay-dependent expression for the individual completion time of user $i$ and the overall completion time, given the transmission of schedule $S$ from the sender over erasure channels.
For a relatively large number of packets $N$, and a schedule $S$ of packets transmitted by the sender until overall completion occurs for all users, the individual completion time of user $i$ can be approximated by: $$\label{eq:ICT} \mathcal{C}_i(S) \approx \frac{\left|\mathcal{W}_i(0)\right| + D_i(S)-\alpha_i}{1-\alpha_i}$$ where $$\begin{aligned} \alpha_i = \cfrac{g_i^fp_i^f+q_i^fb_i^f}{g_i^f+b_i^f}\end{aligned}$$ Consequently, the overall completion time for the same schedule $S$ can be expressed as: $$\mathcal{C}(S) \approx \max_{i\in\mathcal{M}}\left\{\frac{\left|\mathcal{W}_i(0)\right| + D_i(S)-\alpha_i}{1-\alpha_i}\right\}$$ The proof can be found in Appendix B. In the rest of the paper, we will use the approximation with equality, as it indeed holds for large $N$. We can thus formulate the minimum completion time problem as finding the schedule of coded packets $S^*$ such that: $$\begin{aligned} \label{eq:opt} S^* &= \arg\min_{S\in\mathcal{S}} \left\{\mathcal{C}(S)\right\} \nonumber \\ &= \arg\min_{S\in\mathcal{S}}\left\{ \max_{i\in\mathcal{M}}\left\{ \frac{|\mathcal{W}_i(0)| + D_i(S)-\alpha_i}{1-\alpha_i} \right\}\right\}\;,\end{aligned}$$ where $\mathcal{S}$ is the set of all possible transmission schedules of coded packets. Clearly, finding this optimal schedule at time $t=0$ through the above optimization formulation is very difficult. This is due to the dynamic nature of erasures and the dependence of the optimal schedule on their effect, which makes the above equations anti-causal (i.e. the current result depends on input from the future). Moreover, we know from the literature that optimizing the completion time over the whole *recovery phase* is intractable [@refsameh] even in the erasure-free scenario [@nada]. On the other hand, this formulation shows that the only terms affected by the schedule in the individual and overall completion time expressions are the decoding delay terms of the different users.
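The approximation in [(\[eq:ICT\])]{} is straightforward to evaluate numerically. The sketch below is illustrative (function and argument names are ours); $\alpha_i$ is computed from the forward channel parameters exactly as stated above.

```python
def alpha(g, b, p, q):
    """Steady-state erasure probability alpha_i = (g*p + q*b) / (g + b)."""
    return (g * p + q * b) / (g + b)

def individual_completion_time(wants0, delay, a):
    """Approximate C_i(S) = (|W_i(0)| + D_i(S) - alpha_i) / (1 - alpha_i)."""
    return (wants0 + delay - a) / (1.0 - a)

def overall_completion_time(users):
    """C(S) = max_i C_i(S); users is a list of (|W_i(0)|, D_i(S), alpha_i)."""
    return max(individual_completion_time(w, d, a) for w, d, a in users)
```

Note how the schedule enters only through the decoding delay term `delay`, which is the observation that motivates the heuristic of the next section.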
Consequently, controlling these decoding delays in a smart way through the selection of the coded packet schedule can indeed reduce the completion time significantly. We will thus design a new heuristic algorithm in the next section that takes this fact into consideration. In the rest of this paper, we will refer to the packet combination that best reduces the completion time at each time step as the optimal solution. Completion Time Reduction {#sec:comp} ========================= Critical Criterion ------------------ From [(\[eq:opt\])]{}, we can see that the optimal schedule is the one that achieves the minimum overall growth of the individual completion time expressions in [(\[eq:ICT\])]{} $\forall~i\in\mathcal{M}$. Since we know that finding such a schedule for the entire recovery phase, prior to its start, is intractable, we design our heuristic algorithm such that, at each transmission time $t>0$, it minimizes the probability of increase of the maximum of these expressions over all users, compared to their state before this transmission. To formally express this criterion, let us first define $\mathcal{C}_i(t)$ as: $$\label{eq:Ct} \mathcal{C}_i(t) = \frac{|\mathcal{W}_i(0)| + D_i(t) - \alpha_i}{1-\alpha_i}$$ In other words, $\mathcal{C}_i(t)$ is the anticipated individual completion time of user $i$ if it experiences no further decoding delay increments starting from time $t$. Thus, the philosophy of our proposed heuristic algorithm is to transmit the coded packet $\kappa(t)$ at time $t$ such that: $$\begin{aligned} \label{eq:heuristic-criterion} \kappa^{*}&(t) \nonumber \\ &= \arg\min_{\kappa(t)\in\mathcal{G}(t)} \left\{\mathds{P}\left(\max_{i\in\mathcal{M}} \left\{\mathcal{C}_i(t)\right\} > \max_{i\in\mathcal{M}} \left\{\mathcal{C}_i(t-1)\right\}\right) \right\}\end{aligned}$$ We will refer to [(\[eq:heuristic-criterion\])]{} as the critical criterion.
Let $\mathcal{P}(t)$ be the set of users that can potentially increase $\max_{i\in\mathcal{M}} \left\{\mathcal{C}_i(t)\right\}$ at time $t$ compared to $\max_{i\in\mathcal{M}} \left\{\mathcal{C}_i(t-1)\right\}$ if they are not targeted by $\kappa(t)$. The set can be mathematically defined as follows: $$\begin{aligned} \label{eq:critical-set} \mathcal{P}(t) = \Bigg\{i\in\mathcal{M} \;\Biggm|\; &\frac{|\mathcal{W}_i(0)| + \left(D_i(t-1)+1\right) - \alpha_i}{1-\alpha_i} \nonumber\\ & \qquad > \frac{|\mathcal{W}_j(0)| + D_j(t-1)-\alpha_j}{1-\alpha_j}\Bigg\}\;,\end{aligned}$$ where $j = \arg\max_{k \in \mathcal{M}} \left\{ \frac{|\mathcal{W}_k(0)| + D_k(t-1) - \alpha_k}{1-\alpha_k}\right\}$. We will refer to this set as the “highly critical set”. Optimization Problem -------------------- Let $e_i(t)$ be the probability that user $i$ loses the transmission at time $t$. This probability can be expressed as: $$\begin{aligned} e_i(t) = \begin{cases} p_i^f \hspace{2cm} \text{ if } C_i^f(t)=G\\ q_i^f \hspace{2cm} \text{ otherwise } \end{cases}\end{aligned}$$ Define $\tau(\kappa(t))$ as the set of users targeted by the transmission $\kappa(t)$. The following theorem defines a maximum weight clique algorithm that can satisfy the critical criterion.
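The set in [(\[eq:critical-set\])]{} can be computed directly from each user's current state. A minimal sketch follows; the data layout and names are ours, not from the paper.

```python
def anticipated_ct(wants0, delay, a):
    """C_i(t): anticipated completion time with no further delay increments."""
    return (wants0 + delay - a) / (1.0 - a)

def highly_critical_set(users):
    """users: dict i -> (|W_i(0)|, D_i(t-1), alpha_i).
    Return the users whose one-unit decoding delay increase would exceed
    the current maximum anticipated completion time over all users."""
    current_max = max(anticipated_ct(w, d, a) for w, d, a in users.values())
    return {i for i, (w, d, a) in users.items()
            if anticipated_ct(w, d + 1, a) > current_max}
```

Note that the maximizing user $j$ always belongs to the set, since one extra delay unit strictly increases its own anticipated completion time.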
\[th:critical-criterion\] The critical criterion in [(\[eq:heuristic-criterion\])]{} can be achieved by selecting $\kappa^*(t)$ according to the following optimization problem: $$\begin{aligned} \label{eq:criterion-optimization} &\kappa^{*}(t) = \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}\left( \cfrac{1}{ e_i(t)}\right) \right\}.\end{aligned}$$ In other words, the transmission $\kappa(t)$ that can satisfy the critical criterion can be selected by solving a maximum weight clique problem in which the weight of each vertex $v_{ij}$ in $\mathcal{P}(t)$ can be expressed as: $$\begin{aligned} \label{eq:weights} w_{ij}^* = \text{log}\left( \cfrac{1}{ e_i(t)}\right).\end{aligned}$$ The proof can be found in [@confarxiv]. Proposed Heuristics Algorithms {#sec:alg} ============================== Maximum Weight Clique Solution ------------------------------ ### Instantly Decodable Network Coding Graph To enumerate the possible packet combinations the base station can make, we use the G-IDNC graph representation. To construct this G-IDNC graph $\mathcal{G} (\mathcal{V},\mathcal{E})$, we first create a vertex $v_{ij} \in \mathcal{V}$ for each packet $j \notin \mathcal{H}_i,~ \forall~ i \in \mathcal{M}$. We then connect with an edge any two vertices $v_{ij}$ and $v_{kl}$ in $\mathcal{V}$ if one of the two following conditions is true: - $j=l \Rightarrow$ Packet $j$ is needed by both clients $i$ and $k$. - $j \in \mathcal{H}_k$ and $l \in \mathcal{H}_i \Rightarrow$ The packet combination $j\oplus l$ is instantly decodable for both clients $i$ and $k$. Given this graph formulation and according to the analysis done in [@arg1], the set of all feasible packet combinations in G-IDNC is represented by the maximal cliques in $\mathcal{G}$. To generate a packet combination, binary XOR is applied to all the packets identified by the vertices of a selected maximal clique $\kappa$ in $\mathcal{G}$.
The clients targeted by this transmission $\kappa$ are those identified by the vertices of the selected maximal clique. ### Multi-layer Greedy Algorithm Despite the importance of satisfying the critical criterion in order to minimize the probability of increase of the maximum individual completion time, it may not fully exploit the power of IDNC. In other words, once a clique is chosen according to [(\[eq:criterion-optimization\])]{} from among the users in the highly critical set $\mathcal{P}(t)$, there may exist vertices belonging to other users that can form an even bigger clique. Thus, adding these vertices to the clique and serving their users will benefit them without affecting the IDNC constraint for the users belonging to $\mathcal{P}(t)$. To schedule such vertices and their users, let $\mathcal{G}_1,\mathcal{G}_2,...\mathcal{G}_h$ (with $h \in \mathds{N}$) be the sets of vertices of $\mathcal{G}(t)$, such that $v_{ik} \in \mathcal{G}_n$ if the following conditions are true: - $\mathcal{C}_i(t-1)+ \cfrac{n}{1-\alpha_i} > \mathcal{C}_j(t-1)$. - $\mathcal{C}_i(t-1)+ \cfrac{n-1}{1-\alpha_i} \leq \mathcal{C}_j(t-1)$. where $j = \underset{ i \in \mathcal{M}}{\text{argmax }} \left\{ \mathcal{C}_i(t-1)\right\}$. Consequently, the IDNC graph at time $t$ is partitioned into $h$ layers in descending order of criticality. By examining the above conditions, the vertices of the users of $\mathcal{P}(t)$ are all in layer $\mathcal{G}_1$. Moreover, the $n$-th layer of the graph includes the vertices of the users who may eventually increase $\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t+n)\right\}$ if they experience $n$ decoding delay increments in the subsequent $n$ transmissions. Consequently, a user with vertices belonging to $\mathcal{G}_i$ is more critical than another with vertices belonging to $\mathcal{G}_j$, $j>i$, as the former has a higher chance of increasing the overall completion time.
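The G-IDNC construction rules described above can be sketched in a few lines. This is a minimal sketch: the Has/Wants containers are plain Python sets, the names are ours, and we additionally skip vertex pairs belonging to the same user, since a clique should target each user at most once (an assumption we make explicit here).

```python
from itertools import combinations

def build_gidnc_graph(has, wants):
    """has, wants: dicts mapping user -> set of packet ids.
    Vertices are (user, packet) pairs; an edge joins v_ij and v_kl when
    j == l (same needed packet) or j XOR l is decodable by both users."""
    vertices = [(i, j) for i in wants for j in wants[i]]
    edges = set()
    for (i, j), (k, l) in combinations(vertices, 2):
        if i == k:
            continue  # vertices of the same user are never made adjacent
        same_packet = (j == l)
        decodable = (j in has[k]) and (l in has[i])
        if same_packet or decodable:
            edges.add(frozenset([(i, j), (k, l)]))
    return vertices, edges
```

Any clique of this graph then yields a feasible coded packet by XORing the packets of its vertices.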
In order to guarantee the satisfaction of the critical criterion, our proposed algorithm first finds the maximum weight clique $\kappa^*$ in layer $\mathcal{G}_1$ as mandated by Theorem \[th:critical-criterion\]. We then construct $\mathcal{G}_2(\kappa^*)$ including each vertex in $\mathcal{G}_2$ that is adjacent to all vertices in $\kappa^*$ (i.e. forms a bigger clique with $\kappa^*$). After assigning the same weights defined in [(\[eq:weights\])]{}, the maximal weight clique in $\mathcal{G}_2(\kappa^*)$ is found and added to $\kappa^*$. This process is repeated for each layer $\mathcal{G}_i, i\leq h$ of the graph to find the selected maximal weight clique $\kappa^*\in\mathcal{G}(t)$ to be transmitted at time $t$. Since finding the maximum weight clique in the G-IDNC graph is NP-hard [@arg1], we will use the same simple vertex search approach with modified weights introduced in [@arg2], after tailoring the weights to the ones defined in [(\[eq:weights\])]{}. Let $w_{ij}$ be the modified weights, which can be expressed as: $$\begin{aligned} w_{ij} = (w_{ij}^* + 1) \times \sum_{v_{kl} \in \nu(v_{ij})} w_{kl}^*\;, \label{omegamax}\end{aligned}$$ where $\nu(v_{ij})$ is the set of adjacent vertices of $v_{ij}$ within its layer. The resulting procedure can be summarized as follows: - Require: $\mathbf{F}$, $p_i$ and $\mathcal{C}_{i}(t-1),~\forall~ i\in\mathcal{M}$. - Initialize $\kappa^* =\varnothing$ and construct $\mathcal{G}_1\left(\mathcal{V}_1,\mathcal{E}_1\right), \mathcal{G}_2\left(\mathcal{V}_2,\mathcal{E}_2\right), ..., \mathcal{G}_h\left(\mathcal{V}_h,\mathcal{E}_h\right)$. - For each layer $l = 1, \dots, h$: set $\mathcal{G} \leftarrow \mathcal{G}_l(\kappa^*)$ and, while $\mathcal{G}$ is non-empty, repeat the following: compute $w_{ij}^*$ and $w_{ij}$ using [(\[eq:weights\])]{} and [(\[omegamax\])]{}; select $v^* =\underset{v_{ij}\in\mathcal{G}}{\text{argmax}} \left\{w_{ij}\right\}$; set $\kappa^* \leftarrow \kappa^* \cup v^*$ and $\mathcal{G} \leftarrow \mathcal{G}(\kappa^*)$.
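Within a single layer, the greedy vertex search can be sketched in a few lines. This is a minimal sketch, assuming the layer's graph is given as an adjacency map and the base weights $w^*$ are precomputed; the names are ours.

```python
def greedy_weight_clique(vertices, adj, base_w):
    """vertices: list of vertex ids; adj: dict v -> set of adjacent vertices;
    base_w: dict v -> w*_v (e.g. log(1/e_i)).  Repeatedly pick the vertex
    with the largest modified weight w_v = (w*_v + 1) * sum of the base
    weights of its still-candidate neighbours, then restrict the candidate
    set to that vertex's neighbourhood (so the result is always a clique)."""
    clique, cand = [], set(vertices)
    while cand:
        def modified(v):
            return (base_w[v] + 1.0) * sum(base_w[u] for u in adj[v] & cand)
        v_star = max(cand, key=modified)
        clique.append(v_star)
        cand &= adj[v_star]
    return clique
```

The modified weight favours heavy vertices that also sit in heavy neighbourhoods, which is the intuition behind [(\[omegamax\])]{}.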
Binary Particle Swarm Optimization Solution ------------------------------------------- Particle Swarm Optimization (PSO) is a population-based search algorithm inspired by the simulation of the social behavior of animals. It was introduced in [@488968; @494215; @Kennedy] by Eberhart and Kennedy for continuous functions. Two vectors are associated with each individual in the multidimensional space: the position vector and the velocity vector. The velocity vector determines the direction in which the position vector evolves. In [@637339], the authors extended their algorithm to binary optimization and, in [@4433821], Khanesar proposed a novel binary PSO. This algorithm is based on a new definition of the velocity vector of binary PSO. In order to outline this algorithm, we first introduce the following quantities: let $X_i$ be the position of particle $i$, $P_{ibest}$ the best position particle $i$ has visited, and $P_{gbest}$ the best position visited by any particle. Let $L$ be the total number of particles and $T$ the number of iterations of the BPSO algorithm. Let $V_{ij}^1$ and $V_{ij}^0$ be the probabilities that bit $j$ of particle $i$ changes from $0$ to $1$ and from $1$ to $0$, respectively.
These probabilities are computed using the following expressions: $$\begin{aligned} V_{ij}^1 = wV_{ij}^1+d_{ij,1}^1+d_{ij,2}^1 \\ V_{ij}^0 = wV_{ij}^0+d_{ij,1}^0+d_{ij,2}^0,\end{aligned}$$ where $w$ is a random number in the range $[-1,1]$ chosen at the beginning of the BPSO algorithm and $d_{ij,1}^1$, $d_{ij,2}^1$, $d_{ij,1}^0$, and $d_{ij,2}^0$ are obtained using the rule below: $$\begin{aligned} \text{If } P_{ibest}^j=1 \text{ Then } d_{ij,1}^1=c_1r_1 \text{ and } d_{ij,1}^0=-c_1r_1 \nonumber \\ \text{If } P_{ibest}^j=0 \text{ Then } d_{ij,1}^0=c_1r_1 \text{ and } d_{ij,1}^1=-c_1r_1 \nonumber \\ \text{If } P_{gbest}^j=1 \text{ Then } d_{ij,2}^1=c_2r_2 \text{ and } d_{ij,2}^0=-c_2r_2 \nonumber \\ \text{If } P_{gbest}^j=0 \text{ Then } d_{ij,2}^0=c_2r_2 \text{ and } d_{ij,2}^1=-c_2r_2,\end{aligned}$$ where $c_1$ and $c_2$ are fixed factors determined by the user and $r_1$ and $r_2$ are two random variables in the range $[0,1]$ chosen at each iteration of the BPSO algorithm. The velocity $V_{ij}^c$ of bit $j$ of particle $i$ is defined as: $$\begin{aligned} V_{ij}^c = \begin{cases} V_{ij}^1 & \text{ if } x_{ij} = 0 \\ V_{ij}^0 & \text{ if } x_{ij} = 1 \end{cases}\end{aligned}$$ The normalized velocity is obtained using the sigmoid function, defined as follows: $$\begin{aligned} \text{sig}(x) = \cfrac{1}{1+e^{-x}}\end{aligned}$$ Let $V_{ij}^\prime$ be the normalized velocity (i.e. $V_{ij}^\prime= \text{sig}(V_{ij}^c)$). The position update of bit $j$ of particle $i$ is computed as follows: $$\begin{aligned} x_{ij}(t+1) = \begin{cases} \overline{x}_{ij}(t) & \text{ if } r_{ij}< V_{ij}^\prime \\ x_{ij}(t) & \text{ if } r_{ij} \geq V_{ij}^\prime \end{cases} \end{aligned}$$ where $\overline{x}$ is the binary complement of $x$ and $r_{ij}$ is a random number in the range $[0,1]$ chosen at each iteration and for each bit of the particle. Let $\mathcal{F}$ be the function of interest that we want to minimize. The outline of the BPSO algorithm is the following: 1.
Initialize the swarm: the positions $X_i$ of the particles are randomly initialized within the hyper-cube. 2. Evaluate the performance $\mathcal{F}$ of each particle, using its current position $X_i(t)$. 3. Compare the performance of each individual to its best performance so far: if $\mathcal{F}(X_i(t)) < \mathcal{F}(P_{ibest})$:\ $\mathcal{F}(P_{ibest}) = \mathcal{F}(X_i(t))$\ $P_{ibest} =X_i(t)$ 4. Compare the performance of each particle to the global best particle: if $\mathcal{F}(X_i(t)) < \mathcal{F}(P_{gbest})$:\ $\mathcal{F}(P_{gbest}) = \mathcal{F}(X_i(t))$\ $P_{gbest} =X_i(t)$ 5. Update the velocities of the particle, $V_i^0$ and $V_i^1$. 6. Calculate the velocity of change of the bits, $V_i^c$. 7. Generate the random variables $r_{ij}$ in the range $(0,1)$ and move each particle to its new position. 8. Go to step 2, and repeat $(T-1)$ times. Let $\phi_i(\kappa,t)$ be the weight of user $i$ when sending the packet combination $\kappa$ at time $t$. Following the result of the previous section, $\phi_i(\kappa,t)$ is expressed as: $$\begin{aligned} &\phi_i(\kappa,t) = \\ &\begin{cases} \text{log}\left( \cfrac{1}{ e_i(t)}\right)&\text{if } i \in \tau \cap \mathcal{P}(t)\\ 0 & \text{otherwise } \end{cases} \nonumber\end{aligned}$$ As for the multi-layer solution of the maximum weight clique problem, we want to include users that are not in the critical layer, under the condition that their inclusion does not disturb the instant decodability of the targeted users in the critical layer. This inclusion is done using layer prioritization, i.e. the inclusion of a user in layer $\mathcal{P}_m$ should not disturb the instant decodability of users in layers $\mathcal{P}_n$, $n<m$. To reproduce this concept of prioritization using a single real function, we use the sigmoid function.
The new objective function to maximize is the following: $$\begin{aligned} \phi^\prime(\kappa,t) = \sum_{i \in \mathcal{M}_w } \left[\text{sig}(\tilde{\phi}_i(\kappa,t))+M\left(h-P(i)\right)\right] \end{aligned}$$ where $h$ is the total number of layers, $P(i)$ is the index of the layer of user $i$ and $\tilde{\phi}_i$ is defined as follows: $$\begin{aligned} \tilde{\phi}_i(\kappa,t) = \text{log}\left( \cfrac{1}{ e_i(t)}\right)\end{aligned}$$ The sigmoid function is increasing with values in $[0,1]$, and therefore the utility of each user is shifted according to the index of the layer it belongs to. Since no more than $M$ users can simultaneously be in the same layer, the layer contributions do not overlap, and a single user of one layer yields a larger objective than the sum of the objectives of all users in the layers below it. This function therefore respects the same prioritization as the multi-layer graph. The function of interest to minimize, at each time instant $t$, can be written as $\mathcal{F}(\kappa) = -\phi^\prime(\kappa,t)$. The following theorem gives the convergence of the whole system in our case: For a number of particles equal to the number of packets (i.e. $L=N$) and a sparse initialization vector, the overall system converges. The proof of this Theorem can be found in Appendix C. Extension To The Limited Feedback Scenario {#sec:ext} ========================================== In this section, we extend our previous analysis to the limited feedback scenario. First, we introduce the system model, the backward channel model and the feedback protocol. We then derive the expression of the optimal packet combination to reduce the completion time in such a lossy feedback environment. Finally, we propose modified versions of the previously introduced algorithms to effectively reduce the completion time.
System Model ------------ Due to the feedback losses that can occur in the system, at the end of the *initial phase*, three sets of packets are attributed to each user $i$ at the sender: - The *Has* set (denoted by $\mathcal{H}_i(t)$): the set of packets successfully received by user $i$. - The *Wants* set (denoted by $\mathcal{W}_i(t)$): the set of packets that are lost by user $i$. In other words, we have $\mathcal{W}_i = \mathcal{N} \setminus \mathcal{H}_i$. - The *Uncertain* set (denoted by $\mathcal{U}_i(t)$): the set of packets for which the BS cannot determine whether the packet itself or its feedback was erased. We have $\mathcal{U}_i(t) \subseteq \mathcal{W}_i(t)$. The BS saves the information obtained after the transmission at time $(t-1)$ in a *feedback matrix* (FM) $\mathbf{F}(t) = [f_{ij}(t)],~ \forall~ i \in \mathcal{M},~ \forall~j \in \mathcal{N},~ \forall~t > 0$ such that: $$\begin{aligned} f_{ij}(t) = \begin{cases} 0 \hspace{0.9 cm}& \text{if } j \in \mathcal{H}_i(t) \\ 1 \hspace{0.9 cm}& \text{if } j \in \mathcal{W}_i(t) \setminus \mathcal{U}_i(t) \\ x \hspace{0.9 cm}& \text{if } j \in \mathcal{W}_i(t) \cap \mathcal{U}_i(t). \end{cases}\end{aligned}$$ Backward Channel Model and Feedback Protocol -------------------------------------------- We model the backward channel (from the users to the BS), as for the forward channel, by a Gilbert-Elliott channel. The parameters of the backward channel are defined in the same way as those of the forward channel, with a superscript $b$. For example, $C_i^b$ denotes the state of the backward channel. For each user $i$, the memory factor of the backward channel $\psi_i$ and the average memory can be formulated as: $$\begin{aligned} \psi_i = 1 - g_i^b - b_i^b \nonumber\\ \psi = \cfrac{\sum\limits_{i \in \mathcal{M}} \psi_i }{ M}.\end{aligned}$$ When both the forward and the backward channels have the same transition probabilities (i.e.
$g_i^f = g_i^b$ and $b_i^f = b_i^b, \forall~ i \in \mathcal{M}$), the channel is said to be identically distributed, and when they experience the same channel realization (i.e. $C_i^f(t) = C_i^b(t),~\forall~ t >0$) the channel is said to be reciprocal. Before transmitting the data packet, the BS first transmits $N$ bits representing the source packets combined in the encoded packet (i.e. if a packet is in the combination, the corresponding bit is set to $1$; otherwise it is set to $0$). Each user, after listening to the header, continues to listen only if the packet combination is instantly decodable for it; otherwise, it saves energy by ignoring the rest of the transmission. The users targeted by a transmission are those that can instantly decode a packet from it. Time is divided into frames of equal length $T_f$ time slots. Each of these frames is composed of a downlink sub-frame and an uplink sub-frame, of lengths $T_d$ and $T_u$, respectively. In other words, we have $T_f = T_d+T_u$. The BS transmits the packet combinations during the downlink sub-frame and does not receive any feedback in the meantime. During the uplink sub-frame, the BS listens to the feedback sent by the different users and does not transmit packets. After each downlink sub-frame, the users that received and managed to decode a packet combination during that sub-frame acknowledge its reception by sending feedback during the uplink sub-frame. Define $T_{u_i}$ as the time slot of the uplink sub-frame in which user $i$ is able to send feedback (i.e. $1 \leq T_{u_i} \leq T_u ,~ \forall~ i \in \mathcal{M}$). In other words, user $i$ can transmit an acknowledgement, during frame number $n$, at time $t=nT_f-T_u+T_{u_i}$. Only users targeted during the downlink sub-frame send acknowledgements.
In other words, if a feedback from one of the targeted users is lost, the BS will not get any feedback from this user until the next transmission in which it is targeted. Each feedback sent by a user consists of $N$ bits indicating all previously received/lost packets. As a consequence, after receiving feedback from an arbitrary user, the states of all its packets in the FM become certain (i.e. $\mathcal{U}_i = \varnothing$). The BS uses the feedback to update the feedback matrix. This process is repeated until all users report that they have obtained all the packets they want. In order to accurately estimate the state of the forward/backward channels of each user, we impose, as in [@refjournal], that for each targeted user there is, since the last time a feedback was heard from that user, at least one packet that was attempted only once. This constraint becomes unnecessary for users that still need only a single source packet. An additional $\log_2(T_d)$ bits are sent with the feedback, indicating the decoding delay encountered by the user during the downlink sub-frame. Transmission/Feedback Loss Probabilities at Time $t$ ---------------------------------------------------- In this section, we compute the probabilities of losing the transmission, $e_i(t) \triangleq\mathds{P}(C_i^f(t) = B) ,~ \forall~ i \in \mathcal{M}$, and of losing the feedback, $f_i(t) \triangleq \mathds{P}(C_i^b(t) = B) ,~ \forall~ i \in \mathcal{M}$, at time slot $t$. In order to compute these probabilities, we first introduce the following variables: let $n_i^{(-1)}$ and $n_i^{(0)}$ ($n_i^{(-1)} < n_i^{(0)}$) be the indices of the two most recent frames in which the sender heard a feedback from user $i$. Let $\lambda_{ij}(n)$ be the set of time indices at which packet $j$ was attempted for user $i$ during frame number $n$. Define $j_i^{(0)}$ as the last sent packet among those attempted only once, between frames $(n_i^{(-1)}+1)$ and $n_i^{(0)}$, for user $i$.
This variable can be mathematically defined as: $$\begin{aligned} j_i^{(0)} &= \underset{j \in \mathcal{W}_i(n_i^{(0)} \times T_f)}{\text{argmax}} \bigcup_{k=n_i^{(-1)}+1}^{n_i^{(0)}} \lambda_{ij}(k) \nonumber \\ & \quad \text{subject to } \left| \bigcup_{k=n_i^{(-1)}+1}^{n_i^{(0)}} \lambda_{ij}(k) \right| = 1 ,\end{aligned}$$ where $\cup_{x \in X}A_x $ is the union of the sets $A_x,~ \forall~ x \in X$. Let $t_i^{(0)}$ be the time at which packet $j_i^{(0)}$ was attempted to receiver $i$ and $t_i^*$ the last time a feedback was heard from user $i$. In other words: $$\begin{aligned} t_i^{(0)} &= \bigcup_{k=n_i^{(-1)}+1}^{n_i^{(0)}} \lambda_{ij_i^{(0)}}(k) \\ t_i^* &= n_i^{(0)} \times T_f - T_u + T_{u_i}.\end{aligned}$$ Given these definitions, we can introduce the following theorem regarding the loss probabilities of the forward $e_i(t)$ and feedback $f_i(t)$ transmissions at any given time $t$. The probability $e_i(t)$ of losing a transmission to receiver $i$ at time $t>t_i^*$ can be expressed as: $$\begin{aligned} &e_i = \\ &\begin{cases} &\cfrac{p_i^fg_i^f}{p_i^fg_i^f+q_i^fb_i^f}(p_i^f+(q_i^f-p_i^f)b_i^f\sum\limits_{k=0}^{t-t_i^{(0)}-1} \mu^{k}) \nonumber \\ & + \cfrac{q_i^fb_i^f}{p_i^fg_i^f+q_i^fb_i^f}(q_i^f+(p_i^f-q_i^f)g_i^f\sum\limits_{k=0}^{t-t_i^{(0)}-1} \mu^{k}) \nonumber \\ & \qquad \text{ if } f_{ij_i^{(0)}} = 1 \nonumber \\ &\cfrac{(1-p_i^f)g_i^f}{(1-p_i^f)g_i^f+(1-q_i^f)b_i^f} \nonumber \\ & \hspace{2cm} \times (p_i^f+(q_i^f-p_i^f)b_i^f\sum\limits_{k=0}^{t-t_i^{(0)}-1} \mu^{k}) \nonumber \\ & + \cfrac{(1-q_i^f)b_i^f}{(1-p_i^f)g_i^f+(1-q_i^f)b_i^f} \nonumber \\ & \hspace{2cm} \times (q_i^f+(p_i^f-q_i^f)g_i^f\sum\limits_{k=0}^{t-t_i^{(0)}-1} \mu^{k}) \nonumber \\ & \qquad \text{ if } f_{ij_i^{(0)}} = 0 \end{cases}\end{aligned}$$ The probability $f_i(t)$ of losing a feedback from user $i$ at time $t>t_i^*$ can be expressed as: $$\begin{aligned} &f_i = \cfrac{(1-p_i^b)g_i^b}{(1-p_i^b)g_i^b+(1-q_i^b)b_i^b} (p_i^b+(q_i^b-p_i^b)b_i^b\sum_{k=0}^{t-t_i^*-1} \psi^{k})
\nonumber \\ & + \cfrac{(1-q_i^b)b_i^b}{(1-p_i^b)g_i^b+(1-q_i^b)b_i^b}(q_i^b+(p_i^b-q_i^b)g_i^b\sum_{k=0}^{t-t_i^*-1} \psi^{k}) \end{aligned}$$ \[sffrh\] The proof can be found in Appendix D. Problem Formulation ------------------- In order to state the optimization problem, we first define these two probabilities: - The innovative probability $p_{i,n}(j,t)$: the probability that packet $j$ is innovative for user $i$ at time $t$. - The finish probability $p_{i,f}(t)$: the probability that user $i$ has successfully received all its primary packets but $\mathcal{W}_i(t) \neq \varnothing$ at time $t$. The expressions of these probabilities are available in Appendix E. The following theorem defines a maximum weight clique algorithm that can satisfy the critical criterion. The critical criterion in [(\[eq:heuristic-criterion\])]{} can be achieved by selecting $\kappa^*(t)$ according to the following optimization problem: $$\begin{aligned} &\kappa^{*}(t) \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}\left( 1 + \cfrac{p_{i,n}(\kappa_i,t)}{ \cfrac{e_i(t)}{1-e_i(t)}+p_{i,f}(t)}\right) \right\}.\end{aligned}$$ In other words, the transmission $\kappa(t)$ that can satisfy the critical criterion can be selected by solving a maximum weight clique problem in which the weight of each vertex $v_{ij}$ in $\mathcal{P}(t)$ can be expressed as: $$\begin{aligned} \label{degwe} w_{ij}^* = \text{log}\left( 1 + \cfrac{p_{i,n}(j,t)}{ \cfrac{e_i(t)}{1-e_i(t)}+p_{i,f}(t)}\right).\end{aligned}$$ The proof can be found in Appendix F. Proposed Algorithm ------------------ ### Maximum Weight Clique Solution In order to minimize the completion time in G-IDNC, we look for all possible combinations of source packets, then select the combination that guarantees the minimum delay for the current transmission.
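The vertex weights of [(\[degwe\])]{} are easy to evaluate numerically. A minimal sketch follows; the function and argument names are ours.

```python
import math

def lossy_vertex_weight(p_n, p_f, e):
    """w*_ij = log(1 + p_n / (e/(1-e) + p_f)), where e is the erasure
    probability, p_n the innovative probability and p_f the finish
    probability of the user at the current time slot."""
    odds = e / (1.0 - e)  # erasure odds ratio
    return math.log(1.0 + p_n / (odds + p_f))
```

As a consistency check, with full certainty about the user's state ($p_n = 1$, $p_f = 0$) the weight reduces to $\log(1 + (1-e)/e) = \log(1/e)$, recovering the weight used in the perfect feedback case.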
To represent all the feasible packet combinations, we use the graph model introduced in [@letterarxiv], called the lossy G-IDNC graph (LG-IDNC). To construct the LG-IDNC graph, we first introduce the expected decoding delay increase $d_{ij,kl}(j \oplus l)$ for two distinct arbitrary users $i$ and $k$ after sending the packet combination $j \oplus l$: $$\begin{aligned} \label{joplusl} &d_{ij,kl}(j \oplus l) = \\ & (1-e_i)\left(p_{n,i}(j)p_{n,i}(l)+ (1-p_{n,i}(j))(1-p_{n,i}(l))\right)\overline{p}_{f,i}+ \nonumber \\ & (1-e_k)\left(p_{n,k}(j)p_{n,k}(l)+ (1-p_{n,k}(j))(1-p_{n,k}(l))\right)\overline{p}_{f,k},\nonumber\end{aligned}$$ where $\overline{p}_{f,i} = 1 - {p}_{f,i}$. To obtain the expected decoding delay increase $d_{ij,kl}(j)$ for these users after sending packet $j$ alone, we replace $l$ by $0$ in [(\[joplusl\])]{} and take $p_{n,i}(0) =0,~\forall~i \in \mathcal{M}$. To construct the LG-IDNC graph $\mathcal{G} (\mathcal{V},\mathcal{E})$, we first create a vertex $v_{ij} ,~\forall~ i \in \mathcal{M},~\forall~ j \in \mathcal{W}_i$. We then connect two vertices if their users either need the same packet or if the expected decoding delay increase is lower when sending the packet combination than when sending either packet alone. In other words, we connect by an edge two vertices $v_{ij}$ and $v_{kl}$ if one of the following conditions is true: - C1: $j=l \Rightarrow$ Packet $j$ is needed by both users $i$ and $k$. - C2: $d_{ij,kl}(j \oplus l) \leq \min (d_{ij,kl}(j) ,d_{ij,kl}(l) ) \Rightarrow$ The packet combination $j \oplus l$ guarantees a lower decoding delay for users $i$ and $k$ than packets $j$ and $l$ individually. Unlike condition C1, which does not require packet combination, C2 involves the combination of packets $j$ and $l$. It was shown in [@arg1] that, in the perfect feedback scenario, every feasible packet combination corresponds to a maximal clique in the LG-IDNC graph.
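Condition C2 can be evaluated numerically from the expected-delay expression in [(\[joplusl\])]{} (as reconstructed above). The sketch below is illustrative: the function and parameter names are ours, and the `0.0` substitution encodes the convention $p_{n,i}(0)=0$ used for single-packet transmissions.

```python
def expected_delay_increase(e_i, e_k, pn_i, pn_k, pf_i, pf_k):
    """Expected decoding delay increase d_{ij,kl}(j XOR l) for users i, k.
    pn_i = (p_{n,i}(j), p_{n,i}(l)); pf is the finish probability."""
    def per_user(e, pn_j, pn_l, pf):
        both_innov = pn_j * pn_l                  # XOR of two unknowns: not decodable
        both_non_innov = (1 - pn_j) * (1 - pn_l)  # nothing new: not innovative
        return (1 - e) * (both_innov + both_non_innov) * (1 - pf)
    return (per_user(e_i, pn_i[0], pn_i[1], pf_i)
            + per_user(e_k, pn_k[0], pn_k[1], pf_k))

def edge_c2(e_i, e_k, pn_i, pn_k, pf_i, pf_k):
    """C2: the XOR beats each packet sent alone (l replaced by 0)."""
    d_xor = expected_delay_increase(e_i, e_k, pn_i, pn_k, pf_i, pf_k)
    d_j = expected_delay_increase(e_i, e_k, (pn_i[0], 0.0), (pn_k[0], 0.0),
                                  pf_i, pf_k)
    d_l = expected_delay_increase(e_i, e_k, (0.0, pn_i[1]), (0.0, pn_k[1]),
                                  pf_i, pf_k)
    return d_xor <= min(d_j, d_l)
```

For instance, if $j$ is surely innovative for $i$ only and $l$ surely innovative for $k$ only, the combination yields zero expected delay increase and C2 holds.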
Therefore, the combination that best reduces the completion time for the current transmission is the maximum weight clique in the LG-IDNC graph. This result can be extended to the lossy intermittent feedback scenario. The BS generates the encoded packet by taking the binary XOR of the packets represented by the maximal weight clique in the LG-IDNC graph. The users targeted by this transmission are those represented by this maximal weight clique. After defining the LG-IDNC graph, we use the multi-layer algorithm developed in [Section \[sec:alg\]]{} with the new weights defined in [(\[degwe\])]{}. ### BPSO Algorithm The same line of thinking used in [Section \[sec:alg\]]{} applies in the case of the limited feedback scenario. The objective function is defined using the new weights [(\[degwe\])]{} to reflect the uncertainties in the system. The new objective function to maximize is the following: $$\begin{aligned} \phi^\prime(\kappa,t) = \sum_{i \in \mathcal{M}_w } \left[\text{sig}(\tilde{\phi}_i(\kappa,t))+M\left(h-P(i)\right)\right] \end{aligned}$$ where $h$ is the total number of layers, $P(i)$ is the index of the layer of user $i$ and $\tilde{\phi}_i$ is defined as follows: $$\begin{aligned} \tilde{\phi}_i(\kappa,t) = \text{log}\left( 1 + \cfrac{p_{i,n}(\kappa,t)}{ \cfrac{e_i(t)}{1-e_i(t)}+p_{i,f}(t)}\right)\end{aligned}$$ Blind Graph Policies Solution ----------------------------- In the lossy intermittent feedback scenario, uncertainties about the reception state of the different packets and the decodability conditions make the algorithm proposed in [@vtc] ineffective at reducing the completion time. To solve this problem, we introduce three partially blind algorithms that estimate all the uncertain packets with a predefined policy, update the graph accordingly, and finally perform packet selection using the algorithm proposed in [@vtc].
These graph update approaches generalize the update methods proposed in [@ref13] to the context of reducing the completion time with lossy feedback.

### Pessimist Graph Update

In this approach, all packets that are not fed back by users are considered erased, rather than assuming that their feedback was erased. Reconsidering these packets in the following transmissions gives them a greater chance to be reattempted rapidly. Since no acknowledgement is expected to be heard during the downlink sub-frame, packets attempted meanwhile are systematically not reconsidered in the following transmissions. If a feedback is heard in the uplink sub-frame, the state of the user is updated. Otherwise, all the uncertain packets of that user are reconsidered. In the pessimist graph update approach, uncertain vertices are removed from the graph during the downlink sub-frame and reconsidered in the uplink sub-frame if no acknowledgement is heard from the concerned user.

### Optimist Graph Update

In this approach, all packets that are not fed back by users are considered received and their corresponding feedback erased. Not reconsidering these packets in the following transmissions gives non-attempted packets a greater chance to be transmitted. No feedback can be heard from a user having all its packets in an uncertain state unless this user is targeted; therefore, users with a fully uncertain Wants set are reconsidered after the uplink sub-frame. In the optimist graph update approach, uncertain vertices are removed from the graph and reconsidered after the uplink sub-frame if the user has a fully uncertain Wants set.

### Realistic Graph Update

In this approach, all packets that are not fed back by users are probabilistically considered received and their acknowledgement erased, and reciprocally. This approach tends to stochastically balance between reattempting packets with unheard feedback and transmitting new packets.
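A minimal sketch contrasting the three update rules for a single uncertain vertex (names are illustrative; the realistic rule's parameter is the Bad-state probability introduced below, and the global conditions, heard feedback for the pessimist rule and a fully uncertain Wants set for the optimist rule, are left to the caller):

```python
import random


def update_uncertain(policy, phase, p_bad, rng=random.random):
    """Decide whether an uncertain packet is reconsidered (True) or discarded.

    policy: 'pessimist' | 'optimist' | 'realistic'
    phase:  'downlink' or 'uplink' sub-frame
    p_bad:  probability that the relevant channel was in the Bad state.
    """
    if policy == "pessimist":
        # Reconsidered only once the uplink sub-frame passes without feedback.
        return phase == "uplink"
    if policy == "optimist":
        # Treated as received; only a fully uncertain Wants set is
        # reconsidered, and that global condition is checked by the caller.
        return False
    if policy == "realistic":
        # Stochastic balance: reconsider with the Bad-state probability.
        return rng() < p_bad
    raise ValueError(policy)
```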
Since no acknowledgement is expected to be heard during the downlink sub-frame, packets are reconsidered with probability $\mathcal{P}_{B_i^f}$ and discarded with probability $\mathcal{P}_{G_i^f}$. In the uplink sub-frame, all the uncertain packets are reconsidered with probability $\mathcal{P}_{B_i^b}$ and removed with probability $\mathcal{P}_{G_i^b}$. In the realistic graph update approach, uncertain vertices are removed from the graph with probability $\mathcal{P}_{G_i^f}$ in the downlink sub-frame and with probability $\mathcal{P}_{G_i^b}$ in the uplink sub-frame.

Simulation Results {#sec:sim}
==================

![Mean completion time for G-IDNC versus number of users $M$.[]{data-label="fig:M"}](./CTM-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Mean decoding delay for G-IDNC versus number of users $M$.[]{data-label="fig:M2"}](./SDDM-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Mean completion time for G-IDNC versus number of packets $N$.[]{data-label="fig:N"}](./CTN-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Mean decoding delay for G-IDNC versus number of packets $N$.[]{data-label="fig:N2"}](./SDDN-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Mean delays for G-IDNC versus packet erasure probability $P$.[]{data-label="fig:P"}](./P-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Average completion time versus the number of iterations of BPSO $T$.[]{data-label="fig:ITSWAP"}](./ITSWAP-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Average completion time versus number of users $M$.[]{data-label="fig:MGSWAP"}](./MGSWAP-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Average completion time versus channel memory $\mu$ and $\psi$.[]{data-label="fig:MUSWAP"}](./MUSWAP-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Average completion time versus the number of users $M$.[]{data-label="fig:MG"}](./MG-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ ![Average completion time versus channel memory $\mu$ and
$\psi$.[]{data-label="fig:MUG"}](./MUG-eps-converted-to.pdf "fig:"){width="1\linewidth"}\ In this section, we first present simulation results comparing the completion time and the decoding delay achieved by the different policies, each designed to optimize one of them, under perfect feedback and independent erasure channels. We then present the completion time achieved by our policy against the blind policies for lossy feedback and persistent erasure channels. In the first part, we compare, through extensive simulations, the delays achieved by the sum decoding delay policy [@ref2] (denoted by SDD), the completion time policy of [@ref4] (denoted by Min-CT), and our completion time policy [@letterarxiv] (denoted by P-CT), all for perfect feedback. In the second part, we first compare our two heuristics to reduce the completion time for perfect feedback and persistent erasure channels. We then compare the completion time achieved by our policy and the blind policies in a lossy feedback environment. In all the simulations, the different delays are computed per frame and then averaged over a large number of iterations. We assume that the packet and feedback erasure probabilities of all the users change from frame to frame while the average packet erasure probability remains constant. We further assume symmetric channels for both the forward and backward links. In other words, the erasure probabilities on the forward and the backward links are the same (the probabilities only, not the channel realizations).
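The persistent erasure channel assumed in these simulations is a two-state Gilbert-Elliott chain; a minimal simulator sketch follows (the mapping of the memory $\mu = 1 - b - g$ and of the state erasure probabilities to parameters is an assumption consistent with the notation of the appendix):

```python
import random


def simulate_ge(n, p_good, p_bad, b, g, rng=random.random):
    """Simulate n slots of a Gilbert-Elliott erasure channel.

    p_good / p_bad : erasure probabilities in the Good / Bad state
    b : P(Good -> Bad), g : P(Bad -> Good); memory mu = 1 - b - g.
    Returns the list of erasure indicators (1 = erased).
    """
    state = "G" if rng() < g / (g + b) else "B"  # stationary initial state
    out = []
    for _ in range(n):
        erase_p = p_good if state == "G" else p_bad
        out.append(1 if rng() < erase_p else 0)
        flip = b if state == "G" else g          # state transition
        if rng() < flip:
            state = "B" if state == "G" else "G"
    return out
```

Setting $b + g$ close to zero makes the chain highly persistent (large $\mu$), which is the regime where the blind policies are expected to diverge most from the proposed one.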
[Figure \[fig:M\]]{} depicts the comparison of the mean completion time achieved by the policy to reduce the sum decoding delay (SDD), the [@ref4] policy and ours to reduce the completion time (Min-CT and P-CT), for perfect feedback and independent erasure channels, against $M$ for $N=60$ and $P=0.25$ and $P=0.5$ respectively, where $P$ refers to the average packet erasure probability in the independent erasure channels. [Figure \[fig:M2\]]{} illustrates the comparison of the decoding delay for the same inputs. [Figure \[fig:N\]]{} and [Figure \[fig:N2\]]{} depict the comparison of the aforementioned delay aspects against $N$ for $M=60$ and $P=0.25$ and $P=0.5$ respectively, and [Figure \[fig:P\]]{} illustrates this comparison against the erasure probability $P$ for $M=60$ and $N=30$. [Figure \[fig:ITSWAP\]]{} illustrates the comparison of the completion time achieved by the one-layered algorithm (denoted by -Graph) and the BPSO algorithm (denoted by -BPSO), for both our policy to reduce the completion time [@confarxiv] using the decoding delay control (denoted by DDC) and the [@ref4] policy using the shortest stochastic path (denoted by SSP), for perfect feedback and persistent erasure channels, against the number of iterations $T$ for $M=60$, $N=30$, $L=30$, $P=0.1$, $Q=0.8$ and $\mu=0.2$. [Figure \[fig:MGSWAP\]]{} and [Figure \[fig:MUSWAP\]]{} depict the same comparison against the number of users and the memory of the channel, respectively, for $\mu=0.2$ ($M=60$), $N=30$, $L=30$, $P=0.1$ and $Q=0.8$. [Figure \[fig:MG\]]{} shows the comparison between the different blind algorithms (denoted by NVE for the pessimist, FVE for the optimist and SVE for the stochastic policy) when using the original formulation of the weights proposed in [@ref4] and our multi-layer graph algorithm using the decoding delay control (denoted by DDC), against the number of users $M$ for $N=30$, $P=0.1$, $Q=0.8$, and $\mu=0.2$.
[Figure \[fig:MUG\]]{} depicts the same comparison against the channel memory $\mu$ and $\psi$ for $M=60$, $N=30$, $P=0.1$, and $Q=0.8$. From all the figures, we can clearly see that our proposed completion time algorithm outperforms the completion time policy proposed in [@ref4] and the blind policies. Moreover, it gives the best trade-off between the sum decoding delay and the completion time in G-IDNC. The completion time policy offers, on average, the minimum sum of all the delay aspects in all situations. [Figure \[fig:M\]]{}.a and [Figure \[fig:N\]]{}.a depict the completion time when applying the sum decoding delay policy, the completion time policy and our completion time policy against $M$ and $N$ for a low packet erasure probability. We see that the performances of P-CT and Min-CT are very close. In contrast, in [Figure \[fig:M2\]]{} and [Figure \[fig:N2\]]{}, where the sum decoding delay is computed for the same inputs, the performance of P-CT is much better than that of Min-CT. As the channel conditions become harsher (high packet erasure probability), our policy to reduce the completion time minimizes the completion time better than Min-CT. We can see from [Figure \[fig:M\]]{}.b, [Figure \[fig:M2\]]{}.b, [Figure \[fig:N\]]{}.b and [Figure \[fig:N2\]]{}.b that P-CT outperforms Min-CT in minimizing both the sum decoding delay and the completion time. [Figure \[fig:P\]]{}.a shows that for $P>0.3$, P-CT achieves a significant improvement in the completion time. This can be explained in light of the P-CT policy characteristics. In the P-CT policy, the number of erased packets is estimated using the law of large numbers. This approximation is effective when the channel erasure or the inputs (numbers of packets and users) are high enough. From [Figure \[fig:ITSWAP\]]{}, we clearly see that the BPSO algorithm achieves a lower completion time with a low number of iterations ($5$ iterations).
This algorithm has a fixed complexity, unlike the multi-layer graph algorithm, which has a worst-case complexity of $MN$ ($60$ in the figure). This fixed complexity, along with its performance in effectively reducing the completion time, makes this algorithm more reliable and more suitable for use. [Figure \[fig:MG\]]{} and [Figure \[fig:MUG\]]{} show that our algorithm to control the completion time using the decoding delay in the lossy feedback scenario largely outperforms the blind algorithms, especially as the channel becomes more and more persistent ($\mu$ increases). The optimist approach (FVE) suffers only a reasonable degradation for a low channel persistence. However, this degradation becomes more severe as the memory of the channel increases. The pessimist approach (NVE) can be seen as the complement of the optimist approach, since it performs better in high-memory channels and worse in nearly independent channels. The stochastic approach (SVE) achieves an intermediate result and degrades when the channel is nearly independent or highly correlated.

Conclusion {#sec:conclusion}
==========

In this paper, we studied the effect of controlling the decoding delay so as to reduce the completion time below its currently best-known solution for persistent channels. We first derived the decoding-delay-dependent completion time expressions. We then employed these expressions to design two new heuristics. The first decides on coded packets by reducing the probability of decoding delay increase on a new layering of the IDNC graph, based on user criticality in increasing the overall completion time; the second uses binary optimization with a multi-layer objective function that preserves this prioritization. We then extended our study to the limited feedback environment. Simulation results showed that this new algorithm achieves a lower mean completion time and mean decoding delay compared to the best known completion time heuristics, with significant gains in harsh erasure scenarios.
Auxiliary Theorems
==================

In this appendix, we provide the auxiliary theorems that we will use to prove Theorem \[sffrh\]. The following theorem provides the expression of the probability of being in a given state of the channel given that the channel was in a particular state at a previous time instant. Let $(X_n)_{n\geq1}$ be a two state ($x$ and $y$) Markov chain, with $P_{tr_{ x\rightarrow y}}$ and $P_{tr_{y\rightarrow x}}$ the transition probabilities from state $x$ to $y$ and from $y$ to $x$, respectively. Let $\mu = (1-P_{tr_{ x\rightarrow y}}-P_{tr_{y\rightarrow x}})$ be the memory of the chain. Define $f(n) = \mathds{P}(X_n = y | X_{n^0} = x),~ \forall~ n \geq n^0$. We have: $$\begin{aligned} f(n) = P_{tr_{ x\rightarrow y}} \times \langle \sum_{i=0}^{n-n^0-1} \mu^{i} \rangle \end{aligned}$$ where $$\begin{aligned} \langle \sum_{x \in X} (.) \rangle = \begin{cases} \sum_{x \in X} (.) &\text{if } X \neq \varnothing \\ 0 &\text{if } X = \varnothing. \end{cases} \end{aligned}$$ \[regj\] The proof can be found in [@refjournal] Appendix A. The following theorem gives the expression of the probability of being in a particular state of the channel given the channel realization at the same time instant. Let us consider the channel defined in [Figure \[fig:GEC\]]{}. For notation simplicity, we will not consider the superscripts in this theorem. Let $X_i(n)$ be a random variable that takes the value $1$ if the transmission at time $n$ is erased and $0$ otherwise.
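Before stating the next result, the closed form $f(n)$ of the theorem above can be checked numerically against a step-by-step propagation of the chain (a small verification sketch; names are illustrative):

```python
def f_closed(p_xy, p_yx, steps):
    """Closed form f(n) = P_tr(x->y) * sum_{i=0}^{steps-1} mu^i,
    with steps = n - n0 and mu = 1 - P_tr(x->y) - P_tr(y->x)."""
    mu = 1.0 - p_xy - p_yx
    return p_xy * sum(mu ** i for i in range(steps))


def f_direct(p_xy, p_yx, steps):
    """Propagate the distribution (P(x), P(y)) from the initial state x."""
    px, py = 1.0, 0.0
    for _ in range(steps):
        px, py = px * (1 - p_xy) + py * p_yx, px * p_xy + py * (1 - p_yx)
    return py
```

Both agree for any step count; summing the geometric series also gives the familiar mixing form $f(n) = \frac{P_{tr_{x\rightarrow y}}}{P_{tr_{x\rightarrow y}}+P_{tr_{y\rightarrow x}}}\left(1-\mu^{\,n-n^0}\right)$.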
The probability of the state of the channel at time $n$ conditioned on the realization $X_i(n)$ at the same time can be expressed as: $$\begin{aligned} &\mathds{P}(\mathcal{C}(n)=y|X_i(n)=x) = \\ &\begin{cases} \cfrac{p\mathcal{P}_G}{p\mathcal{P}_G+q\mathcal{P}_B} &\text{ if } x = 1, y = G\\ \cfrac{(1-p)\mathcal{P}_G}{(1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B} &\text{ if } x = 0, y = G\\ \cfrac{q\mathcal{P}_B}{p\mathcal{P}_G+q\mathcal{P}_B} &\text{ if } x = 1, y = B\\ \cfrac{(1-q)\mathcal{P}_B}{(1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B} &\text{ if } x = 0, y = B \end{cases} \nonumber\end{aligned}$$ \[regj2\] We first note that, using the total probability theorem, we have: $$\begin{aligned} \mathds{P}(X_i(n)=x) = \begin{cases} p\mathcal{P}_G+q\mathcal{P}_B &\text{ if } x = 1 \\ (1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B &\text{ if } x = 0 \end{cases}\end{aligned}$$ We now use Bayes' theorem and write: $$\begin{aligned} &\mathds{P}(\mathcal{C}(n)=y|X_i(n)=x) = \nonumber \\ &\qquad \cfrac{\mathds{P}(\mathcal{C}(n)=y)}{\mathds{P}(X_i(n)=x)}~\mathds{P}(X_i(n)=x|\mathcal{C}(n)=y) \end{aligned}$$ By simple substitution in the previous expression, we obtain: $$\begin{aligned} &\mathds{P}(\mathcal{C}(n)=y|X_i(n)=x) = \\ &\begin{cases} \cfrac{p\mathcal{P}_G}{p\mathcal{P}_G+q\mathcal{P}_B} &\text{ if } x = 1, y = G\\ \cfrac{(1-p)\mathcal{P}_G}{(1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B} &\text{ if } x = 0, y = G\\ \cfrac{q\mathcal{P}_B}{p\mathcal{P}_G+q\mathcal{P}_B} &\text{ if } x = 1, y = B\\ \cfrac{(1-q)\mathcal{P}_B}{(1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B} &\text{ if } x = 0, y = B \end{cases} \nonumber\end{aligned}$$

Proof of Theorem 1
==================

Let us first define $\mathcal{E}_i(t)$ as the cumulative number of transmitted packets from the sender that were erased at user $i$ until time $t$.
It is easy to infer that the reception completion event at time $t=\mathcal{C}_i(S)$ of a user $i$ will occur when it receives an instantly decodable packet in the $\mathcal{C}_i(S)$-th recovery transmission from the sender. Consequently, $\forall~t\leq\mathcal{C}_i(S)-1$, the transmission at time $t$ following the schedule $S$ can be one of the following options:

- The packet can be erased at user $i$ $\Rightarrow$ The transmission will increase $\mathcal{E}_i(t)$ $\left(\mbox{i.e. } \mathcal{E}_i(t)=\mathcal{E}_i(t-1)+1\right)$.
- The packet can be successfully received by the user $\Rightarrow$ Two cases can occur:
    - The packet is instantly decodable for user $i$. Note that user $i$ needs to receive $|\mathcal{W}_i(0)|-1$ of those packets until time $t=\mathcal{C}_i(S)-1$ in order to complete its reception with the last missing source packet, decoded from the packet transmitted at time $t=\mathcal{C}_i(S)$. Consequently, the number of such packets received by user $i$ until time $t=\mathcal{C}_i(S)$ is equal to $|\mathcal{W}_i(0)|$.
    - The packet is either non-innovative or non instantly decodable $\Rightarrow$ This will increase the value of $D_i(S)$ by one each time it occurs until the reception completion for this user.

Consequently, the number of recovery transmissions sent by the sender following schedule $S$ until user $i$ completes its reception of the frame packets (i.e. the completion time of user $i$) can be expressed as follows: $$\begin{aligned} \mathcal{C}_i(S) = |\mathcal{W}_i(0)| + D_i(S) + \mathcal{E}_i(\mathcal{C}_i(S)-1)\;. \label{cti}\end{aligned}$$ Let $\mathcal{X}_i(t)$ be the number of time instants, from the beginning of the *recovery phase* until time $t$, in which the channel was in the Good state, and let $\mathcal{Y}_i(t)$ be the number in which it was in the Bad state.
Using the limit distribution of the Markov chain, we can write: $$\begin{aligned} \mathcal{X}_i(t) \approx t\mathcal{P}_{G_i^f} \\ \mathcal{Y}_i(t) \approx t\mathcal{P}_{B_i^f}\end{aligned}$$ Let $\mathcal{E}_i^g(t)$ and $\mathcal{E}_i^b(t)$ be the numbers of erased transmissions in the Good and Bad states, respectively, from the beginning of the *recovery phase* until time $t$. Using the law of large numbers in each of the states of the Markov chain, we have: $$\begin{aligned} \mathcal{E}_i^g(t) \approx \mathcal{X}_i(t)p_i^f \approx t\mathcal{P}_{G_i^f}p_i^f \\ \mathcal{E}_i^b(t) \approx \mathcal{Y}_i(t)q_i^f \approx t\mathcal{P}_{B_i^f}q_i^f\end{aligned}$$ For a large enough frame size $N$, the completion time $\mathcal{C}_i(S)$ would also be large enough, and thus $\mathcal{E}_i(\mathcal{C}_i(S)-1)$ can be approximated using the law of large numbers as follows: $$\begin{aligned} \mathcal{E}_i(\mathcal{C}_i(S)-1) = \mathcal{E}_i^g(\mathcal{C}_i(S)-1) + \mathcal{E}_i^b(\mathcal{C}_i(S)-1) \approx \alpha_i(\mathcal{C}_i(S)-1),\end{aligned}$$ where: $$\begin{aligned} \alpha_i = \cfrac{g_i^fp_i^f+q_i^fb_i^f}{g_i^f+b_i^f}\end{aligned}$$ Substituting the previous expression in [(\[cti\])]{} and re-arranging the terms, the completion time for user $i$ can finally be expressed as: $$\begin{aligned} \mathcal{C}_i(S) \approx \cfrac{|\mathcal{W}_i(0)| + D_i(S) - \alpha_i}{1-\alpha_i}.\end{aligned}$$ Thus, the overall completion time can be expressed as: $$\begin{aligned} \mathcal{C}(S) \approx \max_{i\in\mathcal{M}}\left\{\frac{\left|\mathcal{W}_i(0)\right| + D_i(S)-\alpha_i}{1-\alpha_i}\right\}\end{aligned}$$

Proof of Theorem 3
==================

We first prove that the algorithm, as stated in the original paper, will perform poorly in our system. Let $L$ be the number of particles and $T$ the number of iterations. Assume there is a user $i$ who is missing all the packets (i.e. $\mathcal{H}_i = \varnothing$). Further assume that all users except user $i$ received all their packets.
Therefore, the only packet combination that can reduce the Wants set of user $i$ is a combination in which only one packet is included. We will refer to such a packet combination as a sparse packet combination. For a random initialisation of one of the particles, the probability that the particle is sparse is $N\left(\cfrac{1}{2}\right)^N$. Thus, the probability that at least one of the $L$ particles is sparse is: $$\begin{aligned} 1-\left(1-N\left(\cfrac{1}{2}\right)^N\right)^L\end{aligned}$$ For a large number of packets $N$, with high probability none of the initial values of the $L$ particles will be sparse. For a non-sparse particle, the merit function will be $0$ since no user will be targeted. As a consequence, the update of the particles will be random, since almost all directions will result in a non-sparse particle and thus a $0$ merit. Therefore, the second iteration of the algorithm can be seen as another initialization of the $L$ particles. The probability of moving one particle into a sparse configuration after the $T$ iterations of the algorithm is: $$\begin{aligned} 1-\left(1-N\left(\cfrac{1}{2}\right)^N\right)^{L+T}\end{aligned}$$ For a small number of iterations $T$, with high probability the algorithm will end with non-sparse particles, and therefore no update will be made in the system. This process will result in a very poor performance of the overall system. By setting the number of particles equal to the number of packets, $L=N$, and using a sparse initialisation of each particle, different from those of the other particles, we can guarantee a decrease in the merit function each time the algorithm is run. Therefore, at each time instant, unless the packet is erased, we can ensure a reduction of the Wants set of the user of interest by at least one packet. This establishes the overall convergence of the system independently of the number of iterations $T$ of the algorithm.
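The sparsity probabilities used in this argument are easy to evaluate numerically (a small sketch; for instance, with $N=30$ and $L=T=30$ the probability stays of order $10^{-6}$, which motivates the sparse initialisation):

```python
def p_any_sparse(N, L, T=0):
    """Probability that at least one of L random particles (plus T random
    re-draws) is sparse, i.e. has exactly one of its N bits set:
    1 - (1 - N/2^N)^(L+T)."""
    p_one = N * 0.5 ** N  # a uniform random bit vector has one set bit
    return 1.0 - (1.0 - p_one) ** (L + T)
```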
Proof of Theorem 4
==================

To compute the probability of losing a transmission, $e_i(t)$, or of losing a feedback, $f_i(t)$, at time instant $t$, let us consider the channel defined in [Figure \[fig:GEC\]]{}. For notation simplicity, we will not consider the superscripts in this theorem unless it is necessary to specify the forward and backward channels. We first compute the following probability: $$\begin{aligned} \mathds{P}(X_i(n) = 1 |X_i(n^0) = x),\end{aligned}$$ where $X_i(n)$ is a random variable that takes the value $1$ if the transmission at time $n$ is erased and $0$ otherwise, and $n \geq n^0$. Using the total probability theorem, we write the previous probability as: $$\begin{aligned} &\mathds{P}(X_i(n)=1|X_i(n^0)=x) = \nonumber \\ &\mathds{P}(X_i(n)=1|\mathcal{C}(n)=G,X_i(n^0)=x) \nonumber \\ & \hspace{2cm} \times \mathds{P}(\mathcal{C}(n)=G|X_i(n^0)=x) +\nonumber \\ & \mathds{P}(X_i(n)=1|\mathcal{C}(n)=B,X_i(n^0)=x) \nonumber \\ & \hspace{2cm} \times \mathds{P}(\mathcal{C}(n)=B|X_i(n^0)=x) \end{aligned}$$ By definition of the Markov chain, we have: $$\begin{aligned} &\mathds{P}(X_i(n)=1|\mathcal{C}(n)=G,X_i(n^0)=x) = \nonumber \\ & \qquad \mathds{P}(X_i(n)=1|\mathcal{C}(n)=G) = p \\ &\mathds{P}(X_i(n)=1|\mathcal{C}(n)=B,X_i(n^0)=x) = \nonumber \\ & \qquad \qquad \mathds{P}(X_i(n)=1|\mathcal{C}(n)=B) = q\end{aligned}$$ Using the total probability theorem, we can write the first term of the previous expression as: $$\begin{aligned} &\mathds{P}(\mathcal{C}(n)=G|X_i(n^0)=x) = \nonumber \\ &\mathds{P}(\mathcal{C}(n)=G|\mathcal{C}(n^0)=G,X_i(n^0)=x) \nonumber \\ & \hspace{2cm} \times \mathds{P}(\mathcal{C}(n^0)=G|X_i(n^0)=x) + \nonumber \\ &\mathds{P}(\mathcal{C}(n)=G|\mathcal{C}(n^0)=B,X_i(n^0)=x) \nonumber \\ & \hspace{2cm} \times \mathds{P}(\mathcal{C}(n^0)=B|X_i(n^0)=x) \end{aligned}$$ We can further reduce the previous expressions: $$\begin{aligned} &\mathds{P}(\mathcal{C}(n)=G|X_i(n^0)=x) = \nonumber \\
&\mathds{P}(\mathcal{C}(n)=G|\mathcal{C}(n^0)=G)\mathds{P}(\mathcal{C}(n^0)=G|X_i(n^0)=x) + \nonumber \\ &\mathds{P}(\mathcal{C}(n)=G|\mathcal{C}(n^0)=B)\mathds{P}(\mathcal{C}(n^0)=B|X_i(n^0)=x) \end{aligned}$$ Similarly, we apply the same development to the second term: $$\begin{aligned} &\mathds{P}(\mathcal{C}(n)=B|X_i(n^0)=x) = \nonumber \\ &\mathds{P}(\mathcal{C}(n)=B|\mathcal{C}(n^0)=G)\mathds{P}(\mathcal{C}(n^0)=G|X_i(n^0)=x) + \nonumber \\ &\mathds{P}(\mathcal{C}(n)=B|\mathcal{C}(n^0)=B)\mathds{P}(\mathcal{C}(n^0)=B|X_i(n^0)=x)\end{aligned}$$ Using Theorem \[regj\] and Theorem \[regj2\], we can express the previous probability according to the value of $x$: $$\begin{aligned} &\mathds{P}(X_i(n)=1|X_i(n^0)=x) = \\ & \begin{cases} &\cfrac{p\mathcal{P}_G}{p\mathcal{P}_G+q\mathcal{P}_B}(p+(q-p)b\sum\limits_{i=0}^{n-n^0-1} \mu^{i}) \nonumber \\ & + \cfrac{q\mathcal{P}_B}{p\mathcal{P}_G+q\mathcal{P}_B}(q+(p-q)g\sum\limits_{i=0}^{n-n^0-1} \mu^{i}) \nonumber \\ & \qquad \text{ if } x = 1 \nonumber \\ &\cfrac{(1-p)\mathcal{P}_G}{(1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B} (p+(q-p)b\sum\limits_{i=0}^{n-n^0-1} \mu^{i}) \nonumber \\ & + \cfrac{(1-q)\mathcal{P}_B}{(1-p)\mathcal{P}_G+(1-q)\mathcal{P}_B}(q+(p-q)g\sum\limits_{i=0}^{n-n^0-1} \mu^{i}) \nonumber \\ & \qquad \text{ if } x = 0 \end{cases}\end{aligned}$$ We now apply our framework to compute the probability of losing a transmission and of losing the feedback. At time $t_i^0$, packet $j^0$ was attempted to user $i$.
Therefore, the probability of losing the transmission can be seen as: $$\begin{aligned} &e_i(t) = \mathds{P}(X_i(n) = 1 |X_i(t_i^0) = x) = \nonumber \\ &\begin{cases} \mathds{P}(X_i(n) = 1 |X_i(t_i^0) = 1) \text{ if } f_{ij^0} = 1 \\ \mathds{P}(X_i(n) = 1 |X_i(t_i^0) = 0) \text{ if } f_{ij^0} = 0 \end{cases}\end{aligned}$$ Using the expression derived above, we can write this probability as: $$\begin{aligned} &e_i = \\ &\begin{cases} &\cfrac{p_i^fg_i^f}{p_i^fg_i^f+q_i^fb_i^f}(p_i^f+(q_i^f-p_i^f)b_i^f\sum\limits_{i=0}^{t-t_i^{(0)}-1} \mu^{i}) \nonumber \\ & + \cfrac{q_i^fb_i^f}{p_i^fg_i^f+q_i^fb_i^f}(q_i^f+(p_i^f-q_i^f)g_i^f\sum\limits_{i=0}^{t-t_i^{(0)}-1} \mu^{i}) \nonumber \\ & \qquad \text{ if } f_{ij^0} = 1 \nonumber \\ &\cfrac{(1-p_i^f)g_i^f}{(1-p_i^f)g_i^f+(1-q_i^f)b_i^f} \nonumber \\ & \hspace{2cm} \times (p_i^f+(q_i^f-p_i^f)b_i^f\sum\limits_{i=0}^{t-t_i^{(0)}-1} \mu^{i}) \nonumber \\ & + \cfrac{(1-q_i^f)b_i^f}{(1-p_i^f)g_i^f+(1-q_i^f)b_i^f} \nonumber \\ & \hspace{2cm} \times (q_i^f+(p_i^f-q_i^f)g_i^f\sum\limits_{i=0}^{t-t_i^{(0)}-1} \mu^{i}) \nonumber \\ & \qquad \text{ if } f_{ij^0} = 0 \end{cases}\end{aligned}$$ At time $t_i^*$ the feedback was successfully received from user $i$.
Thus, the probability $f_i(t)$ of losing a feedback from user $i$ at time $t>t_i^*$ can be expressed as: $$\begin{aligned} &f_i = \mathds{P}(X_i(n) = 1 |X_i(t_i^*) = 0) \nonumber \\ & = \cfrac{(1-p_i^b)g_i^b}{(1-p_i^b)g_i^b+(1-q_i^b)b_i^b} (p_i^b+(q_i^b-p_i^b)b_i^b\sum_{i=0}^{t-t_i^*-1} \psi^{i}) \nonumber \\ & + \cfrac{(1-q_i^b)b_i^b}{(1-p_i^b)g_i^b+(1-q_i^b)b_i^b}(q_i^b+(p_i^b-q_i^b)g_i^b\sum_{i=0}^{t-t_i^*-1} \psi^{i}) \end{aligned}$$ Note that we can obtain the expressions derived in [@refjournal] by setting $p=0$ and $q=1$: $$\begin{aligned} &e_i = \begin{cases} 1-g_i^f\sum\limits_{i=0}^{n-t_i^0-1} \mu^{i}&\text{ if } f_{ij^0} = 1 \\ b_i^f\sum\limits_{i=0}^{n-t_i^0-1} \mu^{i} &\text{ if } f_{ij^0} = 0 \end{cases} \\ &f_i = b_i^b\sum\limits_{i=0}^{n-t_i^*-1} \psi^{i}\end{aligned}$$

Expressions of Innovative and Finish Probabilities
==================================================

Let $\mathcal{K}_{ij}$ be the set of indexes of the frames in which packet $j$ was attempted to user $i$ since the last time the BS received feedback from this user, excluding the current frame. Define $\mathcal{U}_i^d(n)$ as the following: $$\begin{aligned} \mathcal{U}_i^d(n) = \bigcup_{j \in \mathcal{W}_i(n \times T_f)}\lambda_{ij}(n),~ \forall~ n \in \mathds{N}^+.\end{aligned}$$ Given these definitions, the probability $p_{i,n}(j,t)$ that packet $j$ is innovative for user $i$ can be expressed as: $$\begin{aligned} &p_{i,n}(j,t) = \langle \prod_{k \in \lambda_{ij}(n^+(t))} e_i(k) \rangle \nonumber\\ & \times \langle \prod_{k \in \mathcal{K}_{ij}} \rangle \left\{ \left( \prod_{s \in \mathcal{U}_i^d(k)} e_i(s) + \prod_{s \in \lambda_{ij}(k)} e_i(s) \right. \right. \nonumber\\ & \left. {} \times (1- \prod_{s \in \mathcal{U}_i^d(k) \setminus \lambda_{ij}(k)} e_i(s) )f_i(u_i(k)) \right) \right. \nonumber\\ & \left.
{} \times \left( \prod_{s \in \mathcal{U}_i^d(k)} e_i(s) + (1- \prod_{s \in \mathcal{U}_i^d(k)} e_i(s) )f_i(u_i(k)) \right)^{-1} \right\} \end{aligned}$$ The probability $p_{i,f}(t)$ that user $i$ has successfully received all its primary packets while $\mathcal{W}_i(t) \neq \varnothing$ at time $t$ is the following: $$\begin{aligned} p_{i,f}(t) = \prod_{j \in \mathcal{W}_i(t)} \left( 1- p_{i,n}(j,t) \right),\end{aligned}$$ where $u_i(n)=n*T_f-T_u+T_{u_i}$ and $$\begin{aligned} \langle \prod_{x \in X} (.) \rangle = \begin{cases} \prod\limits_{x \in X} (.) &\text{if } X \neq \varnothing \\ 1 &\text{if } X = \varnothing . \end{cases} \end{aligned}$$ The proof can be found in [@refjournal] Appendix D.

Proof of Theorem 5
==================

Let $\mathds{M}(t)$ be the event that $\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t)\right\} > \max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t-1)\right\}$ after a transmission $\kappa(t)$ at time $t$. The probability of this event can be expressed as: $$\begin{aligned} \mathds{P}(\mathds{M}(t)) &= \mathds{P}\left(\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t)\right\} > \max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t-1)\right\}\right) \nonumber \\ &= 1 - \mathds{P}\left(\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t)\right\} = \max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t-1)\right\}\right).\end{aligned}$$ Users $j \in \mathcal{M} \setminus \mathcal{P}(t)$ are unable to increase $\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t)\right\}$ compared to $\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t-1)\right\}$ with probability 1, even if they experience a decoding delay. This is true since the set $\mathcal{P}(t)$ is constructed such that it contains all users that have non-zero probabilities of increasing the completion time.
According to the definition of $\mathcal{C}_i(t)$ in [(\[eq:Ct\])]{}, $\forall~i\in\mathcal{M}$, users $i \in \mathcal{P}(t)$ will not increase $\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t)\right\}$ after the transmission $\kappa(t)$ only if they do not experience a decoding delay increment in this transmission. Consequently, we get: $$\begin{aligned} & \mathds{P}\left(\max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t)\right\} = \max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t-1)\right\}\right) \nonumber \\ & \qquad = \mathds{P}\left(\max_{i\in\mathcal{P}(t)}\left\{\mathcal{C}_i(t)\right\} = \max_{i\in\mathcal{M}}\left\{\mathcal{C}_i(t-1)\right\}\right) \nonumber \\ & \qquad = \mathds{P} \left(D_i(t)-D_i(t-1)=0, \forall~i\in\mathcal{P}(t)\right) \nonumber \\ & \qquad = \prod_{i \in \mathcal{P}(t)} \mathds{P} \left(D_i(t)-D_i(t-1)=0\right).\end{aligned}$$ According to the analysis done in [@refjournal], the probability of the decoding delay increase for user $i$ is given by the following theorem: The probability that user $i$ does not experience a decoding delay at time $t$, after the transmission $\kappa$, is: $$\begin{aligned} &\mathds{P}(d_i(\kappa,t) = 0) \nonumber \\ &= \begin{cases} e_i(t)& i \in (\widehat{\tau} \cap \overline{F}) \\ e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t) &i \in (\widehat{\tau} \cap F) \\ 1& i \in (\tau \cap \overline{U}) \\ e_i(t)+p_{i,n}(\kappa_i,t)& i \in (\tau \cap (U \setminus F)) \\ \qquad -e_i(t)p_{i,n}(\kappa_i,t) \\ e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)& i \in (\tau \cap F) \\ \qquad +p_{i,f}(t)) , \end{cases}\end{aligned}$$ where $\widehat{\tau}$ is the set of users not targeted and having non-empty Wants sets (i.e. $\widehat{\tau}= M_w \setminus \tau$), $\kappa_i$ is the intended packet for user $i$ in the transmission $\kappa$, $F$ is the set of users having all their remaining packets in an uncertain state, and $U$ is the set of users having the intended packet for them in an uncertain state.
The notation $\overline{X}$ refers to the complement of the set $X$. The proof can be found in [@refjournal] Appendix A. In other words, the completion time does not increase only if none of the users determining the completion time so far experiences a decoding delay increase in the next transmission. Using the expression of the decoding delay increase, the probability of the event $\mathds{M}(t)$ occurring can be expressed as follows: $$\begin{aligned} \mathds{P}&(\mathds{M}(t)) = 1 - \prod_{i \in \mathcal{P}(t)} \mathds{P} (d_i(\kappa,t) = 0) \nonumber \\ & \hspace{1cm} = 1 - \prod_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } e_i(t) \nonumber \\ &\prod_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap F) } e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t) \nonumber \\ &\prod_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) } e_i(t)+p_{i,n}(\kappa_i,t)-e_i(t)p_{i,n}(\kappa_i,t) \nonumber \\ &\prod_{i \in (\mathcal{P}(t) \cap \tau \cap F) } e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t)).\end{aligned}$$ From the expressions of the completion time increment, we can express the minimum completion time problem as a maximum weight clique problem, such that: $$\begin{aligned} \kappa^{*}&(t) = \underset{\kappa(t) \in \mathcal{G}}{\text{argmin}}\left\{ \mathds{P}(\mathds{M}(t))\right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmin}} \left\{ 1 - \prod_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } e_i(t) \right. \nonumber \\ & \left. {} \prod_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap F) } e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t) \right. \nonumber \\ & \left. {} \prod_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) } e_i(t)+p_{i,n}(\kappa_i,t)-e_i(t)p_{i,n}(\kappa_i,t) \right. \nonumber \\ & \left.
{} \prod_{i \in (\mathcal{P}(t) \cap \tau \cap F) } e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t)) \right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \left\{\prod_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } e_i(t) \right. \nonumber \\ & \left. {} \prod_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap F) } e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t) \right. \nonumber \\ & \left. {} \prod_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) } e_i(t)+p_{i,n}(\kappa_i,t)-e_i(t)p_{i,n}(\kappa_i,t) \right. \nonumber \\ & \left. {} \prod_{i \in (\mathcal{P}(t) \cap \tau \cap F) } e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t)) \right\} .\end{aligned}$$ Since the function $log(.)$ is an increasing function, then the problem can be expressed as: $$\begin{aligned} &\kappa^{*}(t) = \underset{\kappa(t) \in \mathcal{G}}{\text{argmin }} \text{log}\left\{ \mathds{P}(\mathds{M}(t))\right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } \text{log}( e_i(t)) \right. \nonumber \\ & \left. {} +\sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap F) } \text{log}( e_i(t) +p_{i,f}(t)-e_i(t)p_{i,f}(t)) \right. \nonumber \\ & \left. {} +\sum_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) } \text{log}( e_i(t)+p_{i,n}(\kappa_i,t)-e_i(t)p_{i,n}(\kappa_i,t)) \right. \nonumber \\ & \left. {} +\sum_{i \in (\mathcal{P}(t) \cap \tau \cap F) } \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \right\} .\end{aligned}$$ If user $i$ does not have all its wanted packets in an uncertain state, then the probability than he finished receiving all its wanted packet is $0$. Thus $p_{i,f}(t)=0,~\forall~ i \in \overline{F}$. 
Therefore, we have: $$\begin{aligned} &\sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } \text{log}( e_i(t)) \nonumber\\ &= \sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } \text{log}( e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)). \label{q1}\end{aligned}$$ Using [(\[q1\])]{}, the following expression can be simplified as: $$\begin{aligned} & \sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap \overline{F}) } \text{log}( e_i(t)) \nonumber \\ & +\sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} \cap F) } \text{log}( e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)) \nonumber \\ & = \sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} )} \text{log}( e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)) .\end{aligned}$$ Since $(U \setminus F) \subseteq \overline{F}$, we have $p_{i,f}(t)=0,~\forall~ i \in (U \setminus F)$. We then obtain: $$\begin{aligned} & \sum_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) } \text{log}( e_i(t)+p_{i,n}(\kappa_i,t)-e_i(t)p_{i,n}(\kappa_i,t)) \nonumber \\ &= \sum_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) }\text{log}( e_i(t) \nonumber \\ & +(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))). \label{q2}\end{aligned}$$ Therefore, using [(\[q2\])]{}, we can simplify the following expression: $$\begin{aligned} & \sum_{i \in (\mathcal{P}(t) \cap \tau \cap (U \setminus F)) } \text{log}( e_i(t)+p_{i,n}(\kappa_i,t)-e_i(t)p_{i,n}(\kappa_i,t)) \nonumber \\ & +\sum_{i \in (\mathcal{P}(t) \cap \tau \cap F) } \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \nonumber \\ &= \sum_{i \in (\mathcal{P}(t) \cap \tau \cap U) } \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))).\end{aligned}$$ Given the above simplifications, the maximum weight clique problem can be written as follows: $$\begin{aligned} &\kappa^{*}(t) = \underset{\kappa(t) \in \mathcal{G}}{\text{argmin }} \text{log}\left\{ \mathds{P}(\mathds{M}(t))\right\} \nonumber \\ & = \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \widehat{\tau} ) } \text{log}( e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)) \right. \nonumber \\ & \left. {} +\sum_{i \in (\mathcal{P}(t) \cap \tau \cap U) } \text{log}( e_i(t) +(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \right\} \nonumber \\ & = \underset{\kappa(t) \in \mathcal{G}}{\text{argmin}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}( e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)) \right. \nonumber \\ & \left. {} -\sum_{i \in (\mathcal{P}(t) \cap \tau \cap U) } \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \right\} .\end{aligned}$$ Note that if the packet $\kappa_i$ targeted to user $i$ is not an uncertain packet (i.e. $i \in \overline{U}$), then the packet is certainly innovative. Since this user has at least one wanted packet in a certain state, it surely still needs packets. In other words, we have $i \in \overline{U} \Rightarrow p_{i,n}(\kappa_i,t)=1$ and $p_{i,f}(t)=0$.
We can therefore decompose the following expression as: $$\begin{aligned} &\sum_{i \in (\mathcal{P}(t) \cap \tau) } \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \nonumber \\ &= \sum_{i \in (\mathcal{P}(t) \cap \tau \cap U)} \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \nonumber \\ &+ \sum_{i \in (\mathcal{P}(t) \cap \tau \cap \overline{U})} \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \nonumber \\ &= \sum_{i \in (\mathcal{P}(t) \cap \tau \cap U)} \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))).\end{aligned}$$ Given all the above simplifications, we can now express the maximum weight clique problem as: $$\begin{aligned} &\kappa^{*}(t) = \underset{\kappa(t) \in \mathcal{G}}{\text{argmin }} \text{log}\left\{ \mathds{P}(\mathds{M}(t))\right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmin}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}( e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)) \right. \nonumber \\ & \left. {} -\sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}( e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))) \right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmin}} \nonumber \\ & \left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}\left(\cfrac{ e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)}{e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))} \right) \right\} \nonumber \\ \end{aligned}$$ $$\begin{aligned} &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \nonumber \\ &\left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}\left(\cfrac{e_i(t)+(1-e_i(t))(p_{i,n}(\kappa_i,t)+p_{i,f}(t))}{ e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)}\right) \right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \nonumber \\ &\left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}\left(1 + \cfrac{(1-e_i(t))p_{i,n}(\kappa_i,t)}{ e_i(t)+p_{i,f}(t)-e_i(t)p_{i,f}(t)}\right) \right\} \nonumber \\ &= \underset{\kappa(t) \in \mathcal{G}}{\text{argmax}} \left\{ \sum_{i \in (\mathcal{P}(t) \cap \tau ) } \text{log}\left( 1 + \cfrac{p_{i,n}(\kappa_i,t)}{ \cfrac{e_i(t)}{1-e_i(t)}+p_{i,f}(t)}\right) \right\}.\end{aligned}$$ In other words, the transmission $\kappa(t)$ that can satisfy the critical criterion can be selected by solving a maximum weight clique problem in which the weight of each vertex $v_{ij}$ in $\mathcal{P}(t)$ is expressed as: $$\begin{aligned} w_{ij}^* = \text{log}\left( 1 + \cfrac{p_{i,n}(j,t)}{ \cfrac{e_i(t)}{1-e_i(t)}+p_{i,f}(t)}\right).\end{aligned}$$
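For concreteness, the selection rule above can be sketched numerically. The snippet below is only an illustration with made-up probabilities: a brute-force search stands in for a dedicated maximum weight clique solver, and the vertex names, edge set and probability values are hypothetical.

```python
import itertools
import math

def vertex_weight(e_i, p_n, p_f):
    # w*_{ij} = log(1 + p_{i,n}(j,t) / (e_i/(1-e_i) + p_{i,f}(t)))
    return math.log(1.0 + p_n / (e_i / (1.0 - e_i) + p_f))

def max_weight_clique(vertices, edges, weights):
    # exhaustive search over vertex subsets; adequate for tiny graphs
    adj = {frozenset(e) for e in edges}
    best, best_w = (), 0.0
    for r in range(1, len(vertices) + 1):
        for sub in itertools.combinations(vertices, r):
            if all(frozenset(p) in adj for p in itertools.combinations(sub, 2)):
                w = sum(weights[v] for v in sub)
                if w > best_w:
                    best, best_w = sub, w
    return best, best_w

# hypothetical vertices v_{ij} (user i targeted with packet j) with
# illustrative erasure/uncertainty probabilities (e_i, p_{i,n}, p_{i,f})
weights = {
    "v11": vertex_weight(0.2, 0.9, 0.1),
    "v22": vertex_weight(0.3, 0.5, 0.0),
    "v31": vertex_weight(0.1, 0.8, 0.2),
}
edges = [("v11", "v22"), ("v11", "v31")]  # coding-compatible vertex pairs
clique, total = max_weight_clique(list(weights), edges, weights)
```

A larger innovation probability $p_{i,n}$ increases a vertex's weight, while a larger erasure probability $e_i$ or finishing probability $p_{i,f}$ decreases it, matching the structure of $w_{ij}^*$.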
--- abstract: | The Aharonov-Bohm (AB) effect in non-commutative quantum mechanics (NCQM) is studied. First, by introducing a shift for the magnetic vector potential we give the Schrödinger equations in the presence of a magnetic field on NC space and NC phase space, respectively. Then, by solving the Schrödinger equations, we obtain the AB phase on NC space and NC phase space, respectively. PACS number(s): 11.10.Nx, 03.65.-w author: - 'Kang Li$^{a,c}$' - 'Sayipjamal Dulat$^{b,c}$' title: 'The Aharonov-Bohm Effect in Noncommutative Quantum Mechanics' --- Introduction: NC space and NC phase space ========================================= Recently, there has been much interest in the study of physics on noncommutative (NC) space [@SW]-[@Scho], not only because NC space is necessary when one studies the low energy effective theory of a D-brane with a B field background, but also because at the very tiny string scale, or at very high energies, the effects of noncommutativity of both space-space and momentum-momentum may appear. There are many papers devoted to the study of various aspects of quantum mechanics [@DJT]-[@Likang] on noncommutative space with the usual (commutative) time coordinate. In the usual $n$-dimensional commutative space, the coordinates and momenta in quantum mechanics have the following commutation relations: $$\begin{aligned} \label{Eq:cmr1} \begin{array}{l} ~[x_{i},x_{j}]=0,\\~ [p_{i},p_{j}]=0,\hspace{2cm} i,j = 1,2, ...,n, \\~ [x_{i},p_{j}]=i \hbar\delta_{ij}. \end{array}\end{aligned}$$ At very tiny scales, say the string scale, not only does space not commute with momentum, but the space coordinates may also cease to commute with each other.
Therefore NC space is a space where the coordinate and momentum operators satisfy the following commutation relations $$\label{Eq:nmr2} ~[\hat{x}_{i},\hat{x}_{j}]=i\Theta_{ij},~~~ [\hat{p}_{i},\hat{p}_{j}]=0,~~~[\hat{x}_{i},\hat{p}_{j}]=i \hbar\delta_{ij},$$ where $\hat{x}_i$ and $\hat{p}_i$ are the coordinate and momentum operators on NC space. Ref. [@Likang] showed that $\hat{p}_i=p_i$ and that $\hat{x}_i$ has the representation $$\label{Eq:Rep.1} \hat{x}_{i}= x_{i}-\frac{1}{2\hbar }\Theta_{ij}p_{j}, \hspace{1cm} i,j = 1,2,...,n.$$ The case where both space-space and momentum-momentum are noncommuting [@zhang][@Likang] is different from the case where only space-space is noncommuting. Thus NC phase space is a space where the momentum operators in Eq. (\[Eq:nmr2\]) satisfy the following commutation relations $$\label{Eq:nmr3} [\hat{p}_{i},\hat{p}_{j}]=i\bar{\Theta}_{ij},\hspace{2cm} i,j = 1,2,...,n.$$ Here $\{\Theta_{ij}\}$ and $\{\bar{\Theta}_{ij}\}$ are totally antisymmetric matrices which represent the noncommutativity of the coordinates and momenta on noncommutative space and phase space, respectively, and play a role analogous to that of $\hbar$ in usual quantum mechanics. On NC phase space the representations of $\hat{x}$ and $\hat{p}$ in terms of $x$ and $p$ were given in Ref. [@Likang] as follows $$\label{Eq:Rep.4} \begin{array}{ll} \hat{x}_{i}&= \alpha x_{i}-\frac{1}{2\hbar\alpha}\Theta_{ij}p_{j},\\ ~&~\\ \hat{p}_{i}&=\alpha p_{i}+\frac{1}{2\hbar\alpha}\bar{\Theta}_{ij}x_{j}, \hspace{1cm} i,j = 1,2,...,n. \end{array}$$ Here $\alpha$ is a scaling constant related to the noncommutativity of phase space. When $\bar{\Theta}=0$, it leads to $\alpha =1$ [@Likang], and NC phase space reduces to the NC space extensively studied in the literature, where space-space is noncommuting while momentum-momentum is commuting. Given NC space or NC phase space, one should study its physical consequences.
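As a quick numerical sanity check (not part of the original derivation), the representation (\[Eq:Rep.1\]) can be realized with truncated harmonic-oscillator matrices for two canonical pairs; away from the truncation edge, the Bopp-shifted coordinates indeed reproduce $[\hat{x}_1,\hat{x}_2]=i\Theta_{12}$. The value of $\Theta_{12}$ below is an arbitrary illustration.

```python
import numpy as np

hbar, Theta, M = 1.0, 0.3, 12  # Theta = Theta_12, arbitrary illustrative value

# truncated harmonic-oscillator matrices for one canonical pair (x, p)
a = np.diag(np.sqrt(np.arange(1.0, M)), 1)   # annihilation operator
x1d = np.sqrt(hbar / 2.0) * (a + a.T)
p1d = 1j * np.sqrt(hbar / 2.0) * (a.T - a)
I = np.eye(M)

# two-mode operators via tensor products
x1, p1 = np.kron(x1d, I), np.kron(p1d, I)
x2, p2 = np.kron(I, x1d), np.kron(I, p1d)

# Bopp-shifted coordinates: xhat_i = x_i - Theta_ij p_j / (2 hbar)
xh1 = x1 - Theta / (2.0 * hbar) * p2         # Theta_12 = +Theta
xh2 = x2 + Theta / (2.0 * hbar) * p1         # Theta_21 = -Theta

# commutator; equals i*Theta away from the truncation edge
C = xh1 @ xh2 - xh2 @ xh1
```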
It appears that the most natural places to search for noncommutativity effects are simple quantum mechanics (QM) systems. So far many interesting topics in NCQM, such as the hydrogen atom spectrum in an external magnetic field [@nair; @CST2], the Aharonov-Bohm (AB) effect [@CST1] in the presence of a magnetic field, as well as the Aharonov-Casher effect [@MZ], have been studied extensively. The purpose of this paper is to study further the Aharonov-Bohm effect on NC space and NC phase space, respectively, where space-space noncommutativity, or both space-space and momentum-momentum noncommutativity, could produce an additional phase difference. This paper is organized as follows: In section 2, we study the Aharonov-Bohm effect on NC space. First, the Schrödinger equation in the presence of a magnetic field is given, and the magnetic Aharonov-Bohm phase expression is derived. In two dimensions, our result agrees with the result of Ref. [@CST1]. The general AB phase on NC space is also given in the presence of an electromagnetic field. In section 3, we investigate the Aharonov-Bohm effect on NC phase space. By solving the Schrödinger equation in the presence of a magnetic field, the additional AB phase related to the momentum-momentum noncommutativity is obtained explicitly. Conclusions and some remarks are given in section 4. The Aharonov-Bohm effect on NC space ===================================== Let $H(x,p)$ be the Hamiltonian operator of the usual quantum system; then the static Schrödinger equation on NC space is usually written as $$\label{sdeq1} H(x,p)\ast\psi = E\psi,$$ where the Moyal-Weyl (or star) product between two functions is defined by $$\label{star} (f \ast g)(x) = e^{ \frac{i}{2} \Theta_{ij} \partial^{x}_{i} \partial^{y}_{j} }f(x)g(y)\Big|_{y=x} = f(x)g(x) + \frac{i}{2}\Theta_{ij}\, \partial_i f \,\partial_j g + O(\Theta^2),$$ where $f(x)$ and $g(x)$ are two arbitrary functions.
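The first-order expansion of the star product can be verified symbolically. The following snippet is an illustrative check (not from the paper) that $x_1 \ast x_2 - x_2 \ast x_1 = i\Theta_{12}$, in agreement with the commutator (\[Eq:nmr2\]).

```python
import sympy as sp

x1, x2, theta = sp.symbols('x1 x2 theta', real=True)
xs = (x1, x2)
Theta = sp.Matrix([[0, theta], [-theta, 0]])  # antisymmetric NC matrix

def star(f, g):
    # first-order Moyal product: f*g = f g + (i/2) Theta_ij (d_i f)(d_j g)
    corr = sum(Theta[i, j] * sp.diff(f, xs[i]) * sp.diff(g, xs[j])
               for i in range(2) for j in range(2))
    return sp.expand(f * g + sp.I / 2 * corr)

# Moyal bracket of the coordinates themselves; expect i*theta
bracket = sp.simplify(star(x1, x2) - star(x2, x1))
```

For polynomial coordinates the first-order truncation is exact, which is why this reproduces the full commutation relation.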
On NC space the star product can be replaced by a Bopp shift [@CFZ], i.e. the star product can be changed into the ordinary product by replacing $H(x,p)$ with $H(\hat{x},\hat{p})$. Thus the Schrödinger equation can be written as $$\label{GBShift} H(\hat{x}_i,\hat{p}_i)\psi=H( x_{i}-\frac{1}{2\hbar}\Theta_{ij}p_{j},p_{i})\psi = E\psi.$$ Here $x_i$ and $p_i$ are the coordinate and momentum operators of usual quantum mechanics. Thus Eq. (\[GBShift\]) is actually defined on commutative space, and the noncommutative effects can be evaluated through the $\Theta$-related terms. Note that the $\Theta$ term can always be treated as a perturbation in QM, since $\Theta_{ij}\ll 1$. When a magnetic field is involved, the Schrödinger equation (\[sdeq1\]) becomes $$\label{sdeq2} H( x_i,p_i,A_i)\ast\psi = E\psi.$$ To replace the star product in Eq. (\[sdeq2\]) with a usual product, we first need to replace $x_i$ and $p_i$ with a Bopp shift, and then we also need to replace the vector potential $A_i$ with a shift given as follows $$\begin{aligned} \label{L-D} A_i &\rightarrow& A_i +\frac{1}{2}\Theta_{lj}p_l\partial_j A_i.\end{aligned}$$ Thus the Schrödinger Eq. (\[sdeq2\]) in the presence of a magnetic field becomes $$\label{sdeq3a} H\big( x_{i}-\frac{1}{2\hbar}\Theta_{ij}p_{j},p_{i}, A_i +\frac{1}{2}\Theta_{lj}p_l\partial_j A_i \big)\psi = E\psi.$$ We should emphasize that the Bopp shift together with the shift Eq. (\[L-D\]) is equivalent to the star product in the Schrödinger Eq. (\[sdeq2\]).
Now let us consider a particle of mass $m$ and charge $q$ moving in a magnetic field with vector potential $A_i$; then the Schrödinger equation is (we choose units with $\hbar=c=1$) $$\label{sdeq3} \frac{1}{2m} \Big(p_i- q A_i - \frac{1}{2}q\Theta_{lj}p_l\partial_j A_i \Big)^2\psi = E\psi.$$ In a way analogous to usual quantum mechanics, the solution of (\[sdeq3\]) reads $$\label{solution1} \psi=\psi_0 \exp \big[iq\int_{x_0}^x( A_i +\frac{1}{2}\Theta_{lj}p_l\partial_j A_i)dx_i\big],$$ where $\psi_0$ is the solution of (\[sdeq3\]) when $A_i=0$. The phase term of (\[solution1\]) is the so-called AB phase. If we consider a charged particle passing through a double slit, then the integral runs from the source $x_0$ through one of the two slits to the screen $x$, and the interference pattern will depend on the phase difference between the two paths. Thus the total phase shift for the AB effect is $$\label{AB-sphase1} \Delta \Phi_{AB}=\delta \Phi_0 + \delta\Phi_\theta^{NC} = iq\oint A_idx_i +\frac{iq}{2} \oint\Theta_{lj}(m v_l + q A_l)\partial_j A_i dx_i,$$ where the relation $m v_l=p_l -q A_l+O(\Theta)$ has been applied [^1], and we omitted terms of second order in $\Theta$; the first term is the AB phase of usual quantum mechanics, the second term is the correction to the usual AB phase due to space-space noncommutativity; the line integral runs from the source through one of the two slits to the screen and returns to the source through the other slit. In three dimensional NC space, i.e. $i,j=1,2,3$, we can define a vector $\mathbf{\theta}=(\theta_1,\theta_2,\theta_3)$ whose components satisfy $\Theta_{ij}=\epsilon_{ijk}\theta_k$, or $\theta_i=\frac{1}{2}\epsilon_{ijk}\Theta_{jk}$.
Then the two contributions in the second term of Eq. (\[AB-sphase1\]) take the form $$\label{1} \frac{i }{2}q\oint \Theta_{lj} m v_l \partial_j A_i dx_i=\frac{i}{2}q \oint \epsilon_{ljk}\theta_k m v_l\partial_j A_i dx_i = \frac{i}{2}q m \oint\mathbf{\theta}\cdot (\mathbf{v}\times \nabla A_i)dx_i,$$ and $$\label{2} \frac{i}{2}q^2\oint\Theta_{lj}A_l\partial_j A_i dx_i=\frac{i}{2}q^2 \oint \epsilon_{ljk}\theta_k A_l\partial_j A_i dx_i=\frac{i}{2}q^2 \oint \mathbf{\theta}\cdot(\mathbf{A }\times\nabla A_i ) dx_i.$$ Using Eqs. (\[1\]) and (\[2\]), we can write the AB phase as $$\label{AB-sphase2} \Delta \Phi_{AB}=iq\oint A_idx_i +\frac{i}{2}q \oint\big( m \mathbf{\theta} \cdot(\mathbf{v}\times \nabla A_i)+ q \mathbf{\theta}\cdot(\mathbf{A }\times\nabla A_i )\big)dx_i.$$ In the two dimensional NC plane $(i,j=1,2)$, if we consider an electron $(q=-e)$ moving in a magnetic field, the vector $\mathbf{\theta}$ defined above has only the third component $\theta_3$, with $\Theta_{ij}=\theta_3 \epsilon_{ij}$, $\epsilon_{12}=-\epsilon_{21}=1$, $\epsilon_{11}=\epsilon_{22}=0$; then we have $$\label{AB-sphase3} \Delta \Phi_{AB}=-ie\oint A_idx_i - \frac{i}{2}e \theta_3\oint\big( m (\mathbf{v}\times \nabla A_i)_3 - e(\mathbf{A }\times\nabla A_i )_3 \big)dx_i.$$ We should emphasize that, in the two dimensional NC plane, our result (\[AB-sphase3\]) is exactly the same as that of Ref. [@CST1]. The AB phase expression (\[AB-sphase1\]) gives us a hint that when a charged particle moves in an electromagnetic field with four dimensional potential $A_\mu$, the corresponding AB phase will have the following general expression, $$\label{AB-sphase4} \Delta \Phi_{AB}=iq\oint ( A_\mu +\frac{1}{2}\Theta_{\alpha\beta}(mv_\alpha +q A_\alpha) \partial_\beta A_\mu )dx^\mu .$$ The second term is the consequence of space-space noncommutativity.
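The leading term $iq\oint A_i dx_i$ of the AB phase is just $iq$ times the enclosed flux. As a simple numerical illustration (with an arbitrary flux value, not taken from the paper), the loop integral of the thin-solenoid potential $\mathbf{A}=\frac{\Phi}{2\pi}\,\frac{(-y,\,x)}{x^2+y^2}$ around the flux line reproduces $\Phi$:

```python
import numpy as np

Phi = 2.5                                  # arbitrary enclosed flux
n = 20000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = 2.0 * np.pi / n
x, y = np.cos(t), np.sin(t)                # unit loop around the flux line
r2 = x ** 2 + y ** 2
Ax = -Phi * y / (2.0 * np.pi * r2)         # A = (Phi/2pi) (-y, x)/r^2
Ay = Phi * x / (2.0 * np.pi * r2)
dxdt, dydt = -np.sin(t), np.cos(t)         # tangent vector of the loop
loop = np.sum(Ax * dxdt + Ay * dydt) * dt  # oint A . dl, expect Phi
```

The result depends only on the winding around the flux line, not on the loop's shape, which is the geometric content of the commutative AB phase; on NC space the $\Theta$-dependent terms then correct this value.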
The Aharonov-Bohm effect on NC Phase space =========================================== The Bose-Einstein statistics in NCQM requires both space-space and momentum-momentum noncommutativity. Thus we should also consider the momentum-momentum noncommutativity when we deal with physical problems. On NC phase space the noncommuting coordinates $\hat{x}_i$ and momenta $\hat{p}_i$ were given in Eq. (\[Eq:Rep.4\]). On NC phase space the star product in Eq. (\[star\]) becomes $$(f \ast g)(x,p) = e^{ \frac{i}{2\alpha^2} \Theta_{ij} \partial_i^x \partial_j^{x'}+\frac{i}{2\alpha^2}\bar{\Theta}_{ij} \partial_i^p \partial_j^{p'}} f(x,p)g(x',p')\Big|_{x'=x,\,p'=p} = f(x,p)g(x,p) + \frac{i}{2\alpha^2}\Theta_{ij}\, \partial_i^x f \,\partial_j^x g + \frac{i}{2\alpha^2}\bar{\Theta}_{ij}\, \partial_i^p f \,\partial_j^p g + O(\Theta^2,\bar{\Theta}^2).$$ To replace the star product in the Schrödinger Eq. (\[sdeq2\]) with a usual product, we first need to replace $x_i$ and $p_i$ with a generalized Bopp shift $$\begin{aligned} \label{gbshift1} x_i\rightarrow x_{i}-\frac{1}{2\hbar\alpha^2}\Theta_{ij}p_{j},\nonumber\\ p_i\rightarrow p_{i}+\frac{1}{2\hbar\alpha^2}\bar{\Theta}_{ij}x_{j},\end{aligned}$$ and then we need to replace $A_i$ with the generalized shift $$\begin{aligned} \label{L-D1} A_i \rightarrow \alpha A_i +\frac{1}{2\alpha}\Theta_{lj}p_l\partial_j A_i.\end{aligned}$$ Thus on NC phase space the Schrödinger Eq. (\[sdeq2\]) becomes $$\label{sdeq4} H\Big( x_{i}-\frac{1}{2\hbar\alpha^2}\Theta_{ij}p_{j}, p_{i}+\frac{1}{2\hbar\alpha^2}\bar{\Theta}_{ij}x_{j}, A_i +\frac{1}{2\alpha^2}\Theta_{lj}p_l\partial_j A_i \Big)\psi = E\psi.$$ One may note that Eq. (\[gbshift1\]) is different from Eq. (\[Eq:Rep.4\]); through Eq. (\[gbshift1\]), other physical quantities may also be shifted: for example, the mass may be replaced by $m\rightarrow m/\alpha^2 $ and the electric charge $q$ by $q/\alpha$.
Here, again, we consider a particle of mass $m$ and electric charge $q$ moving in a magnetic field. On NC phase space, the Hamiltonian has the form $$\begin{aligned} \hat{H}=\frac{1}{2m} \Big(\alpha p_i+\frac{1}{2\alpha}\bar{\Theta}_{ij}x_j- q (\alpha A_i +\frac{1}{2\alpha}\Theta_{lj}p_l\partial_j A_i) \Big)^2\nonumber\\ =\frac{1}{2m'} \Big( p_i+\frac{1}{2\alpha^2 }\bar{\Theta}_{ij}x_j- q(A_i + \frac{1}{2\alpha^2}\Theta_{lj}p_l\partial_j A_i) \Big)^2,\end{aligned}$$ with $m'=m/\alpha^2 $. Thus the total phase shift for the AB effect, including the contributions due to both space-space and momentum-momentum noncommutativity on 3-dimensional NC phase space, is [^2] $$\begin{aligned} \label{AB-phase2} \Delta \Phi_{AB}= \delta\Phi_{NCPS}&=&iq\oint A_idx_i +\frac{iq}{2\alpha^2} \oint\big( m' \mathbf{\theta} \cdot(\mathbf{v}\times \nabla A_i)+ q\mathbf{\theta}\cdot(\mathbf{A }\times\nabla A_i )\big)dx_i - \frac{i}{2\alpha^2 }\oint\bar{\Theta}_{ij}x_jdx_i\nonumber\\ &=&iq\oint A_idx_i +\frac{i}{2}q \oint\big( m \mathbf{\theta} \cdot(\mathbf{v}\times \nabla A_i)+ q\mathbf{\theta}\cdot(\mathbf{A }\times\nabla A_i )\big)dx_i +\delta\Phi^{NCPS}_{\bar\theta},\end{aligned}$$ where $\delta\Phi^{NCPS}_{\bar\theta}$ is the first-order correction due to momentum-momentum noncommutativity, and it has the form $$\begin{aligned} \label{AB-phase3} \delta\Phi^{NCPS}_{\bar\theta}= -\frac{i}{2\alpha^2 }\oint \bar{\Theta}_{ij}x_jdx_i + \frac{i}{2\alpha^2}(1-\alpha^2 ) q^2 \oint \mathbf{\theta}\cdot(\mathbf{A }\times\nabla A_i )dx_i + \frac{i}{2\alpha^4}(1-\alpha^4 )q m \oint \mathbf{\theta} \cdot(\mathbf{v}\times \nabla A_i)dx_i.\end{aligned}$$ It is obvious from (\[AB-phase3\]) that when $\alpha=1$ we have $\bar{\Theta}_{ij}=0$ as well as $\delta\Phi^{NCPS}_{\bar\theta} =0$, so the AB phase returns to its expression Eq. (\[AB-sphase2\]) on NC space. Concluding Remarks ================== In this article we have studied the Aharonov-Bohm effect in NCQM.
The consideration of NC space (NC phase space) produces an additional phase difference. In order to obtain the NC space correction to the usual Aharonov-Bohm phase difference, in section 2 we first give the Schrödinger equation in the presence of a magnetic field; by solving the equation we derive the magnetic Aharonov-Bohm phase expression. Note that the noncommutative effects of the space (phase space) in the usual Schrödinger equation can be realized in two steps. The first step is to replace the coordinate and momentum operators with a so-called Bopp (generalized Bopp) shift, and the second is to replace the magnetic potential $\mathbf{A}$ with the special shift which we defined in Eq. (\[L-D\]) of our paper. It is worth mentioning that, on the NC plane, our result (\[AB-sphase3\]) coincides with the result of Ref. [@CST1]. In order to obtain the NC phase space correction to the usual Aharonov-Bohm phase difference, in section 3 we solve the Schrödinger equation in the presence of a magnetic field and obtain the magnetic Aharonov-Bohm phase expression. In particular, the new term $\delta\Phi^{NCPS}_{\bar\theta}$, which comes from the momentum-momentum noncommutativity, is given explicitly. The method we employed in this paper may also be applied to other related physical problems on NC space and NC phase space, for example, the Aharonov-Casher effect in NC quantum mechanics. Further study on the related topics will be reported in our forthcoming paper. [**[Acknowledgments:]{}**]{} This paper was completed during our visit to the high energy section of the Abdus Salam International Centre for Theoretical Physics (ICTP). We would like to thank Prof. S. Randjbar-Daemi for his kind invitation and warm hospitality during our visit at the ICTP. This work is supported in part by the National Natural Science Foundation of China (90303003, 10575026, and 10465004). The authors are also grateful for the support from the Abdus Salam ICTP, Trieste, Italy. [99]{} N.
Seiberg and E. Witten, JHEP 9909:032(1999), hep-th/9908142. A. Connes, M. R. Douglas, A. Schwarz, JHEP 9802, 003(1998), hep-th/9711162; M. R. Douglas, C. M. Hull, JHEP 9802, 008(1998), hep-th/9711165. F. Ardalan, H. Arfaei, M. M. Sheikh-Jabbari, JHEP 9902, 016(1999), hep-th/9810072. T. Curtright, D. Fairlie and C. Zachos, Phys. Rev. D58(1998)025002; L. Mezincescu, hep-th/0007046. C.-S. Chu, P.-M. Ho, Nucl. Phys. B550, 151(1999), hep-th/9812219; Nucl. Phys. B568, 447(2000), hep-th/9906192. V. Schomerus, JHEP 9906, 030(1999), hep-th/9903205. G. Dunne, R. Jackiw, C. Trugenberger, Phys. Rev. D41, 661(1990). G. G. Athanasiu, E. G. Floratos, S. Nicolis, J. Phys. A29, 6737(1996). J. Lukierski, P. C. Stichel, W. J. Zakrzewski, Ann. Phys. 260, 224(1997). D. Bigatti, L. Susskind, Phys. Rev. D62, 066004(2000). J. Gamboa, M. Loeve, F. Mendez, J. C. Rojas, Mod. Phys. Lett. A16, 2075(2001); Phys. Rev. D64, 067901(2001). V. P. Nair, Phys. Lett. B505, 249(2001); V. P. Nair, A. P. Polychronakos, Phys. Lett. B505, 267(2001). B. Morariu, A. P. Polychronakos, Nucl. Phys. B610, 531(2001); Nucl. Phys. B634 (2002) 326-338, hep-th/0201070. M. Chaichian, P. Presnajder, M. M. Sheikh-Jabbari, A. Tureanu, Phys. Lett. B 527, 149(2002); H. Falomir, J. Gamboa, M. Loeve, F. Mendez, J. C. Rojas, Phys. Rev. D66, 045018(2002), hep-th/0203260; Omer F. Dayi and Ahmed Jellal, J. Math. Phys. 43:4592, 2002, hep-th/0111267. M. Chaichian, M. M. Sheikh-Jabbari, A. Tureanu, Phys. Rev. Lett. 86, 2716(2001); M. Chaichian, A. Demichev, P. Presnajder, M. M. Sheikh-Jabbari, A. Tureanu, Nucl. Phys. B611 (2001) 383-402, hep-th/0101209. B. Mirza, M. Zarei, Eur. Phys. J. C 32, 583-586(2004). Y. Aharonov, D. Bohm, Phys. Rev. 115, 485(1959). R. G. Chambers, Phys. Rev. Lett. 5, 3(1960). D. Kochan and M. Demetrian, Acta Physica Slovaca 52, No. 1, (2002), pp. 1-9, hep-th/0102050. T. Curtright, C. Zachos, Mod. Phys. Lett. A16 (2001) 2381-2385, hep-th/0105226; K. Bolonek, P. Kosinski, Phys. Lett. B547, 51(2002); K. Nozari, T.
Azizi, gr-qc/0504090; Sayipjamal Dulat, Kang Li, hep-th/0508060. D. Karabali, V. P. Nair, A. P. Polychronakos, Nucl. Phys. B627, 565(2002). R. Jengo, R. Ramachandran, JHEP 0202, 017(2002). B. Muthukumar, P. Mitra, Phys. Rev. D66, 027701(2002). A. A. Deriglazov, Phys. Lett. B530, 235(2002). S. Bellucci, A. Nersessian, Phys. Lett. B542 (2002) 295-300, hep-th/0205024. R. Banerjee, Mod. Phys. Lett. A17 (2002), 631. A. E. F. Djemai, H. Smail, Commun. Theor. Phys. 41:837-844, 2004, hep-th/0309006. A. Kokado, Phys. Rev. D 69, 125007(2004), hep-th/0401180. Michal Demetrian and Denis Kochan, hep-th/0102050; Jian-zu Zhang, Phys. Lett. B584, 204(2004). Ahmed Jellal, hep-th/0105303; S. Ghosh, hep-th/0405177. C. Duval, P. A. Horvathy, Phys. Lett. B479, 284(2000); C. Duval, P. A. Horvathy, J. Phys. A34:10097-10108, 2001, hep-th/0106089; P. Horvathy, M. Plyushchay, JHEP 0206 (2002), 033; Mariano A. del Olmo and M. S. Plyushchay, hep-th/0508020; P. A. Horvathy and M. S. Plyushchay, Phys. Lett. B595(2004) 547-555, hep-th/0404137; Nucl. Phys. B714:269-291, 2005, hep-th/0502040. O. Bertolami, J. G. Rosa, C. M. L. de Aragao, P. Castorina, and D. Zappala, Phys. Rev. D72 (2005) 025010, hep-th/0505064. Kang Li, Jianhua Wang, Chiyi Chen, Mod. Phys. Lett. A Vol. 20, No. 28(2005) 2165-2174. [^1]: From Eq.(\[sdeq3\]), one writes the velocity operator on NC space as $v_l=\frac{\partial H}{\partial p_l}=\frac{1}{m}(p_l-qA_l -\frac{1}{2}q\Theta_{ij}p_i\partial_jA_l-\frac{1}{2}q\Theta_{lj}(p_i-qA_i)\partial_jA_i+O(\Theta^2))=\frac{1}{m}(p_l-qA_l +O(\Theta)).$ [^2]: In a way similar to NC space, we have the relation $m'v_l=p_l -q A_l+O(\Theta)+O(\bar{\Theta})$ on NC phase space, and we omitted terms of second order in $\Theta$ and $\bar{\Theta}$ in Eq. ([\[AB-phase2\]]{}).
--- abstract: 'A mechanism of point defect migration triggered by local depolarization fields is shown to explain some still inexplicable features of aging in acceptor doped ferroelectrics. A drift-diffusion model of the coupled charged defect transport and electrostatic field relaxation within a two-dimensional domain configuration is treated numerically and analytically. Numerical results are given for the emerging internal bias field of about $1 \rm \: kV/mm$ which levels off at dopant concentrations well below $1 \rm \: mol \%$; the fact, long ago known experimentally but still not explained. For higher defect concentrations a closed solution of the model equations in the drift approximation as well as an explicit formula for the internal bias field is derived revealing the plausible time, temperature and concentration dependencies of aging. The results are compared to those due to the mechanism of orientational reordering of defect dipoles.' author: - 'Yuri A. Genenko' bibliography: - 'apssamp.bib' title: 'Space-charge mechanism of aging in ferroelectrics: an exactly solvable two-dimensional model' --- \[sec:intro\]Introduction ========================= The phenomenon of gradual change of physical properties with time, called aging, has long been a known feature of ferroelectrics, especially when acceptor doped [@plessner56aging; @ikegami67mechanism; @takahashi70space; @thomann72stabilization; @carl78electrical; @takahashi82; @arlt88internal; @Lohkamper1990Gauss; @Warren1995chargetrapping; @Afanasjev2001; @zhang05insitu; @zhang06aging; @Morozov2008].
Aging reveals itself in a quasi-logarithmic decrease of the dielectric constant with time [@plessner56aging; @thomann72stabilization], a reduction of domain wall mobility leading to stabilization of the aged domain structure [@ikegami67mechanism], altered shapes of polarization loops in both poled and unpoled aged samples [@takahashi70space; @carl78electrical; @takahashi82; @Afanasjev2001; @zhang05insitu; @zhang06aging; @Morozov2008] and related indications. A characteristic aging time, $\tau$, a clamping pressure on the domain walls, $P_{cl}$, and an internal bias field, $E_{ib}$, were introduced as parameters quantifying aging [@carl78electrical; @takahashi82]. In the past three decades several concepts have been developed [@TagantsevReview; @dawber05physics] to explain the aging phenomena in terms of domain splitting [@ikegami67mechanism], space charge formation [@takahashi70space; @thomann72stabilization], electronic charge trapping at domain boundaries [@Warren1995chargetrapping; @Afanasjev2001], ionic drift [@hage80_02; @lamb86_02; @scott87activation; @Morozov2008] or reorientation of defect dipoles [@arlt88internal; @Lohkamper1990Gauss; @zhang05insitu; @zhang06aging]. The latter concept, based on the widely recognized mechanism of gradual orientation of defect dipoles formed by charged acceptor defects and oxygen vacancies, has provided probably the most successful quantitative explanation of many features relevant to aging and fatigue in ferroelectrics, particularly plausible time and temperature dependencies of $E_{ib}$. Nevertheless, some long-standing questions remain open, most pronounced among them the dependence of $\tau$ and $E_{ib}$ on the defect concentration [@carl78electrical; @takahashi82].
Since, in the dipole reorientation model [@arlt88internal; @Lohkamper1990Gauss], aging results from the individual cage motion of an oxygen vacancy around an acceptor defect, insensitive to the presence of other defect dipoles, $\tau$ is expected to be independent of the acceptor concentration. Similarly, the independent contributions of different defect dipoles to the clamping pressure in this model have to result in $E_{ib}$ directly proportional to the concentration. However, the experimentally observed saturation of $E_{ib}$ at medium concentrations $<1 \rm \: mol \%$, as well as the distinct concentration dependence of $\tau$ [@carl78electrical; @takahashi82; @arlt88internal], provides indications of some collective mechanism of aging. In this paper, therefore, we pursue an alternative, space-charge mechanism to explain the above mentioned features of aging. It may also be related to the self-polarization phenomenon and the internal field-induced, migratory polarization observed in thin ferroelectric films [@Afanasjev2001; @Kholkin1998]. Recently, a model quantifying the space-charge mechanism was advanced [@lupascu06aging; @Genenko-ferro2007; @Genenko-ferro2008] which shows that a clamping pressure $P_{cl}\simeq 1 \rm \: MPa$ and a field $E_{ib}\simeq 1 \rm \: kV/mm$ comparable with experiments can result from the formation of space charge zones near charged domain boundaries, assuming a small but finite mobility of the charged defects. Aging was first studied for low defect concentrations of about $0.01 \rm \: mol \%$ [@lupascu06aging; @Genenko-ferro2007], and the effect of anisotropy was considered [@Genenko-ferro2008]. Here we apply the isotropic, two-dimensional version of this model [@Genenko-ferro2007] to study aging of unpoled ferroelectrics for a wide range of acceptor concentrations and present numerical and analytical results on the temperature, concentration and time dependencies of $E_{ib}$.
Gradual change of material properties under the effect of an external dc or ac electric field, e.g. the fatigue phenomenon, is not considered at this stage.

\[sec:generalmodel\] Model of a ferroelectric grain
===================================================

The main assumptions of the model [@Genenko-ferro2007] used here are: (a) availability of mobile charged defects in amounts sufficient to substantially compensate the bound charge at the domain boundaries and (b) presence of strong local depolarization fields in the unpoled ferroelectric material. Let us consider assumption (a). The conductivity of perovskites has been extensively studied during the past two decades [@Choi1986BTOChemistry; @waser91bulk; @Brennan1995; @Raymond1996perovskitechemistry; @DMSmyth2003; @Guo2005conductivity; @Ohly2006]. It was established that, depending on temperature and the partial pressure of oxygen, perovskites may exhibit ionic or electronic conductivity, such that n-type conductivity prevails under reducing conditions and p-type in oxidizing atmospheres [@Choi1986BTOChemistry; @Raymond1996perovskitechemistry; @DMSmyth2003]. The ionic conductivity prevails between the above two regimes of electronic conductivity and is clearly dominated by oxygen vacancies. It was also shown that the bulk mobilities of electrons and holes are not activated in $\rm BaTiO_3$ and exceed the activated mobility of vacancies by many orders of magnitude. Nevertheless, at atmospheric partial oxygen pressure and not very high temperatures, the concentration of electronic carriers remains smaller than the concentration of vacancies by many orders of magnitude and is far from sufficient to compensate the polarization bound charge. Moreover, the bulk concentration of electronic carriers is so small that the corresponding Debye screening length strongly exceeds the typical grain size. This allows one to entirely neglect electronic screening of the local bound charges. 
The same applies also to ambient electronic carriers in the intergranular space, though their concentration may exceed the bulk one by a few orders of magnitude [@Guo2005conductivity]. The balance between the electronic and ionic species may be substantially changed locally right near the charged domain boundaries because of electronic band bending by the very strong local depolarization fields; however, this possibility depends on the reduction potentials of certain metal dopants [@Gallardo2008]. In our study we suppose for simplicity that the concentration of electronic carriers is negligible both in and outside the ferroelectric grains. The oxygen vacancy concentration is usually fixed by acceptor defects even in nominally undoped materials, since these are as a rule unintentionally acceptor doped [@Raymond1996perovskitechemistry; @DMSmyth2003; @Guo2005conductivity]. For these reasons, we will assume in the following that only oxygen vacancies participate in charge migration. Consider now assumption (b). In a perfectly ordered domain system the bound charges at the domain boundaries can be fully compensated, resulting in complete suppression of the depolarization fields inside the bulk ferroelectric. Experimental studies show that this is not the case in real, disordered systems. Considering the high resolution images of domain patterns in $\rm BaTiO_3$ and lead zirconate titanate (PZT) samples by scanning transmission electron microscopy [@Gregg2006], by bright field transmission electron microscopy [@SchmittTEM2007] and by secondary electron spectroscopy [@FarooqSEM2008], one can observe numerous places where uncompensated bound charges should emerge and, consequently, local depolarization fields can be present. 
Three relevant circumstances can be identified, namely, when a domain array a) ends up in an unpolarized area inside the grain, b) meets a domain wall of another domain array instead of charged domain faces, or c) ends up at the grain boundary, contacting a sufficiently wide unpolarized intergranular area. To investigate quantitatively the consequences of the presence of uncompensated bound charges in the material, the third of the above mentioned cases, (c), will be representatively studied in the following. Since the depolarization field of a domain array decays exponentially over a distance equal to the domain width $a$ [@lupascu06aging; @Genenko-ferro2007], it suffices to assume an intergrain spacing larger than $a$ to render neighbouring grains electrically independent. That is why, for studying the effects of local depolarization fields, a model of a single ferroelectric grain surrounded by an unpolarized medium may be chosen. Let us start with a square ferroelectric grain of zero total polarization surrounded by an infinite dielectric medium. We imagine a two-dimensional periodic array of domains in the grain cut by the interfaces with the dielectric, $z=0$ and $z=L$, perpendicular to the direction of spontaneous polarization, which is along the $z$-direction of a Cartesian coordinate system $x, y, z$ (Fig. \[grain\]).

![Scheme of a 2D-array of $180^{\circ}$-domain walls in a square ferroelectric grain occupying the space $|x|<L/2,\,0<z<L$. Straight arrows show the direction of the polarization and curved arrows the approximate direction of the local electric field.[]{data-label="grain"}](Fig-mod_1.ps)

The system is supposed to be uniform in the $y$-direction so that no variable is $y$-dependent. If the length of the domains $L$ along the $z$-axis is much larger than their width $a$ along the $x$-axis, which is typically the case in experiments, electric field lines are effectively closed at the same side of the grain. 
Finite-element simulations of the electric field in a finite domain array show a virtually periodic field pattern along the $x$ axis everywhere except at the very ends of the array [@lupascu06aging; @Genenko-ferro2008]. That is why we can consider a periodic domain array infinite along the $x$ axis as a representative model for a finite multi-domain grain. This model configuration is well known in the physics of polarized media and was used for the study of equilibrium and dynamic properties of ferromagnetic [@Kittel1946; @LandauElectrodynamicsContinuum] and ferroelectric [@Mitsui1953] materials. Furthermore, since both components of the depolarization field decrease exponentially towards the interior of the grain along the $z$-axis [@lupascu06aging; @Genenko-ferro2007], transport of the charged defects driven by the field is expected to occur in the vicinity of the grain boundaries $z=0$ and $z=L$. Considering charge migration near the interface $z=0$, we can therefore assume the domains to be infinite along the $z$-axis without introducing a substantial error. Evidently, the same process of charge separation occurs at the other grain boundary, $z=L$, too. When calculating the forces exerted upon the domain walls, both ends of the domains must be taken into account. Hence, to study the depolarization field induced charge migration it is sufficient to consider the interface $z=0$ between the domain array occupying the ferroelectric half space $z>0$ and the dielectric medium occupying the half space $z<0$. For simplicity, both media are assumed to be isotropic and characterized by the relative dielectric permittivities $\varepsilon_f$ and $\varepsilon_d$, respectively. 
Due to polarization, the domain faces at $z=0$ are alternately charged with the bound surface charge density [@LandauElectrodynamicsContinuum] $$\begin{aligned} \label{face-charge} \rho_b(x,z)=\sigma_p\delta(z)\sum_{n=-\infty}^{\infty}(-1)^n\nonumber\\ \times\theta\left(\frac{a}{2}-an+x\right) \theta\left(\frac{a}{2}+an-x\right),\end{aligned}$$ where $\sigma_p = |\mathbf{P}_s|$ is the spontaneous polarization value, and $\delta (z)$ and $\theta (x)$ are the Dirac $\delta $-function and the Heaviside unit step function, respectively. The choice of the origin in the middle of the positively charged domain face also makes the problem bilaterally symmetric. Migration of charge carriers is governed by the drift-diffusion equation, often used for the description of electronic transport in semiconductors [@Sze]: $$\label{continuity}\partial_t c=-\nabla(\mu c {\bf E}) + D\triangle c \,,$$ where ${\bf E}(x,z,t)$ is the local electric field at time $t$, $c(x,z,t)$ is the concentration of the positively charged, mobile species, and $\mu$ and $D$ are their mobility and diffusivity, respectively. We assume, for simplicity, that the latter two quantities are isotropic and connected by the Einstein relation, $D=\mu kT/q_f$, with $k$ the Boltzmann constant, $T$ the absolute temperature and $q_f$ the charge of the carriers. The electric field ${\bf E}(x,z,t)$ is determined self-consistently by the charged faces of the domains and the imbalanced charge density in the bulk, $\rho(x,z,t)=q_f \left[c(x,z,t)-c_0\right]$, through Gauss’ law $$\label{Gauss}\nabla{\bf E}=\frac{q_f}{ \varepsilon_f\varepsilon_0}(c-c_0) \,,$$ where $c_0$ is the background concentration of the immobile acceptor defects guaranteeing total electroneutrality, and $\varepsilon_0$ is the permittivity of vacuum. 
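The $\theta$-function sum in Eq. (\[face-charge\]) simply encodes stripes of alternating sign along $x$. As a quick numerical check, a minimal Python sketch (with $a$ and $\sigma_p$ set to 1 for illustration, and the sum truncated at a finite $n$):

```python
import numpy as np

def theta(x):
    # Heaviside unit step function (the value at 0 is irrelevant away from domain edges)
    return np.where(x > 0.0, 1.0, 0.0)

def surface_charge(x, a=1.0, sigma_p=1.0, nmax=50):
    """x-dependent factor of the bound surface charge density of Eq. (face-charge)."""
    s = 0.0
    for n in range(-nmax, nmax + 1):
        s += (-1)**n * theta(a/2 - a*n + x) * theta(a/2 + a*n - x)
    return sigma_p * s

# positively charged face around x = 0, negative around x = a, period 2a
print(surface_charge(0.0), surface_charge(0.75), surface_charge(2.0))
```

The positively charged face occupies $|x|<a/2$ around the origin, consistent with the bilateral symmetry noted above.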
For the boundary conditions to the system of equations (\[continuity\]) and (\[Gauss\]) we assume the usual boundary conditions for the electric field at the interface between the two media [@LandauElectrodynamicsContinuum] and chemical isolation of the grain, expressed by the requirement of vanishing particle current at $z=0$: $$\label{boundary-chem}\mu c E_z - D\partial_z c=0\,.$$ Note that the latter condition is not crucial for the present model. Transport of charged species in the dielectric medium (intergranular space) and also through the grain boundary may be additionally included. Here, for simplicity, we consider only the redistribution of mobile defects inside the grain. In the initial state, the system is locally neutral, assuming $c(x,z,0)\equiv c_0$, while the electric field ${\bf E}(x,z,0)\equiv{\bf E}^0(\sigma_p|x,z)$ is determined at the beginning solely by the surface charge density (\[face-charge\]) and reads, as was calculated in Ref. [@Genenko-ferro2007], $$\begin{aligned} \label{E0} E^0_x=\frac{\sigma_p}{\pi\varepsilon_0 (\varepsilon_f + \varepsilon_d )} \ln{\left[\frac{\cosh(\pi z/a)+\sin(\pi x/a)} {\cosh(\pi z/a)-\sin(\pi x/a)}\right]} ,\nonumber\\ E^0_z=\frac{2\sigma_p}{\pi\varepsilon_0} \frac{1}{(\varepsilon_f + \varepsilon_d )} \arctan{\left[\frac{\cos(\pi x/a)} {\sinh(\pi z/a)}\right]}.\end{aligned}$$ The latter equations, together with the boundary conditions and Eqs. (\[continuity\]) and (\[Gauss\]), complete the formulation of the problem, which can now be treated numerically.

\[sec:numsolution\]Numerical evaluation of the internal bias field
==================================================================

The numerical solution of the problem is performed using a simple direct Euler scheme described in Ref. [@Genenko-ferro2007]. Due to the periodicity and the bilateral symmetry of the problem, it is sufficient to consider the charge redistribution within the area $0<x<a$ containing a single domain wall at the plane $x=a/2$. Space and time are discretized. 
At every time step, the change in the carrier concentration is computed from the previous values of the concentration and the electric field using Eq. (\[continuity\]). Updated values of the field are then calculated using Eq. (\[Gauss\]). The evaluation is repeated until convergence is achieved. As an example, we now consider the aging process in unpoled $\rm BaTiO_3$ doped with a bivalent acceptor, e.g. Ni, Ca or Mg. For the numerical simulations, the material parameters at a temperature of $45^{\circ} \rm \:C$ are taken from Refs. [@arlt88internal; @jaff71] with the aim of comparing results directly with those of the theory of dipole reorientation. Namely, it is assumed that $P_s=2.7\cdot 10^{-1} \rm \: C/m^2$, $\varepsilon_f = 200$, $a=0.5\rm \: \mu m$. Positively charged oxygen vacancies are assumed as the mobile species, with $q_f$ twice the elementary charge and the mobility $\mu=8.4\cdot 10^{-22} \rm \: m^2/Vs$, implying an activation energy of $E_a=1.1\rm \: eV$ [@waser91bulk; @Raymond1996perovskitechemistry; @DMSmyth2003; @Ohly2006]. Note that, for the dielectric permittivity, the intrinsic lattice value is taken [@jaff71], since the charge migration occurs at the mesoscopic scale of the domain width. The directly measurable macroscopic permittivity may reach much higher values due to the large contribution of the domain walls [@WangGiant2007], which does not apply in the considered problem. Accounting for the anisotropy of crystalline $\rm BaTiO_3$ would result in an enhancement of the field magnitude by the factor $\sqrt{\varepsilon_a/\varepsilon_c}$ and in a reduction of the field penetration depth by the same factor [@Genenko-ferro2007; @Genenko-ferro2008], where $\varepsilon_a=2180$ and $\varepsilon_c=56$ are the principal values of the permittivity tensor [@Zgonik]. Since the electrostatic energy is quadratic in the field, this would entail an increase in the values of the clamping pressure and the internal bias field by the factor $\sqrt{\varepsilon_a/\varepsilon_c} \simeq 6$. 
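The explicit Euler scheme described above can be illustrated by a deliberately reduced, one-dimensional sketch along $z$ (dimensionless units: lengths in units of $a$, time in units of the drift time, concentration in units of $c_0$; the boundary field value $E(0)=-1$ and all grid parameters are illustrative choices, not the actual two-dimensional code of Ref. [@Genenko-ferro2007]):

```python
import numpy as np

# illustrative dimensionless parameters; beta = tau_mu/tau_D weights diffusion
beta, dt, dz, nz, nsteps = 5e-2, 1e-4, 0.02, 100, 2000

c = np.ones(nz)  # initially locally neutral: c = c0 everywhere

def field(c):
    # 1D Gauss law: dE/dz = (c - 1); E(0) = -1 mimics the attracting
    # field of a negatively charged domain face at z = 0
    return -1.0 + np.cumsum(c - 1.0) * dz

for _ in range(nsteps):
    E = field(c)
    Ef = 0.5 * (E[:-1] + E[1:])               # field at interior cell faces
    cf = 0.5 * (c[:-1] + c[1:])               # concentration at faces
    flux = cf * Ef - beta * np.diff(c) / dz   # drift plus diffusion fluxes
    flux = np.concatenate(([0.0], flux, [0.0]))  # zero-flux: chemically isolated grain
    c -= dt * np.diff(flux) / dz              # explicit Euler update of Eq. (continuity)

print(f"c at the face: {c[0]:.2f}, far from it: {c[-1]:.2f}")
```

Carriers drift towards the charged face at $z=0$ and pile up there, while the total charge is conserved by the zero-flux boundaries, mimicking condition (\[boundary-chem\]).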
However, for simplicity and comparability with the results of the dipole reorientation model [@arlt88internal; @Lohkamper1990Gauss], we assume here the above introduced isotropic permittivity $\varepsilon_f$. For the dielectric medium outside the ferroelectric grain we take the same but non-polarized material with $\varepsilon_d=\varepsilon_f$. The system reveals two typical time scales: the drift time $\tau_{\mu}=a/\mu E^0_z \simeq 7.8\cdot 10^6\rm \:s$ and the diffusion time $\tau_D=a^2/D\simeq 2.2\cdot 10^{10}\rm \:s$, so that the ratio $\beta = \tau_{\mu}/\tau_D\simeq 3.6\cdot 10^{-4}$ characterizes the contribution of diffusion to Eq. (\[continuity\]) [@Genenko-ferro2007]. Though very small, this contribution cannot be neglected, because it influences the structure of the space charge zones and provides compatibility with the boundary condition (\[boundary-chem\]). However, because of the very small $\beta$, large concentration gradients arise near the negatively charged face of the domain at $z=0,\,a/2<x<a$, which make the numerical procedure unstable. Since the parameter $\beta$ affects nothing but the thickness of the positively charged layer piled up near the negatively charged domain face [@Genenko-ferro2007], we assume in the computations $\beta=5\cdot 10^{-2}$, keeping in mind that this may lead to errors if the thickness of the space charge zone in front of the positively charged domain face becomes comparable with $\beta a$. Details of the field and concentration profiles and their evolution with time are presented, by way of example, in Ref. [@Genenko-ferro2007] for low dopant concentrations of about $c_0=0.01 \rm \: mol \%$. Here the computations are extended over the region from $c_0=0.01 \rm \: mol \%$ to $1 \rm \: mol \%$. Having calculated the charge density and the electric field, the time dependent forces exerted upon the domain walls can be evaluated. 
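These time scales follow directly from the quoted parameters; a small Python check (a sketch: the field scale is taken as the face value $E^0_z = \sigma_p/\varepsilon_0(\varepsilon_f+\varepsilon_d)$ following Eqs. (\[E0\]), and $T = 318.15\rm \: K$ corresponds to $45^{\circ}\rm \: C$):

```python
# material parameters quoted in the text (BaTiO3 at 45 C, isotropic model)
k   = 1.380649e-23         # Boltzmann constant, J/K
e0  = 8.8541878128e-12     # vacuum permittivity, F/m
P_s = 0.27                 # spontaneous polarization sigma_p, C/m^2
ef  = ed = 200.0           # relative permittivities (eps_d = eps_f)
a   = 0.5e-6               # domain width, m
mu  = 8.4e-22              # oxygen vacancy mobility, m^2/Vs
q_f = 2 * 1.602176634e-19  # doubly charged vacancy, C
T   = 318.15               # absolute temperature, K

E0z    = P_s / (e0 * (ef + ed))  # depolarization field scale at the domain face
D      = mu * k * T / q_f        # Einstein relation
tau_mu = a / (mu * E0z)          # drift time
tau_D  = a**2 / D                # diffusion time
beta   = tau_mu / tau_D          # relative weight of diffusion

print(f"tau_mu = {tau_mu:.1e} s, tau_D = {tau_D:.1e} s, beta = {beta:.1e}")
```

This reproduces the quoted values $\tau_\mu \simeq 7.8\cdot 10^6\rm \: s$, $\tau_D \simeq 2.2\cdot 10^{10}\rm \: s$ and $\beta \simeq 3.6\cdot 10^{-4}$.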
The loss of domain wall mobility, characteristic of aging, results from the relaxation of the energy of the electrostatic depolarization field due to the piling up of charged defects at the charged domain faces. The distribution of the energy of this field, and of the consequent clamping pressure, along the domain wall is very nonuniform, peaking at the domain boundaries [@Genenko-ferro2007]. The average clamping pressure preventing the displacement of the domain wall from the energy minimum, and the corresponding internal bias field, may be estimated as follows. The thermodynamic force exerted upon the domain wall can be defined as the derivative of the energy of the electrostatic field with respect to the domain wall displacement. This derivative may be roughly estimated from the difference between the energy of the initial state of the system displayed in Fig. \[grain\] and that of the fully polarized state achieved by a virtual displacement of the domain walls over the distance $a/2$. The energy of the electrostatic field per one half of the domain length reads $$\label{field_energy}W[{\bf E}]=\frac{1}{2}\varepsilon_0 \varepsilon_f \int_{-\infty}^{L/2} dz \int_0^a dx\, {\bf E}^2 ,$$ while the other half of the domain at $z>L/2$ contributes the same amount of energy for symmetry reasons. In the virgin state of the grain, the depolarization field is represented by Eqs. (\[E0\]), and the energy of this field per one period of the structure is given by the well-known formula [@Kittel1946; @LandauElectrodynamicsContinuum; @Mitsui1953] $$\label{virgin_energy}W[{\bf E}^0(\sigma_p)]= 0.85 \frac{\sigma_p^2 a^2}{4\pi \varepsilon_0 \varepsilon_f}.$$ In the course of aging, the electric field transforms to ${\bf E}={\bf E}^0(\sigma_p)+\Delta {\bf E}$, where $\Delta {\bf E}$ is the contribution to the field due to the charge carrier migration. The aged state of the structure in Fig. \[grain\] serves as the initial state, with the energy $W[{\bf E}]$, in the clamping force calculation. 
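With the parameters of Sec. \[sec:numsolution\], Eq. (\[virgin\_energy\]) can be evaluated numerically (a sketch; the result is the field energy per period of the stripe structure and per unit length in $y$):

```python
import math

e0  = 8.8541878128e-12   # vacuum permittivity, F/m
P_s = 0.27               # sigma_p, C/m^2
ef  = 200.0              # lattice permittivity
a   = 0.5e-6             # domain width, m

# Eq. (virgin_energy): energy of the virgin stripe-domain depolarization field
W0 = 0.85 * P_s**2 * a**2 / (4 * math.pi * e0 * ef)
print(f"W[E0] = {W0:.2e} J/m")
```

This energy scale, of order $10^{-6}\rm \: J/m$, is what the charge migration gradually relaxes during aging.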
Consider now the virtual pairwise displacement of the domain walls towards each other until they meet, which leads to full polarization of the system in the positive $z$ direction. The resulting uniform bound charge at the grain boundaries at $z=0$ and $z=L$ then generates the uniform depolarization field ${\bf E}_d = (0,0,-P_s/\varepsilon_0 \varepsilon_f),\,0<z<L$, so that the total electric field becomes equal to ${\bf E}_f = {\bf E}_d +\Delta {\bf E}$. The energy of the electrostatic field in this state amounts to $W[{\bf E}_f]$. The clamping pressure on the domain walls related to aging is provided by the time-dependent part of the energy difference $W[{\bf E}_f]-W[{\bf E}]$, namely $$\label{clamping-energy}\Delta W_{cl}=\varepsilon_0 \varepsilon_f \int_{-\infty}^{L/2} dz \int_0^a dx \left( {\bf E}_d -{\bf E}^0 \right)\Delta {\bf E} .$$ When calculating the energy gain near the domain boundary at $z=0$, the integration limit $L/2$ can be extended to infinity because of the exponentially fast decrease of the fields ${\bf E}^0$ and $\Delta {\bf E}$ in both directions of the $z$ axis. Finally, the internal bias field in the $z$ direction can be evaluated by comparing the force $2P_s {\mathcal E} L$ [@Nechaev] exerted upon the domain wall by an external field ${\mathcal E}$ with the clamping force $2\Delta W_{cl}/(a/2)$, accounting now for both domain boundaries. This results in the estimate $$\label{Eib-formula}E_{ib} \simeq \frac{2}{aLP_s} \Delta W_{cl}.$$ The evaluation of the time-dependent field $E_{ib}$, assuming the typical domain wall length $L=20a$, is shown in Fig. \[CoerciveField\] for different dopant concentrations.

![$E_{ib}$ as a function of time for variable acceptor concentration $c_0$ which increases monotonically from the lower to the upper curve (solid lines). 
The dashed line presents the theoretical limit of high $c_0$ (see text).[]{data-label="CoerciveField"}](Fig-mod_2.ps)

All the curves demonstrate saturation of $E_{ib}$ after some characteristic aging time $\tau$ which decreases with growing concentration. The final, equilibrium value of $E_{ib}$ first increases with the concentration but then levels off well below $c_0\simeq 1.0\rm \: mol \%$, as is seen in Fig. \[Eib-c0\].

![Saturated value of $E_{ib}$ and inverse aging time, $\tau^{-1}$, determined from the inflection point of the plots in Fig. \[CoerciveField\], are presented as functions of the acceptor concentration $c_0$ by solid lines. Respective theoretical values derived in the limit of high $c_0$ (see text) are shown by dashed lines.[]{data-label="Eib-c0"}](Fig-mod_3.ps)

The inverse aging time $\tau^{-1}$ rises almost linearly with increasing doping in the whole range of concentrations studied. The saturating dependence of $E_{ib}$ on $c_0$ is clearly seen in experiments on acceptor doped PZT [@carl78electrical; @takahashi82]; indications of both of the above described $E_{ib}$ and $\tau$ dependencies are observable in acceptor doped barium titanate too [@arlt88internal]. The saturation of $E_{ib}$, as well as the decrease of the aging time with growing concentration $c_0$, has a simple physical reason. The depth $\Delta$ of the space charge area emerging near the positively charged domain boundary is defined by the amount of carriers needed to fully compensate the surface bound charge and is of the order of $\sigma_p /(q_f c_0)$. For the concentration $c_0 = 1 \rm \: mol\%$ it is about $5.4 \rm \: nm$, which is two orders of magnitude smaller than the domain width $a$. 
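The estimate $\Delta \sim \sigma_p/(q_f c_0)$ is easily reproduced; in the sketch below the acceptor number density is derived from an assumed $\rm BaTiO_3$ lattice constant of about $4\,\rm \AA$ (this unit-cell density is an assumption made for illustration, not quoted in the text):

```python
P_s    = 0.27                  # sigma_p, C/m^2
q_f    = 2 * 1.602176634e-19   # doubly charged oxygen vacancy, C
n_cell = 1.0 / (4.0e-10)**3    # BaTiO3 unit cells per m^3 (assumed lattice constant)

for mol_percent in (0.01, 0.1, 1.0):
    c0 = n_cell * mol_percent / 100.0     # vacancy concentration, m^-3
    delta = P_s / (q_f * c0)              # space charge depth
    print(f"{mol_percent} mol%: Delta = {delta * 1e9:.1f} nm")
```

At $1\rm \: mol\%$ this gives the quoted $\Delta \approx 5.4\rm \: nm \ll a$, while at $0.01\rm \: mol\%$ the zone grows towards the domain width, consistent with the extended space charge zones found at low concentrations.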
This means that, at such high concentrations, the electrostatic depolarization field is compensated virtually in the whole specimen volume due to the charge migration, so that the maximum possible energy gain and, consequently, the maximum magnitude of $E_{ib}$ is achieved, which is concentration independent. At smaller concentrations, especially below $c_0 = 0.1 \rm \: mol\%$, an extended space charge zone exists near the positively charged domain face where the depolarization field is still present [@Genenko-ferro2007]. This makes the field suppression incomplete, so that the energy gain and, consequently, $E_{ib}$ is not maximal and increases with the concentration. The characteristic time of aging is determined, in turn, by the distance $\Delta$ the charge carriers have to cover and is obviously proportional to $\Delta/\mu E^0_z \sim 1/c_0$. The delineated dependencies of both $E_{ib}$ and $\tau$ are clearly seen in Figs. \[CoerciveField\] and \[Eib-c0\].

\[sec:ansolution\]Analytical solution for medium and high defect concentrations
===============================================================================

If the vacancy concentration is high enough in the sense of the charge compensation mechanism discussed in the previous section, the problem may be substantially simplified by neglecting the diffusion contribution, relevant only within the thin space charge zone. Then Eq. (\[continuity\]), considered in the drift approximation, reads $$\label{drift}\partial_t c=-\nabla(\mu c {\bf E}).$$ The boundary condition (\[boundary-chem\]), where drift was balanced by diffusion, does not apply anymore. To keep the charge balance, this condition is replaced by a requirement on the surface charge density $\sigma(x,t)$, which now comprises the space charge zones and can be obtained by integration of Eq. 
(\[drift\]) over an infinitesimal region near the boundary: $$\label{sigma_t}\partial_t \sigma(x,t)=- q_f\mu c(x,+0,t)E_z(x,+0,t).$$ This assumption implies that the defect concentration remains constant in the bulk, resulting in the ansatz $$\label{c_ansatz}c(x,z,t)=c_0+\delta(z) \sigma (x,t).$$ Since $E_z(x,+0,t)=\sigma (x,t)/\varepsilon_0(\varepsilon_f + \varepsilon_d )$, Eq. (\[sigma\_t\]) leads to an equation for the surface charge density $$\label{sigma}\partial_t \sigma(x,t)=-\sigma(x,t)/\tau_r$$ with the Maxwell relaxation time $\tau_r =\varepsilon_0 (\varepsilon_f + \varepsilon_d )/\kappa$, where $\kappa=q_f\mu c_0$ is the ionic conductivity of the bulk material. The obvious initial condition for $\sigma(x,t)$ reads $\sigma(x,0)=\sigma_p\, \text{sign} (a/2-x)$, which provides the solution $\sigma(x,t)=\sigma_s (t)\, \text{sign} (a/2-x) $ with $\sigma_s(t)=\sigma_p \exp{(-t/\tau_r )}$. The asymptotic condition $\sigma_s(t\rightarrow \infty )=0$ means full compensation of the bound charge. With the ansatz (\[c\_ansatz\]), Eq. (\[Gauss\]) becomes homogeneous and is satisfied by the function ${\bf E}^0(\sigma_s(t)|x,z)$. The drift equation (\[drift\]) is then automatically satisfied in the bulk. Consequently, the part of the field due to the charge migration equals $\Delta {\bf E}=-{\bf E}^0(\sigma_p) [ 1 - \exp{(-t/\tau_r) }]$. Substituting $\Delta {\bf E}$ into Eq. (\[clamping-energy\]) one can note that, for symmetry reasons, the first term in the brackets does not contribute to the energy. Finally, an explicit formula for the internal bias field follows from Eq. (\[Eib-formula\]): $$\label{Eib-exact}E_{ib}(t)\simeq \frac{0.85}{\pi} \frac{a}{L} \frac{P_s}{\varepsilon_0\varepsilon_f} \left[ 1 - \exp{(-t/\tau_r) } \right].$$ Note that, neglecting the thickness of the space charge zone, the saturated (asymptotic) magnitude of $E_{ib}$ attains the concentration-independent maximum value determined by the electrostatic energy of the stripe domain structure (\[virgin\_energy\]). 
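Eq. (\[Eib-exact\]) can be evaluated directly; a sketch with the parameters of Sec. \[sec:numsolution\], $a/L = 0.05$, and a 1 mol% vacancy density derived from an assumed $4\,\rm \AA$ lattice constant (the number density is an illustrative assumption):

```python
import math

e0  = 8.8541878128e-12     # vacuum permittivity, F/m
P_s = 0.27                 # C/m^2
ef  = ed = 200.0           # relative permittivities
mu  = 8.4e-22              # vacancy mobility, m^2/Vs
q_f = 2 * 1.602176634e-19  # doubly charged vacancy, C
a_over_L = 0.05
c0  = 0.01 / (4.0e-10)**3  # 1 mol% of unit cells (assumed density), m^-3

kappa   = q_f * mu * c0                 # ionic conductivity of the bulk
tau_r   = e0 * (ef + ed) / kappa        # Maxwell relaxation time, s
Eib_sat = 0.85 / math.pi * a_over_L * P_s / (e0 * ef)  # saturated bias field, V/m

def E_ib(t):
    """Internal bias field of Eq. (Eib-exact), V/m."""
    return Eib_sat * (1.0 - math.exp(-t / tau_r))

print(f"tau_r = {tau_r:.1e} s, saturated E_ib = {Eib_sat / 1e6:.1f} kV/mm")
```

The resulting saturated value of about $2\rm \: kV/mm$ lies somewhat above the experimental scale of $1\rm \: kV/mm$ mentioned in the introduction, in line with the role of $a/L$ as the only fitting parameter.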
The aging time $\tau$ is represented by the Maxwell relaxation time $\tau_r$, which is, as expected, proportional to $1/c_0$. The dependence (\[Eib-exact\]) is shown, as an example, for the concentration $c_0 = 1 \rm \: mol\%$ in Fig. \[CoerciveField\] (dashed line). It describes well the increase of the corresponding computed curve for $E_{ib}$ at times below the aging time but levels off at a magnitude approximately $15\%$ larger than the computed value. We interpret this difference as a result of the growing numerical error at high concentrations $c_0$, when the thickness of the space charge zone becomes of the order of $\beta a$, as discussed in Sec. \[sec:numsolution\]. The Maxwell relaxation time (dashed line) describes well the aging time at all concentrations considered, as is shown in Fig. \[Eib-c0\].

![$E_{ib}$ as a function of time for activation energy $E_a = 1.1\rm \: eV$, acceptor concentration $c_0 = 1 \rm \: mol \%$ and variable temperature as indicated in the plot (solid lines). $E_{ib}$ averaged over the Gaussian distribution of the random energies $E_a$ (dashed lines).[]{data-label="Eib-T"}](Fig-mod_4.ps)

The field $E_{ib}$ is temperature dependent through the parameters $\varepsilon_f$ and $\tau_r$ of the formula (\[Eib-exact\]). $\varepsilon_f$ changes strongly when the temperature increases towards the ferroelectric transition, which is taken into account as in Ref. [@arlt88internal]. The dependence of $\tau_r$ is a more subtle question since, in the considered temperature range, the directly measured total conductivity can be dominated by electron holes [@DMSmyth2003; @Guo2005conductivity; @Ohly2006] because their mobility is much higher than the ionic one. However, the density of electronic carriers remains, as was discussed in Sec. III, much lower than that of oxygen vacancies and cannot contribute much to the bound charge compensation. 
That is why in the evaluation of $\tau_r$ only the ionic conductivity $\kappa(T)=\kappa_0 \exp{(-E_a/kT)}$ is included, with the activation energy $E_a=1.1\rm \: eV$ and $\kappa_0=112.8 \rm \: (\Omega\cdot cm)^{-1}$ for $c_0 = 1 \rm \: mol \%$ [@DMSmyth2003]. For these $c_0$ and $E_a$, the time dependencies of $E_{ib}$ at different temperatures are presented in Fig. \[Eib-T\] by solid lines. In fact, the activation energy for oxygen vacancies is not known accurately and is usually estimated as $E_a= (1\pm 0.1)\rm \: eV$ [@waser91bulk; @Ohly2006]. Assuming, as in Ref. [@Lohkamper1990Gauss], that $E_a$ is a random variable with a Gaussian distribution $\sim \exp[-(E_a-\bar E)^2/2s^2]$, using $\bar E=1\rm \: eV$ and $s/\bar E= 0.1$, $E_{ib}$ can be averaged over this distribution [@Lohkamper1990Gauss; @Tagantsev2002stretche], resulting in the quasilogarithmic time dependencies shown by the dashed lines for different temperatures in Fig. \[Eib-T\]. The theoretical curves in Fig. \[Eib-T\] satisfactorily reproduce the experimental time and temperature dependencies of $E_{ib}$ [@arlt88internal; @Lohkamper1990Gauss]. The somewhat overestimated value of $E_{ib}$ could be adjusted via the factor $a/L$, which is, in fact, the only fitting parameter in this theory. In the performed calculations it was taken equal to $0.05$, but it is, in fact, slightly dependent on the grain size and lies between $0.02$ and $0.05$ [@HoffmannActa2001]. The parameter $a/L$ may also be used to account for the fact that not every domain array is pinned by the local charges. Domain arrays which perfectly match other domain arrays, compensating the bound charges, are stiffly coupled to each other. This means that for $L$ some effective domain length $L_{eff}$ can be taken, which may exceed $L$ a few times but is hardly larger than the grain size. The curves in Fig. 
\[Eib-T\] resemble those in the theory of dipole reorientation [@arlt88internal; @Lohkamper1990Gauss], though the mechanisms involved are completely different, which reveals itself in the different dependencies on the doping concentration. Note that the space charge mechanism of domain pattern fixation near domain boundaries described here does not at all preclude the dipole reorientation mechanism, which can still be valid in the bulk of the grain.

\[sec:conclusions\]Discussion and Conclusions
=============================================

A two-dimensional model of depolarization field induced charge migration has been presented which plausibly explains the time, temperature and doping dependencies of the internal bias field in aging ferroelectrics. The saturation of this field, as well as the decrease of the aging time with increasing dopant concentration, is in agreement with experimental observations [@carl78electrical; @takahashi82; @arlt88internal]. This is in contrast to the theory of defect dipole reorientation [@arlt88internal; @Lohkamper1990Gauss], based on the picture of individual cage motion of oxygen vacancies, which predicts an internal bias field proportional to, and an aging time independent of, the dopant concentration. A reasonable question arises as to whether accounting for the interaction between defect dipoles could modify the dipole reorientation theory so that it explains the concentration dependence properly. This possibility should indeed be carefully studied. It is known, at least, that at high vacancy concentrations substantial structural changes in the subsystem of vacancies may occur [@Steinsvik1997]. This happens, however, at about $c_0=7\rm \: mol\%$, while substantial deviations from the linear dependence of $E_{ib}$ on concentration become apparent already at $c_0=0.1 \rm \: mol\%$ [@carl78electrical; @takahashi82]. Note that doping below $1 \rm \: mol \%$ is usually considered as very dilute [@Scott2000ordering]. 
On the other hand, one should take into account possible chemical restrictions on the solubility of certain acceptors in the bulk of the grains [@Hans2008]. In any case, a concentration of about $1 \rm \: mol \%$ is high enough in the sense of the space charge mechanism of polarization screening presented here, which allows for good agreement with experiment [@carl78electrical; @takahashi82; @arlt88internal]. The proposed two-dimensional model of charge migration can be extended to include further features and mechanisms which may influence aging. Note that changing the $180^{\circ}$ domain walls to $90^{\circ}$-walls in the sketch presented in Fig. \[grain\] does not entail a strong modification of the results and reduces solely to the substitution of $\sigma_p = |\mathbf{P}_s|$ by $\sigma_p = |\mathbf{P}_s|/\sqrt{2}$. A more important task is to take into account the possible change of the domain pattern during the charge migration. The metastable domain structure results from the compromise between the energy of the electrostatic depolarization field and the energy of the domain walls [@Kittel1946; @Mitsui1953]. The relaxation of the electrostatic energy can trigger a change of the domain pattern, including possible creation or disappearance of domain walls at the grain boundaries. This process, in turn, involves the energy contribution of the mechanical stresses, which has not been considered in this model as yet. One more idealization of the suggested model is the abrupt change of the polarization at the domain boundaries. Considering the spatial distribution of the polarization within the Ginzburg-Landau approach reveals that the characteristic scale over which the polarization gradually changes may become comparable with the domain width, especially when the temperature is not far from the temperature of the phase transition into the paraelectric state [@Lukyanchuk2008]. 
Including the ferroelastic interaction in the Ginzburg-Landau description may also substantially change the form of the domains, giving rise to the well-known needle-shaped domains [@Salje1996]. Nevertheless, all these additional features do not preclude the appearance of strong local depolarization fields, which constitute the crucial element of the present model of aging due to charged defect migration. A self-consistent analysis of the system evolution with many additional variables presents a very challenging task which may be addressed in the future. At the present stage, only electrostatic contributions have been considered, which allows, however, a comparison with the theory of defect dipole reorientation, where only electrostatic contributions were included, too.

\[sec:acknowledgement\]Acknowledgements
=======================================

Discussions with Nina Balke, Ruediger Eichel, Edwin Garcia, Xin Guo, Hans Kungl, Igor Lukyanchuk, Doru Lupascu, Maxim Morozov, Ralf Mueller, Igor Pronin, Hermann Rauh, Jurgen Roedel, Don Smyth, Alexander Tagantsev, Reiner Waser and Vadim Kirillovich Yarmarkin are gratefully acknowledged. This work was supported by the Deutsche Forschungsgemeinschaft through the Sonderforschungsbereich 595.

[99]{} K. W. Plessner, Proc. Phys. Soc. B **69**, 1261 (1956). S. Ikegami and I. Ueda, J. Phys. Soc. Jpn. **22**, 725 (1967). M. Takahashi, Jpn. J. Appl. Phys. **9**, 1236 (1970). H. Thomann, Ferroelectrics **4**, 141 (1972). K. Carl and K.H. Härdtl, Ferroelectrics **17**, 473 (1978). M. Takahashi, Ferroelectrics **41**, 143 (1982). G. Arlt and H. Neumann, Ferroelectrics **87**, 109 (1988). R. Lohkämper, H. Neumann, and G. Arlt, J. Appl. Phys. **68**, 4220 (1990). W.L. Warren, D. Dimos, B.A. Tuttle, G.E. Pike, R.W. Schwartz, P.J. Clews, and D.C. McIntyre, J. Appl. Phys. **77**, 6695 (1995). V.P. Afanasjev, A.A. Petrov, I.P. Pronin, E.A. Tarakanov, E.Ju. Kaptelov and J. Graul, J. Phys.: Condens. Matter **13**, 8755 (2001). L.X. Zhang and X. 
Ren, Phys. Rev. B **71**, 174108 (2005). L.X. Zhang and X. Ren, Phys. Rev. B **73**, 094121 (2006). M.I. Morozov and D. Damjanovic, J. Appl. Phys. **104**, 034107 (2008). A.K. Tagantsev, I. Stolichnov, E.L. Colla, and N. Setter, J. Appl. Phys. **90**, 1387 (2001). M. Dawber, K.M. Rabe, and J.F. Scott, Rev. Mod. Phys. **77**, 1083 (2005). H.-J. Hagemann, J. Phys. C: Solid State Phys. **11**, 3333 (1978). P.V. Lambeck and G.H. Jonker, J. Phys. Chem. Solids **47**, 453 (1986). J.F. Scott, B. Pouligny, K. Dimmler, M. Parris, D. Butler, and S. Eaton, J. Appl. Phys. **62**, 4510 (1987). A.L. Kholkin, K.G. Brooks, D.V. Taylor, S. Hiboux, N. Setter, Integrated Ferroelectr. **22**, 525 (1998). D.C. Lupascu, Y.A. Genenko, and N. Balke, J. Am. Ceram. Soc. **89**, 224 (2006). Y.A. Genenko and D.C. Lupascu, Phys. Rev. B **75**, 184107 (2007); Phys. Rev. B **76**, 149907(E) (2007). Y.A. Genenko, N. Balke and D.C. Lupascu, Ferroelectrics **370**, 196 (2008). G.M. Choi, H.L. Tuller, and D. Goldschmidt, Phys. Rev. B **34**, 6972 (1986). R.M. Waser, J. Am. Ceram. Soc. **74**, 1934 (1991). C.J. Brennan, Integr. Ferroelectrics **7**, 93 (1995). M.V. Raymond and D.M. Smyth, J. Phys. Chem. Solids **57**, 1507 (1996). D.M. Smyth, J. Electroceramics **11**, 89 (2003). X. Guo, C. Pithan, C. Ohly, C.L. Jia, and R. Waser, Appl. Phys. Lett. **86**, 082110 (2005). C. Ohly, S. Hoffmann-Eifert, J. Schubert, and R. Waser, J. Am. Ceram. Soc. **89**, 2845 (2006). P.M. Jones, D.E. Gallardo and S. Dunn, Chem. Mat. **20**, 5901 (2008). A. Schilling, R.M. Bowman, J.M. Gregg, G. Catalan and J.F. Scott, Appl. Phys. Lett. **89**, 212902 (2006). L.A. Schmitt, K.A. Schonau, R. Theissmann, H. Fuess, H. Kungl and M.J. Hoffmann, J. Appl. Phys. **101**, 074107 (2007). M.U. Farooq, R. Villaurrutia, I. Maclaren, H. Kungl, M.J. Hoffmann, J.J. Fundenberger, and E. Bouzy, J. Microscopy **230**, 445 (2008). C. Kittel, Phys. Rev. **70**, 965 (1946). L.D. Landau and E.M. 
Lifshitz, [*Electrodynamics of Continuous Media*]{} (Pergamon, Oxford, 1963). T. Mitsui and J. Furuichi, Phys. Rev. **90**, 193 (1953). S.M. Sze, [*Physics of Semiconductor Devices*]{} (John Wiley & Sons, Inc., New York 1969). B. Jaffe, W.R. Cook, Jr., and H. Jaffe, [*Piezoelectric Ceramics*]{} (Academic Press, Marietta, OH, 1971). Y.L. Wang, A.K. Tagantsev, D. Damjanovic, and N. Setter, Appl. Phys. Lett. **91**, 062905 (2007). M. Zgonik, P. Bernasconi, M. Duelli, R. Schlesser, P. Gunter, M.H. Garrett, D. Rytz, Y. Zhu, and X. Wu, Phys. Rev. B **50**, 5941 (1994). N. Nechaev and A.M. Roschupkin, Ferroelectrics **90**, 29 (1989). A.K. Tagantsev, I. Stolichnov, N. Setter, J.S. Cross, and M. Tsukada, Phys. Rev. B **66**, 214109 (2002). M.J. Hoffmann, M. Hammer, A. Endriss, and D.C. Lupascu, Acta Mat. **49**, 1301 (2001). S. Steinsvik, R. Bugge, J. Gjonnes, J. Tafto, T. Nordby, J. Phys. Chem. Solids **58**, 969 (1997). J.F. Scott and M. Dawber, Appl. Phys. Lett. **76**, 3801 (2000). H. Kungl, private communication. I.A. Lukyanchuk, Talk at the XI Electroceramics Conference, Manchester, 31 August - 4 September 2008. E.K.H. Salje and Y. Ishibashi, J. Phys.: Condens. Matter **8**, 8477 (1996).
--- abstract: 'The implications of the pion (meson) degrees of freedom for baryon properties, taken at the constituent quark and baryon levels, are compared. It is shown that there is a dramatic qualitative difference between the two approaches.' address: - 'Institute for Theoretical Physics, University of Graz, Universitätsplatz 5, A-8010 Graz, Austria' - 'Department of Physics, University of Helsinki, Helsinki, Finland' author: - 'L. Ya. Glozman, D. O. Riska' title: Pion loop fluctuations of constituent quarks and baryons --- The role of the pion degrees of freedom for baryon observables was appreciated in the strong coupling theories of the 1950’s and later formed the basis of the Skyrme model and the chiral bag models. Consideration of the pion loop fluctuations on the baryon level, either within the traditional strong coupling approach or within modern ChPT for baryons, reveals several problems: \(i) The $\pi NN$ coupling constant is large, $g_{\pi NN}^2/4\pi \sim 14$, which indicates that the loop expansion diverges. The expansion within BChPT indicates a lack of convergence as well; \(ii) The full set of intermediate baryon resonances has to be taken into account in the calculation, which consequently involves a large number of parameters; \(iii) No information about the underlying quark structure is built into the loop integrals; \(iv) $m_N \sim \Lambda_\chi$. In the region of spontaneously broken chiral symmetry, where the pion degrees of freedom are important, the structure of the elementary excitations of the QCD vacuum can be approximately reproduced by absorbing the scalar interaction between bare (current) quarks in the QCD vacuum (which is responsible for the chiral symmetry breaking) into the mass of the quasiparticles, i.e. the constituent quarks. Thus in the low-energy regime the proper chiral dynamics is due to the pion-constituent quark coupling. 
In this case: \(i) The $\pi QQ$ coupling constant is relatively small, $g_{\pi QQ}^2/4\pi \sim 0.6-0.7$; \(ii) All possible intermediate baryon resonances are taken into account automatically by considering the $u$ and $d$ quarks in the loop amplitudes; \(iii) The role of the underlying $SU(6)$ quark structure is revealed in relations between different observables; \(iv) $(m_Q /\Lambda_\chi)^2 \sim 0.1$. An example of the conceptual difference between the treatment of the mesonic fluctuations of constituent quarks, on the one hand, and (directly) of baryons, on the other hand, is in order. In the latter case, mesonic loops imply an infinite shift of the baryon mass in the chiral limit, which is balanced by the phenomenologically determined counter-terms. Consideration of the mesonic degrees of freedom in such approaches yields information on the corrections from the finite masses of current quarks, but not on the origin of the nucleon mass nor on the octet-decuplet splitting in the chiral limit. In the chiral constituent quark model the role of the meson degrees of freedom is broader. On the one hand, meson fluctuations contribute to the self-energy of the constituent quarks (loops at the quark level); on the other hand, they imply a strong flavor-dependent interaction between the constituent quarks (meson exchange interaction between different quarks), which yields an octet-decuplet splitting in the chiral limit [@G1]. The pion-exchange interaction contains both an ultraviolet (short-range) part, which is independent of the pion mass, and an infrared (Yukawa) part. The latter is important for the long-range nuclear force, but produces no significant effect in the baryon because of its small matter radius. The short-range part of the pion-exchange interaction produces a flavor-spin dependent force between quarks and has a range $\Lambda_\chi^{-1}$. 
While the infrared (Yukawa) part of the interaction vanishes in the chiral limit, the ultraviolet part does not. This means that in some sense the short-range part of the pion-exchange interaction is “more fundamental” than its Yukawa part. Note that this short-range interaction stems from the $\gamma_5$ structure of the pion-quark vertex (i.e. exclusively from the quantum numbers of the pion) and hence is demanded by Lorentz invariance. This short-range part of the interaction, combined with the $SU(6)$ structure of the zero-order baryon wave functions (which is demanded by the large $N_c$ limit in QCD), provides a basis for the explanation of the low-lying baryon spectrum [@SPECTRUM]. It should be noted that this short-range interaction cannot be obtained by considering the pion loops at the baryon level with the ground $N$ and $\Delta$ states as the intermediate states within the loop [@G2]. At the baryon level one needs to consider the whole infinite tower of radially excited states within the loop in order to recover this short-range meson-exchange interaction. This implies that the meson-baryon or quark models which employ only the subspace of the $\pi N$ and $\pi \Delta$ states are very incomplete and therefore do not take into account the most important short-range effects of the pion (meson) degrees of freedom for baryon structure. As another example, consider the different role of the pion cloud for magnetic moments in the two approaches. In hadronic models the pion loops are the only source of the nucleon anomalous magnetic moment. In contrast, within the naive quark model the nucleon magnetic moments are exclusively due to the intrinsic $SU(6)$ quark structure of the baryon. The small pion-quark coupling constant together with the additional small parameter $(m_Q /\Lambda_\chi)^2 \sim 0.1$ guarantee that the loop corrections to the naive quark model predictions (i.e. loops at the constituent quark level) are not large [@GR]. 
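The naive $SU(6)$ quark-model numbers invoked in this comparison follow from a two-line computation. Below is a small sketch added for illustration (not part of the original text); the empirical moments quoted in the final comment are assumed PDG values, not taken from the paper.

```python
# Naive SU(6) quark-model magnetic moments: mu_p = (4 mu_u - mu_d)/3,
# mu_n = (4 mu_d - mu_u)/3, with mu_u = -2 mu_d for equal u and d masses
# (mu_q is proportional to e_q / m_q, with e_u = +2/3 and e_d = -1/3).
mu_d = -1.0                    # arbitrary units; only the ratio matters
mu_u = -2.0 * mu_d
mu_p = (4 * mu_u - mu_d) / 3
mu_n = (4 * mu_d - mu_u) / 3
print(mu_n / mu_p)             # -2/3, about -0.667

# Empirical moments in nuclear magnetons (assumed PDG values, not from the text)
ratio_exp = -1.9130 / 2.7928   # about -0.685
print(abs(mu_n / mu_p) / abs(ratio_exp) - 1)   # a few percent below the empirical ratio
```

The naive ratio $-2/3$ thus sits a few percent below the empirical value, which is the margin the loop corrections discussed here have to cover.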
For the absolute values of the proton and neutron anomalous magnetic moments one finds contributions at the level of 5-10%. More importantly, the ratio $\mu_n/\mu_p$ comes out only 2% above the empirical ratio 0.68, while the naive quark model prediction, $2/3$, is 2% below it. The pion loop corrections supply only a tiny contribution to the neutron Dirac charge radius [@GR]. They therefore do not perturb the satisfactory description of the negative mean square radius of the neutron, which is implied by the empirical value of the neutron magnetic moment (Foldy term). The kaon loop fluctuations of the valence $u$ and $d$ constituent quarks induce only a very small strangeness magnetic moment of the proton [@HRG], which lies in the range $-0.01$ to $-0.05$ and is consistent with the most recent data from the SAMPLE experiment, reported at this conference. This is again in contrast with the large negative strange magnetic moment that is implied by the kaon loop fluctuations considered at the baryon level. [9]{} L. Ya. Glozman, Phys. Lett. [**B459**]{} (1999) 589. L. Ya. Glozman and D. O. Riska, Phys. Rep. [**268**]{} (1996) 263; L. Ya. Glozman, W. Plessas, K. Varga, R. F. Wagenbrunn, Phys. Rev. [**D58**]{} (1998) 094030. L. Ya. Glozman, Phys. Lett. B - in print (hep-ph/0004229). L. Ya. Glozman and D. O. Riska, Phys. Lett. [**B459**]{} (1999) 49. L. Hannelius, D. O. Riska and L. Ya. Glozman, Nucl. Phys. [**A665**]{} (2000) 353.
--- abstract: 'We propose a resolution for the fermion doubling problem in discrete field theories based on the fuzzy sphere and its Cartesian products.' --- SU-4240-712\ imsc 99/10/36\ **The Fermion Doubling Problem and Noncommutative Geometry** A. P. Balachandran$^{*}$, T. R. Govindarajan$^{+}$, B. Ydri$^{*}$\ $^{*}$[*Physics Department, Syracuse University,\ Syracuse, N.Y., 13244-1130, U.S.A.*]{}\ $^{+}$[*Institute of Mathematical Sciences,\ Chennai 600 113, India*]{}\ The nonperturbative formulation of chiral gauge theories is a long-standing programme in particle physics. It seems clear that one should regularise these theories with all symmetries intact. There are two major problems associated with conventional lattice approaches to this programme, both with roots in topological features: (1) The well-known Nielsen-Ninomiya theorem [@NN] states that if we want to maintain chiral symmetry, then under plausible assumptions one cannot avoid the doubling of fermions in the usual lattice formulations. (2) It is not straightforward to understand features like anomalies with a naive ultraviolet cut-off. Recently a novel approach to discrete physics has been developed. It works with quantum fields on a “fuzzy space” ${{{\cal M}}}_{F}$ obtained by treating the underlying manifold ${{{\cal M}}}$ as a phase space and quantising it [@madore; @gropre; @grklpr1; @grklpr2; @grklpr3; @watamura1; @watamura2; @frgrre]. The earliest contributions to topological features of the emergent fuzzy physics came from Grosse, Klimčík and Prešnajder [@grklpr1]. They dealt with monopoles and the chiral anomaly for the fuzzy two-sphere $S_F^2$ and took particular advantage of supersymmetry. Later Baez et al. [@bbiv] further elaborated on their monopole work and developed the fuzzy physics of ${\sigma}-$models and their solitons using cyclic cohomology [@connes; @coquereaux]. 
An attractive feature of this cohomological approach is its ability to write analogues of continuum winding number formulae and derive a fuzzy Belavin-Polyakov bound [@bbiv]. In this paper, we propose a solution of the fermion doubling problem for ${\cal M}=S^2$ using fuzzy physics. An alternative approach, also using fuzzy physics, can be found in [@grklpr1], while further developments of our method with applications to instantons and the chiral anomaly are reported in [@bal]. There have also been important developments [@luscher] in the theory of chiral fermions and anomalies in the usual lattice formulations. They avoid fermion doubling by relaxing the requirement that chirality and Dirac operators anticommute. In contrast, fuzzy physics needs no such relaxation. Quantisable adjoint orbits of compact semi-simple Lie groups seem amenable to the full fuzzy treatment and lead to manageable finite dimensional matrix models for quantum fields. There are two such manifolds in dimension four, namely $S^2{\times}S^2$ and ${\mathbb C}{\mathbb P}^2$. Our methods readily extend to $S^2{\times}S^2$, nor are they anticipated to encounter obstructions for ${\mathbb C}{\mathbb P}^2$. But we have not yet fully worked out its noncommutative geometry. The published work of Grosse and Strohmaier [@grostr] on $ {\mathbb C}{\mathbb P}^2 $ gives their description of fuzzy $4d$ fermions. A sphere $S^2$ is a submanifold of ${\mathbb R}^3$: $$S^2=\{ \vec{n} \in {\mathbb R}^3: \sum_{i=1}^3 n_i^2=1 \}.$$ If $\hat{n}_i$ are the coordinate functions on $S^2$, $\hat{n}_i(\vec{n}) = n_i$, then $\hat{n}_i$ commute and the algebra ${\cal A}$ of smooth functions they generate is commutative. 
In contrast, the operators $x_i$ describing $S_F^2$ are noncommutative: $$[x_i, x_j] = \frac{i \epsilon_{ijk} x_k}{[l(l+1)]^{1/2}}, \quad \sum_{i=1}^3 x_i^2 = {\bf 1}, \quad l \in \{\frac{1}{2}, 1, \frac{3}{2}, \ldots \}.$$ In the limit $l \rightarrow \infty$, the $x_i$ commute and become $\hat{n}_i$. If $L_i = [l(l+1)]^{1/2}x_i$, then $[L_i, L_j] = i \epsilon_{ijk}L_k$ and $\sum L_i^2 = l(l+1)$, so that $L_i$ give the irreducible representation (IRR) of the $SU(2)$ Lie algebra for angular momentum $l$. $L_i$ or $x_i$ generate the algebra $A=M_{2l+1}$ of $(2l+1) \times (2l+1)$ matrices. Scalar wave functions on $S^2$ come from elements of ${\cal A}$. In a similar way, elements of $A$ assume the role of scalar wave functions on $S_F^2$. A scalar product on $A$ is $\langle \xi, \eta \rangle = Tr {\xi}^{\dagger} \eta$. $A$ acts on this Hilbert space by left and right multiplications, giving rise to the left- and right-regular representations $A^{L,R}$ of $A$. For each $a \in A$, we thus have operators $a^{L, R} \in A^{L,R}$ acting on $\xi \in A$ according to $a^L \xi = a \xi, a^R \xi = \xi a$. \[Note that $a^L b^L = (ab)^L $ while $a^R b^R = (ba)^R$.\] We assume by convention that elements of $A^L$ are to be identified with fuzzy versions of functions on $S^2$. Note that there are two kinds of angular momentum operators $L_i^{L}$ and $-L_i^{R} $ for $S_F^2$. The orbital angular momentum operator, which should annihilate ${\bf 1}$, is ${\cal L}_i = L_i^L - L_i^R$. $\vec{\cal L}$ plays the role of the continuum $-i(\vec{r} \times \vec{\nabla})$. The construction of the Dirac operator is of crucial importance for fuzzy physics. The following two Dirac operators on $S^2$ have occurred in the fuzzy literature: $$\begin{aligned} {\cal D}_1 &=& \vec{\sigma}. 
[-i(\vec{r} \times \vec{\nabla})] + {\bf 1}, \\ {\cal D}_2 &=& \epsilon_{ijk}\sigma_i \hat{n}_j {\cal J}_k,\end{aligned}$$ where $${\cal J}_k = [-i(\vec{r} \times \vec{\nabla})]_k + \sigma_k/2 = \text{total angular momentum operators}.$$ There is a common chirality operator $\Gamma$ anticommuting with both: $$\begin{aligned} \Gamma = \vec{\sigma}.\hat{n} &=& \Gamma^{\dagger}, \quad \Gamma^2 = {\bf 1},\label{g2} \\ \Gamma {\cal D}_\alpha + {\cal D}_\alpha \Gamma &=&0.\end{aligned}$$ These Dirac operators [*in the continuum*]{} are unitarily equivalent, $$\begin{aligned} {\cal D}_2& =& \exp{(i \pi \Gamma/4)} {\cal D}_1 \exp{(-i \pi \Gamma/4)} \label{gamma}\nonumber\\ &=&i{\Gamma}{\cal D}_1,\nonumber\\\end{aligned}$$ and have the spectrum $$\text{Spectrum of}\,\,{\cal D}_\alpha = \{ \pm (j+1/2): j \in \{1/2, 3/2, \ldots \} \},$$ where $j$ is total angular momentum (spectrum of $\vec{\cal J}^2 = \{j(j+1)\}$). Since the $|{\cal D}_{\alpha}|$ ($\equiv$ the positive square root of ${\cal D}_{\alpha}^2$) for both ${\alpha}$ share the same spectrum and eigenvectors by $(9)$ and rotational invariance, $|{\cal D}_{1}|=|{\cal D}_{2}|$. Further, being multiples of unity for each fixed $j$, they commute with the rotationally invariant ${\Gamma}$. As they are invertible too, we have the important identity $$\Gamma~=~i{\frac{{\cal D}_1}{|{\cal D}_1|}}{\frac{{\cal D}_2} {|{\cal D}_2|}}.$$ The discrete version of ${\cal D}_1$ is: $$D_1 = \vec{\sigma}. 
\vec{\cal L} + {\bf 1}.$$ Its spectrum is $$\begin{aligned} \text{Spectrum of}\,\,D_1 &=& \left\{ \pm (j+\frac{1}{2}): j \in \{ \frac{1}{2}, \frac{3}{2}, \ldots 2l-\frac{1}{2} \} \right\} \nonumber \\ &\cup& \left\{ (j+\frac{1}{2}): j=2l+\frac{1}{2} \right\}.\label{specd1}\end{aligned}$$ It is easy to derive (\[specd1\]) by writing $$\begin{aligned} D_1&=& \vec{J}^2 - \vec{\cal L}^2 - \left(\frac{\vec{\sigma}}{2}\right)^2 + {\bf 1}, \\ \left(\frac{\vec{\sigma}}{2}\right)^2 &=& \frac{3}{4} {\bf 1},\\ {\cal L}_k +\frac{{\sigma}_k}{2}&=&J_k = ~\text{total} ~\text{angular} ~\text{momentum} ~\text{operators}. \end{aligned}$$ We let $j(j+1)$ denote the eigenvalues of ${\vec{J}}^2$. Then for $\vec{\cal L}^2 = k(k+1), k \in \{0, 1, \ldots 2l \}$, if $j=k+1/2$ we get $+(j+1/2)$ as eigenvalue of $D_1$, while if $j=k-1/2$ we get $-(j+1/2)$. The absence of $-(2l+1/2)$ in (\[specd1\]) is just because $k$ cuts off at $2l$. (The same derivation also works for ${\cal D}_1$.) The discrete version of ${\cal D}_2$ is: $$D_2 = -\epsilon_{ijk}\sigma_i x^L_j J_k = \epsilon_{ijk}\sigma_i x_j^L L_k^R .$$ $D_2$ is no longer unitarily equivalent to $D_1$, its spectrum being [@watamura1; @watamura2] $$\begin{aligned} \text{Spectrum of}\,\,D_2 &=& \left\{ \pm(j+1/2)\left[1 + \frac{1-(j+1/2)^2}{4l(l+1)}\right]^{1/2}: \nonumber \right.\\ &j& \left. \in \{ \frac{1}{2},\frac{3}{2}, \ldots 2l+\frac{1}{2} \} \right\}{\cup}\{0:j=2l+\frac{1}{2}\}. \label{specd2}\end{aligned}$$ The first operator has been used extensively by Grosse et al. [@grklpr1; @grklpr2; @grklpr3] while the second was first introduced by the Watamuras [@watamura1; @watamura2]. It is remarkable that the eigenvalues (\[specd1\]) coincide [*exactly*]{} with those of ${\cal D}_\alpha$ up to $j=(2l-1/2)$. In contrast $D_2$ has zero modes when $j~=~2l~+\frac{1}{2}$ and very small eigenvalues for large values of $j$, both being absent for ${\cal D}_{\alpha}$. So $D_1$ is a better approximation to ${\cal D}_\alpha$. 
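These algebraic statements are easy to check numerically. The following sketch (an illustrative addition, not from the original paper) builds the spin-$l$ representation matrices, verifies the fuzzy sphere relations, and reproduces the spectrum (\[specd1\]) for $l=1$, where every eigenvalue $\pm(j+1/2)$ is paired except the unpaired top value $+3$:

```python
import numpy as np
from collections import Counter

def su2_irrep(l):
    """L1, L2, L3 in the (2l+1)-dimensional irreducible representation of su(2)."""
    dim = int(round(2 * l)) + 1
    m = np.arange(l, -l - 1, -1)                      # m = l, l-1, ..., -l
    Lp = np.zeros((dim, dim), dtype=complex)          # raising operator L+
    for k in range(dim - 1):
        Lp[k, k + 1] = np.sqrt(l * (l + 1) - m[k + 1] * (m[k + 1] + 1))
    return [(Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j,
            np.diag(m).astype(complex)]

l = 1
L = su2_irrep(l)
d = 2 * l + 1

# Fuzzy coordinates x_i = L_i / sqrt(l(l+1)): check the defining relations.
x = [Li / np.sqrt(l * (l + 1)) for Li in L]
assert np.allclose(x[0] @ x[1] - x[1] @ x[0], 1j * x[2] / np.sqrt(l * (l + 1)))
assert np.allclose(sum(xi @ xi for xi in x), np.eye(d))

# D_1 = sigma . (L^L - L^R) + 1 on C^2 (x) M_{2l+1}.  Column-stacking convention:
# a^L xi = a xi  ->  I (x) a,   a^R xi = xi a  ->  a^T (x) I.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
Id = np.eye(d)
calL = [np.kron(Id, Li) - np.kron(Li.T, Id) for Li in L]
D1 = sum(np.kron(s, cl) for s, cl in zip(sigma, calL)) + np.eye(2 * d * d)

spec = Counter(np.round(np.linalg.eigvalsh(D1).real).astype(int))
print(dict(spec))   # {-2: 4, -1: 2, 1: 2, 2: 4, 3: 6}: the top mode +3 has no partner
```

The multiplicities follow the coupling of orbital $k \in \{0, 1, 2\}$ with spin $1/2$; the top angular momentum $j = 2l + 1/2$ contributes only the unpaired eigenvalue $+(2l+1)$.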
But $D_1$ as it stands admits no chirality operator anticommuting with it. This is easy to see as its top eigenvalue does not have its negative in the spectrum. Instead $D_2$ has the nice feature of admitting a chirality operator: the eigenvalue for top $j$, even though it has no pair, is exactly zero. So the best fuzzy Dirac operator has to combine the good properties of $D_1$ and $D_2$. We suggest it to be $D_1$ after projecting out its top $j$ mode. We will show that it then admits a chirality with the correct continuum limit. The chirality operator anticommuting with $D_2$ and squaring to ${\bf 1}$ in the [*entire*]{} Hilbert space is $$\begin{aligned} \gamma_2 &=& \gamma_2^{\dagger} = -\frac{\vec{\sigma}.{\vec{L}}^R -1/2}{l+1/2}, \\ \gamma_2^2 &=& {\bf 1}.\end{aligned}$$ An interpretation of ${\gamma}_2$ is that $(1{\pm}{\gamma}_2)/2$ are projectors to subspaces where $(-\vec{L}^R+\vec{\sigma}/2)^2$ have values $(l{\pm}\frac{1}{2})(l{\pm}\frac{1}{2}+1)$ [@bbiv]. From $(16)$ follows the identity $$[D_1,\gamma_2]=-2i{\lambda}D_2\,; \quad {\lambda}=\sqrt{1-\frac{1}{(2l+1)^2}}.$$ Now $D_{\alpha}^2$ and $|D_{\alpha}|$ ($\equiv$ the nonnegative square root of $D_{\alpha}^2$) are multiples of identity for each fixed [*j*]{}, and ${\gamma}_2$ commutes with ${\vec{J}}$. Hence they mutually commute: $$[A,B]=0 \quad \text{for}\quad A,B=D_{\alpha}^2,\ |D_{\alpha}|\ \text{or}\ {\gamma}_2.$$ Therefore from $(20)$, $$\{D_1,D_2\}=\frac{1}{2i{\lambda}}[D_1^2,{\gamma}_2]=0.$$ In addition we can see that $$[D_{\alpha}^2,D_{\beta}]=[|D_{\alpha}|,D_{\beta}]=0.$$ If $$\begin{aligned} \epsilon_\alpha &=& \frac{D_\alpha}{|D_\alpha|} \quad \text{on the subspace $V$ with}\;j \leq 2l-1/2 , \nonumber \\ &=& 0 \quad \text{on the subspace $W$ with}\;j=2l+1/2,\end{aligned}$$ it follows that $$e_1={\epsilon}_1, \quad e_2={\epsilon}_2, \quad e_3=i{\epsilon}_1{\epsilon}_2$$ generate a Clifford algebra on $V$. 
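The stated properties of $\gamma_2$ can likewise be verified numerically. The sketch below (our illustration, using the same matrix conventions as the previous snippet) checks $\gamma_2^2 = {\bf 1}$, $\{D_2, \gamma_2\} = 0$ and $\{D_1, D_2\} = 0$ for $l=1$, together with the eigenvalues and multiplicities read off from the $D_2$ spectrum (\[specd2\]):

```python
import numpy as np

def su2_irrep(l):
    dim = int(round(2 * l)) + 1
    m = np.arange(l, -l - 1, -1)
    Lp = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):
        Lp[k, k + 1] = np.sqrt(l * (l + 1) - m[k + 1] * (m[k + 1] + 1))
    return [(Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j,
            np.diag(m).astype(complex)]

l = 1
L = su2_irrep(l)
d = 2 * l + 1
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

left  = lambda a: np.kron(np.eye(2), np.kron(np.eye(d), a))    # a^L xi = a xi
right = lambda a: np.kron(np.eye(2), np.kron(a.T, np.eye(d)))  # a^R xi = xi a
S = [np.kron(s, np.eye(d * d)) for s in sigma]
I = np.eye(2 * d * d)

D1 = sum(S[i] @ (left(L[i]) - right(L[i])) for i in range(3)) + I

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
xL = [left(L[j]) / np.sqrt(l * (l + 1)) for j in range(3)]     # x_j^L
D2 = sum(eps[i, j, k] * S[i] @ xL[j] @ right(L[k])
         for i in range(3) for j in range(3) for k in range(3))

gamma2 = -(sum(S[i] @ right(L[i]) for i in range(3)) - I / 2) / (l + 1 / 2)

assert np.allclose(gamma2 @ gamma2, I)               # gamma_2^2 = 1 everywhere
assert np.allclose(D2 @ gamma2 + gamma2 @ D2, 0)     # gamma_2 anticommutes with D_2
assert np.allclose(D1 @ D2 + D2 @ D1, 0)             # {D_1, D_2} = 0

# Spectrum (specd2) at l = 1: +-1 (j=1/2), +-2*sqrt(5/8) (j=3/2), 0 (j=5/2, 6 states)
s32 = 2 * np.sqrt(5 / 8)
expected = np.sort([-s32] * 4 + [-1] * 2 + [0] * 6 + [1] * 2 + [s32] * 4)
assert np.allclose(np.sort(np.linalg.eigvalsh(D2).real), expected)
print("gamma_2 and D_2 checks passed for l =", l)
```

Note how the zero modes of $D_2$ at $j = 2l + 1/2$ show up explicitly as the six vanishing eigenvalues.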
That is, if $P$ is the orthogonal projector on $V$, $$\begin{aligned} P \xi &=& \xi, \quad \xi \in V, \nonumber \\ &=& 0, \quad \xi \in W,\end{aligned}$$ then $$\{e_{\alpha},e_{\beta}\}=2{\delta}_{{\alpha}{\beta}}P.$$ All this allows us to infer that $\{e_3,D_1\}=0$, so that it is a chirality operator for either $D_1$ or its restriction $PD_1P$ to $V$. In addition, [*in view of $(10,25)$, it has the correct continuum limit as well*]{}, so that it is a good choice for chirality in that respect too. A unitary transformation of $e_3$ and $D_1$ will not disturb their nice features. Such a transformation bringing $e_3$ to ${\gamma}_2$ on $V$ is convenient. It can be constructed as follows. $e_{\alpha}$ and ${\gamma}_2$, being rotational scalars, leave the two-dimensional subspaces in $V$ with fixed values of $\vec{J}^2$ and $J_3$ invariant. On this subspace, $e_{\alpha}$ and unity form a basis for linear operators, so ${\gamma}_2$ is their linear combination. As $e_1$, $e_3$ and ${\gamma}_2$ anticommute with $e_2$, and all square to ${\bf 1}$ in this subspace, we infer that ${\gamma}_2$ is a transform by a unitary operator $U=\exp(i{\theta}e_2/2)$ of $e_3$ in each such subspace. And ${\theta}$ can depend only on $\vec{J}^2$ by rotational invariance. Thus we can replace $PD_1P$ and $e_3$ by the new Dirac and chirality operators $$\begin{aligned} D &=& e^{(i \theta (J^2) \epsilon_2)/2} (P D_1 P) e^{(-i \theta (J^2) \epsilon_2)/2},\nonumber\\ {\gamma}:&=&P\gamma_2 P = e^{(i \theta (J^2) \epsilon_2)/2} (i\epsilon_1\epsilon_2) e^{(-i \theta (J^2) \epsilon_2)/2} \nonumber \\ &=& \cos \theta (J^2) (i\epsilon_1\epsilon_2) + \sin \theta (J^2) \epsilon_1. \end{aligned}$$ The coefficients can be determined by taking traces with ${\epsilon}_1$ and $i{\epsilon}_1{\epsilon}_2$. We have established that chiral fermions can be defined on $S^2_F$ with no fermion doubling, at least in the absence of fuzzy monopoles. We next extend this result to include them as well. 
#### *[Monopoles and Instantons]{}:* In the continuum, monopoles and instantons are particular connection fields $\omega$ on certain twisted bundles over the base manifold ${\cal M}$. On $S^2$, they are monopole bundles, on $S^4$ or ${\mathbb C}{\mathbb P}^2$, they can be $SU(2)$ bundles. In algebraic $K$-theory, it is well-known that these bundles are associated with projectors ${\cal P}$ [@connes; @coquereaux; @mssv]. ${\cal P}$ is a matrix of some dimension $M$ with ${\cal P}_{ij} \in {\cal A} \equiv {\cal C}^{\infty}({\cal M})$, ${\cal P}^2 = {\cal P} = {\cal P}^{\dagger}$. The physical meaning of ${\cal P}$ is the following. Let ${\cal A}^M = {\cal A} \otimes {\mathbb C}^M = \{ \xi =(\xi_1, \xi_2 \ldots \xi_M):\xi_i \in {\cal A} \}$. Then ${\cal PA}^M = \{ {\cal P}\xi: \xi \in {\cal A}^M\}$ consists of smooth sections (or wave functions) ${\cal P} \xi$ of a vector bundle over ${\cal M}$. For suitable choices of ${\cal P}$, we get monopole or instanton vector bundles. These projectors are known [@mssv] and those for monopoles have been reproduced in [@bbiv]. The projectors $p^{(\pm N)}$ for fuzzy monopoles of charge $\pm N$ have also been explicitly found in [@bbiv]. They act on $A^{2^N} = \{ \xi$ with components $\xi_{b_1 \ldots b_N} \in A, b_i \in \{1,2\} \}$. Let $\vec{\tau}^{(i)}$ ($i=1, 2, \ldots N$) be commuting Pauli matrices. $\vec{\tau}^{(i)}$ has the normal action on the index $b_i$ and does not affect $b_j$ ($j \neq i$). Then $\vec{K} = \vec{L^L} + \sum_i \vec{\tau}^{(i)}/2$ generates the $SU(2)$ Lie algebra and $p^{(N)}$ ($p^{(-N)}$) is the projector to the maximum (minimum) angular momentum $k_{\text{max}}=l+N/2$ ($k_{\text{min}}=l-N/2$). \[$\vec{K}^2 p^{(N)} = k_{\text{max}}(k_{\text{max}}+1) p^{(N)}$, $\vec{K}^2 p^{(-N)} = k_{\text{min}}(k_{\text{min}}+1) p^{(-N)}$.\] Fuzzy analogues of monopole wave functions are $p^{(\pm N)} A^{2^N}$. 
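A minimal numerical sketch of these projectors (our illustration, not part of the paper): for $N=1$, the operator $\vec{K} = \vec{L}^L + \vec{\tau}^{(1)}/2$ takes only the two values $k_{\text{max}} = l + 1/2$ and $k_{\text{min}} = l - 1/2$, so the projector $p^{(+1)}$ onto $k_{\text{max}}$ is a linear function of $\vec{K}^2$ and its stated properties can be checked directly.

```python
import numpy as np

def su2_irrep(l):
    dim = int(round(2 * l)) + 1
    m = np.arange(l, -l - 1, -1)
    Lp = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):
        Lp[k, k + 1] = np.sqrt(l * (l + 1) - m[k + 1] * (m[k + 1] + 1))
    return [(Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j,
            np.diag(m).astype(complex)]

l, N = 1, 1                      # charge +1 fuzzy monopole
L = su2_irrep(l)
tau = su2_irrep(1 / 2)           # the tau/2 matrices are spin-1/2 generators
d = 2 * l + 1

# K = L^L + tau/2 acts on the left matrix index of xi in A^2;
# represent that action on C^d (x) C^2.
K = [np.kron(Li, np.eye(2)) + np.kron(np.eye(d), ti) for Li, ti in zip(L, tau)]
K2 = sum(Ki @ Ki for Ki in K)

kmax, kmin = l + N / 2, l - N / 2
# Only k = kmax and k = kmin occur, so the spectral projector is linear in K^2.
p = (K2 - kmin * (kmin + 1) * np.eye(2 * d)) / (kmax * (kmax + 1) - kmin * (kmin + 1))

assert np.allclose(p @ p, p)                          # idempotent
assert np.allclose(p.conj().T, p)                     # self-adjoint
assert np.allclose(K2 @ p, kmax * (kmax + 1) * p)     # projects onto k_max
assert round(np.trace(p).real) == 2 * kmax + 1        # rank = 2 k_max + 1
print("rank of p^(+1):", round(np.trace(p).real))
```

The companion projector $p^{(-1)}$ onto $k_{\text{min}}$ is simply ${\bf 1} - p$ in this $N=1$ case; for larger $N$ more values of $k$ occur and the maximal-$k$ projector is a higher-degree polynomial in $\vec{K}^2$.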
When spin is included, we must enhance $p^{(\pm N)} A^{2^N}$ to $p^{(\pm N)} A^{2^N} \otimes {\mathbb C}^2 = p^{(\pm N)} A^{2^{N+1}} = \{\xi$ with components $\xi_{b_1 \ldots b_N, j} \in A: b_i, j \in \{1,2 \} \}$. The complications to be resolved now are caused by the need to project out a subspace of $A^{2^{N+1}}$. It is the analogue of the subspace projected out by $P$ for $N=0$. In its absence, for example in the continuum, there is a canonical way due to Connes [@connes] for extending cyclic cohomology to twisted bundles. In the $N=0$ sector, the projector $P$ cuts out the subspace $W$ of $A^2$. When we pass to $p^{(\pm N)} A^{2^N}$ and thence to $p^{(\pm N)} A^{2^{N+1}}$ by including spin, the subspace to be projected out is [*not*]{} determined by $P$ if $N \neq 0$, as we shall see below. Rather, we can explain it as follows: Let $\vec{J} = \vec{K} - \vec{L^R} + \vec{\sigma}/2$ be the “total angular momentum”. Calling $\vec{J}$ by this name is appropriate as its components become $(15)$ for $N=0$ and displays the known “spin-isospin mixing”[@jacreb] for $N \neq 0$. The maximum of $\vec{J}^2$ on $p^{(\pm N)} A^{2^{N+1}}$ is $J_{\text{max}}(J_{\text{max}} +1), J_{\text{max}} = (l \pm N/2)+l+1/2 =2l \pm N/2 +1/2$. \[We assume that $2l \geq (N-1)/2$.\] The vectors to be projected out are those with total angular momentum $J_{\text{max}}$. If ${\cal J}^{(\pm N)}$ are the corresponding projectors \[with ${\cal J}^{(0)} = P$\], the twisted space we work with is ${\cal J}^{(\pm N)}p^{(\pm N)} A^{2^{N+1}}$. Since $p^{(\pm N)}$ commute with $\vec{J}$ and hence with ${\cal J}^{(\pm N)}$, $Q^{(\pm N)} = {\cal J}^{(\pm N)}p^{(\pm N)}$ are also projectors. There is no degeneracy for angular momentum $J_{\text{max}}$ in $p^{(\pm N)} A^{2^{N+1}}$. That is because there is only one way to couple $l \pm N/2, l$ and $1/2$ to $J_{\text{max}}$. The space $({\bf 1} - {\cal J}^{(\pm N)})p^{(\pm N)} A^{2^{N+1}}$ is thus of dimension $2 J_{\text{max}} + 1$. 
We want to get rid of this subspace. The operators $T = D$ or $\gamma$ are zero on $({\bf 1} - P)A^2$ where $P$ cuts out states of angular momentum $2l+1/2$. There is no degeneracy for this angular momentum in $A^2$. $T$ and $P$ extend canonically to $A^2 \otimes {\mathbb C}^{2^N} (\equiv A^{2^{N+1}})$ as $T \otimes {\bf 1}$ and $P \otimes {\bf 1}$. Let us call them once more as $T$ and $P$. $T$ and $P$ commute with $\vec{J}$ and hence with ${\cal J}^{(\pm N)}$. There is only one way to couple $N$ “isospin” $1/2$’s to $(2l + 1/2)$ to get $J_{\text{max}}$ so that $({\bf 1} - {\cal J}^{(\pm N)})({\bf 1} - P) A^{2^{N+1}}$ is also of dimension $2 J_{\text{max}}+1$. And $T$ is zero on this subspace. The projectors $({\bf 1} - {\cal J}^{(\pm N)})p^{(\pm N)}$ and $({\bf 1} - {\cal J}^{(\pm N)})({\bf 1} - P)$ being of the same rank, there exists a unitary operator $U$ on $A^{2^{N+1}}$ transforming one to the other: $$({\bf 1} - {\cal J}^{(\pm N)})p^{(\pm N)} = U({\bf 1} - {\cal J}^{(\pm N)})({\bf 1} - P)U^{-1}. \label{Udef}$$ If we transport $T$ by $U$, $$T' = UTU^{-1},$$ then $T'=D', F'$ or $\gamma'$ vanishes on $({\bf 1} - {\cal J}^{(\pm N)}) p^{(\pm N)} A^{2^{N+1}}$. On its orthogonal complement $[{\bf 1} - ({\bf 1} - {\cal J}^{(\pm N)})p^{(\pm N)}] A^{2^{N+1}}$, invariant under $T'$, $\gamma'$ squares to unity and anticommutes with $D'$ just as we want. What replaces $P$ now is not ${\cal J}^{(\pm N)}$, but rather $$\begin{aligned} P^{(\pm N)} &=& [{\bf 1}-({\bf 1}-{\cal J}^{(\pm N)})p^{(\pm N)}], \\ P^{(0)} &=& P.\end{aligned}$$ As $l \rightarrow \infty$, $J_{\text{max}}$ becomes dominated by $2l$ and so we have the freedom to let $U$ approach ${\bf 1}$. That is, no $U$ is needed in the continuum limit. Total angular momentum $2l+1/2+N/2$ has no multiplicity in $A^{2^{N+1}}$. 
As both $({\bf 1} - {\cal J}^{(N)})({\bf 1} - P) A^{2^{N+1}}$ and $({\bf 1} - {\cal J}^{(N)})p^{(N)} A^{2^{N+1}}$ have this angular momentum, we have that $$({\bf 1} - {\cal J}^{(N)})({\bf 1} - P) A^{2^{N+1}} = ({\bf 1} - {\cal J}^{(N)})p^{(N)} A^{2^{N+1}}. \label{same}$$ So we choose $$U={\bf 1} \quad \text{on} \quad ({\bf 1} - {\cal J}^{(N)})({\bf 1}-P) A^{2^{N+1}}. \label{Uchoice}$$ Next in accordance with (\[Udef\]), we set $$U({\bf 1} - {\cal J}^{(-N)})({\bf 1} - P) A^{2^{N+1}} = ({\bf 1} - {\cal J}^{(-N)})p^{(-N)} A^{2^{N+1}}. \label{UonminusN}$$ We also demand that $$[U, J_i]=0. \label{rotinv}$$ That fixes $U$ upto a phase on the subspace $({\bf 1} - {\cal J}^{(-N)})({\bf 1} - P) A^{2^{N+1}}$. We saw in [@bbiv] that $(1 \pm \gamma)/2$ are projectors for combining $-\vec{L}^R$ and $\vec{\sigma}/2$ to give angular momenta $l \pm 1/2$. So $\gamma =+1$ on all the subspaces $({\bf 1} - {\cal J}^{(\pm N)})({\bf 1}-P) A^{2^{N+1}}$, $({\bf 1} - {\cal J}^{(\pm N)})(p^{(\pm N)}) A^{2^{N+1}}$. Also, $(\frac{\sum \vec{\tau}^{(i)}}{2})^2$ is $\frac{N}{2}(\frac{N}{2}+1)$ on these subspaces. It follows that (\[Uchoice\]), (\[UonminusN\]) and (\[rotinv\]) are compatible with a $U$ commuting with $J_i$, $\gamma$, $(-\vec{L}^R +\vec{\sigma}/2)^2$ and $(\frac{\sum \vec{\tau}^{(i)}}{2})^2$. We now outline an extension of $U$ to all of $A^{2^{N+1}}$ consistently with rotational invariance (\[rotinv\]) and $$\left[U, \left( -\vec{L}^R+\frac{\vec{\sigma}}{2} \right)^2 \right] = \left[U, \left(\frac{\sum \vec{\tau}^{(i)}}{2}\right)^2 \right]= 0.$$ An important consequence is that $$\gamma' = \gamma$$ so that $$[\gamma', p^{(\pm N)}] = [\gamma', J_i] = [\gamma', {\cal J}^{(\pm N)}] = 0. \label{chiprop}$$ One way to specify $U$ more fully is as follows. 
Let $$A^{2^{N+1}} = X \oplus X^{\perp} = X' \oplus {X'}^{\perp}$$ be orthogonal decompositions where $$\begin{aligned} X &=& ({\bf 1}-{\cal J}^{(N)})({\bf 1} - P)A^{2^{N+1}} \oplus ({\bf 1}-{\cal J}^{(-N)})({\bf 1} - P)A^{2^{N+1}}, \\ X' &=& UX = ({\bf 1}-{\cal J}^{(N)})p^{(N)}A^{2^{N+1}} \oplus ({\bf 1}-{\cal J}^{(-N)}) p^{(-N)} A^{2^{N+1}}.\end{aligned}$$ Both $X$ and $X'$ are invariant under the self-adjoint operators $$J_i, \left( -\vec{L}^R + \frac{\vec{\sigma}}{2} \right)^2 \quad \text{and} \quad \left( \sum \frac{\vec{\tau}^{(i)}}{2} \right)^2. \nonumber$$ Therefore, the same is the case with $X^{\perp}$ and ${X'}^{\perp}$. We can extend $U$ to a map $X^{\perp} \rightarrow {X'}^{\perp}$ which commutes with the above operators. There would still be ambiguities in the choice of $U$, requiring further conventions for their elimination. Although $T'$ are operators on $P^{(\pm N)}A^{2^{N+1}}$, that is not the space of sections for the twisted bundles. The latter is, rather, $$Q^{(\pm N)} A^{2^{N+1}} =p^{(\pm N)}P^{(\pm N)} A^{2^{N+1}}= p^{(\pm N)}{\cal J}^{(\pm N)}A^{2^{N+1}}.$$ It is not an invariant subspace for $D'$ unless $D'$ is projected, or corrected by connections as explained in [@bal]. However, the chirality $\gamma'$ is well-defined on twisted sections because of (\[chiprop\]). We now permanently rename $T'$, $P^{(\pm N)}$ and $Q^{(\pm N)}$ as follows: $$\begin{aligned} D', \gamma' &\rightarrow& D, \gamma, \quad p^{(\pm N)} \rightarrow p, \nonumber\\ P^{(\pm N)} &\rightarrow& P, \quad Q^{(\pm N)} \rightarrow Q.\end{aligned}$$ $\gamma'$ in any case is $\gamma$. This completes our construction of the Dirac and chirality operators on $S^2_F$. We may remark here that the algebra for the space $S^2_F{\times}S^2_F$ is $A{\otimes}_{\mathbb C}A$ while its Dirac and chirality operators are $D{\otimes}{\bf 1} +{\gamma}{\otimes}D$ and ${\gamma}{\otimes}{\gamma}$. 
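The closing statement about $S^2_F{\times}S^2_F$ rests on a simple algebraic fact: whenever $\{D,\gamma\}=0$ and $\gamma^2={\bf 1}$, the product operators $D\otimes{\bf 1}+\gamma\otimes D$ and $\gamma\otimes\gamma$ again anticommute. A toy check (ours, with Pauli matrices standing in for $D$ and $\gamma$):

```python
import numpy as np

# Toy check of the product construction: if {D, g} = 0 and g^2 = 1,
# then Dtot = D (x) 1 + g (x) D anticommutes with gtot = g (x) g.
D = np.array([[0, 1], [1, 0]], float)    # stand-in Dirac operator (sigma_x)
g = np.array([[1, 0], [0, -1]], float)   # stand-in chirality (sigma_z)
assert np.allclose(D @ g + g @ D, 0)     # the two ingredients anticommute

Dtot = np.kron(D, np.eye(2)) + np.kron(g, D)
gtot = np.kron(g, g)
assert np.allclose(Dtot @ gtot + gtot @ Dtot, 0)
assert np.allclose(gtot @ gtot, np.eye(4))
print("product Dirac/chirality pair anticommutes")
```

The same two identities, term by term, are what make the $S^2_F{\times}S^2_F$ operators quoted above a consistent Dirac/chirality pair.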
[**Acknowledgments**]{} S. Vaidya, Xavier Martin, Denjoe O’Connor and Peter Prešnajder offered us many good suggestions during this work, while Apoorva Patel told us about [@luscher]. We thank them for their help. The work of APB was supported in part by the DOE under contract number DE-FG02-85ER40231. [10]{} H. B. Nielsen and M. Ninomiya. Phys. Lett., B105:219, 1981; Nucl. Phys., B185:20, 1981. J. Madore. *An Introduction to Noncommutative Differential Geometry and its Physical Applications*. Cambridge University Press, Cambridge, 1995; [gr-qc/9906059]{}. H. Grosse and P. Prešnajder. Lett. Math. Phys., 33:171–182, 1995, and references therein. H. Grosse, C. Klimčík, and P. Prešnajder. Commun. Math. Phys., [**178**]{}:507–526, 1996; [**185**]{}:155–175, 1997; H. Grosse and P. Prešnajder. 61–69, 1998 and ESI preprint, 1999. H. Grosse, C. Klimčík, and P. Prešnajder. Commun. Math. Phys., 180:429–438, 1996. H. Grosse, C. Klimčík, and P. Prešnajder. In [*Les Houches Summer School on Theoretical Physics*]{}, 1995. See citations in [@gropre; @grklpr1; @grklpr2; @grklpr3] for further references. U. Carow-Watamura and S. Watamura. Commun. Math. Phys., 183:365–382, 1997. U. Carow-Watamura and S. Watamura. J. Frohlich, O. Grandjean, and A. Recknagel. Commun. Math. Phys., 203:119–184, 1999, and references therein for fuzzy group manifolds. S. Baez, A. P. Balachandran, S. Vaidya and B. Ydri. Commun. Math. Phys. (in press). A. Connes. *Noncommutative Geometry*. Academic Press, London, 1994; G. Landi. *An Introduction to Noncommutative Spaces and their Geometries*. Springer-Verlag, Berlin, 1997. R. Coquereaux. J. Geom. Phys., 6:425–490, 1989. A. P. Balachandran and S. Vaidya. [hep-th/9910129]{}. M. Lüscher. [hep-lat/9904009]{} and references therein. H. Grosse and A. Strohmaier. 163–179, 1999. J. A. Mignaco, C. Sigaud, A. R. da Silva and F. J. Vanhecke. [hep-th/9611058]{}; G. Landi. R. Jackiw and C. Rebbi. Phys. Rev. Lett., 36:1116–1119, 1976; P. Hasenfratz and G. ’t Hooft. Phys. Rev. Lett., 36:1119–1122, 1976.
--- abstract: 'To address three important issues involved in latent variable models (LVMs), including capturing infrequent patterns, achieving small-sized but expressive models and alleviating overfitting, several studies have been devoted to “diversifying” LVMs, which aims at encouraging the components in LVMs to be diverse. Most existing studies fall into a frequentist-style regularization framework, where the components are learned via point estimation. In this paper, we investigate how to “diversify” LVMs in the paradigm of Bayesian learning. We propose two approaches that have complementary advantages. One is to define a diversity-promoting mutual angular prior which assigns larger density to components with larger mutual angles and to use this prior to affect the posterior via Bayes’ rule. We develop two efficient approximate posterior inference algorithms based on variational inference and MCMC sampling. The other approach is to impose diversity-promoting regularization directly over the post-data distribution of components. We also extend our approach to “diversify” Bayesian nonparametric models where the number of components is infinite. A sampling algorithm based on slice sampling and Hamiltonian Monte Carlo is developed. We apply these methods to “diversify” the Bayesian mixture of experts model and the infinite latent feature model. Experiments on various datasets demonstrate the effectiveness and efficiency of our methods.' author: - | Pengtao Xie pengtaox@cs.cmu.edu\ Machine Learning Department\ Carnegie Mellon University\ Pittsburgh, PA 15213, USA\ Jun Zhu dcszj@tsinghua.edu.cn\ State Key Lab of Intelligent Technology and Systems\ Tsinghua National Lab for Information Science and Technology\ Department of Computer Science and Technology\ Tsinghua University\ Beijing, 100084 China\ Eric P.
Xing eric.xing@petuum.com\ Petuum Inc.\ 2555 Smallman St.\ Pittsburgh, PA 15222, USA\ bibliography: - 'egbib.bib' title: 'Diversity-Promoting Bayesian Learning of Latent Variable Models' --- Diversity-Promoting, Latent Variable Models, Mutual Angular Prior, Bayesian Learning, Variational Inference

Introduction
============

Latent variable models (LVMs) [@bishop1998latent; @knott1999latent; @blei2014build] are a major workhorse in machine learning (ML) to extract latent *patterns* underlying data, such as *themes* behind documents and *motifs* hiding in genome sequences. To properly capture these patterns, LVMs are equipped with a set of *components*, each of which aims to capture one pattern and is usually parametrized by a vector. For instance, in topic models [@blei2003latent], each component (referred to as a *topic*) is in charge of capturing one *theme* underlying documents and is represented by a multinomial vector. While existing LVMs have demonstrated great success, they are less capable of addressing two new problems that have emerged due to the growing volume and complexity of data. First, it is often the case that the frequency of patterns is distributed in a power-law fashion [@wang2014peacock; @xie2015diversifying], where a handful of patterns occur very frequently whereas most patterns are of low frequency. Existing LVMs lack the capability to capture infrequent patterns, possibly owing to the design of the objective functions used for training. For example, a maximum likelihood estimator would reward itself by modeling the frequent patterns well, as they are the major contributors to the likelihood function. On the other hand, infrequent patterns contribute much less to the likelihood, so it is not very rewarding to model them well, and LVMs tend to ignore them. Infrequent patterns often carry valuable information and thus should not be ignored.
For instance, in a topic-modeling-based recommendation system, an infrequent topic (pattern) like *losing weight* is more likely to improve the click-through rate than a frequent topic like *politics*. Second, the number of components $K$ strikes a tradeoff between model size (complexity) and modeling power. For a small $K$, the model is not expressive enough to sufficiently capture the complex patterns behind data; for a large $K$, the model would be of large size and complexity, incurring high computational overhead. How to reduce model size while preserving modeling power is a challenging issue. To cope with the two problems, several studies [@Zou_priorsfor; @xie2015diversifying; @xie2015learning] propose a “diversification” approach, which encourages the components of a LVM to be mutually “dissimilar”. First, regarding capturing infrequent patterns, as posited in [@xie2015diversifying], “diversified” components are expected to be less aggregated over frequent patterns and part of them would be spared to cover the infrequent patterns. Second, concerning shrinking model size without compromising modeling power, @xie2015learning argued that “diversified” components bear less redundancy and are mutually complementary, making it possible to capture information sufficiently well with a small set of components, i.e., obtaining LVMs possessing high representational power and low computational complexity. The existing studies [@Zou_priorsfor; @xie2015diversifying; @xie2015learning] of “diversifying” LVMs mostly focus on [*point estimation*]{} [@wasserman2013all] of the model components, under a frequentist-style regularized optimization framework. In this paper, we study how to promote diversity under an alternative learning paradigm: Bayesian inference [@jaakkola1997variational; @bishop2003bayesian; @neal2012bayesian], where the components are considered as random variables of which a [*posterior distribution*]{} shall be computed from data under certain priors.
Compared with point estimation, Bayesian learning offers complementary benefits. First, it offers a “model-averaging” [@jaakkola1997variational; @bishop2003bayesian] effect for LVMs when they are used for decision-making and prediction, because the parameters are integrated out under a posterior distribution, thus potentially alleviating overfitting on training data. Second, it provides a natural way to quantify the uncertainty of model parameters, and of the downstream decisions and predictions made thereupon [@jaakkola1997variational; @bishop2003bayesian; @neal2012bayesian]. @affandi2013approximate investigated the “diversification” of Bayesian LVMs using the determinantal point process (DPP) [@kulesza2012determinantal] prior. While Markov chain Monte Carlo (MCMC) [@affandi2013approximate] methods have been developed for approximate posterior inference under the DPP prior, the DPP is not amenable to another mainstream paradigm of approximate inference techniques – variational inference [@wainwright2008graphical] – which is usually more efficient [@hoffman2013stochastic] than MCMC. In this paper, we propose alternative diversity-promoting priors that overcome this limitation. We propose two approaches with complementary advantages to perform diversity-promoting Bayesian learning of LVMs. Following [@xie2015diversifying], we adopt a notion of diversity under which component vectors are more diverse if the pairwise angles between them are larger. First, we define mutual angular Bayesian network (MABN) priors over the components, which assign higher probability density to components that have larger mutual angles, and use these priors to affect the posterior via Bayes’ rule. Specifically, we build a Bayesian network [@koller2009probabilistic] whose nodes represent the directional vectors of the components and whose local probabilities are parameterized by von Mises-Fisher [@mardia2009directional] distributions that entail an inductive bias towards vectors with larger mutual angles.
The MABN priors are amenable to approximate posterior inference of model components. In particular, they facilitate variational inference, which is usually more efficient than MCMC sampling. Second, in light of the fact that it is not always flexible (or even possible) to define priors to capture certain diversity-promoting effects, such as small variance of mutual angles, we adopt a posterior regularization approach [@zhu2014bayesian], in which a diversity-promoting regularizer is directly imposed over the post-data distributions to encourage diversity; the regularizer can be flexibly defined to accommodate various desired diversity-promoting goals. We instantiate the two approaches to the Bayesian mixture of experts model (BMEM) [@waterhouse1996bayesian] and experiments demonstrate the effectiveness and efficiency of our approaches. We also study how to “diversify” Bayesian nonparametric LVMs (BN-LVMs) [@ferguson1973bayesian; @ghahramani2005infinite; @hjort2010bayesian]. Different from parametric LVMs, where the component number is set to a finite value and does not change throughout the entire execution of the algorithm, in BN-LVMs the number of components is unlimited and can in principle be infinite. As more data accumulates, new components are dynamically added. Compared with parametric LVMs, BN-LVMs possess the following advantages: (1) they are highly flexible and adaptive: if new data cannot be well modeled by existing components, new components are automatically invoked; (2) in BN-LVMs, the “best” number of components is determined according to the fitness to data, rather than being manually set, which is a challenging task even for domain experts. To “diversify” BN-LVMs, we extend the MABN prior to an Infinite Mutual Angular (IMA) prior that encourages infinitely many components to have large angles. In this prior, the components are mutually dependent, which incurs great challenges for posterior inference.
We develop an efficient sampling algorithm based on slice sampling [@tehstick] and Riemann manifold Hamiltonian Monte Carlo [@girolami2011riemann]. We apply the IMA prior to induce diversity in the infinite latent feature model (ILFM) [@ghahramani2005infinite] and experiments on various datasets demonstrate that the IMA is able to (1) achieve better performance with fewer components; (2) better capture infrequent patterns; and (3) reduce overfitting. The major contributions of this work are:

- We propose a mutual angular Bayesian network (MABN) prior which is biased towards components having large mutual angles, to promote diversity in Bayesian LVMs.
- We develop an efficient variational inference method for posterior inference of model components under the MABN priors.
- To flexibly accommodate various diversity-promoting effects, we study a posterior regularization approach which directly imposes diversity-promoting regularization over the post-data distributions.
- We extend the MABN prior from the finite case to the infinite case and apply it to “diversify” Bayesian nonparametric models.
- We develop an efficient sampling algorithm based on slice sampling and Riemann manifold Hamiltonian Monte Carlo for “diversified” BN-LVMs.
- Using the Bayesian mixture of experts model and the infinite latent feature model as study cases, we empirically demonstrate the effectiveness and efficiency of our methods.

The rest of the paper is organized as follows. Section 2 reviews related work. In Sections 3 and 4, we introduce how to promote diversity in Bayesian parametric and nonparametric LVMs, respectively. Section 5 gives experimental results and Section 6 concludes the paper.

Related Work
============

Recent works [@Zou_priorsfor; @xie2015diversifying; @xie2015learning] have studied the diversification of components in LVMs under a point estimation framework.
@Zou_priorsfor leverage the determinantal point process (DPP) [@kulesza2012determinantal] to promote diversity in latent variable models. @xie2015diversifying propose a mutual angular regularizer that encourages model components to be mutually different, where dissimilarity is measured by angles. @cogswell2015reducing define a covariance-based regularizer to reduce the correlation among hidden units in neural networks, for the sake of alleviating overfitting. Diversity-promoting Bayesian learning of LVMs has been investigated in [@affandi2013approximate], which utilizes the DPP prior to induce bias towards diverse components. They develop a Gibbs sampling [@gilks2005markov] algorithm, but the determinant in the DPP makes variational-inference-based algorithms very difficult to derive. The conference version of this paper [@xie2016diversity] introduced a mutual angular prior to “diversify” Bayesian parametric LVMs. This work extends the study of “diversification” to nonparametric models where the number of components is infinite. Diversity-promoting regularization has been investigated in other problems as well, such as ensemble learning and classification. In ensemble learning, many studies [@kuncheva2003measures; @banfield2005ensemble; @partalas2008focused; @yu2011diversity] explore how to select a diverse subset of base classifiers or regressors, with the aim of improving generalization performance and reducing computational complexity. In multi-way classification, @malkin2008ratio propose to use the determinant of a covariance matrix to encourage “diversity” among classifiers. @jalali2015variational propose a class of *variational Gram functions* (VGFs) to promote pairwise dissimilarity among classifiers.

Diversity-Promoting Bayesian Learning of Parametric Latent Variable Models
==========================================================================

In this section, we study how to “diversify” parametric Bayesian LVMs where the number of components is finite.
We investigate two approaches: prior control and posterior regularization, which have complementary advantages.

Diversity-Promoting Mutual Angular Prior {#sec:map}
----------------------------------------

The first approach we take is to define a prior which has an inductive bias towards components that are more “diverse” and use it to affect the posterior via Bayes’ rule. We refer to this approach as *prior control*. While diversity can be defined in various ways, following [@xie2015diversifying] we adopt the notion that a set of component vectors are considered to be more diverse if the pairwise angles between them are larger. We desire the prior to have two traits. First, to favor diversity, it should assign a higher density to components having larger mutual angles. Second, it should facilitate posterior inference. In Bayesian learning, the ease of posterior inference relies heavily on the prior [@blei2006correlated; @wang2013variational]. One possible solution is to turn the mutual angular regularizer $\Omega({\mathbf}{A})$ [@xie2015diversifying], which encourages a set of component vectors ${\mathbf}{A}=\{{\mathbf}{a}_i\}_{i=1}^{K}$ to have large mutual angles, into a distribution $p({\mathbf}{A})=\frac{1}{Z}\exp(\Omega({\mathbf}{A}))$ based on the Gibbs measure [@kindermann1980markov], where $Z$ is the partition function guaranteeing that $p({\mathbf}{A})$ integrates to one. The concern is that it is unclear whether $Z=\int_{{\mathbf}{A}} \exp(\Omega({\mathbf}{A})) \mathrm{d}{\mathbf}{A}$ is finite, i.e., whether $p({\mathbf}{A})$ is proper. When an improper prior is utilized in Bayesian learning, the posterior is also highly likely to be improper, except in a few special cases [@wasserman2013all]. Performing inference on improper posteriors is problematic.
Here we define mutual angular Bayesian network (MABN) priors possessing the aforementioned two traits, based on Bayesian networks [@koller2009probabilistic] and the von Mises-Fisher [@mardia2009directional] distribution. For technical convenience, we decompose each real-valued component vector ${\mathbf}{a}$ into ${\mathbf}{a}=g{\mathbf}{\tilde{a}}$, where $g=\|{\mathbf}{a}\|_2$ is the magnitude and ${\mathbf}{\tilde{a}}$ is the direction ($\|{\mathbf}{\tilde{a}}\|_2=1$). Let ${\mathbf}{\widetilde{A}}=\{{\mathbf}{\tilde{a}}_{i}\}_{i=1}^{K}$ denote the directional vectors. Note that the angle between two vectors is invariant to their magnitudes, hence the mutual angles of component vectors in ${\mathbf}{A}$ are the same as the angles of directional vectors in ${\mathbf}{\widetilde{A}}$. We first construct a prior which prefers vectors in ${\mathbf}{\widetilde{A}}$ to possess large angles. The basic idea is to use a Bayesian network (BN) to characterize the dependency among directional vectors and design local probabilities to entail an inductive bias towards large mutual angles. In the BN shown in Figure \[fig:bn\], each node $i$ represents a directional vector ${\mathbf}{\tilde{a}}_i$ and its parents $\textrm{pa}({\mathbf}{\tilde{a}}_i)$ are nodes $1,\cdots, i-1$. We define a local probability at node $i$ to encourage ${\mathbf}{\tilde{a}}_i$ to have large mutual angles with ${\mathbf}{\tilde{a}}_1,\cdots,{\mathbf}{\tilde{a}}_{i-1}$. Since these directional vectors lie on a sphere, we use the von Mises-Fisher (vMF) distribution to model them.
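As a quick numerical check of this decomposition (the vectors below are purely illustrative), rescaling either vector leaves the angle between them unchanged:

```python
import numpy as np

def angle(u, v):
    # Angle between two vectors via the cosine formula.
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

a = np.array([3.0, 4.0, 0.0])
g = np.linalg.norm(a)            # magnitude g = ||a||_2
a_tilde = a / g                  # direction, ||a_tilde||_2 = 1

b = np.array([0.0, 2.0, 2.0])
b_tilde = b / np.linalg.norm(b)

# Angles are invariant to magnitudes: angle(a, b) == angle(a_tilde, b_tilde).
assert np.isclose(angle(a, b), angle(a_tilde, b_tilde))
```

This is why the prior only needs to control the directions $\tilde{\mathbf{a}}_i$ to control the mutual angles; the magnitudes can be modeled separately.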
The probability density function of the vMF distribution is $f({\mathbf}{x})=C_{p}(\kappa)\exp(\kappa{\boldsymbol}\mu^\top {\mathbf}{x})$, where the random variable ${\mathbf}{x}\in\mathbb{R}^p$ lies on a $p-1$ dimensional sphere ($\|{\mathbf}{x}\|_2=1$), ${\boldsymbol}\mu$ is the mean direction with $\|{\boldsymbol}\mu\|_2=1$, $\kappa>0$ is the concentration parameter and $C_{p}(\kappa)$ is the normalization constant. The local probability $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ at node $i$ is defined as a vMF distribution whose density is $$\begin{aligned} \label{eq:map1} && p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))=C_{p}(\kappa)\exp\left( \kappa\left(-\frac{\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j}{\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2}\right)^\top \tilde{{\mathbf}{a}}_i \right)\end{aligned}$$ with mean direction $-\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j/||\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j||_2$. Now we explain why this local probability favors large mutual angles. Since ${\mathbf}{\tilde{a}}_i$ and ${\mathbf}{\tilde{a}}_j$ are unit-length vectors, ${\mathbf}{\tilde{a}}_j^\top {\mathbf}{\tilde{a}}_i$ is the cosine of the angle between ${\mathbf}{\tilde{a}}_i$ and ${\mathbf}{\tilde{a}}_j$. If ${\mathbf}{\tilde{a}}_i$ has larger angles with $\{{\mathbf}{\tilde{a}}_j\}_{j=1}^{i-1}$, then the sum of negative cosine similarities $(-\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j)^\top{\mathbf}{\tilde{a}}_i$ would be larger, and accordingly $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ would be larger. This statement is true for all $i>1$. As a result, $p({\mathbf}{\widetilde{A}})=p({\mathbf}{\tilde{a}}_1)\prod_{i=2}^{K}p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ would be larger if the directional vectors have larger mutual angles.
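This preference for large angles is easy to see numerically. The sketch below (parents, candidate vectors, and $\kappa$ are illustrative choices) evaluates the log of the local probability up to the constant $\log C_p(\kappa)$, and shows that a candidate direction far from the parents scores higher than one close to them:

```python
import numpy as np

def local_log_prob_unnorm(a_i, parents, kappa=5.0):
    # Unnormalized log of the vMF local probability at node i:
    # kappa * mu^T a_i, with mean direction mu = -sum(parents)/||sum(parents)||.
    s = np.sum(parents, axis=0)
    mu = -s / np.linalg.norm(s)
    return kappa * mu @ a_i

parents = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
s = parents[0] + parents[1]

close = parents[0]            # 0 and 90 degrees from the two parents
far = -s / np.linalg.norm(s)  # 135 degrees from both parents

# Larger mutual angles with the parents -> larger local probability.
assert local_log_prob_unnorm(far, parents) > local_log_prob_unnorm(close, parents)
```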
For the magnitudes $\{g_i\}_{i=1}^K$ of the components, which have nothing to do with the mutual angles, we sample $g_i$ for each component independently from a gamma distribution with shape parameter $\alpha_1$ and rate parameter $\alpha_2$. ![A Bayesian Network Representation of the Mutual Angular Prior[]{data-label="fig:bn"}](figs/bn){width="0.45\columnwidth"} The generative process of ${\mathbf}{A}$ is summarized as follows:

- Draw ${\mathbf}{\tilde{a}}_1\sim \textrm{vMF}({\boldsymbol}\mu_{0},\kappa)$
- For $i=2,\cdots,K$, draw ${\mathbf}{\tilde{a}}_i\sim \textrm{vMF}(-\frac{\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j}{\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2},\kappa)$
- For $i=1,\cdots,K$, draw $g_i\sim \textrm{Gamma}(\alpha_1,\alpha_2)$
- For $i=1,\cdots,K$, let ${\mathbf}{a}_i={\mathbf}{\tilde{a}}_i g_i$

The probability distribution over ${\mathbf}{A}$ can be written as $$\label{eq:map_1} \begin{array}{l} p({\mathbf}{A})=C_{p}(\kappa)\exp(\kappa {\boldsymbol}\mu_{0}^\top {\mathbf}{\tilde{a}}_1)\prod_{i=2}^{K}C_{p}(\kappa)\exp\!\left(\kappa \left(-\frac{\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j}{\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2} \right)^\top \!\! \tilde{{\mathbf}{a}}_i \right) \!\! \prod_{i=1}^{K} \! \frac{\alpha_2^{\alpha_1}g_i^{\alpha_1-1} e^{-g_i\alpha_2}}{\Gamma(\alpha_1)} . \end{array}$$ According to the factorization theorem [@koller2009probabilistic] of Bayesian networks, it is easy to verify $\int_{{\mathbf}{A}} p({\mathbf}{A}) \mathrm{d}{\mathbf}{A}=1$, thus $p({\mathbf}{A})$ is a proper prior. When inferring the posterior of model components using a variational inference method, we need to compute the expectation of $1/\|\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j\|_2$ appearing in the local probability $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$, which is extremely difficult.
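The joint density in Eq.(\[eq:map\_1\]) itself is straightforward to evaluate. A sketch (components and hyperparameters are illustrative), using the closed form of the vMF normalizer $C_p(x)=x^{p/2-1}/((2\pi)^{p/2}I_{p/2-1}(x))$ given later in Section \[sec:vi\]:

```python
import numpy as np
from scipy.special import iv, gammaln  # iv: modified Bessel function of the first kind

def log_C(p, kappa):
    # log of the vMF normalizer C_p(kappa).
    return (p / 2 - 1) * np.log(kappa) - (p / 2) * np.log(2 * np.pi) \
        - np.log(iv(p / 2 - 1, kappa))

def mabn_log_density(A_tilde, g, mu0, kappa=5.0, a1=2.0, a2=1.0):
    # Type-I MABN log-density: a chain of vMF local probabilities over
    # directions plus independent Gamma(a1, a2) magnitudes.
    p = len(mu0)
    lp = log_C(p, kappa) + kappa * mu0 @ A_tilde[0]
    for i in range(1, len(A_tilde)):
        s = A_tilde[:i].sum(axis=0)
        mu = -s / np.linalg.norm(s)
        lp += log_C(p, kappa) + kappa * mu @ A_tilde[i]
    lp += np.sum(a1 * np.log(a2) + (a1 - 1) * np.log(g) - a2 * g - gammaln(a1))
    return lp

mu0 = np.array([0.0, 0.0, 1.0])
g = np.ones(3)
diverse = np.eye(3)  # mutually orthogonal directions
clustered = np.array([[1.0, 0.0, 0.0],
                      [np.cos(0.1), np.sin(0.1), 0.0],
                      [np.cos(0.2), np.sin(0.2), 0.0]])
# The prior assigns higher density to directions with larger mutual angles.
assert mabn_log_density(diverse, g, mu0) > mabn_log_density(clustered, g, mu0)
```

Evaluating the density is easy; the difficulty noted above concerns expectations of $1/\|\sum_{j<i}\tilde{\mathbf{a}}_j\|_2$ under a variational distribution, not the density itself.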
To address this issue, we define an alternative local probability that achieves a similar modeling effect to $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$, but greatly facilitates variational inference. We re-parametrize the local probability defined in Eq.(\[eq:map1\]) into $\hat{p}({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ using the Gibbs measure: $$\begin{aligned} \label{eq:map_2} \hat{p}({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i)) &\propto& \exp\left( \kappa(-\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j)^\top \tilde{{\mathbf}{a}}_i \right) \nonumber \\ &\propto&\exp\left(\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2(-\frac{\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j}{\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2})^\top \tilde{{\mathbf}{a}}_i\right) \nonumber \\ &=&C_{p}\left(\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2\right)\exp\left(\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2(-\frac{\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j}{\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2})^\top \tilde{{\mathbf}{a}}_i\right) \nonumber \\ &=&C_{p}\left(\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2\right)\exp\left(\kappa(-\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j)^\top \tilde{{\mathbf}{a}}_i \right),\end{aligned}$$ which is another vMF distribution with mean direction $-\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j/\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2$ and concentration parameter $\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2$. The exponent of this re-parameterized local probability is proportional to $(-\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j)^\top \tilde{{\mathbf}{a}}_i$, the sum of negative cosine similarities between $\tilde{{\mathbf}{a}}_i$ and its parent vectors. Thereby, $\hat{p}({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ still encourages large mutual angles between vectors, as $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ does.
The difference between $\hat{p}({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ and $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ is that in $\hat{p}({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$ the term $\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2$ is moved from the denominator into the normalization constant, so that we avoid computing the expectation of $1/\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2$. Though this incurs a new problem, namely computing the expectation of $\log C_{p}(\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2)$, which is also hard due to the complex form of the $C_{p}(\cdot)$ function, we manage to resolve it as detailed in Section \[sec:vi\]. We refer to the MABN prior defined in Eq.(\[eq:map\_1\]) as type I MABN and that with the local probability defined in Eq.(\[eq:map\_2\]) as type II MABN.

### Approximate Inference Algorithms {#sec:vi}

We develop algorithms to infer the posteriors of components under the MABN prior. Since exact posteriors are intractable, we resort to approximate inference techniques. Two main paradigms of approximate inference methods are: (1) variational inference (VI) [@wainwright2008graphical]; (2) Markov chain Monte Carlo (MCMC) sampling [@gilks2005markov]. These two approaches possess benefits that are mutually complementary. MCMC can achieve a better approximation of the posterior than VI, since it generates samples from the exact posterior while VI seeks an approximation. However, VI can be computationally more efficient [@hoffman2013stochastic].
#### Variational Inference

The basic idea of VI [@wainwright2008graphical] is to use a “simpler” variational distribution $q({\mathbf}{A})$ to approximate the true posterior by minimizing the Kullback-Leibler divergence between these two distributions, which is equivalent to maximizing the following variational lower bound w.r.t. $q({\mathbf}{A})$: $$\label{eq:vlb} \begin{array}{l} \mathbb{E}_{q({\mathbf}{A})}[\log p(\mathcal{D}|{\mathbf}{A})]+\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{A})]-\mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})] \end{array}$$ where $p({\mathbf}{A})$ is the MABN prior and $p(\mathcal{D}|{\mathbf}{A})$ is the data likelihood. Here we choose $q({\mathbf}{A})$ to be a mean-field variational distribution $q({\mathbf}{A})= \prod_{k=1}^{K}q(\tilde{{\mathbf}{a}}_k)q(g_k)$, where $q(\tilde{{\mathbf}{a}}_k)=\textrm{vMF}(\tilde{{\mathbf}{a}}_k|\hat{{\mathbf}{a}}_k,\hat{\kappa})$ and $q(g_k)=\textrm{Gamma}(g_k|r_k,s_k)$. Given the variational distribution, we first compute the analytical expression of the variational lower bound, in which we particularly discuss how to compute $\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{A})]$. If choosing $p({\mathbf}{A})$ to be the type-I MABN prior (Eq.(\[eq:map\_1\])), we need to compute $\mathbb{E}_{q({\mathbf}{A})}[(-\frac{\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j}{\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2})^\top \tilde{{\mathbf}{a}}_i]$, which is very difficult to deal with due to the presence of $1/\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2$. Instead we choose the type-II MABN prior for the convenience of deriving the variational lower bound. Under the type-II MABN, we need to compute $\mathbb{E}_{q({\mathbf}{A})}[\log Z_i]$ for all $i$, where $Z_i=1/C_{p}(\kappa\|\sum_{j=1}^{i-1}\tilde{{\mathbf}{a}}_j\|_2)$ is the partition function of $p({\mathbf}{\tilde{a}}_i|\textrm{pa}({\mathbf}{\tilde{a}}_i))$.
The analytical form of this expectation is difficult to derive as well, due to the complexity of the $C_p(x)$ function: $C_p(x)=\frac{x^{p/2-1}}{(2\pi)^{p/2}I_{p/2-1}(x)}$, where $I_{p/2-1}(x)$ is the modified Bessel function of the first kind at order $p/2-1$. To address this issue, we derive an upper bound of $\log Z_i$ and compute the expectation of the upper bound, which is relatively easy to do. Consequently, we obtain a further lower bound of the variational lower bound and learn the variational and model parameters w.r.t. the new lower bound. Now we proceed to derive the upper bound of $\log Z_i$, which equals $\log \int \exp(\kappa (-\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j)\cdot {\mathbf}{\tilde{a}}_i) \mathrm{d}{\mathbf}{\tilde{a}}_i$. Applying the inequality $\log \int\exp(x)\mathrm{d}x\leq \gamma+\int\log(1+\exp(x-\gamma))\mathrm{d}x$ [@bouchard2007efficient], where $\gamma$ is a variational parameter, we have $$\log Z_i \leq \gamma+\int \log(1+\exp(\kappa (-\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j)\cdot {\mathbf}{\tilde{a}}_i-\gamma)) \mathrm{d}{\mathbf}{\tilde{a}}_i.$$ Then applying the inequality $\log(1+e^{-x})\leq \log(1+e^{-\xi})-\frac{x-\xi}{2}-\frac{1/2-g(\xi)}{2\xi}(x^2-\xi^2)$ [@bouchard2007efficient], where $\xi$ is another variational parameter and $g(\xi)=1/(1+\exp(-\xi))$, we have $$\log Z_i \leq \gamma+\int [\log(1+e^{-\xi})-\frac{\kappa (\sum\limits_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j)\cdot {\mathbf}{\tilde{a}}_i+\gamma-\xi}{2}-\frac{1/2-g(\xi)}{2\xi}((\kappa (\sum\limits_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j)\cdot {\mathbf}{\tilde{a}}_i+\gamma)^2-\xi^2)] \mathrm{d}{\mathbf}{\tilde{a}}_i.$$ Finally, applying the following integrals on a high-dimensional sphere: (1) $\int_{\|{\mathbf}{y}\|_2=1} 1\mathrm{d}{\mathbf}{y}=\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}$, (2) $\int_{\|{\mathbf}{y}\|_2=1} {\mathbf}{x}^\top{\mathbf}{y}\mathrm{d}{\mathbf}{y}=0$, (3) $\int_{\|{\mathbf}{y}\|_2=1} ({\mathbf}{x}^\top{\mathbf}{y})^2\mathrm{d}{\mathbf}{y}\leq
\|{\mathbf}{x}\|_2^2\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}$, we get $$\label{eq:logz} \begin{array}{l} \log Z_i\leq-\frac{1/2-g(\xi)}{2\xi}\kappa^2\|\sum\limits_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j\|_2^2\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}+\gamma+[\log(1+e^{-\xi})+\frac{\xi-\gamma}{2}+\frac{1/2-g(\xi)}{2\xi}(\xi^2-\gamma^2)]\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}\\ \end{array}$$ The expectation of this upper bound is much easier to compute. Specifically, we need to tackle $\mathbb{E}_{q({\mathbf}{A})}[\|\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j\|_2^2]$, which can be computed as $$\begin{aligned} \mathbb{E}_{q({\mathbf}{A})}[\|\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j\|_2^2] &=&\mathbb{E}_{q({\mathbf}{A})}[\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j^\top {\mathbf}{\tilde{a}}_j+ \sum_{j=1}^{i-1}\sum_{k\neq j}^{i-1}{\mathbf}{\tilde{a}}_j^\top {\mathbf}{\tilde{a}}_k] \nonumber \\ &=&\sum\limits_{j=1}^{i-1}\mathrm{tr}(\mathbb{E}_{q({\mathbf}{\tilde{a}}_j)}[{\mathbf}{\tilde{a}}_j{\mathbf}{\tilde{a}}_j^\top])+ \sum\limits_{j=1}^{i-1}\sum\limits_{k\neq j}^{i-1}\mathbb{E}_{q({\mathbf}{\tilde{a}}_j)}[{\mathbf}{\tilde{a}}_j]^\top \mathbb{E}_{q({\mathbf}{\tilde{a}}_k)}[{\mathbf}{\tilde{a}}_k] \nonumber \\ &=&\sum\limits_{j=1}^{i-1}\mathrm{tr}(\mathrm{cov}({\mathbf}{\tilde{a}}_j))+ \sum\limits_{j=1}^{i-1}\sum\limits_{k=1}^{i-1}\mathbb{E}_{q({\mathbf}{\tilde{a}}_j)}[{\mathbf}{\tilde{a}}_j]^\top \mathbb{E}_{q({\mathbf}{\tilde{a}}_k)}[{\mathbf}{\tilde{a}}_k],\end{aligned}$$ where $\mathbb{E}_{q({\mathbf}{\tilde{a}}_j)}[{\mathbf}{\tilde{a}}_j]=A_p(\hat{\kappa})\hat{{\mathbf}{a}}_j$, $\mathrm{cov}({\mathbf}{\tilde{a}}_j)=\frac{h(\hat{\kappa})}{\hat{\kappa}}\mathbf{I}+(1-2\frac{\nu+1}{\hat{\kappa}}h(\hat{\kappa})-h^2(\hat{\kappa}))\hat{{\mathbf}{a}}_j \hat{{\mathbf}{a}}_j^\top$, $h(\hat{\kappa})=\frac{I_{\nu+1}(\hat{\kappa})}{I_{\nu}(\hat{\kappa})}$, $A_p(\hat{\kappa})=\frac{I_{p/2}(\hat{\kappa})}{I_{p/2-1}(\hat{\kappa})}$ and $\nu=p/2-1$. 
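Both ingredients of this derivation can be sanity-checked numerically. The sketch below (the dimension $p$, concentration $\hat\kappa$, and test grid are illustrative) verifies the quadratic bound on $\log(1+e^{-x})$ and the consistency of the vMF moment formulas, which must satisfy $\mathrm{tr}(\mathrm{cov}(\tilde{\mathbf{a}}))+\|\mathbb{E}[\tilde{\mathbf{a}}]\|_2^2=\mathbb{E}[\|\tilde{\mathbf{a}}\|_2^2]=1$:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

# Check 1: log(1 + e^{-x}) <= log(1 + e^{-xi}) - (x - xi)/2
#          - (1/2 - g(xi))/(2 xi) * (x^2 - xi^2), with g(xi) = sigmoid(xi).
def quad_bound(x, xi):
    g = 1.0 / (1.0 + np.exp(-xi))
    return np.log1p(np.exp(-xi)) - (x - xi) / 2 \
        - (0.5 - g) / (2 * xi) * (x ** 2 - xi ** 2)

xs = np.linspace(-5, 5, 101)
for xi in (0.5, 1.0, 3.0):
    assert np.all(np.log1p(np.exp(-xs)) <= quad_bound(xs, xi) + 1e-12)

# Check 2: with nu = p/2 - 1, h(k) = A_p(k) and 2(nu + 1) = p, so
# tr(cov) + ||E[a]||^2 = (1 - h^2) + h^2 = 1, as it must for a unit vector.
p, kappa_hat = 4, 2.5
nu = p / 2 - 1
h = iv(nu + 1, kappa_hat) / iv(nu, kappa_hat)
A_p = iv(p / 2, kappa_hat) / iv(p / 2 - 1, kappa_hat)
tr_cov = p * h / kappa_hat + 1 - 2 * (nu + 1) * h / kappa_hat - h ** 2
assert np.isclose(tr_cov + A_p ** 2, 1.0)
```

The bound in Check 1 is tight at $x=\pm\xi$, which is what makes $\xi$ a useful variational parameter to optimize.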
#### MCMC Sampling

One potential drawback of the variational inference approach is that a large approximation error can be incurred if the variational distribution is far from the true posterior. We further present an alternative approximate inference method — Markov chain Monte Carlo (MCMC) [@gilks2005markov] — which draws samples from the $\textit{exact}$ posterior distribution and uses the samples to represent the posterior. Specifically, we choose the Metropolis-Hastings (MH) algorithm [@gilks2005markov], which generates samples from a proposal distribution, computes acceptance probabilities based on the unnormalized true posterior and uses the acceptance probabilities to decide whether a sample should be accepted or rejected. The most commonly used proposal distribution is based on a random walk: the sample proposed at step $t+1$ is obtained by randomly perturbing the sample at step $t$. For the directional variables $\{{\mathbf}{\tilde{a}}_i\}_{i=1}^{K}$ and magnitude variables $\{g_i\}_{i=1}^{K}$, we define the proposal distributions to be a von Mises-Fisher distribution and a normal distribution respectively: $$\begin{array}{l} q({\mathbf}{\tilde{a}}_i^{(t+1)}|{\mathbf}{\tilde{a}}_i^{(t)})= C_{p}(\hat{\kappa})\exp\left(\hat{\kappa}{\mathbf}{\tilde{a}}_i^{(t+1)}\cdot{\mathbf}{\tilde{a}}_i^{(t)}\right)\\ q(g_i^{(t+1)}|g_i^{(t)})=\frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{( g_i^{(t+1)}-g_i^{(t)})^2}{2\sigma^2}\right\}. \end{array}$$ $g_i^{(t+1)}$ is required to be positive, but the Gaussian distribution may generate non-positive samples. To address this problem, we adopt a truncated sampler [@Wilkinson] which repeatedly draws samples until a positive value is obtained. Under such a truncated sampling scheme, the MH acceptance ratio needs to be modified accordingly. Please refer to [@Wilkinson] for details. MH eventually converges to a stationary distribution in which the generated samples represent the true posterior.
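A minimal sketch of this sampler follows. Everything in it is an illustrative simplification: the log-target is a placeholder (a real model would plug in its likelihood and the MABN prior), the directional proposal perturbs and re-projects to the unit sphere as a stand-in for the vMF proposal, and the acceptance-ratio correction required by the truncated magnitude proposal [@Wilkinson] is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(a_tilde, g):
    # Placeholder for the unnormalized log-posterior log p(D|A) + log p(A).
    return -0.5 * g ** 2 + a_tilde[0]

def mh_step(a_tilde, g, sigma=0.3):
    # Symmetric directional proposal: Gaussian perturbation, re-normalized.
    prop = a_tilde + sigma * rng.normal(size=a_tilde.shape)
    prop /= np.linalg.norm(prop)
    # Truncated Gaussian random walk for the magnitude: redraw until positive.
    g_prop = -1.0
    while g_prop <= 0:
        g_prop = g + sigma * rng.normal()
    # Accept or reject based on the unnormalized target.
    accept = np.log(rng.uniform()) < log_target(prop, g_prop) - log_target(a_tilde, g)
    return (prop, g_prop) if accept else (a_tilde, g)

a_tilde, g = np.array([0.0, 0.0, 1.0]), 1.0
samples = []
for _ in range(1000):
    a_tilde, g = mh_step(a_tilde, g)
    samples.append((a_tilde, g))

# Every sample keeps a unit-norm direction and a positive magnitude.
assert all(np.isclose(np.linalg.norm(s[0]), 1.0) and s[1] > 0 for s in samples)
```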
The downside of MCMC is that it can take a long time to converge, which usually makes it computationally less efficient than variational inference [@hoffman2013stochastic]. Under the MH algorithm, the MABN prior offers better efficiency than the DPP prior: in each iteration the prior needs to be evaluated, and evaluating the MABN prior has complexity quadratic in the component number $K$, whereas evaluating the DPP prior has cubic complexity in $K$.

Diversity-Promoting Posterior Regularization
--------------------------------------------

In practice, one may desire more than one diversity-promoting effect in LVMs. For example, the mutual angular regularizer [@xie2015diversifying] encourages the pairwise angles between components to have not only a large mean but also a small variance, so that the components are uniformly “different” from each other and spread out evenly in different directions of the space. It would be extremely difficult, if possible at all, to define a proper prior that accommodates all desired effects. For instance, the MABN priors defined above can encourage the mutual angles to have a large mean, but are unable to promote a small variance. To overcome this inflexibility of the prior control method, we resort to a *posterior regularization* approach [@zhu2014bayesian]. Instead of designing a Bayesian prior to encode the diversification desideratum and indirectly influence the posterior, posterior regularization directly imposes a control over the post-data distributions to achieve certain goals.
Given a prior $\pi({\mathbf}{A})$ and a data likelihood $p(\mathcal{D}|{\mathbf}{A})$, computing the posterior $p({\mathbf}{A}|\mathcal{D})$ is equivalent to solving the following optimization problem [@zhu2014bayesian] $$\label{eq:pr} \textrm{sup}_{q({\mathbf}{A})}\quad\mathbb{E}_{q({\mathbf}{A})}[\log p(\mathcal{D}|{\mathbf}{A})\pi({\mathbf}{A})]-\mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})],$$ where $q({\mathbf}{A})$ is any valid probability distribution. The basic idea of posterior regularization is to impose a regularizer $\mathcal{R}(q({\mathbf}{A}))$ over $q({\mathbf}{A})$ to incorporate prior knowledge and structural bias [@zhu2014bayesian], and to solve the following regularized problem $$\label{eq:postreg} \textrm{sup}_{q({\mathbf}{A})}\quad\mathbb{E}_{q({\mathbf}{A})}[\log p(\mathcal{D}|{\mathbf}{A})\pi({\mathbf}{A})]-\mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})]+\lambda\mathcal{R}(q({\mathbf}{A})),$$ where $\lambda$ is a tradeoff parameter. By properly designing $\mathcal{R}(q({\mathbf}{A}))$, many diversity-promoting effects can be flexibly incorporated. Here we present a specific example, while noting that many other choices are applicable.
Drawing inspiration from [@xie2015diversifying], we define $\mathcal{R}(q({\mathbf}{A}))$ as $$\label{eq:mar} \begin{array}{l} \Omega(\{\mathbb{E}_{q({\mathbf}{a}_i)}[{\mathbf}{a}_i]\}_{i=1}^K)=\frac{1}{K(K-1)}\sum_{i=1}^{K}\limits\sum_{j\neq i}^{K}\limits\theta_{ij} -\gamma\frac{1}{K(K-1)}\sum\limits_{i=1}^{K}\sum\limits_{j\neq i}^{K}(\theta_{ij}-\frac{1}{K(K-1)}\sum\limits_{p=1}^{K}\sum\limits_{q\neq p}^{K}\theta_{pq})^{2} , \end{array}$$ where $\theta_{ij}=\textrm{arccos}(\frac{|\mathbb{E}[{\mathbf}{a}_i]\cdot\mathbb{E}[{\mathbf}{a}_j]|}{\|\mathbb{E}[{\mathbf}{a}_i]\|_2\|\mathbb{E}[{\mathbf}{a}_j]\|_2})$ is the non-obtuse angle measuring the dissimilarity between $\mathbb{E}[{\mathbf}{a}_i]$ and $\mathbb{E}[{\mathbf}{a}_j]$, and the regularizer is the mean of the pairwise angles minus their variance. The intuition behind this regularizer is: if the mean of the angles is large (indicating that these vectors are different from each other on the whole) and the variance of the angles is small (indicating that these vectors spread out evenly in different directions), then the vectors are deemed diverse. Note that it is very difficult to design priors that simultaneously achieve these two effects. While posterior regularization is more flexible, it lacks some strengths of the prior control method that matter for diversifying latent variable models. First, prior control is a more natural way of incorporating prior knowledge, with a solid theoretical foundation. Second, prior control can facilitate sampling-based algorithms, which are not applicable to the above posterior regularization.[^1] In sum, the two approaches have complementary advantages and should be chosen according to the specific problem context.
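Eq.(\[eq:mar\]) is straightforward to evaluate given the expected component vectors; a small sketch (vectors are plain Python lists here, and `gamma` is the tradeoff $\gamma$):

```python
import math

def mutual_angular_regularizer(vectors, gamma=1.0):
    """Mean of the pairwise non-obtuse angles minus gamma times their
    variance; larger values indicate more diverse component vectors."""
    K = len(vectors)

    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        # |dot| gives the non-obtuse angle; clamp to guard acos's domain
        c = min(1.0, abs(dot) / (nu * nv))
        return math.acos(c)

    angles = [angle(vectors[i], vectors[j])
              for i in range(K) for j in range(K) if j != i]
    mean = sum(angles) / len(angles)
    var = sum((t - mean) ** 2 for t in angles) / len(angles)
    return mean - gamma * var
```

For an orthogonal set, every pairwise angle is $\pi/2$ and the variance vanishes, so the regularizer attains $\pi/2$; nearly parallel vectors score far lower.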
![Bayesian Mixture of Experts with Mutual Angular Prior[]{data-label="fig:me"}](figs/mdl2){width="0.4\columnwidth"}

“Diversifying” Bayesian Mixture of Experts Model
------------------------------------------------

In this section, we apply the two approaches developed above to “diversify” the Bayesian mixture of experts model (BMEM) [@waterhouse1996bayesian].

### BMEM with Mutual Angular Prior

The mixture of experts model (MEM) [@jordan1994hierarchical] has been widely used for machine learning tasks where the distribution of the input data is so complicated that a single model (“expert”) cannot be effective for all the data. MEM assumes that the input data inherently belongs to multiple latent groups and that a single “expert” is allocated to each group to handle the data therein. Here we consider a classification task whose goal is to learn binary linear classifiers given the training data $\mathcal{D}=\{({\mathbf}{x}_i,y_i)\}_{i=1}^{N}$, where ${\mathbf}{x}_i$ is the input feature vector and $y_i\in\{1,0\}$ is the class label. We assume there are $K$ latent experts, where each expert is a classifier with coefficient vector ${\boldsymbol}\beta$. Given a test example ${\mathbf}{x}$, it first goes through a gate function that decides, in a probabilistic way, which expert is best suited to classify this example. A discrete variable $z$ indicates the selected expert, and the probability that $z=k$ (assigning example ${\mathbf}{x}$ to expert $k$) is $\frac{\exp({\boldsymbol}\eta_k^\top {\mathbf}{x})}{\sum_{j=1}^K\exp({\boldsymbol}\eta_j^\top {\mathbf}{x})}$, where ${\boldsymbol}\eta_k$ is a coefficient vector characterizing the selection of expert $k$. Given the selected expert, the example is classified using the coefficient vector ${\boldsymbol}\beta$ corresponding to that expert.
As described in Figure \[fig:me\], the generative process of $\{({\mathbf}{x}_i,y_i)\}_{i=1}^{N}$ is as follows

- For $i=1,\cdots,N$
  - Draw $z_i\sim \text{Multi}({\boldsymbol}\zeta)$, where $\zeta_k=\frac{\exp({\boldsymbol}\eta_k^\top {\mathbf}{x}_i)}{\sum_{j=1}^K\exp({\boldsymbol}\eta_j^\top {\mathbf}{x}_i)}$
  - Draw $y_i\sim \text{Bernoulli}(\frac{1}{1+\exp(-{\boldsymbol}\beta_{z_i}^\top {\mathbf}{x}_i)})$.

So far, the model parameters ${\mathbf}{B}=\{{\boldsymbol}\beta_k\}_{k=1}^{K}$ and ${\mathbf}{H}=\{{\boldsymbol}\eta_k\}_{k=1}^{K}$ are deterministic variables. Next we place a prior over them to enable Bayesian learning [@waterhouse1996bayesian], and we desire this prior to promote diversity among the experts so as to retain the advantages of “diversifying” LVMs stated before. The mutual angular Bayesian network prior can be applied to achieve this goal $$p({\mathbf}{B})=C_{p}(\kappa)\exp\left(\kappa \mu_{0}^\top {\mathbf}{\tilde{{\boldsymbol}\beta}}_1\right)\prod_{i=2}^{K}C_{p}(\kappa)\exp\left(\kappa(-\frac{\sum_{j=1}^{i-1}\tilde{{\boldsymbol}\beta}_j}{||\sum_{j=1}^{i-1}{\mathbf}{\tilde{{\boldsymbol}\beta}}_j||_2})^\top {\mathbf}{\tilde{{\boldsymbol}\beta}}_i\right) \prod_{i=1}^{K}\frac{\alpha_2^{\alpha_1}g_i^{\alpha_1-1}e^{-g_i\alpha_2}}{\Gamma(\alpha_1)},$$ $$p({\mathbf}{H})=C_{p}(\kappa)\exp\left(\kappa \xi_{0}^\top {\mathbf}{\tilde{{\boldsymbol}\eta}}_1\right)\prod_{i=2}^{K}C_{p}(\kappa)\exp\left(\kappa(-\frac{\sum_{j=1}^{i-1}\tilde{{\boldsymbol}\eta}_j}{||\sum_{j=1}^{i-1}{\mathbf}{\tilde{{\boldsymbol}\eta}}_j||_2})^\top {\mathbf}{\tilde{{\boldsymbol}\eta}}_i\right) \prod_{i=1}^{K}\frac{\omega_2^{\omega_1}h_i^{\omega_1-1}e^{-h_i\omega_2}}{\Gamma(\omega_1)},$$ where ${\boldsymbol}\beta_k=g_k\tilde{{\boldsymbol}\beta}_k$ and ${\boldsymbol}\eta_k=h_k\tilde{{\boldsymbol}\eta}_k$.
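The generative process above can be simulated directly; a short illustrative sketch (the function name and the inverse-CDF multinomial draw are ours, not the paper's):

```python
import math
import random

def bmem_generate(X, H, B, seed=0):
    """Sample (z_i, y_i) for each x_i: a softmax gate over the eta_k picks
    an expert z_i, whose beta_{z_i} then classifies the example."""
    rng = random.Random(seed)
    Z, Y = [], []
    for x in X:
        # gate probabilities zeta_k = softmax(eta_k . x), computed stably
        scores = [sum(e * xi for e, xi in zip(eta, x)) for eta in H]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        probs = [p / total for p in weights]
        # draw z_i ~ Multi(zeta) by inverting the CDF
        u, acc, z = rng.random(), 0.0, len(probs) - 1
        for k, p in enumerate(probs):
            acc += p
            if u <= acc:
                z = k
                break
        # draw y_i ~ Bernoulli(sigmoid(beta_z . x))
        logit = sum(b * xi for b, xi in zip(B[z], x))
        y = 1 if rng.random() < 1.0 / (1.0 + math.exp(-logit)) else 0
        Z.append(z)
        Y.append(y)
    return Z, Y
```

Under the Bayesian treatment, `H` and `B` would themselves be drawn from the MABN priors rather than fixed.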
### BMEM with Mutual Angular Posterior Regularization As an alternative approach, the diversity in BMEM can be imposed by placing the mutual angular regularizer (Eq.(\[eq:mar\])) over the post-data posteriors [@zhu2014bayesian]. Here we instantiate the general diversity-promoting posterior regularization defined in Eq.(\[eq:postreg\]) to BMEM, by specifying the following parametrization. The latent variables in BMEM include ${\mathbf}{B}$, ${\mathbf}{H}$ and ${\mathbf}{z}=\{z_i\}_{i=1}^{N}$ and the post-data distribution over them is defined as $q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})=q({\mathbf}{B})q({\mathbf}{H})q({\mathbf}{z})$. For computational tractability, we define $q({\mathbf}{B})$ and $q({\mathbf}{H})$ to be: $q({\mathbf}{B})=\prod_{k=1}^{K}q(\tilde{{\boldsymbol}\beta}_k)q(g_k)$ and $q({\mathbf}{H})=\prod_{k=1}^{K}q(\tilde{{\boldsymbol}\eta}_k)q(h_k)$ where $q(\tilde{{\boldsymbol}\beta}_k)$, $q(\tilde{{\boldsymbol}\eta}_k)$ are von Mises-Fisher distributions and $q(g_k)$, $q(h_k)$ are gamma distributions, and define $q({\mathbf}{z})$ to be multinomial distributions: $q({\mathbf}{z})=\prod_{i=1}^{N}q(z_i|{\boldsymbol}\phi_i)$ where ${\boldsymbol}\phi_i$ is a multinomial vector. The priors over ${\mathbf}{B}$ and ${\mathbf}{H}$ are specified to be: $\pi({\mathbf}{B})=\prod_{k=1}^{K}p(\tilde{{\boldsymbol}\beta}_k)p(g_k)$ and $\pi({\mathbf}{H})=\prod_{k=1}^{K}p(\tilde{{\boldsymbol}\eta}_k)p(h_k)$ where $p(\tilde{{\boldsymbol}\beta}_k)$, $p(\tilde{{\boldsymbol}\eta}_k)$ are vMF distributions and $p(g_k)$, $p(h_k)$ are gamma distributions. 
Under such parametrization, we solve the following diversity-promoting posterior regularization problem $$\begin{array}{ll} \textrm{sup}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}&\quad\mathbb{E}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}[\log p(\{y_i\}_{i=1}^{N},{\mathbf}{z}|{\mathbf}{B},{\mathbf}{H})\pi({\mathbf}{B},{\mathbf}{H})]-\mathbb{E}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}[\log q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})]\\ &\quad+\lambda_1\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\beta}_k)}[\tilde{{\boldsymbol}\beta}_k]\}_{k=1}^K)+\lambda_2\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\eta}_k)}[\tilde{{\boldsymbol}\eta}_k]\}_{k=1}^K). \end{array}$$ Note that other parametrizations are also valid, such as placing Gaussian priors over ${\mathbf}{B}$ and ${\mathbf}{H}$ and setting $q({\mathbf}{B})$, $q({\mathbf}{H})$ to be Gaussian.

Diversity-Promoting Bayesian Nonparametric Modeling
===================================================

In the last section, we studied how to promote diversity among a finite number of components in parametric LVMs. In this section, we investigate how to achieve this goal in nonparametric LVMs, where the component number is in principle infinite. We extend the mutual angular Bayesian network (MABN) prior defined in the last section to an Infinite Mutual Angular (IMA) prior that encourages infinitely many components to have large angles. Under this prior the components are mutually dependent, which poses great challenges for posterior inference. We develop an efficient sampling algorithm based on slice sampling [@tehstick] and Riemann manifold Hamiltonian Monte Carlo [@girolami2011riemann]. We apply the IMA prior to induce diversity in the infinite latent feature model (ILFM) [@ghahramani2005infinite].

Bayesian Nonparametric Latent Variable Models
---------------------------------------------

A BN-LVM consists of an infinite number of components, each parameterized by a vector.
For example, in the Dirichlet process Gaussian mixture model (DP-GMM) [@rasmussen1999infinite; @blei2006variational], the components are *clusters*, each parameterized by a Gaussian mean vector. In the Indian buffet process latent feature model (IBP-LFM) [@ghahramani2005infinite], the components are *features*, each parameterized by a weight vector. Given these infinitely many components, BN-LVMs design a proper mechanism to select one component, or a finite subset of them, to model each observed data example. For example, in DP-GMM, a Chinese restaurant process (CRP) [@aldous1985exchangeability] assigns each data example to one of the infinitely many clusters. In IBP-LFM, an Indian buffet process (IBP) [@ghahramani2005infinite] selects a finite set of features from the infinite feature pool to reconstruct each data example. A BN-LVM typically involves two priors. One is a base distribution from which the parameter vectors of the components are drawn. The other is a stochastic process – such as the CRP or the IBP – that designates how to select components to model the data. The prior studied in this paper belongs to the first category. It is commonly assumed that the parameter vectors of the components are *independently* drawn from the same base distribution. For example, in both DP-GMM and IBP-LFM, the mean vectors and weight vectors are independently drawn from a Gaussian distribution. In this paper, we aim to design a prior that encourages the component vectors to be mutually different and “diverse", under which the component vectors are no longer independent; this presents great challenges for posterior inference.

Infinite Mutual Angular Prior
-----------------------------

In the MABN prior, the components are added one by one. Each new component is encouraged to have large angles with the previous ones.
This adding process can be repeated infinitely many times, resulting in a prior that encourages an infinite number of components to have large mutual angles $$p(\{{\mathbf}{\widehat{w}}_i\}_{i=1}^{\infty})=p({\mathbf}{\widehat{w}}_1)\prod_{i=2}^{\infty}p({\mathbf}{\widehat{w}}_i|\textrm{pa}({\mathbf}{\widehat{w}}_i))$$ The factorization theorem [@koller2009probabilistic] of Bayesian networks ensures that $p(\{{\mathbf}{\widehat{w}}_i\}_{i=1}^{\infty})$ integrates to one. The magnitudes $\{r_i\}_{i=1}^{\infty}$ do not affect the angles (hence diversity), so they can be generated independently from a gamma distribution. Putting these together, the generative process of $\{{\mathbf}{w}_i\}_{i=1}^{\infty}$ can be summarized as follows:

- Sample ${\mathbf}{\widehat{w}}_1\sim \textrm{vMF}({\boldsymbol}\mu_{0},\kappa)$
- For $i=2,\cdots,\infty$, sample ${\mathbf}{\widehat{w}}_i\sim \textrm{vMF}(-\frac{\sum_{j=1}^{i-1}\widehat{{\mathbf}{w}}_j}{\|\sum_{j=1}^{i-1}\widehat{{\mathbf}{w}}_j\|_2},\kappa)$
- For $i=1,\cdots,\infty$, sample $r_i\sim \textrm{Gamma}(\alpha_1,\alpha_2)$
- For $i=1,\cdots,\infty$, set ${\mathbf}{w}_i={\mathbf}{\widehat{w}}_i r_i$

The probability distribution over $\{{\mathbf}{w}_i\}_{i=1}^{\infty}$ can be written as $$\label{eq:map_1_np} \begin{array}{l} p(\{{\mathbf}{w}_i\}_{i=1}^{\infty})=C_{p}(\kappa)\exp(\kappa \mu_{0}^\top{\mathbf}{\widehat{w}}_1)\prod_{i=2}^{\infty}C_{p}(\kappa) \exp(\kappa (-\frac{\sum_{j=1}^{i-1}\widehat{{\mathbf}{w}}_j}{\|\sum_{j=1}^{i-1}\widehat{{\mathbf}{w}}_j\|_2})^\top\widehat{{\mathbf}{w}}_i)\prod_{i=1}^{\infty}\frac{\alpha_2^{\alpha_1}r_i^{\alpha_1-1} e^{-r_i\alpha_2}}{\Gamma(\alpha_1)} \end{array}$$

Diversity-Promoting Infinite Latent Feature Model
-------------------------------------------------

In this section, using the infinite latent feature model (ILFM) [@griffiths2005infinite] as an instance of a BN-LVM, we showcase how to promote diversity among the components therein with the IMA prior.
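The first $K$ components of the IMA generative process can be simulated directly. Below is a self-contained sketch using Wood's (1994) rejection sampler for the vMF draws (a standard construction chosen here for illustration; the function names are ours). Because each direction is pulled away from the sum of its predecessors, the average pairwise cosine is pushed below zero:

```python
import math
import random

def sample_vmf(mu, kappa, rng):
    """Draw one unit vector from vMF(mu, kappa) via Wood's (1994) rejection
    sampler: sample the component along mu, then a uniform tangent direction."""
    p = len(mu)
    b = (-2.0 * kappa + math.sqrt(4.0 * kappa ** 2 + (p - 1) ** 2)) / (p - 1)
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (p - 1) * math.log(1.0 - x0 ** 2)
    while True:
        z = rng.betavariate((p - 1) / 2.0, (p - 1) / 2.0)
        w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
        if kappa * w + (p - 1) * math.log(1.0 - x0 * w) - c >= math.log(rng.random()):
            break
    # uniform direction in the tangent space at mu
    v = [rng.gauss(0.0, 1.0) for _ in range(p)]
    d = sum(vi * mi for vi, mi in zip(v, mu))
    v = [vi - d * mi for vi, mi in zip(v, mu)]
    nv = math.sqrt(sum(vi * vi for vi in v))
    s = math.sqrt(max(0.0, 1.0 - w * w))
    return [w * mi + s * vi / nv for mi, vi in zip(mu, v)]

def ima_sample(K, p, kappa, alpha1=2.0, alpha2=1.0, seed=0):
    """Simulate the first K components of the IMA generative process."""
    rng = random.Random(seed)
    mu0 = [1.0] + [0.0] * (p - 1)
    dirs = [sample_vmf(mu0, kappa, rng)]
    for _ in range(1, K):
        s = [sum(w[d] for w in dirs) for d in range(p)]
        ns = math.sqrt(sum(x * x for x in s))
        mean = [-x / ns for x in s] if ns > 1e-12 else mu0
        dirs.append(sample_vmf(mean, kappa, rng))
    mags = [rng.gammavariate(alpha1, 1.0 / alpha2) for _ in range(K)]
    return [[r * wd for wd in w] for w, r in zip(dirs, mags)], dirs
```

Every direction stays exactly on the unit sphere, and the magnitudes scale the directions without affecting the angles, mirroring the factorization of the prior.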
Given a set of data examples ${\mathcal}{X}=\{{\mathbf}{x}_n\}_{n=1}^{N}$ where ${\mathbf}{x}_n\in \mathbb{R}^{D}$, ILFM invokes a finite subset of features from an infinite feature collection $\mathcal{W}=\{{\mathbf}{w}_k\}_{k=1}^{\infty}$ to construct these data examples. Each feature (a component in this LVM) is parameterized by a vector ${\mathbf}{w}_k\in \mathbb{R}^{D}$. For each data example ${\mathbf}{x}_n$, a subset of features is selected to construct it. The selection is denoted by a binary vector ${\mathbf}{z}_n\in \{0,1\}^\infty$, where $z_{nk}=1$ denotes that the $k$-th feature is invoked to construct the $n$-th example and $z_{nk}=0$ otherwise. Given the feature parameter vectors $\{{\mathbf}{w}_k\}_{k=1}^{\infty}$ and the selection vector ${\mathbf}{z}_n$, the example ${\mathbf}{x}_n$ is represented as ${\mathbf}{x}_n\sim \mathcal{N}(\sum_{k=1}^{\infty}z_{nk}{\mathbf}{w}_k,\sigma^2{\mathbf}{I})$. The binary selection vectors $\mathcal{Z}=\{{\mathbf}{z}_n\}_{n=1}^{N}$ can be drawn either from an Indian buffet process (IBP) [@ghahramani2005infinite] or from a stick-breaking construction [@tehstick]. Let $\mu_k$ be the prior probability that feature $k$ is present in a data example, and permute the features so that their prior probabilities are in decreasing order: $\mu_{(1)}>\mu_{(2)}>\cdots$. According to the stick-breaking construction, these prior probabilities can be generated as follows: $\nu_{k}\sim \text{Beta}(\alpha,1)$, $\mu_{(k)}=\nu_{k}\mu_{(k-1)}=\prod_{l=1}^{k}\nu_{l}$. Given $\mu_{(k)}$, the binary indicator $z_{nk}$ is generated as $z_{nk}|\mu_{(k)}\sim \text{Bernoulli}(\mu_{(k)})$. To reduce the redundancy among the features, we impose the IMA prior over their parameter vectors $\mathcal{W}$ to encourage them to be mutually different, resulting in an IMA-LFM model. The generative process of IMA-LFM is as follows.
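The stick-breaking construction above is easy to simulate; a short sketch, truncated at a finite $K$ for illustration:

```python
import random

def ibp_stick_breaking(N, K_trunc, alpha, seed=0):
    """Stick-breaking construction of the IBP feature probabilities,
    truncated at K_trunc: nu_k ~ Beta(alpha, 1), mu_(k) = prod_{l<=k} nu_l,
    then z_nk ~ Bernoulli(mu_(k))."""
    rng = random.Random(seed)
    mus, mu = [], 1.0
    for _ in range(K_trunc):
        mu *= rng.betavariate(alpha, 1.0)   # multiplying nu_k shrinks the stick
        mus.append(mu)
    Z = [[1 if rng.random() < m else 0 for m in mus] for _ in range(N)]
    return mus, Z
```

The products of Beta draws are automatically in decreasing order, so later features are selected increasingly rarely, which is what makes the effective number of active features finite for any finite dataset.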
- Draw $\mu_{(k)}$ according to Eq.(\[eq:draw\_mu\]), for $k=1,\cdots,\infty$
- Draw $z_{nk}$ according to Eq.(\[eq:draw\_z\]), for $n=1,\cdots,N$; $k=1,\cdots,\infty$
- Draw $\mathcal{W}$ according to Eq.(\[eq:map\_1\_np\])
- Draw ${\mathbf}{x}_{n}$ according to Eq.(\[eq:draw\_x\]), for $n=1,\cdots,N$

Algorithm
---------

In this section, we develop a sampling algorithm to infer the posteriors of $\mathcal{W}$ and $\mathcal{Z}$ in the IMA-LFM model. Two major challenges need to be addressed. First, the prior over $\mathcal{W}$ is not conjugate to the likelihood function $p({\mathbf}{x})$. Second, the parameter vectors $\mathcal{W}$ are usually high-dimensional, which slows mixing. To address the first challenge, we adopt the slice sampling algorithm [@tehstick]. This algorithm introduces an auxiliary slice variable $ s|{\mathcal}{Z},\mu_{(1:\infty)} \sim \text{Uniform}[0,\mu^*]$, where $\mu^*=\text{min}\{1, \underset{k:\exists n, z_{nk}=1}{\textrm{min}}\mu_k\}$ is the prior probability of the last active feature. A feature $k$ is active if there exists an example $n$ such that $z_{nk}=1$, and inactive otherwise. In the sequel, we discuss the sampling of the other variables.

#### Sample New Features

Let $K^*$ be the maximal feature index with $\mu_{(K^*)}>s$ and $K^+$ be the index such that all active features have index $k<K^+$ ($K^+$ itself is an inactive feature). If the new value of $s$ makes $K^*\geq K^+$, we draw $K^*-K^++1$ new (inactive) features, including their parameter vectors and prior probabilities. The prior probabilities $\{\mu_{(k)}\}$ are drawn sequentially from $p(\mu_{(k)}|\mu_{(k-1)}) \propto \exp(\alpha \sum_{n=1}^{N}\frac{1}{n}(1-\mu_{(k)})^n)\mu_{(k)}^{\alpha-1}(1-\mu_{(k)})^N \mathbb{I}(0\leq \mu_{(k)}\leq \mu_{(k-1)})$ using adaptive rejection sampling (ARS) [@gilks1992adaptive].
The parameter vectors are drawn sequentially from $$\begin{array}{l} p({\mathbf}{w}_k|\{{\mathbf}{w}_j\}_{j=1}^{k-1})= p({\mathbf}{\widehat{w}}_k|\{{\mathbf}{\widehat{w}}_j\}_{j=1}^{k-1})p(r_k) =C_{p}(\kappa)\exp(\kappa(-\frac{\sum_{j=1}^{k-1}\widehat{{\mathbf}{w}}_j}{\|\sum_{j=1}^{k-1}\widehat{{\mathbf}{w}}_j\|_2})^\top\widehat{{\mathbf}{w}}_k) \frac{\alpha_2^{\alpha_1}r_k^{\alpha_1-1} e^{-r_k\alpha_2}}{\Gamma(\alpha_1)} \end{array}$$ where we draw ${\mathbf}{\widehat{w}}_k$ from $p({\mathbf}{\widehat{w}}_k|\{{\mathbf}{\widehat{w}}_j\}_{j=1}^{k-1})$, which is a von Mises-Fisher distribution, draw $r_k$ from a gamma distribution, and multiply ${\mathbf}{\widehat{w}}_k$ and $r_k$ together, since they are independent. For each new feature $k$, the corresponding binary selection variables $z_{:,k}$ are initialized to zero.

#### Sample Existing $\mu_{(k)}$ $(1\leq k\leq K^+-1)$

We sample $\mu_{(k)}$ from $p(\mu_{(k)}|\text{rest}) \propto \mu_{(k)}^{m_{k}-1}(1-\mu_{(k)})^{N-m_{k}}\mathbb{I}(\mu_{(k+1)}\leq \mu_{(k)}\leq \mu_{(k-1)})$, where $m_{k}=\sum_{n=1}^{N}z_{nk}$.

#### Sample $z_{nk}$ $(1\leq n\leq N, 1\leq k\leq K^*)$

Given $s$, we only need to sample $z_{nk}$ for $k\leq K^*$ from $p(z_{nk}=1|\text{rest})\propto \frac{\mu_{(k)}}{\mu^*}p({\mathbf}{x}_n|z_{n,\neg k},z_{nk}=1,\{{\mathbf}{w}_j\}_{j=1}^{K^+})$, where $p({\mathbf}{x}_n|z_{n,\neg k},z_{nk}=1,\{{\mathbf}{w}_j\}_{j=1}^{K^+}) =\mathcal{N}({\mathbf}{x}_n|{\mathbf}{w}_k+\sum_{j\neq k}^{K^+}z_{nj}{\mathbf}{w}_{j},\sigma^2{\mathbf}{I})$, $z_{n,\neg k}$ denotes all elements of ${\mathbf}{z}_n$ except the $k$-th one, and ${\mathbf}{w}_{j}={\mathbf}{\widehat{w}}_j r_j$.
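The conditional $p(\mu_{(k)}|\text{rest})$ above is a $\text{Beta}(m_k,\,N-m_k+1)$ density truncated to $[\mu_{(k+1)},\mu_{(k-1)}]$. When the interval is not too narrow, a simple rejection sampler suffices (a sketch; an inverse-CDF sampler would be preferable for very narrow intervals):

```python
import random

def sample_existing_mu(m_k, N, mu_lo, mu_hi, rng, max_tries=100000):
    """Draw from Beta(m_k, N - m_k + 1) truncated to [mu_lo, mu_hi],
    i.e. p(mu|rest) ∝ mu^{m_k-1} (1-mu)^{N-m_k} on the interval."""
    for _ in range(max_tries):
        x = rng.betavariate(m_k, N - m_k + 1)
        if mu_lo <= x <= mu_hi:
            return x
    raise RuntimeError("truncation interval too narrow for rejection sampling")
```

The acceptance rate equals the Beta mass inside the interval, so the sampler is efficient exactly when the ordering constraint on the $\mu_{(k)}$ is loose.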
#### Sample ${\mathbf}{w}_{k}$ $(k=1,\cdots,K^+)$

We draw ${\mathbf}{w}_{k}=\widetilde{{\mathbf}{w}}_{k}r_k$ from the following conditional probability $$\label{eq:update_a} \begin{array}{l} p(\widetilde{{\mathbf}{w}}_{k}r_k|\text{rest}) \propto p(\widetilde{{\mathbf}{w}}_{k}r_k|\{{\mathbf}{w}_j\}_{j\neq k}^{K^+})\prod\limits_{n=1}^{N}p({\mathbf}{x}_n|z_{n,1:K^+},\{{\mathbf}{w}_j\}_{j\neq k}^{K^+},\widetilde{{\mathbf}{w}}_{k}r_k) \end{array}$$ where $p(\widetilde{{\mathbf}{w}}_{k}r_k|\{{\mathbf}{w}_j\}_{j\neq k}^{K^+})\propto p(\widetilde{{\mathbf}{w}}_{k}r_k|\{{\mathbf}{w}_i\}_{i=1}^{k-1}) \prod_{j=k+1}^{K^+}p({\mathbf}{w}_j|\{{\mathbf}{w}_i\}_{i\neq k}^{j-1},\widetilde{{\mathbf}{w}}_{k}r_k)$ and $p({\mathbf}{x}_n|z_{n,1:K^+},\{{\mathbf}{w}_j\}_{j\neq k}^{K^+},\widetilde{{\mathbf}{w}}_{k}r_k)=\mathcal{N}({\mathbf}{x}_n|\widetilde{{\mathbf}{w}}_{k}r_k+\sum_{j\neq k}^{K^+}z_{nj}{\mathbf}{w}_{j},\sigma^2{\mathbf}{I})$. In the vanilla IBP latent feature model [@ghahramani2005infinite], the prior over ${\mathbf}{w}_k$ is a Gaussian distribution, which is conjugate to the Gaussian likelihood function. In that case, the posterior $p({\mathbf}{w}_{k}|\text{rest})$ is again Gaussian, from which samples can easily be drawn. But in Eq.(\[eq:update\_a\]), the posterior has no closed-form expression, since the prior $p(\widetilde{{\mathbf}{w}}_{k}r_k|\{{\mathbf}{w}_j\}_{j\neq k}^{K^+})$ is no longer conjugate, which makes the sampling very challenging. We sample $\widetilde{{\mathbf}{w}}_{k}$ and $r_k$ separately. $r_k$ can be sampled efficiently using the Metropolis-Hastings (MH) algorithm [@hastings1970monte]. For the random vector $\widetilde{{\mathbf}{w}}_{k}$, the sampling is much more difficult. The random-walk-based MH algorithm suffers from slow mixing when the dimension of $\widetilde{{\mathbf}{w}}_{k}$ is large (which is typically the case in LVMs). In addition, $\widetilde{{\mathbf}{w}}_{k}$ lies on the unit sphere.
The sampling algorithm should preserve this geometric constraint. To address these two issues, we adopt a Riemann manifold Hamiltonian Monte Carlo (RM-HMC) method [@girolami2011riemann; @byrne2013geodesic]. HMC leverages Hamiltonian dynamics to produce distant proposals for the Metropolis-Hastings algorithm, enabling faster exploration of the state space and faster mixing. The RM-HMC algorithm introduces an auxiliary vector ${\mathbf}{v}\in\mathbb{R}^{d}$ and defines a Hamiltonian function $H({\mathbf}{\widehat{w}}_{k},{\mathbf}{v})=-\log p({\mathbf}{\widehat{w}}_{k}|\text{rest}) +\log|G({\mathbf}{\widehat{w}}_{k})|+\frac{1}{2}{\mathbf}{v}^\top G({\mathbf}{\widehat{w}}_{k})^{-1} {\mathbf}{v}$, where $G$ is the metric tensor associated with the Riemann manifold, which in our case is the unit sphere. After a transformation of the coordinate system, $H({\mathbf}{\widehat{w}}_{k},{\mathbf}{v})$ can be rewritten as $$H({\mathbf}{\widehat{w}}_{k},{\mathbf}{v})=-\log p({\mathbf}{\widehat{w}}|rest)+\frac{1}{2}{\mathbf}{v}^\top {\mathbf}{v}$$ Here $p({\mathbf}{\widehat{w}}_{k}|\text{rest})$ need not be normalized, and $\log p({\mathbf}{\widehat{w}}_{k}|\text{rest})\propto \kappa(-\frac{\sum_{j=1}^{k-1}\widehat{{\mathbf}{w}}_j}{\|\sum_{j=1}^{k-1}\widehat{{\mathbf}{w}}_j\|_2})^\top\widehat{{\mathbf}{w}}_k +\sum_{j=k+1}^{K^+}\kappa(-\frac{\sum_{i\neq k}^{j-1}\widehat{{\mathbf}{w}}_i+{\mathbf}{\widehat{w}}_{k}r_k}{\|\sum_{i\neq k}^{j-1}\widehat{{\mathbf}{w}}_i+{\mathbf}{\widehat{w}}_{k}r_k\|_2})^\top\widehat{{\mathbf}{w}}_j +\sum_{n=1}^{N}\frac{1}{\sigma^2}({\mathbf}{x}_n-\sum_{j\neq k}^{K^+}z_{nj}{\mathbf}{w}_j)^\top {\mathbf}{\widehat{w}}_{k}r_k-\frac{1}{2\sigma^2}\|{\mathbf}{\widehat{w}}_{k}r_k\|_2^2$. A new sample of ${\mathbf}{\widehat{w}}_{k}$ can be generated by approximately solving a system of differential equations characterizing the Hamiltonian dynamics on the manifold [@girolami2011riemann].

01. ${\mathbf}{v}\sim\mathcal{N}(0,{\mathbf}{I})$
02. ${\mathbf}{v}\gets {\mathbf}{v}-{\mathbf}{\widehat{w}} {\mathbf}{\widehat{w}}^\top{\mathbf}{v}$
03. $h\gets \log p({\mathbf}{\widehat{w}}|rest)-\frac{1}{2}{\mathbf}{v}^\top {\mathbf}{v}$
04. ${\mathbf}{\widehat{w}}^*\gets {\mathbf}{\widehat{w}}$
05. **for** $\tau=1,\cdots, T$ **do**
06. $\quad{\mathbf}{v}\gets {\mathbf}{v}+\frac{\epsilon}{2}\nabla_{{\mathbf}{\widehat{w}}^*}\log p({\mathbf}{\widehat{w}}^*|rest)$
07. $\quad{\mathbf}{v}\gets {\mathbf}{v}-{\mathbf}{\widehat{w}}^* ({\mathbf}{\widehat{w}}^*)^\top{\mathbf}{v}$
08. $\quad{\mathbf}{\widehat{w}}^*\gets \cos(\epsilon\|{\mathbf}{v}\|_2){\mathbf}{\widehat{w}}^*+\|{\mathbf}{v}\|_2^{-1}\sin(\epsilon\|{\mathbf}{v}\|_2){\mathbf}{v}$
09. $\quad{\mathbf}{v}\gets -\|{\mathbf}{v}\|_2\sin(\epsilon\|{\mathbf}{v}\|_2){\mathbf}{\widehat{w}}^*+\cos(\epsilon\|{\mathbf}{v}\|_2){\mathbf}{v}$
10. $\quad{\mathbf}{v}\gets {\mathbf}{v}+\frac{\epsilon}{2}\nabla_{{\mathbf}{\widehat{w}}^*}\log p({\mathbf}{\widehat{w}}^*|rest)$
11. $\quad{\mathbf}{v}\gets {\mathbf}{v}-{\mathbf}{\widehat{w}}^* ({\mathbf}{\widehat{w}}^*)^\top{\mathbf}{v}$
12. **end for**
13. $h^*\gets \log p({\mathbf}{\widehat{w}}^*|rest)-\frac{1}{2}{\mathbf}{v}^\top {\mathbf}{v}$
14. $u\sim \text{uniform}(0,1)$
15. **if** $u<\text{exp}(h^*-h)$
16. $\quad{\mathbf}{\widehat{w}}\gets {\mathbf}{\widehat{w}}^*$
17. **end if**

\[alg:mhmc\]

Following [@byrne2013geodesic], we solve this problem using a geodesic flow, shown in Lines 6-11 of Algorithm \[alg:mhmc\]. Line 6 performs an update of ${\mathbf}{v}$ according to the Hamiltonian dynamics, where $\nabla_{{\mathbf}{\widehat{w}}^*}\log p({\mathbf}{\widehat{w}}^*|rest)$ denotes the gradient of $\log p({\mathbf}{\widehat{w}}^*|rest)$ w.r.t. ${\mathbf}{\widehat{w}}^*$. Line 7 performs the transformation of the coordinate system, projecting ${\mathbf}{v}$ onto the tangent space at ${\mathbf}{\widehat{w}}^*$. Lines 8-9 compute the geodesic flow on the unit sphere. Lines 10-11 repeat the updates of Lines 6-7.
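A compact sketch of one RM-HMC update on the unit sphere, following Lines 1-17 of the algorithm above (the function names are ours; `log_p` and `grad_log_p` would be the unnormalized $\log p({\mathbf}{\widehat{w}}_k|\text{rest})$ and its Euclidean gradient):

```python
import math
import random

def _proj(w, v):
    # project v onto the tangent space at w: v <- v - w (w^T v)
    d = sum(wi * vi for wi, vi in zip(w, v))
    return [vi - d * wi for wi, vi in zip(w, v)]

def geodesic_hmc_step(w, log_p, grad_log_p, eps, n_leap, rng):
    """One geodesic HMC update on the unit sphere: draw a tangent momentum,
    run n_leap geodesic leapfrog steps of size eps, then accept/reject."""
    v = _proj(w, [rng.gauss(0.0, 1.0) for _ in w])          # Lines 1-2
    h = log_p(w) - 0.5 * sum(x * x for x in v)              # Line 3
    w_star = list(w)                                        # Line 4
    for _ in range(n_leap):                                 # Lines 5-12
        g = grad_log_p(w_star)                              # Lines 6-7
        v = _proj(w_star, [vi + 0.5 * eps * gi for vi, gi in zip(v, g)])
        nv = math.sqrt(sum(x * x for x in v))
        if nv > 1e-12:                                      # Lines 8-9: geodesic flow
            c, s = math.cos(eps * nv), math.sin(eps * nv)
            w_old = w_star
            w_star = [c * wi + (s / nv) * vi for wi, vi in zip(w_old, v)]
            v = [-nv * s * wi + c * vi for wi, vi in zip(w_old, v)]
        g = grad_log_p(w_star)                              # Lines 10-11
        v = _proj(w_star, [vi + 0.5 * eps * gi for vi, gi in zip(v, g)])
    h_star = log_p(w_star) - 0.5 * sum(x * x for x in v)    # Lines 13-17
    return w_star if math.log(rng.random()) < h_star - h else list(w)
```

With a vMF target $\log p({\mathbf}{w})\propto\kappa\,{\boldsymbol}\mu^\top{\mathbf}{w}$, repeated calls concentrate the samples around ${\boldsymbol}\mu$, and every iterate stays exactly on the sphere because the geodesic update is a rotation.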
These procedures are repeated $T$ times to generate a new sample ${\mathbf}{\widehat{w}}^*$, which then goes through an acceptance/rejection procedure (Lines 3, 13-17).

Experiments
===========

We now present experimental results for both the parametric and nonparametric latent variable models (LVMs) to demonstrate the effectiveness of our methods in encouraging diversity.

Parametric LVM
--------------

We first present the results with parametric LVMs. Using the Bayesian mixture of experts model as an instance, we conducted experiments to verify the effectiveness and efficiency of the two proposed approaches. \[sec:exp\]

#### Datasets

We used two binary-classification datasets. The first is Adult-9 [@platt1999fast], which has $\sim$33K training instances and $\sim$16K testing instances, with a feature dimension of 123. The other is SUN-Building, compiled from the SUN [@xiao2010sun] dataset, which contains $\sim$6K building images and 7K non-building images randomly sampled from other categories; 70% of the images are used for training and the rest for testing. We use SIFT-based [@lowe1999object] bag-of-words features to represent the images, with a dimensionality of 1000.

#### Experimental Settings

To understand the effects of diversification in Bayesian learning, we compare the following methods: (1) the mixture of experts model (MEM) with L2 regularization (MEM-L2), where the L2 regularizer is imposed over the “experts” independently; (2) MEM with mutual angular regularization [@xie2015diversifying] (MEM-MAR), where the “experts” are encouraged to be diverse; (3) Bayesian MEM with a Gaussian prior (BMEM-G), where the “experts” are independently drawn from a Gaussian prior; (4) BMEM with mutual angular Bayesian network priors (type I or II), where the MABN prior favors diverse “experts” (BMEM-MABN-I, BMEM-MABN-II); BMEM-MABN-I is inferred with MCMC sampling and BMEM-MABN-II with variational inference; (5) BMEM with posterior regularization (BMEM-PR).
The key parameters involved in the above methods are: (1) the regularization parameter $\lambda$ in MEM-L2, MEM-MAR and BMEM-PR; (2) the concentration parameter $\kappa$ in the mutual angular priors in BMEM-MABN-(I,II); (3) the concentration parameter $\hat{\kappa}$ in the variational distribution in BMEM-MABN-II. All parameters were tuned using 5-fold cross validation. Besides internal comparison, we also compared with 5 baseline methods, which are among the most widely used classification approaches achieving state-of-the-art performance. They are: (1) kernel support vector machine (KSVM) [@burges1998tutorial]; (2) random forest (RF) [@breiman2001random]; (3) deep neural network (DNN) [@hinton2006reducing]; (4) infinite SVM (ISVM) [@zhu2011infinite]; (5) BMEM with a DPP prior (BMEM-DPP) [@kulesza2012determinantal], in which a Metropolis-Hastings sampling algorithm was adopted[^2]. The kernels in KSVM and BMEM-DPP are both radial basis function kernels. Parameters of the baselines were tuned using 5-fold cross validation.

#### Results

  K              5          10         20         30
  -------------- ---------- ---------- ---------- ----------
  MEM-L2         82.6       83.8       84.3       84.7
  MEM-MAR        85.3       86.4       86.6       87.1
  BMEM-G         83.4       84.2       84.9       84.9
  BMEM-MABN-I    [87.1]{}   [88.3]{}   88.6       [88.9]{}
  BMEM-MABN-II   86.4       87.8       88.1       88.4
  BMEM-PR        86.2       87.9       [88.7]{}   88.1

  : Classification accuracy (%) on Adult-9 dataset[]{data-label="table:retrieval_20news"}

  K              5          10         20         30
  -------------- ---------- ---------- ---------- ----------
  MEM-L2         76.2       78.8       79.4       79.7
  MEM-MAR        81.3       82.1       82.7       82.3
  BMEM-G         76.5       78.6       80.2       80.4
  BMEM-MABN-I    [82.1]{}   83.6       [85.3]{}   [85.2]{}
  BMEM-MABN-II   80.9       82.8       84.9       84.1
  BMEM-PR        81.7       [84.1]{}   83.8       84.9

  : Classification accuracy (%) on SUN-Building dataset[]{data-label="table:retrieval_15scenes"}

  Category ID                C18    C17    C12    C14    C22    C34    C23    C32    C16
  -------------------------- ------ ------ ------ ------ ------ ------ ------ ------ ------
  Num. of Docs               5281   4125   1194   741    611    483    262    208    192
  BMEM-G Accuracy (%)        87.3   88.5   75.7   70.1   71.6   64.2   55.9   57.4   51.3
  BMEM-MABN-I Accuracy (%)   88.1   86.9   74.7   72.2   70.5   67.3   68.9   70.1   65.5
  Relative Improvement (%)   1.0    -1.8   -1.3   2.9    -1.6   4.6    18.9   18.1   21.7

  : Per-category classification accuracy (%) on the RCV1 subset[]{data-label="tb:prec_cat_rcv1"}

                 Adult-9    SUN-Building
  -------------- ---------- --------------
  KSVM           85.2       84.2
  RF             87.7       85.1
  DNN            87.1       84.8
  ISVM           85.8       82.3
  BMEM-DPP       86.5       84.5
  BMEM-MABN-I    [88.9]{}   [85.3]{}
  BMEM-MABN-II   88.4       84.9
  BMEM-PR        88.7       84.9

  : Classification accuracy (%) on two datasets[]{data-label="table:cmp_ap"}

Table \[table:retrieval\_20news\] and \[table:retrieval\_15scenes\] show the classification accuracy under different numbers of “experts” on the Adult-9 and SUN-Building datasets respectively. From these two tables, we observe that: (1) diversification can greatly improve the performance of Bayesian MEM, as seen from the comparison between the diversified BMEM methods and their non-diversified counterparts, such as BMEM-MABN-(I,II) versus BMEM-G, and BMEM-PR versus BMEM-G. (2) Bayesian learning achieves better performance than point estimation, as manifested by comparing BMEM-G with MEM-L2, and BMEM-MABN-(I,II)/BMEM-PR with MEM-MAR. (3) BMEM-MABN-I works better than BMEM-MABN-II and BMEM-PR. The reason is that BMEM-MABN-I, inferred with MCMC, draws samples from the *exact* posterior, while BMEM-MABN-II and BMEM-PR, inferred with variational inference, seek an *approximation* of the posterior.

                 Adult-9   SUN-Building
  -------------- --------- --------------
  BMEM-DPP       8.2       11.7
  BMEM-MABN-I    7.5       10.5
  BMEM-MABN-II   2.9       4.1
  BMEM-PR        3.3       4.9

  : Training time (hours) of different methods with $K=30$[]{data-label="table:runtime"}

Recall that the goals of promoting diversity in LVMs are to reduce the model size without sacrificing modeling power and to effectively capture infrequent patterns. Here we empirically verify whether these two goals are achieved.
Regarding the first goal, we compare the diversified BMEM methods BMEM-MABN-(I,II)/BMEM-PR with their non-diversified counterpart BMEM-G, and check whether the diversified methods with a small number of components $K$ (which entails low computational complexity) can match non-diversified methods with a large $K$. It can be observed that BMEM-MABN-(I,II)/BMEM-PR with a small $K$ achieve accuracy comparable to or even better than BMEM-G with a large $K$. For example, on the Adult-9 dataset (Table \[table:retrieval\_20news\]), with 5 experts BMEM-MABN-I achieves an accuracy of $87.1\%$, which BMEM-G cannot reach even with 30 experts. This corroborates the effectiveness of diversification in reducing model size (hence computational complexity) without compromising performance. To verify the second goal, capturing infrequent patterns, we select from the RCV1 [@lewis2004rcv1] dataset a subset of documents (for binary classification) such that the popularity of the categories (patterns) follows a power-law distribution. Specifically, we choose documents from 9 subcategories (the 1st row of Table \[tb:prec\_cat\_rcv1\]) of the CCAT category as positive instances, and randomly sample 15K documents from non-CCAT categories as negative instances. The 2nd row shows the number of documents in each of the 9 categories. The document frequencies follow a power law: frequent categories (such as C18 and C17) contain many documents, while infrequent categories (such as C32 and C16) contain few. The 3rd and 4th rows show the accuracy achieved by BMEM-G and BMEM-MABN-I on each category. The 5th row shows the relative improvement of BMEM-MABN-I over BMEM-G, defined as $\frac{A_{bmem\_mabn}-A_{bmem\_g}}{A_{bmem\_g}}$, where $A_{bmem\_mabn}$ and $A_{bmem\_g}$ denote the accuracy achieved by BMEM-MABN-I and BMEM-G respectively.
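As a concrete illustration, the relative-improvement metric defined above can be computed as follows (a minimal sketch; the input accuracies are illustrative, not values taken from the tables):

```python
def relative_improvement(a_new: float, a_base: float) -> float:
    """Relative improvement (A_new - A_base) / A_base, as a percentage."""
    return 100.0 * (a_new - a_base) / a_base

# e.g., a jump from 50% to 60% accuracy is a 20% relative improvement,
# whereas the absolute improvement is only 10 percentage points
print(relative_improvement(60.0, 50.0))
```

Note that the relative improvement amplifies gains on low-accuracy (typically infrequent) categories, which is why it is the natural metric for the second goal.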
While achieving accuracy comparable to BMEM-G on the frequent categories, BMEM-MABN-I obtains much better performance on the infrequent ones. For example, the relative improvements on infrequent categories C32 and C16 are 18.1% and 21.7% respectively. This demonstrates that BMEM-MABN-I can effectively capture infrequent patterns. Table \[table:cmp\_ap\] presents the comparison with state-of-the-art classification methods. As one can see, our methods achieve the best performance on both datasets. In particular, BMEM-MABN-(I,II) work better than BMEM-DPP, demonstrating that the proposed mutual angular priors are better than or comparable to the DPP prior at inducing diversity. Table \[table:runtime\] compares the time (hours) taken by each method to reach convergence, with $K$ set to 30. BMEM-MABN-II, inferred with variational inference (VI), is more efficient than BMEM-MABN-I, inferred with MCMC sampling, due to the higher efficiency of VI over MCMC. BMEM-PR is solved with an optimization algorithm, which is also more efficient than the sampling algorithm in BMEM-MABN-I. BMEM-MABN-II and BMEM-PR are more efficient than BMEM-DPP, whose DPP prior precludes the adoption of VI.
  Dataset        \#Examples   Dimension   \#Classes   Description
  -------------- ------------ ----------- ----------- ------------------------------------------------
  Yale           722          1032        -           face images
  Block-Images   1000         36          -           noisy overlays of four binary shapes on a grid
  AR             2600         1598        -           faces with lighting, accessories
  EEG            4400         32          -           EEG recordings on various tasks
  Piano          10000        161         -           DFT of a piano recording
  Reuters        7195         5000        9           Reuters news articles
  TDT            9394         5000        30          NIST Topic Detection and Tracking (TDT) corpus
  20-News        18846        5000        20          documents from 20 newsgroups
  15-Scenes      4485         1000        15          images from 15 scene categories
  Caltech-101    9144         1000        101         images from 101 object categories

  : Dataset statistics[]{data-label="tb:data_stat"}

Nonparametric LVM
-----------------

We evaluate the effectiveness of the IMA prior in alleviating overfitting, reducing model size without sacrificing modeling power, and capturing infrequent patterns, on a wide range of datasets.

#### Datasets

We used ten datasets from different domains, including text, images, sound, and EEG signals. Their statistics are summarized in Table \[tb:data\_stat\]. The first five datasets are represented as raw data without feature extraction. The documents in Reuters[^3], TDT[^4] and 20-News[^5] are represented with bag-of-words vectors, weighted using tf-idf. The images in 15-Scenes [@lazebnik2006beyond] and Caltech-101 [@fei2007learning] are represented with visual bag-of-words vectors based on SIFT [@lowe2004distinctive] features. The train/test split of each dataset is 70%/30%. The results are averaged over five random splits.

#### Experimental Setup

For each dataset, we use IMA-LFM to learn the latent features $\mathcal{W}$ on the training set, then use $\mathcal{W}$ to reconstruct the test data. The reconstruction performance is measured with L2 error (the smaller, the better) and log-likelihood (the larger, the better). Meanwhile, we use $\mathcal{W}$ to infer the representations $\mathcal{Z}$ of the test data and perform data clustering on $\mathcal{Z}$.
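The reconstruction step above can be sketched as follows, assuming a linear latent feature model in which test data is approximated as $\mathcal{Z}\mathcal{W}$ (all names here are hypothetical; in the paper $\mathcal{Z}$ is inferred under the full Bayesian model, whereas ordinary least squares is used below only to keep the sketch self-contained):

```python
import numpy as np

def reconstruction_l2(X_test: np.ndarray, W: np.ndarray) -> float:
    """Given learned features W (K x D), infer test representations Z by
    least squares and return the L2 reconstruction error ||X - Z W||_F."""
    # solve W^T Z^T = X^T for Z (D x K system, one right-hand side per example)
    Z, *_ = np.linalg.lstsq(W.T, X_test.T, rcond=None)
    X_hat = Z.T @ W
    return float(np.linalg.norm(X_test - X_hat))

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 36))   # K=10 latent features, D=36 dimensions
X = rng.standard_normal((5, 36))    # 5 test examples
err = reconstruction_l2(X, W)       # smaller is better
```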
Clustering performance is measured using accuracy and normalized mutual information (NMI) (the higher, the better) [@cai2005document]. We compared with two baselines: Indian buffet process LFM (IBP-LFM) [@griffiths2005infinite] and Pitman-Yor process LFM (PYP-LFM) [@teh2009indian]. Following [@doshi2009accelerated], all datasets are centered to have zero mean. $\kappa$ is set to 1. $\sigma^2$ is set to $0.25\hat{\sigma}$, where $\hat{\sigma}$ is the standard deviation of the data across all dimensions. $\alpha$ is set to 2.

  Dataset                      IBP-LFM       PYP-LFM       IMA-LFM
  ---------------------------- ------------- ------------- -----------------
  Yale                         447$\pm$7     432$\pm$3     [419]{}$\pm$4
  Block ($\times 10^{-2}$)     6.3$\pm$0.4   5.8$\pm$0.1   [4.4]{}$\pm$0.2
  AR                           926$\pm$4     939$\pm$11    [871]{}$\pm$7
  EEG (+$2.1 \times 10^{6}$)   5382$\pm$34   3731$\pm$15   [575]{}$\pm$21
  Piano ($\times 10^{-4}$)     5.3$\pm$0.1   5.7$\pm$0.2   [4.2]{}$\pm$0.2

  : L2 Test Error[]{data-label="tb:l2"}

#### Results

Tables \[tb:l2\] and \[tb:llh\] present the L2 error and likelihood (mean$\pm$standard error) on the test sets of the first five datasets. As can be seen, IMA-LFM achieves much lower L2 error and higher likelihood than IBP-LFM and PYP-LFM. Tables \[tb:txt\_clus\_ac\] and \[tb:txt\_clus\_nmi\] show the clustering accuracy and NMI (mean$\pm$standard error) on the last 5 datasets, which have class labels. IMA-LFM outperforms the two baseline methods by a large margin. We conjecture the reasons are two-fold. First, IMA places a diversity-biased structure over the latent features, which alleviates overfitting. In both IBP-LFM and PYP-LFM, the weight vectors of latent features are drawn independently from a Gaussian distribution, which cannot characterize the relations among features. In contrast, IMA-LFM imposes a structure over the features, encouraging them to be “diverse” and less redundant.
This structural constraint reduces the model complexity of the LFM, thus alleviating overfitting on the training data and achieving better reconstruction of the test data. Second, “diversified” features presumably have higher representational power and are able to capture richer information and subtler aspects of the data, thus achieving a better modeling effect.

  Dataset       IBP-LFM         PYP-LFM          IMA-LFM
  ------------- --------------- ---------------- -------------------
  Yale          -16.4$\pm$0.3   -14.9$\pm$0.4    [-12.7]{}$\pm$0.1
  Block-Image   -2.1$\pm$0.2    -1.8$\pm$0.1     [-1.4]{}$\pm$0.1
  AR            -13.9$\pm$0.3   -14.6$\pm$0.7    [-8.5]{}$\pm$0.4
  EEG           -14133$\pm$54   -12893$\pm$73    [-9735]{}$\pm$32
  Piano         -6.8$\pm$0.6    -6.9$\pm$0.2     [-4.2]{}$\pm$0.5

  : Test Likelihood[]{data-label="tb:llh"}

Table \[tb:num\_feats\] shows the number of features (mean$\pm$standard error) obtained by each model at convergence. Analyzing Tables \[tb:l2\]-\[tb:num\_feats\] together, we see that IMA-LFM uses far fewer features to achieve better performance than the baselines. For instance, on the Reuters dataset, with 294 features IMA-LFM achieves a 48.2% clustering accuracy; in contrast, IBP-LFM uses 60 more features but achieves 2.8% lower accuracy. This suggests that IMA is able to reduce the size of the LFM (the number of features) without sacrificing modeling power. Because of IMA's diversity-promoting mechanism, the learned features bear less redundancy and are highly complementary to each other; each feature captures a significant amount of information, so a small number of such features is sufficient to model the data well. In contrast, the features in IBP-LFM and PYP-LFM are drawn independently from a base distribution, which lacks a mechanism to reduce redundancy. IMA achieves a more significant reduction in feature number on datasets with higher dimensionality, possibly because higher-dimensional data contains more redundancy, giving IMA more room for improvement.
                IBP-LFM        PYP-LFM        IMA-LFM
  ------------- -------------- -------------- ------------------
  Reuters       45.4$\pm$0.3   43.1$\pm$0.4   [48.2]{}$\pm$0.6
  TDT           48.3$\pm$0.7   47.5$\pm$0.3   [53.2]{}$\pm$0.4
  20-News       21.5$\pm$0.1   23.7$\pm$0.2   [25.2]{}$\pm$0.1
  15-Scenes     22.7$\pm$0.2   21.9$\pm$0.4   [25.3]{}$\pm$0.2
  Caltech-101   11.6$\pm$0.4   12.1$\pm$0.1   [14.7]{}$\pm$0.2

  : Clustering Accuracy (%)[]{data-label="tb:txt_clus_ac"}

To verify whether IMA helps to better capture infrequent patterns, we perform a retrieval task on the learned features of the Reuters dataset and measure the precision@100 on each category. For each test document, we retrieve 100 documents from the training set based on Euclidean distance. Precision@100 is defined as $n/100$, where $n$ is the number of retrieved documents that share the same category label with the query document. We treat each category as a pattern and define its frequency as the number of documents belonging to it. A category with more than 1000 documents is labeled as frequent. Table \[tb:pc\_prec\] shows the per-category precision. The last row shows the relative improvement of IMA-LFM over IBP-LFM, defined as $(P_{ima}-P_{ibp})/P_{ibp}$. As can be seen, on the infrequent categories 3-9, IMA-LFM achieves much better precision than IBP-LFM, while on the frequent categories 1 and 2, their performance is comparable. This demonstrates that IMA is able to better capture infrequent patterns without losing modeling power on frequent patterns. IMA promotes diversity among the components, which pushes some of them away from frequent patterns toward infrequent ones, giving the infrequent ones a better chance to be captured. On the 20-News dataset, we visualize the learned features. For a latent feature with parameter vector ${\mathbf}{w}$, we select the top 10 words corresponding to the largest values in ${\mathbf}{w}$. Table \[tb:vis\] shows exemplar features learned by IBP-LFM and IMA-LFM.
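The retrieval protocol described above (retrieve the 100 nearest training documents by Euclidean distance, then count label matches) can be sketched as follows; this is an illustrative implementation, not the paper's code:

```python
import numpy as np

def precision_at_k(query, query_label, train_X, train_y, k=100):
    """Precision@k: fraction of the k nearest training documents
    (by Euclidean distance) sharing the query's category label."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(train_y[nearest] == query_label))

# toy check with two well-separated clusters: all 100 nearest neighbors
# of a query at the origin belong to class 0
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (100, 5)), rng.normal(5.0, 0.1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
print(precision_at_k(np.zeros(5), 0, X, y, k=100))  # 1.0
```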
As can be seen, the features learned by IBP-LFM have much overlap and redundancy and are hard to distinguish, whereas those learned by IMA-LFM are more diverse.

                IBP-LFM        PYP-LFM        IMA-LFM
  ------------- -------------- -------------- ------------------
  Reuters       41.7$\pm$0.5   38.6$\pm$0.2   [45.4]{}$\pm$0.4
  TDT           44.2$\pm$0.1   46.7$\pm$0.3   [49.6]{}$\pm$0.6
  20-News       38.9$\pm$0.8   35.2$\pm$0.5   [44.6]{}$\pm$0.9
  15-Scenes     42.1$\pm$0.7   44.9$\pm$0.8   [47.5]{}$\pm$0.2
  Caltech-101   34.2$\pm$0.4   36.8$\pm$0.4   [40.3]{}$\pm$0.3

  : Normalized Mutual Information (%)[]{data-label="tb:txt_clus_nmi"}

                IBP-LFM       PYP-LFM       IMA-LFM
  ------------- ------------- ------------- ---------------
  Yale          201$\pm$5     220$\pm$8     [165]{}$\pm$4
  Block-Image   [8]{}$\pm$2   9$\pm$4       11$\pm$4
  AR            257$\pm$11    193$\pm$5     [176]{}$\pm$8
  EEG           14$\pm$2      [9]{}$\pm$2   12$\pm$1
  Piano         37$\pm$4      34$\pm$6      [28]{}$\pm$3
  Reuters       354$\pm$12    326$\pm$5     [294]{}$\pm$7
  TDT           297$\pm$6     311$\pm$9     [274]{}$\pm$3
  20-News       442$\pm$8     408$\pm$3     [369]{}$\pm$5
  15-Scenes     192$\pm$3     218$\pm$5     [171]{}$\pm$8
  Caltech-101   127$\pm$7     113$\pm$6     [96]{}$\pm$6

  : Number of features[]{data-label="tb:num_feats"}

  Category                    1      2      3      4      5      6      7     8      9
  --------------------------- ------ ------ ------ ------ ------ ------ ----- ------ ------
  Frequency                   3713   2055   321    298    245    197    142   114    110
  IBP-LFM Precision@100 (%)   73.7   45.1   7.5    8.3    6.9    7.2    2.6   3.8    3.4
  IMA-LFM Precision@100 (%)   81.5   78.4   36.2   37.8   29.1   20.4   8.3   13.8   11.6
  Relative Improvement (%)    11     74     382    355    321    183    219   263    241

  : Per-category precision@100 on the Reuters dataset[]{data-label="tb:pc_prec"}

\[tb:20news\_topics\]

  ------------ ----------- ----------- ------------ ----------- ------------ ----------- -----------
  Feature 1    Feature 2   Feature 3   Feature 4    Feature 1   Feature 2    Feature 3   Feature 4
  government   saddam      nuclear     turkish      president   clinton      olympic     school
  house        white       iraqi       soviet       clinton     government   team        great
  baghdad      clinton     weapons     government   legal       lewinsky     hockey      institute
  weapons      united      united      weapons      years       nuclear      good        program
  tax          time        work        number       state       work         baseball    study
  years        president   spkr        enemy        baghdad     minister     gold        japanese
  white        baghdad     president   good         church      weapons      ball        office
  united       iraq        people      don          white       india        medal       reading
  state        un          baghdad     citizens     united      years        april       level
  bill         lewinsky    state       due          iraqi       white        winter      number
  ------------ ----------- ----------- ------------ ----------- ------------ ----------- -----------

\[tb:vis\]

Conclusions
===========

We study how to promote diversity in Bayesian latent variable models, for the purpose of better capturing infrequent patterns and reducing model size without compromising modeling power. We define a mutual angular Bayesian network (MABN) prior that entails an inductive bias towards components having larger mutual angles, and investigate a posterior regularization approach that directly applies regularizers over the post-data distributions to promote diversity. Approximate algorithms are developed for posterior inference under the MABN priors. Taking the Bayesian mixture of experts model as a case study, experiments demonstrate the effectiveness and efficiency of our methods. We also study how to promote diversity among infinitely many components in Bayesian nonparametric latent variable models. We extend the MABN prior to an infinite mutual angular (IMA) prior that encourages an infinite number of components to have large mutual angles. We apply the IMA prior to the infinite latent feature model, encouraging the latent features therein to be diverse. An efficient posterior inference algorithm is developed. Experiments demonstrate that the IMA prior can effectively capture infrequent patterns, reduce model size without compromising modeling power, and alleviate overfitting.

Appendix A.
Variational Inference for LVMs with Type I MABN Prior {#appendix-a.-variational-inference-for-lvms-with-type-i-mabn-prior .unnumbered} ================================================================= In this section, we present details on how to derive the variational lower bound $$\begin{array}{l} \mathbb{E}_{q({\mathbf}{A})}[\log p(\mathcal{D}|{\mathbf}{A})]+\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{A})]-\mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})] \end{array}$$ where the variational distribution $q({\mathbf}{A})$ is chosen to be $$\begin{array}{l} q({\mathbf}{A})= \prod_{k=1}^{K}q(\tilde{{\mathbf}{a}}_k)q(g_k)=\prod\limits_{k=1}^{K} \textrm{vMF}(\tilde{{\mathbf}{a}}_k|\hat{{\mathbf}{a}}_k,\hat{\kappa}) \textrm{Gamma}(g_k|r_k,s_k) \end{array}$$ Among the three expectation terms, $\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{A})]$ and $\mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})]$ are model-independent and we discuss how to compute them in this section. $\mathbb{E}_{q({\mathbf}{A})}[\log p(\mathcal{D}|{\mathbf}{A})]$ depends on the specific LVM and a concrete example will be given in Appendix B. First we introduce some equalities and inequalities used later on. 
Let ${\mathbf}{a}\sim \mathrm{vMF}({\boldsymbol}\mu, \kappa)$, then\
(I) $\mathbb{E}[{\mathbf}{a}]=A_p(\kappa){\boldsymbol}\mu$ where $A_p(\kappa)=\frac{I_{p/2}(\kappa)}{I_{p/2-1}(\kappa)}$, and $I_{v}(\cdot)$ denotes the modified Bessel function of the first kind at order $v$.\
(II) $\mathrm{cov}({\mathbf}{a})=\frac{h(\kappa)}{\kappa}\mathbf{I}+(1-2\frac{\nu+1}{\kappa}h(\kappa)-h^2(\kappa)){\boldsymbol}\mu {\boldsymbol}\mu^{T}$, where $h(\kappa)=\frac{I_{\nu+1}(\kappa)}{I_{\nu}(\kappa)}$ and $\nu=p/2-1$.\
Please refer to [@Abeywardana] for the derivation of $\mathbb{E}[{\mathbf}{a}]$ and $\mathrm{cov}({\mathbf}{a})$.\
(III) $\mathbb{E}[{\mathbf}{a}^T{\mathbf}{a}]=\mathrm{tr}(\mathrm{cov}({\mathbf}{a}))+A_p^2(\kappa){\boldsymbol}\mu^T{\boldsymbol}\mu$.

#### Proof

$$\begin{array}{l}
\mathbb{E}[\mathrm{tr}({\mathbf}{a}^T{\mathbf}{a})] =\mathbb{E}[\mathrm{tr}({\mathbf}{a}{\mathbf}{a}^T)]= \mathrm{tr}(\mathbb{E}[{\mathbf}{a}{\mathbf}{a}^T])\\
=\mathrm{tr}(\textrm{cov}({\mathbf}{a})+\mathbb{E}[{\mathbf}{a}]\mathbb{E}[{\mathbf}{a}]^T) =\mathrm{tr}(\textrm{cov}({\mathbf}{a}))+\mathrm{tr}(\mathbb{E}[{\mathbf}{a}]\mathbb{E}[{\mathbf}{a}]^T)\\
=\mathrm{tr}(\textrm{cov}({\mathbf}{a}))+A_p^2(\kappa){\boldsymbol}\mu^T{\boldsymbol}\mu\\
\end{array}$$

Let $g\sim \textrm{Gamma}(\alpha,\beta)$, then\
(IV) $\mathbb{E}[g]=\frac{\alpha}{\beta}$\
(V) $\mathbb{E}[\log g]=\psi(\alpha)-\log\beta$\
(VI) $\log \sum_{k=1}^K\exp(x_k)\leq \gamma+\sum_{k=1}^K\log(1+\exp(x_k-\gamma))$ and $\log \int\exp(x)\mathrm{d}x\leq \gamma+\int\log(1+\exp(x-\gamma))\mathrm{d}x$, where $\gamma$ is a variational parameter.
See [@bouchard2007efficient] for the proof.\
(VII) $\log(1+e^{-x})\leq \log(1+e^{-\xi})-\frac{x-\xi}{2}-\frac{1/2-g(\xi)}{2\xi}(x^2-\xi^2)$, $\log(1+e^{x})\leq \log(1+e^{\xi})+\frac{x-\xi}{2}-\frac{1/2-g(\xi)}{2\xi}(x^2-\xi^2)$, where $\xi$ is a variational parameter and $g(\xi)=1/(1+\exp(-\xi))$. See [@bouchard2007efficient] for the proof.\
(VIII) $\int_{\|{\mathbf}{y}\|_2=1} 1\mathrm{d}{\mathbf}{y}=\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}$, which is the surface area[^6] of the $p$-dimensional unit sphere. $\Gamma(\cdot)$ is the Gamma function.\
(IX) $\int_{\|{\mathbf}{y}\|_2=1} {\mathbf}{x}^T{\mathbf}{y}\mathrm{d}{\mathbf}{y}=0$, which follows from the symmetry of the unit sphere.\
(X) $\int_{\|{\mathbf}{y}\|_2=1} ({\mathbf}{x}^T{\mathbf}{y})^2\mathrm{d}{\mathbf}{y}\leq \|{\mathbf}{x}\|_2^2\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}$.

#### Proof

$$\begin{array}{lll}
&&\int_{\|{\mathbf}{y}\|_2=1} ({\mathbf}{x}^T{\mathbf}{y})^2\mathrm{d}{\mathbf}{y}\\
&=&\|{\mathbf}{x}\|_2^2\int_{\|{\mathbf}{y}\|_2=1} ((\frac{{\mathbf}{x}}{\|{\mathbf}{x}\|_2})^T{\mathbf}{y})^2\mathrm{d}{\mathbf}{y}\\
&=&\|{\mathbf}{x}\|_2^2\int_{\|{\mathbf}{y}\|_2=1} ({\mathbf}{e}_1^T{\mathbf}{y})^2\mathrm{d}{\mathbf}{y}\\
&&(\text{according to the symmetry of the unit sphere})\\
&\leq&\|{\mathbf}{x}\|_2^2\int_{\|{\mathbf}{y}\|_2=1} 1\mathrm{d}{\mathbf}{y}\\
&=&\|{\mathbf}{x}\|_2^2\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}
\end{array}$$

Given these equalities and inequalities, we can upper bound $\log Z_i$ (Eq.(\[eq:logz\]) in Section \[sec:vi\]).
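As a numerical sanity check of equality (I) and inequality (VI), the Bessel-function ratio $A_p(\kappa)$ and the log-sum-exp bound can be evaluated directly (a self-contained sketch; the 60-term series truncation is an implementation choice, not part of the derivation):

```python
import math

def bessel_iv(v: float, x: float, terms: int = 60) -> float:
    """Modified Bessel function of the first kind, I_v(x), via its power series."""
    return sum((x / 2.0) ** (2 * m + v) / (math.factorial(m) * math.gamma(m + v + 1))
               for m in range(terms))

def A_p(kappa: float, p: int) -> float:
    """A_p(kappa) = I_{p/2}(kappa) / I_{p/2-1}(kappa), as in equality (I)."""
    return bessel_iv(p / 2.0, kappa) / bessel_iv(p / 2.0 - 1.0, kappa)

# A_p(kappa) lies in (0, 1): E[a] = A_p(kappa) mu shrinks toward 0 as kappa -> 0
# (uniform on the sphere) and toward mu as kappa grows (concentrated vMF)
for kappa in (0.5, 1.0, 10.0):
    assert 0.0 < A_p(kappa, p=5) < 1.0

# Inequality (VI) with an arbitrary variational parameter gamma
xs = [-1.2, 0.3, 2.5]
gamma = 1.0
lhs = math.log(sum(math.exp(x) for x in xs))
rhs = gamma + sum(math.log1p(math.exp(x - gamma)) for x in xs)
assert lhs <= rhs
```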
Given this upper bound, we can derive a lower bound of $\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{A})]$ $$\label{eq:lb} \begin{array}{l} \mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{A})]\\ =\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{\tilde{a}}_1)\prod_{i=2}^{K}p({\mathbf}{\tilde{a}}_i|\{{\mathbf}{\tilde{a}}_j\}_{j=1}^{i-1}) \prod_{i=1}^{K}q(g_i)]\\ =\mathbb{E}_{q({\mathbf}{A})}[\log p({\mathbf}{\tilde{a}}_1)\prod\limits_{i=2}^{K}\frac{\exp(\kappa (-\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j)\cdot {\mathbf}{\tilde{a}}_i)}{Z_i}\prod\limits_{i=1}^{K}\frac{\alpha_2^{\alpha_1}g_i^{\alpha_1-1} e^{-g_i\alpha_2}}{\Gamma(\alpha_1)}]\\ \geq \kappa \mu_{0}^{\mathsf{T}}\mathbb{E}_{q({\mathbf}{\tilde{a}}_1)}[{\mathbf}{\tilde{a}}_1]+ \sum_{i=2}^{K} (-\kappa\sum_{j=1}^{i-1}\mathbb{E}_{q({\mathbf}{\tilde{a}}_j)}[{\mathbf}{\tilde{a}}_j]\cdot \mathbb{E}_{q({\mathbf}{\tilde{a}}_i)}[{\mathbf}{\tilde{a}}_i] \quad-\gamma_i-(\log(1+e^{-\xi_i})+\frac{\xi_i-\gamma_i}{2}\\ \quad+\frac{1/2-g(\xi_i)}{2\xi_i}(\xi_i^2-\gamma_i^2))\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})} +\frac{1/2-g(\xi_i)}{2\xi_i}\kappa^2\mathbb{E}_{q({\mathbf}{A})}[\|\sum_{j=1}^{i-1}{\mathbf}{\tilde{a}}_j\|_2^2]\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})} )+K(\alpha_1\log \alpha_2\\ \quad-\log\Gamma(\alpha_1)) +\sum\limits_{i=1}^{K}(\alpha_1-1)\mathbb{E}_{q(g_i)}[\log g_i ]-\alpha_2 \mathbb{E}_{q(g_i)}[g_i]+\textrm{const}\\ \geq \kappa A_p(\hat{\kappa}) \mu_{0}^{\mathsf{T}}{\mathbf}{\hat{a}}_1+ \sum_{i=2}^{K} (-\kappa A_p(\hat{\kappa})^2\sum_{j=1}^{i-1}{\mathbf}{\hat{a}}_j\cdot {\mathbf}{\hat{a}}_i -\gamma_i-(\log(1+e^{-\xi_i})+\frac{\xi_i-\gamma_i}{2}\\ \quad+\frac{1/2-g(\xi_i)}{2\xi_i}(\xi_i^2-\gamma_i^2))\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})}+\frac{1/2-g(\xi_i)}{2\xi_i}\kappa^2 (A_p^2(\hat{\kappa})\sum_{j=1}^{i-1}\sum_{k\neq j}^{i-1}{\mathbf}{\hat{a}}_j\cdot {\mathbf}{\hat{a}}_k+\sum_{j=1}^{i-1}(\mathrm{tr}(\Lambda_j)\\ \quad+A_p^2(\hat{\kappa}){\mathbf}{\hat{a}}_j^T{\mathbf}{\hat{a}}_j)) 
\frac{2\pi^{(p+1)/2}}{\Gamma(\frac{p+1}{2})} )+K(\alpha_1\log \alpha_2-\log\Gamma(\alpha_1))+\sum_{i=1}^{K}(\alpha_1-1)(\psi(r_i)-\log(s_i))-\alpha_2 \frac{r_i}{s_i}+\textrm{const}\\ \end{array}$$ where $\Lambda_j=\frac{h(\hat{\kappa})}{\hat{\kappa}}\mathbf{I}+(1-2\frac{\nu+1}{\hat{\kappa}}h(\hat{\kappa})-h^2(\hat{\kappa})){\mathbf}{\hat{a}}_j {\mathbf}{\hat{a}}_j^{T}$. The other expectation term $\mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})]$ can be computed as $$\label{eq:entropy} \begin{array}{l} \mathbb{E}_{q({\mathbf}{A})}[\log q({\mathbf}{A})]\\ =\mathbb{E}_{q({\mathbf}{A})}[\log \prod\limits_{k=1}^{K} \textrm{vMF}(\tilde{{\mathbf}{a}}_k|\hat{{\mathbf}{a}}_k,\hat{\kappa}) \textrm{Gamma}(g_k|r_k,s_k)]\\ =\sum\limits_{k=1}^{K} \hat{\kappa}A_p(\hat{\kappa})\|\hat{\alpha}_k\|_2^2+r_k\log s_k-\log\Gamma(r_k)+(r_k-1)(\psi(r_k)-\log(s_k))-r_k \end{array}$$ Appendix B. VI for BMEM with Type I MABN {#appendix-b.-vi-for-bmem-with-type-i-mabn .unnumbered} ======================================== In this section, we discuss how to derive the variational lower bound for BMEM with type I MABN. The latent variables are $\{{\boldsymbol}\beta_k\}_{k=1}^{K}$,$\{{\boldsymbol}\eta_k\}_{k=1}^{K}$,$ \{z_n\}_{n=1}^{N}$. 
The joint probability of all variables is $$\begin{array}{l} p(\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\boldsymbol}\eta_k\}_{k=1}^{K}, \{{\mathbf}{x}_n,y_n, z_n\}_{n=1}^{N})\\ = p(\{y_n\}_{n=1}^N|\{{\mathbf}{x}_n\}_{n=1}^N,\{z_n\}_{n=1}^N,\{{\boldsymbol}\beta_k\}_{k=1}^{K}) p(\{z_n\}_{n=1}^N|\{{\mathbf}{x}_n\}_{n=1}^N,\{{\boldsymbol}\eta_k\}_{k=1}^{K}) p(\{{\boldsymbol}\beta_k\}_{k=1}^{K})p(\{{\boldsymbol}\eta_k\}_{k=1}^{K}) \end{array}$$ To perform variational inference, we employ a mean field variational distribution $$\begin{array}{l} Q=q(\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\boldsymbol}\eta_k\}_{k=1}^{K}, \{z_n\}_{n=1}^{N})\\ =\prod\limits_{k=1}^K q({\boldsymbol}\beta_k) q({\boldsymbol}\eta_k) \prod_{n=1}^N q(z_n)\\ =\prod\limits_{k=1}^{K} \textrm{vMF}(\tilde{{\boldsymbol}\beta}_k|\hat{{\boldsymbol}\beta}_k,\hat{\kappa}) \textrm{Gamma}(g_k|r_k,s_k) \textrm{vMF}(\tilde{{\boldsymbol}\eta}_k|\hat{{\boldsymbol}\eta}_k,\hat{\kappa}) \textrm{Gamma}(h_k|t_k,u_k) \prod\limits_{n=1}^{N}q(z_n|{\boldsymbol}\phi_n) \end{array}$$ Accordingly, the variational lower bound is $$\begin{array}{l} \mathbb{E}_{Q}[\log p(\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\boldsymbol}\eta_k\}_{k=1}^{K}, \{{\mathbf}{x}_n,y_n, z_n\}_{n=1}^{N})] -\mathbb{E}_{Q}[\log q(\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\boldsymbol}\eta_k\}_{k=1}^{K}, \{z_n\}_{n=1}^{N})]\\ =\mathbb{E}_{Q}[\log p(\{y_n\}_{n=1}^N|\{{\mathbf}{x}_n\}_{n=1}^N,\{z_n\}_{n=1}^N,\{{\boldsymbol}\beta_k\}_{k=1}^{K})] + \mathbb{E}_{Q}[\log p(\{z_n\}_{n=1}^N|\{{\mathbf}{x}_n\}_{n=1}^N,\{{\boldsymbol}\eta_k\}_{k=1}^{K})]\\ +\mathbb{E}_{Q}[\log p(\{{\boldsymbol}\beta_k\}_{k=1}^{K})]) +\mathbb{E}_{Q}[\log p(\{{\boldsymbol}\eta_k\}_{k=1}^{K})]) -\mathbb{E}_{Q}[\log q(\{{\boldsymbol}\beta_k\}_{k=1}^{K})]-\mathbb{E}_{Q}[\log q(\{{\boldsymbol}\eta_k\}_{k=1}^{K})]\\ -\mathbb{E}_{Q}[\log q(\{z_n\}_{n=1}^{N})]\\ \end{array}$$ where $\mathbb{E}_{Q}[\log p(\{{\boldsymbol}\beta_k\}_{k=1}^{K})]$ and $\mathbb{E}_{Q}[\log 
p(\{{\boldsymbol}\eta_k\}_{k=1}^{K})]$ can be lower bounded in a similar way as that in Eq.(\[eq:lb\]). $\mathbb{E}_{Q}[\log q(\{{\boldsymbol}\beta_k\}_{k=1}^{K})]$ and $\mathbb{E}_{Q}[\log q(\{{\boldsymbol}\eta_k\}_{k=1}^{K})]$ can be computed in a similar manner as that in Eq.(\[eq:entropy\]). Next we discuss how to compute the remaining expectation terms. Compute $\mathbb{E}_{Q}[\log p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})]$ {#compute-mathbbe_qlog-pz_n_n1nboldsymboleta_k_k1kmathbfx_n_n1n .unnumbered} ------------------------------------------------------------------------------------------------------------------ First, $p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})$ is defined as $$\begin{array}{l} p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})\\ =\prod\limits_{n=1}^{N}p(z_n|{\mathbf}{x}_n,\{{\boldsymbol}\eta_k\}_{k=1}^{K})\\ =\prod\limits_{n=1}^{N}\frac{\prod\limits_{k=1}^{K}[\exp({\boldsymbol}\eta_k^{\mathsf{T}}{\mathbf}{x}_n)]^{z_{nk}}}{\sum_{j=1}^K\exp({\boldsymbol}\eta_j^{\mathsf{T}}{\mathbf}{x}_n)} \end{array}$$ $\log p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})$ can be lower bounded as $$\begin{array}{l} \log p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})\\ = \sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}z_{nk}{\boldsymbol}\eta_k^{\mathsf{T}}{\mathbf}{x}_n -\log(\sum_{j=1}^K\exp({\boldsymbol}\eta_j^{\mathsf{T}}{\mathbf}{x}_n))\\ = \sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}z_{nk}h_k\tilde{{\boldsymbol}\eta}_k^{\mathsf{T}}{\mathbf}{x}_n -\log(\sum_{j=1}^K\exp({\boldsymbol}\eta_j^{\mathsf{T}}{\mathbf}{x}_n))\\ \geq \sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}z_{nk}h_k\tilde{{\boldsymbol}\eta}_k^{\mathsf{T}}{\mathbf}{x}_n -c_n-\sum_{j=1}^K \quad \log(1+\exp({\boldsymbol}\eta_j^{\mathsf{T}}{\mathbf}{x}_n-c_n)) \text{(Using Inequality VI)}\\ \geq 
\sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}z_{nk}h_k\tilde{{\boldsymbol}\eta}_k^{\mathsf{T}}{\mathbf}{x}_n -c_n -\sum_{j=1}^K[\log(1+e^{-d_{nj}})-\frac{c_n-{\boldsymbol}\eta_j^{\mathsf{T}}{\mathbf}{x}_n-d_{nj}}{2}-\frac{1/2-g(d_{nj})}{2d_{nj}}(({\boldsymbol}\eta_j^{\mathsf{T}}{\mathbf}{x}_n-c_n)^2-d_{nj}^2)]\\ \quad\text{(Using Inequality VII)}\\ \end{array}$$ The expectation of $\log p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})$ can be lower bounded as $$\begin{array}{l} \mathbb{E}[\log p(\{z_n\}_{n=1}^{N}|\{{\boldsymbol}\eta_k\}_{k=1}^{K},\{{\mathbf}{x}_n\}_{n=1}^{N})]\\ = A_p(\hat{\kappa})\sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}\phi_{nk}\frac{t_k}{u_k}\hat{{\boldsymbol}\eta}_k^{\mathsf{T}}{\mathbf}{x}_n -c_n -\sum_{j=1}^K[\log(1+e^{-d_{nj}}) -\frac{c_n-A_p(\hat{\kappa})\frac{t_j}{u_j}\hat{{\boldsymbol}\eta}_j^{\mathsf{T}}{\mathbf}{x}_n-d_{nj}}{2}\\ -\frac{1/2-g(d_{nj})}{2d_{nj}}( \frac{t_j+t_j^2}{u_j^2}\mathbb{E}[\tilde{{\boldsymbol}\eta}_{j}^{\mathsf{T}}{\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\tilde{{\boldsymbol}\eta}_{j}] -2c_nA_p(\hat{\kappa})\frac{t_j}{u_j}\hat{{\boldsymbol}\eta}_j^{\mathsf{T}}{\mathbf}{x}_n+c_n^2 -d_{nj}^2)]\\ \end{array}$$ where $$\label{eq:eta} \begin{array}{l} \mathbb{E}[\tilde{{\boldsymbol}\eta}_{k}^{\mathsf{T}}{\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\tilde{{\boldsymbol}\eta}_{k}]\\ =\mathbb{E}[\mathrm{tr}(\tilde{{\boldsymbol}\eta}_{k}^{\mathsf{T}}{\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\tilde{{\boldsymbol}\eta}_{k})]\\ =\mathbb{E}[\mathrm{tr}({\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\tilde{{\boldsymbol}\eta}_{k}\tilde{{\boldsymbol}\eta}_{k}^{\mathsf{T}})]\\ =\mathrm{tr}({\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\mathbb{E}[\tilde{{\boldsymbol}\eta}_{k}\tilde{{\boldsymbol}\eta}_{k}^{\mathsf{T}}])\\ =\mathrm{tr}({\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n(\mathbb{E}[\tilde{{\boldsymbol}\eta}_{k}]\mathbb{E}[\tilde{{\boldsymbol}\eta}_{k}]^T+\textrm{cov}(\tilde{{\boldsymbol}\eta}_{k})))\\ \end{array}$$ 
Compute $\mathbb{E}[\log p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N})]$ {#compute-mathbbelog-py_n_n1nboldsymbolbeta_k_k1kmathbfz_n_n1n .unnumbered} --------------------------------------------------------------------------------------------------------------- $p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N})$ is defined as $$\begin{array}{l} p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N})\\ =\prod\limits_{n=1}^{N}p(y_n|{\mathbf}{z}_n,\{{\boldsymbol}\beta_k\}_{k=1}^{K})\\ =\prod\limits_{n=1}^{N}\frac{1}{\prod\limits_{k=1}^{K}[1+\exp(-(2y_n-1){\boldsymbol}\beta_{k}^{\mathsf{T}}{\mathbf}{x}_n)]^{z_{nk}}} \end{array}$$ $\log p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N})$ can be lower bounded by $$\begin{array}{l} \log p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N}) \\=-\sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}z_{nk}\log(1+\exp(-(2y_n-1){\boldsymbol}\beta_{k}^{\mathsf{T}}{\mathbf}{x}_n)) \\\geq\sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}z_{nk} [-\log(1+e^{-e_{nk}})+\frac{(2y_n-1){\boldsymbol}\beta_{k}^{\mathsf{T}}{\mathbf}{x}_n-e_{nk}}{2}+\frac{1/2-g(e_{nk})}{2e_{nk}}(({\boldsymbol}\beta_{k}^{\mathsf{T}}{\mathbf}{x}_n)^2-e_{nk}^2) ] \end{array}$$ $\mathbb{E}[\log p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N})]$ can be lower bounded by $$\begin{array}{l} \mathbb{E}[\log p(\{y_n\}_{n=1}^{N}|\{{\boldsymbol}\beta_k\}_{k=1}^{K},\{{\mathbf}{z}_n\}_{n=1}^{N})] \\\geq\sum\limits_{n=1}^{N}\sum\limits_{k=1}^{K}\phi_{nk} [-\log(1+e^{-e_{nk}})+\frac{A_p(\hat{\kappa})\frac{r_k}{s_k}\hat{{\boldsymbol}\beta}_{k}^{\mathsf{T}}{\mathbf}{x}_n-e_{nk}}{2}+\frac{1/2-\sigma(e_{nk})}{2e_{nk}}(\frac{r_k+r_k^2}{s_k^2}\mathbb{E}[\tilde{{\boldsymbol}\beta}_{k}^{\mathsf{T}}{\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\tilde{{\boldsymbol}\beta}_{k}]-e^2_{nk}) ]\\ \quad\text{(Using 
Inequality VII)}\\
\end{array}$$ where $\mathbb{E}[\tilde{{\boldsymbol}\beta}_{k}^{\mathsf{T}}{\mathbf}{x}_n{\mathbf}{x}^{\mathsf{T}}_n\tilde{{\boldsymbol}\beta}_{k}]$ can be computed in a similar way to Eq.(\[eq:eta\]).

Compute $\mathbb{E}[\log q(z_i)]$ {#compute-mathbbelog-qz_i .unnumbered}
---------------------------------

$$\mathbb{E}[\log q(z_i)]=\sum\limits_{k=1}^{K}\phi_{ik}\log \phi_{ik}$$

Putting everything together, we obtain a tractable lower bound of the variational lower bound and learn all the parameters by optimizing it via coordinate ascent: in each iteration, we pick a parameter $x$, fix all other parameters, and solve the resulting sub-problem over $x$. For some parameters, the optimal solution of the sub-problem is available in closed form; otherwise, we optimize $x$ with a gradient ascent method. This process iterates until convergence. We omit the detailed derivation since it only involves basic algebra and calculus.

Appendix C. Parameter Learning in the Metropolis-Hastings Algorithm {#appendix-c.-parameter-learning-in-the-metropolis-hastings-algorithm .unnumbered}
===================================================================

The mutual angular Bayesian network (MABN) prior is parameterized by several deterministic parameters, including $\kappa$, ${\boldsymbol}\mu_0$, $\alpha_1$, and $\alpha_2$. Among them, we tune $\kappa$ manually via cross validation and learn the others via an Expectation Maximization (EM) framework. Let ${\mathbf}{x}$ denote the observed data, ${\mathbf}{z}$ all random variables, and ${\boldsymbol}\theta$ the deterministic parameters $\{{\boldsymbol}\mu_0,\alpha_1,\alpha_2\}$. EM aims to learn ${\boldsymbol}\theta$ by maximizing the log-likelihood $p({\mathbf}{x};{\boldsymbol}\theta)$ of the data. It iteratively performs two steps until convergence.
In the E step, the posterior $p({\mathbf}{z}|{\mathbf}{x})$ is inferred with the parameters ${\boldsymbol}\theta$ fixed. In the M step, ${\boldsymbol}\theta$ is learned by optimizing a lower bound of the log-likelihood, $\mathbb{E}_{p({\mathbf}{z}|{\mathbf}{x})}[\log p({\mathbf}{x},{\mathbf}{z};{\boldsymbol}\theta)]$, where the expectation is computed w.r.t the posterior $p({\mathbf}{z}|{\mathbf}{x})$ inferred in the E step. In our problem, we use the Metropolis-Hastings (MH) algorithm to infer the posterior $p({\mathbf}{z}|{\mathbf}{x})$ in the E step, and learn the parameters $\{{\boldsymbol}\mu_0,\alpha_1,\alpha_2\}$ in the M step. The parameters $\hat{\kappa}$ and $\sigma$ in the proposal distributions are set manually. Appendix D. Algorithm for Posterior Regularization of BMEM {#appendix-d.-algorithm-for-posterior-regularization-of-bmem .unnumbered} ========================================================== In this section, we present the algorithmic details of posterior regularization of BMEM. Recall that the problem is $$\label{eq:pr_obj1} \begin{array}{ll} \textrm{sup}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}&\mathbb{E}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}[\log p(\{y_i\}_{i=1}^{N},{\mathbf}{z}|{\mathbf}{B},{\mathbf}{H})\pi({\mathbf}{B},{\mathbf}{H})]-\mathbb{E}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}[\log q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})]\\ &+\lambda_1\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\beta}_k)}[\tilde{{\boldsymbol}\beta}_k]\}_{k=1}^K)+\lambda_2\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\eta}_k)}[\tilde{{\boldsymbol}\eta}_k]\}_{k=1}^K) \end{array}$$ where ${\mathbf}{B}=\{{\boldsymbol}\beta_k\}_{k=1}^{K}$, ${\mathbf}{H}=\{{\boldsymbol}\eta_k\}_{k=1}^{K}$ and ${\mathbf}{z}=\{z_i\}_{i=1}^{N}$ are latent variables and the post-data distribution over them is defined as $q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})=q({\mathbf}{B})q({\mathbf}{H})q({\mathbf}{z})$.
For computational tractability, we define $q({\mathbf}{B})$ and $q({\mathbf}{H})$ to be: $q({\mathbf}{B})=\prod_{k=1}^{K}q(\tilde{{\boldsymbol}\beta}_k)q(g_k)$ and $q({\mathbf}{H})=\prod_{k=1}^{K}q(\tilde{{\boldsymbol}\eta}_k)q(h_k)$ where $q(\tilde{{\boldsymbol}\beta}_k)$, $q(\tilde{{\boldsymbol}\eta}_k)$ are von-Mises Fisher distributions and $q(g_k)$, $q(h_k)$ are gamma distributions, and define $q({\mathbf}{z})$ to be multinomial distributions: $q({\mathbf}{z})=\prod_{i=1}^{N}q(z_i|{\boldsymbol}\phi_i)$ where ${\boldsymbol}\phi_i$ is a multinomial vector. The priors over ${\mathbf}{B}$ and ${\mathbf}{H}$ are specified to be: $\pi({\mathbf}{B})=\prod_{k=1}^{K}p(\tilde{{\boldsymbol}\beta}_k)p(g_k)$ and $\pi({\mathbf}{H})=\prod_{k=1}^{K}p(\tilde{{\boldsymbol}\eta}_k)p(h_k)$ where $p(\tilde{{\boldsymbol}\beta}_k)$, $p(\tilde{{\boldsymbol}\eta}_k)$ are von-Mises Fisher distributions and $p(g_k)$, $p(h_k)$ are gamma distributions. The objective in Eq.(\[eq:pr\_obj1\]) can be further written as $$\begin{array}{ll} \mathbb{E}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}[\log p(\{y_i\}_{i=1}^{N},{\mathbf}{z}|{\mathbf}{B},{\mathbf}{H})\pi({\mathbf}{B},{\mathbf}{H})] -\mathbb{E}_{q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})}[\log q({\mathbf}{B},{\mathbf}{H},{\mathbf}{z})]+\lambda_1\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\beta}_k)}[\tilde{{\boldsymbol}\beta}_k]\}_{k=1}^K)\\ +\lambda_2\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\eta}_k)}[\tilde{{\boldsymbol}\eta}_k]\}_{k=1}^K)\\ =\mathbb{E}_{q({\mathbf}{B},{\mathbf}{z})}[\log p(\{y_i\}_{i=1}^{N}|{\mathbf}{z},{\mathbf}{B})]+\mathbb{E}_{q({\mathbf}{H},{\mathbf}{z})}[\log p({\mathbf}{z}|{\mathbf}{H})] +\mathbb{E}_{q({\mathbf}{H})}[\log \pi({\mathbf}{H})]+\mathbb{E}_{q({\mathbf}{B})}[\log \pi({\mathbf}{B})]\\ -\mathbb{E}_{q({\mathbf}{B})}[\log q({\mathbf}{B})] -\mathbb{E}_{q({\mathbf}{H})}[\log q({\mathbf}{H})]-\mathbb{E}_{q({\mathbf}{z})}[\log 
q({\mathbf}{z})]+\lambda_1\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\beta}_k)}[\tilde{{\boldsymbol}\beta}_k]\}_{k=1}^K) +\lambda_2\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\eta}_k)}[\tilde{{\boldsymbol}\eta}_k]\}_{k=1}^K) \end{array}$$ Among these expectation terms, $\mathbb{E}_{q({\mathbf}{B},{\mathbf}{z})}[\log p(\{y_i\}_{i=1}^{N}|{\mathbf}{z},{\mathbf}{B})]$ can be computed via Eq.(15-17), and $\mathbb{E}_{q({\mathbf}{H},{\mathbf}{z})}[\log p({\mathbf}{z}|{\mathbf}{H})]$ can be computed via Eq.(11-14). $\mathbb{E}_{q({\mathbf}{H})}[\log \pi({\mathbf}{H})]$, $\mathbb{E}_{q({\mathbf}{B})}[\log \pi({\mathbf}{B})]$, $\mathbb{E}_{q({\mathbf}{B})}[\log q({\mathbf}{B})]$, and $\mathbb{E}_{q({\mathbf}{H})}[\log q({\mathbf}{H})]$ can be computed in a way similar to Eq.(7). $\mathbb{E}_{q({\mathbf}{z})}[\log q({\mathbf}{z})]$ can be computed via Eq.(18). Given all these expectations, we obtain an analytical expression of the objective in Eq.(19) and learn the parameters by optimizing this objective. Regarding how to optimize the mutual angular regularizers $\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\beta}_k)}[\tilde{{\boldsymbol}\beta}_k]\}_{k=1}^K)$ and $\Omega(\{\mathbb{E}_{q(\tilde{{\boldsymbol}\eta}_k)}[\tilde{{\boldsymbol}\eta}_k]\}_{k=1}^K)$, please refer to [@xie2015diversifying] for details. [^1]: Note that there do exist some examples of posterior regularization that have nice sampling-based algorithms, such as the max-margin topic models with a Gibbs classifier [@zhu2014gibbs]. [^2]: Neither variational inference nor Gibbs sampling [@affandi2013approximate] is applicable. [^3]: <http://www.daviddlewis.com/resources/testcollections/reuters21578/> [^4]: <http://www.itl.nist.gov/iad/mig/tests/tdt/2004/> [^5]: <http://qwone.com/~jason/20Newsgroups/> [^6]: <https://en.wikipedia.org/wiki/N-sphere>
--- abstract: 'Experimental studies of magnetoresistance in high-mobility wide quantum wells reveal oscillations which appear with an increase in temperature to 10 K and whose period is close to that of Shubnikov-de Haas oscillations. The observed phenomenon is identified as magneto-intersubband oscillations caused by the scattering of electrons between two occupied subbands and the third subband, which becomes occupied as a result of thermal activation. These small-period oscillations are less sensitive to thermal suppression than the large-period magneto-intersubband oscillations caused by the scattering between the first and the second subbands. A theoretical study, based on consideration of electron scattering near the edge of the third subband, gives a reasonable explanation of our experimental findings.' author: - 'S. Wiedmann$^{1,2}$, G. M. Gusev$^3$, O. E. Raichev$^4$, A. K. Bakarov$^5$, and J. C. Portal$^{1,2,6}$' title: Thermally activated intersubband scattering and oscillating magnetoresistance in quantum wells --- Introduction ============ Magnetoresistance oscillations caused by Landau quantization provide important information about fundamental properties of electron systems in solids. Studies of the Shubnikov-de Haas (SdH) oscillations due to the sequential passage of Landau levels through the Fermi level [@1] allow one to investigate the shape of the Fermi surface as well as the scattering processes leading to Landau level broadening. Apart from SdH oscillations, there exist oscillating phenomena which are not related to the position of Landau levels with respect to the Fermi level and, therefore, are less sensitive to temperature. One of the most important examples of such phenomena is the magneto-intersubband (MIS) oscillations [@2] observed in two-dimensional (2D) electron systems with two or more populated dimensional-quantization subbands, which are realized in single, double, and triple quantum wells [@3; @4; @5; @6; @7; @8; @9; @10].
Recently, magneto-oscillations driven by intersubband transitions have also been reported for 2D electrons on liquid helium [@11]. The peculiar magnetoresistance properties of 2D electron systems are caused by the possibility of elastic (impurity-assisted) scattering of electrons between the subbands. The MIS oscillations occur because of a periodic modulation of the probability of intersubband transitions by the magnetic field. The maxima of these oscillations correspond to the condition that the subband splitting energy $\Delta$ is a multiple of the cyclotron energy $\hbar \omega_c$, so the Landau levels belonging to different subbands are aligned. Since MIS oscillations survive an increase in temperature, they are used to study electron scattering mechanisms at elevated temperatures, when SdH oscillations completely disappear in the region of weak magnetic fields [@6; @8; @9; @10]. Despite the fact that MIS oscillations are one of the most fundamental manifestations of quantum magnetotransport, their properties have not been sufficiently studied. In particular, the case when one of the subbands is placed close to the Fermi energy and its filling by electrons is very small deserves closer attention. Theoretical studies [@12] confirmed that MIS oscillations can exist at such small fillings of the upper subband, but a detailed experimental investigation of this interesting situation is still missing. From the theoretical point of view, it is important to consider scattering mechanisms of electrons near the edge of the weakly populated subband and a possibility of probing these mechanisms by magnetoresistance measurements. We have studied magnetoresistance in symmetric GaAs wide quantum wells (WQWs) with a high-mobility 2D electron gas. Owing to a high electron density and a large well width, these systems form a bilayer configuration due to charge redistribution, in which two quantum wells near the interfaces are separated by an electrostatic potential barrier (see Fig. 1).
The presence of the two occupied subbands is confirmed by the observation of MIS oscillations in magnetoresistance. Apart from MIS and SdH oscillations, we observe unusual oscillations which appear when the temperature is raised to 10 K and persist up to $T=40$ K in the region of magnetic fields from 0.35 to 2 T. The period of these oscillations is slightly smaller than the period of SdH oscillations. The dependence of resistivity on magnetic field and temperature allows us to treat these small-period oscillations as the MIS oscillations caused by electron scattering between the two lowest subbands and the third subband, which is placed slightly above the Fermi energy $\varepsilon_F$ (Fig. 1) and becomes populated as a result of thermal activation. This conclusion is supported by a theoretical consideration of magnetoresistance, which also uncovers the scattering mechanism responsible for Landau level broadening and thereby explains the unusually low sensitivity of the small-period MIS oscillations to thermal suppression at elevated temperatures. The calculated magnetoresistance is in good agreement with the experimental results. ![\[fig1\](Color online) Calculated confinement potential profile of our wide quantum wells and wave functions of electrons for the first three subbands. Positions of the subbands (straight dashed lines) and the Fermi level (straight solid line) are schematically shown.](tfig1.eps){width="7.cm"} The paper is organized as follows. Section II presents experimental details and results. Section III gives a theoretical calculation of magnetoresistance and its application to the analysis and discussion of the experimental data. Concluding remarks are given in the last section. The Appendix is devoted to the calculation of the quantum lifetime of electrons in the upper subband.
Magnetoresistance measurements ============================== We have studied wide GaAs quantum wells ($w$=45 nm) with an electron density of $n_{s} \simeq 9.2 \times 10^{11}$ cm$^{-2}$ and a mobility of $\mu~\simeq 1.9 \times 10^{6}$ cm$^{2}$/V s at low temperatures. To achieve both high density and high mobility, the samples have been produced according to Ref. [@13], where the barriers surrounding the quantum well are formed by short-period AlAs/GaAs superlattices. Samples in both Hall bar ($l\times w$= 250 $\mu$m $\times$ 50 $\mu$m) and van der Pauw (size 3 mm $\times$ 3 mm) geometries have been studied. The two lowest subbands are separated by the energy $\Delta_{12}=1.40$ meV, extracted from MIS oscillation periodicity [@6]. This value is in agreement with a self-consistent numerical calculation of the electron energy spectrum and wave functions (Fig. 1). The small energy separation and the symmetry of the wave functions for the two lowest subbands show that the corresponding (symmetric and antisymmetric) states are formed as a result of tunnel hybridization of the states in two quantum wells near the interfaces. Measurements of the longitudinal resistance $R_{xx}$ have been carried out in a perpendicular magnetic field $B$ up to 2.5 T in a cryostat with a variable temperature insert in the temperature range from 1.4 to 40 K. As confirmed by theoretical estimates, see Eq. (8) and Fig. \[fig6\] below, the magnetoresistance measurements in the fields below 0.5 T are performed in the regime of overlapping Landau levels. ![\[fig2\](Color online) (a) Measured longitudinal resistance of a two-subband system at different temperatures from 4.2 to 25 K. The vertical dashed lines indicate positions of the three main MIS peaks. As temperature increases, the SdH oscillations are replaced by a new kind of oscillations which have a smaller period and survive at high temperatures. 
We identify these oscillations as MIS oscillations associated with the third subband.](tfig2.eps){width="9.4cm"} The main results for the magnetoresistance are summarized in Fig. 2. Several groups of quantum oscillations, periodic with the inverse magnetic field, are observed. At low temperatures we see both SdH (small-period) and MIS (large-period) oscillations, the latter being caused by electron scattering between the two lowest subbands. SdH oscillations are visible at 4.2 K in the region of magnetic fields above 0.6 T. In this region, SdH oscillations are superimposed on the first MIS peak whose maximum is placed at $B \simeq 0.8$ T corresponding to the alignment condition $\hbar \omega_c= \Delta_{12}$. With increasing temperature, the SdH oscillations are rapidly damped and disappear. The disappearance of SdH oscillations for $T> 4.2$ K in the range of magnetic fields studied in the present work can be easily confirmed by applying the well-known Lifshitz-Kosevich formula containing the specific thermal damping factor $(2 \pi^2 T/\hbar \omega_c)/\sinh(2 \pi^2 T/\hbar \omega_c)$. However, another oscillating pattern develops at $T \sim 10$ K and persists even at $T > 25$ K, when large-period MIS oscillations are strongly damped. The period of these high-temperature oscillations is close to the SdH oscillation period but does not coincide with it. For example, the SdH peak at $B \simeq 0.8$ T appears to be in phase with the high-temperature oscillations (arrow P in Fig. \[fig2\]), while the SdH peak at $B \simeq 1.2$ T stays in antiphase (arrow AP in Fig. \[fig2\]). At $B \sim 1$ T, the high-temperature oscillations replace SdH oscillations in the interval from $T=8$ to $T=10$ K. As seen from the plots for 10, 15, and 20 K, the high-temperature oscillations also exist in the region of lower magnetic fields corresponding to the second MIS peak.
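The Lifshitz-Kosevich damping factor quoted above is easy to evaluate numerically. A minimal sketch (in the formula $T$ stands for an energy, so $k_B$ is written explicitly here; the standard GaAs effective mass $m^*=0.067\,m_e$ is an assumption, as the text does not quote it):

```python
import math

# Physical constants (SI). The GaAs effective mass 0.067*m_e is an
# assumption, not quoted explicitly in the text.
HBAR = 1.054571817e-34            # J s
KB = 1.380649e-23                 # J/K
QE = 1.602176634e-19              # C
M_EFF = 0.067 * 9.1093837015e-31  # kg

def lk_thermal_factor(T, B):
    """Lifshitz-Kosevich damping X/sinh(X), X = 2*pi^2*kB*T/(hbar*omega_c)."""
    omega_c = QE * B / M_EFF      # cyclotron frequency
    X = 2.0 * math.pi**2 * KB * T / (HBAR * omega_c)
    return X / math.sinh(X)
```

At $B=0.8$ T this factor is about 6% at 4.2 K and falls below 0.1% at 10 K, consistent with the rapid disappearance of the SdH oscillations described above.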
![\[fig3\](Color online) Temperature dependence of the amplitudes of small-period (squares) and large-period (circles) MIS oscillations. The solid lines are the results of theoretical calculation (see Sec. III for details).](tfig3.eps){width="9.cm"} Therefore, the nature of the small-period high-temperature oscillations is obviously different from the nature of SdH oscillations. The origin of these oscillations can be understood if the third subband in our quantum well is included into consideration. Indeed, numerical calculations of WQW energy spectrum show that at $T=0$ the third subband should be weakly populated at the given electron density. The fact that we do not observe the effects associated with the third subband at low temperatures indicates a possible error within a few meV in theoretical determination of the position of this subband. If we assume that the third subband is placed slightly above the Fermi energy ($\varepsilon_3 > \varepsilon_F$) and attribute the small-period oscillations to MIS oscillations owing to electron transitions between this subband and the subbands 1 and 2, the observed properties of these oscillations become clear. First, the periodicity in this case is determined by the large splitting energies $\Delta_{13}= \varepsilon_3-\varepsilon_1$ and $\Delta_{23}=\varepsilon_3-\varepsilon_2$, which also explains a slightly smaller period of these oscillations compared to SdH oscillations. Next, the thermal activation behavior is explained by the increasing number of electrons being able to participate in the intersubband transitions with increasing temperature. Finally, the persistence of the oscillations at high temperatures follows from their nature: a suppression of MIS oscillations with temperature is not related to thermal smearing of the Fermi surface but is caused by thermal broadening of Landau levels. The problem of temperature behavior deserves a more detailed discussion. 
Figure 3 shows the temperature dependence of the amplitudes (peak-to-peak values) for the small-period MIS oscillations superimposed on the maxima of the first and second large-period MIS peaks, $B=0.82$ and $B=0.41$ T. For comparison, a similar dependence for the amplitude of the second MIS peak is shown. The plots demonstrate a notable difference in behavior for these two kinds of MIS oscillations. While the large-period MIS oscillations are monotonically suppressed by temperature, the small-period oscillations are characterized by a non-monotonic dependence with a maximum around 10-15 K and a slower decrease with temperature. In the region of high temperatures, all the experimental plots apparently show that the logarithm of the amplitude decreases linearly with $T^2$. The slope of this decrease scales linearly with $B^{-1}$, as can be seen by comparison of the small-period MIS oscillation amplitudes at 0.41 and 0.82 T. These data suggest that the suppression of both small-period and large-period MIS oscillations is governed by the same mechanism, thermal broadening of Landau levels due to the enhancement of electron-electron scattering with temperature. Such an effect is described in terms of the temperature-dependent quantum lifetime of electrons, $\tau(T)$, entering the Dingle factor $d$, which determines the oscillation amplitude: $$\begin{aligned} d=\exp[-\pi/\omega_c \tau(T)], ~~~~~~ \nonumber \\ \frac{1}{\tau(T)}=\frac{1}{\tau(0)} + \frac{1}{\tau^{ee}(T)},~~\frac{1}{\tau^{ee}(T)}= \lambda \frac{T^2}{\hbar \varepsilon_F}, %1\end{aligned}$$ where $\tau(0)$ is the quantum lifetime due to elastic scattering. The term $1/\tau^{ee}$, where $\lambda$ is a numerical constant on the order of unity, describes the partial contribution of electron-electron scattering, which in high-mobility samples dominates starting from $T \simeq 10-15$ K. The reliability of Eq. (1) has been proved in numerous magnetoresistance experiments; see Refs.
[@6; @8; @10; @14] and references therein, and constant $\lambda$ has been calculated for similar experimental conditions, see Refs. [@15; @16]. It is also worth noting that the Dingle factor in Eq. (1) can be different for MIS and SdH oscillation amplitudes. First, owing to the specific energy dependence of the electron-electron scattering time, the electron-electron interaction does not suppress SdH oscillations [@17; @18; @19]. Next, MIS oscillations are not sensitive to inhomogeneity of the electron density in contrast to SdH oscillations [@20; @21]. In our experiment, the quantum lifetime is extracted from MIS oscillations since SdH oscillations are already damped in the temperature interval under consideration. However, by comparing the slopes of high-temperature suppression of small-period and large-period MIS oscillation amplitudes in Fig. 3 at the same magnetic field (0.41 T), it is evident that the small-period oscillations are more robust with respect to increasing temperature. This interesting and unexpected result may indicate a weaker temperature dependence of the Landau level broadening in the third subband compared to subbands 1 and 2. Such an assumption is confirmed by theoretical calculations. The theoretical analysis carried out in the next section demonstrates that the consideration of electron scattering near the edge of the third subband explains the whole set of the data obtained in our magnetoresistance measurements. Theoretical study ================= In the experimentally relevant range of transverse magnetic fields, when the number of Landau levels below the Fermi energy is large, the electron transport in 2D systems is conveniently described by using either a quantum Boltzmann equation or Kubo formalism based on treatment of electron scattering within the self-consistent Born approximation [@22; @23]. These methods are straightforwardly generalized for many-subband systems [@24; @25; @10]. 
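Before specializing this formalism to our structure, the thermal damping encoded in Eq. (1) can be evaluated directly. A minimal sketch (the GaAs effective mass $0.067\,m_e$ and $\varepsilon_F\approx 17$ meV are assumptions consistent with the sample parameters; $\tau(0)=6.6$ ps and $\lambda=2.2$ are the values used later in this section):

```python
import math

HBAR = 1.054571817e-34; KB = 1.380649e-23; QE = 1.602176634e-19
M_EFF = 0.067 * 9.1093837015e-31  # GaAs effective mass (assumed)
TAU0 = 6.6e-12                    # elastic quantum lifetime tau(0), s
LAM = 2.2                         # electron-electron constant lambda
EF = 17e-3 * QE                   # Fermi energy ~17 meV (assumed)

def dingle_factor(T, B):
    """Dingle factor d = exp(-pi/(omega_c*tau(T))), Eq. (1), with
    1/tau(T) = 1/tau(0) + lambda*(kB*T)^2/(hbar*eps_F)."""
    omega_c = QE * B / M_EFF
    inv_tau = 1.0 / TAU0 + LAM * (KB * T)**2 / (HBAR * EF)
    return math.exp(-math.pi * inv_tau / omega_c)
```

At $B=0.41$ T the factor falls from roughly 0.6 at 4.2 K to about 0.12 at 20 K; since the large-period MIS amplitude scales as $d^2$, this reproduces the strong thermal suppression seen in Fig. 3.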
By considering the elastic scattering of electrons in the limit of classically strong magnetic fields (when $\omega_c$ is much larger than transport scattering rates), one can express the linear dissipative resistivity of the electron system with several occupied 2D subbands in the following way [@24]: $$\begin{aligned} \rho_d=\frac{m}{e^2 n^2_s} \sum_{jj'} \int d \varepsilon \left(-\frac{\partial f_{\varepsilon}}{\partial \varepsilon} \right) \nonumber \\ \times \frac{k^2_{j \varepsilon}+k^2_{j' \varepsilon}}{4 \pi} \nu^{tr}_{jj'} (\varepsilon) D_{j \varepsilon} D_{j' \varepsilon}, %2\end{aligned}$$ where $m$ is the effective electron mass, $e$ is the electron charge, $j$ is the subband index, $k_{j \varepsilon}=\sqrt{2 m (\varepsilon-\varepsilon_j)}/\hbar$ is the electron wave number in the subband $j$, $\varepsilon_j$ is the subband energy, $f_{\varepsilon}$ is the equilibrium (Fermi-Dirac) distribution function of electrons, and $D_{j \varepsilon}$ is the dimensionless (normalized to its zero-field value $m/\pi \hbar^2$) density of electron states in the subband $j$. The quantity $\nu^{tr}_{jj'} (\varepsilon)$ is defined as $$\begin{aligned} \nu^{tr}_{jj'} (\varepsilon)= \frac{m}{\hbar^3} \int_{0}^{2 \pi} \frac{d \theta}{2 \pi} w_{jj'}[q_{jj'}(\varepsilon)] \frac{q^2_{jj'}(\varepsilon)}{k^2_{j \varepsilon}+k^2_{j' \varepsilon}}, \\ q^2_{jj'}(\varepsilon)=k^2_{j \varepsilon}+k^2_{j' \varepsilon} - 2 k_{j \varepsilon} k_{j' \varepsilon} \cos \theta, \nonumber %\label{eq3}\end{aligned}$$ where $w_{jj'}(q)$ is the spatial Fourier transform of the correlators of random scattering potential, $q_{jj'}(\varepsilon)$ is the wave number transferred in elastic collisions and $\theta$ is the scattering angle. In many cases, energy dependence of $k_{j}$ and $w_{jj'}(q)$ can be neglected within the interval of thermal smearing of the electron distribution. 
Then $k_{j \varepsilon}$ are taken at the Fermi surface, $k_{j \varepsilon}=k_{j \varepsilon_F}=\sqrt{2 \pi n_j}$, where $n_j$ is the sheet electron density in the subband $j$, and $\nu^{tr}_{jj'} (\varepsilon)$ are reduced to the transport scattering rates $\nu^{tr}_{jj'}$ defined, for example, in Ref. [@26]. In our samples, where the two lowest subbands are closely spaced and almost equally populated while the third subband is weakly populated, we have $k^2_1 \simeq k^2_2 \gg k^2_3$. Application of Eq. (2) under these conditions gives us the following expression: $$\begin{aligned} \rho_d = \frac{m}{2 e^2 n_s} \int d \varepsilon \left(-\frac{\partial f_{\varepsilon}}{\partial \varepsilon} \right) \left[ \nu^{tr}_{11} D^2_{1 \varepsilon} + \nu^{tr}_{22} D^2_{2 \varepsilon} \right. \nonumber \\ \left. + 2 \nu^{tr}_{12} D_{1 \varepsilon}D_{2 \varepsilon} + (\nu^{tr}_{13} D_{1 \varepsilon} + \nu^{tr}_{23} D_{2 \varepsilon}) D_{3 \varepsilon} \right]. %\label{eq4}\end{aligned}$$ The scattering potential in our samples is created mostly by donor impurities localized in the side barrier regions, so the correlation between the effective scattering potentials in the layers of the double-layer system formed in our WQWs can be neglected. Under this condition, see Ref. [@24], the scattering in a symmetric double-layer system is described by equal correlators $w_{11}(q)=w_{22}(q)=w_{12}(q)$. Although this equality is based on a tight-binding description of the double-layer system, it holds with good accuracy in our WQWs, as confirmed by calculations of the wave functions for the first and second subbands. Thus, in the case of closely spaced first and second subbands ($k_1 \simeq k_2$), the intrasubband and intersubband rates are almost equal, so we use $\nu^{tr}_{11}=\nu^{tr}_{22}=\nu^{tr}_{12} \equiv \nu_{tr}/2$. For the same reasons, the scattering rates between the lower subbands and the upper (third) subband are almost independent of the lower subband number.
Indeed, since $k_{3 \varepsilon} \simeq 0$, one has $\nu^{tr}_{13}= mw_{13}(k_1)/\hbar^3$ and $\nu^{tr}_{23}= m w_{23}(k_2)/\hbar^3$ with $k_1 \simeq k_2$, while the symmetry of the wave functions and the absence of interlayer correlations cause $w_{13}(q) \simeq w_{23}(q)$. Therefore, under the approximations valid for our samples, Eq. (2) is finally rewritten as $$\begin{aligned} \rho_d = \frac{m \nu_{tr}}{e^2 n_s} \int d \varepsilon \left(-\frac{\partial f_{\varepsilon}}{\partial \varepsilon} \right) \left[ D^2_{\varepsilon} + \eta D_{\varepsilon} D_{3 \varepsilon} \right], %\label{eq5}\end{aligned}$$ where $D_{\varepsilon}=(D_{1 \varepsilon}+D_{2\varepsilon})/2$ and $\eta=\nu^{tr}_{13}/\nu_{tr}= \nu^{tr}_{23}/\nu_{tr}$. Notice that the dimensionless parameter $\eta$ is expected to be considerably smaller than unity because of a strong suppression of the correlators $w_{jj'}(q)$ at large $q$ in the case of scattering by a smooth potential created by remote impurities. The first term in Eq. (5), proportional to $D^2_{\varepsilon}$, describes positive magnetoresistance with MIS oscillations, typical for double-layer systems [@6; @27]. The second term is a correction due to elastic scattering of electrons between the lower subbands and the third subband. This correction is essentially determined by the density of states in the third subband, $D_{3 \varepsilon}$, which experiences a broadened step-like growth from zero at the edge of this subband, $\varepsilon=\varepsilon_3$. Since in our case $\varepsilon_3 > \varepsilon_F$, the third-subband contribution is not essential at low temperatures, when $-\partial f_{\varepsilon}/\partial \varepsilon$ is negligible at $\varepsilon > \varepsilon_3$. However, when temperature increases and the third subband becomes occupied, thermal activation of the elastic scattering between this subband and the two lower ones occurs. The second term in Eq.
(5) then plays an important role, leading to small-period MIS oscillations which we observe in our experiment. As specified above, the magnitude of the parameter $\eta$, which determines the difference in the amplitudes between the two kinds of MIS oscillations, is affected by the spatial scale of the scattering potential. To describe the small-period MIS oscillations, it is crucial to consider the density of states in the third subband by focusing on the scattering mechanisms which are responsible for its broadening and temperature dependence. The density of states $D_{j \varepsilon}$ for subband $j$ can be found from the general expression $$\begin{aligned} D_{j \varepsilon} = \frac{\hbar \omega_c}{\pi} \sum_{n=0}^{\infty} {\rm Im} G^{A}_{\varepsilon j n}, \\ G^{A}_{\varepsilon j n}= \frac{1}{\varepsilon -\varepsilon_j-\hbar \omega_c(n+1/2)- \Sigma^{A}_{\varepsilon j n}}, \nonumber %\label{eq6}\end{aligned}$$ where $G^{A}_{\varepsilon j n}$ is the advanced (index A) Green’s function for the electron in subband $j$ in the Landau-level representation and $\Sigma^{A}_{\varepsilon j n}$ is the corresponding self-energy. The problem of determination of $\Sigma^{A}_{\varepsilon j n}$ in the general case is complicated and does not have an exact solution near the subband edge. Nevertheless, a physically reasonable result for $j=3$ can be obtained under a simplifying approach, when $\Sigma^{A}_{\varepsilon 3 n}$ is replaced by $i \hbar/2 \tau_3$, where $\tau_3$ is the quantum lifetime in the subband 3 calculated in the free-electron approximation. In clean samples like ours, the main scattering mechanism contributing to the self-energy in the important temperature region $T \geq 10$ K is the electron-electron scattering, so we estimate $\tau_3$ based on this scattering mechanism, $\tau_3 \simeq \tau^{ee}_3$. 
The corresponding calculation is done in the Appendix and leads to the result $$\begin{aligned} \frac{\hbar}{\tau^{ee}_{3}}= \kappa_0(T) \frac{T^{3/2}}{\sqrt{\varepsilon_F}}, %\label{eq7}\end{aligned}$$ where $\kappa_0(T)$ is a dimensionless function of temperature which depends on the form of the wave functions $\psi_{jz}$ shown in Fig. 1. This function is presented in Fig. 4, together with the corresponding broadening energy $\hbar/\tau^{ee}_3$ according to Eq. (7). For comparison, we also present the quadratic temperature dependence of the broadening energy in the lower subbands, $\hbar/\tau^{ee}= \lambda T^2/\varepsilon_F$ \[see Eq. (1)\], caused by electron-electron scattering. The constant $\lambda=2.2$ in this expression is determined from experimental data on thermal suppression of the large-period MIS oscillations in our samples. ![\[fig4\](Color online) Function $\kappa_{0}(T)$ for our WQW system and temperature dependence of the inverse quantum lifetime of electrons in the third subband due to electron-electron scattering. The dashed line corresponds to the inverse quantum lifetime of electrons in the first and second subbands, according to Eq. (1).](tfig4.eps){width="9.cm"} Figure 4 demonstrates that the temperature dependence of the quantum lifetime of electrons in the third subband is weaker than in the first and second subbands. The reason for this behavior is rooted in the fact that the third subband is almost empty and contains a non-degenerate electron gas (see Appendix for details). ![\[fig5\](Color online) Examples of the density of electron states in the third subband of our WQW system. The dashed line shows an “ideal” 2D density of states in the absence of the magnetic field and collision-induced broadening.](tfig5.eps){width="9.cm"} With the self-energy thus defined, the density of states in subband 3 is calculated according to Eq. (6) by taking the sum over Landau levels $n$ numerically.
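The Landau-level sum of Eq. (6) with $\Sigma^{A}_{\varepsilon 3 n}=i\hbar/2\tau_3$ reduces to a sum of Lorentzians. A minimal numerical sketch (the broadening $\gamma=\hbar/2\tau_3$ is an illustrative input rather than a fitted value; the GaAs effective mass $0.067\,m_e$ is assumed):

```python
import math

def dos3(eps_meV, B=0.4, gamma_meV=0.5, n_max=400):
    """Dimensionless DOS of subband 3, Eq. (6), with Sigma^A = i*hbar/(2*tau3).
    eps_meV is measured from the subband edge eps_3; gamma_meV = hbar/(2*tau3)."""
    # cyclotron energy hbar*omega_c in meV (GaAs effective mass assumed)
    hbar_wc = 1.054571817e-34 * 1.602176634e-19 * B / (0.067 * 9.1093837015e-31)
    hw_meV = hbar_wc / (1.602176634e-19 * 1e-3)
    s = 0.0
    for n in range(n_max):
        x = eps_meV - hw_meV * (n + 0.5)                  # distance to level n
        s += gamma_meV / (x * x + gamma_meV * gamma_meV)  # Im G^A (Lorentzian)
    return hw_meV / math.pi * s
```

Far above the edge the result oscillates around the zero-field value $D=1$, while below the edge it decays to zero, reproducing the broadened step of Fig. 5.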
The result of the calculations for $B=0.4$ T and two chosen temperatures is demonstrated in Fig. 5 and represents a physically reasonable picture in which the temperature-dependent energy $\hbar/\tau^{ee}_3$ describes the broadening of both the Landau levels and the subband edge. In spite of the pronounced quantization, the Landau levels are still overlapping, and the oscillating density of states at 20 K can be approximated, with good accuracy, by a harmonic oscillation function. Next, calculations of the density of states in the two lowest subbands are carried out within the self-consistent Born approximation, using the quantum lifetime described by Eq. (1) with the elastic-scattering contribution $\tau(0)=6.6$ ps determined from low-temperature magnetoresistance. Substituting the calculated $D_{\varepsilon}$ and $D_{3 \varepsilon}$ into Eq. (5), we finally obtain the resistivity. Apart from this numerical calculation, it is useful to present an analytical result $$\begin{aligned} \rho_d = \frac{m \nu_{tr}}{e^2 n_s} \left[1+ \eta f_{\varepsilon_3} + d^2 \left(1+ \cos \frac{2 \pi \Delta_{12}}{\hbar \omega_c} \right) \right. \nonumber \\ \left. + \eta f_{\varepsilon_3} d d_3 \left( \cos \frac{2 \pi \Delta_{13}}{\hbar \omega_c} + \cos \frac{2 \pi \Delta_{23}}{\hbar \omega_c} \right) \right], %8\end{aligned}$$ based on the approximate (single-harmonic) representation of the densities of states: $D_{\varepsilon} \simeq 1-d( \cos[ 2 \pi (\varepsilon-\varepsilon_1)/\hbar \omega_c] + \cos[ 2 \pi (\varepsilon-\varepsilon_2)/\hbar \omega_c])$ and $D_{3 \varepsilon} \simeq 1- 2 d_3 \cos[ 2 \pi (\varepsilon-\varepsilon_3)/\hbar \omega_c]$ at $\varepsilon > \varepsilon_3$. The Dingle factor $d$ is given by Eq. (1), while the Dingle factor for the third subband is $d_3=\exp[-\pi/\omega_c \tau^{ee}_3(T)]$. Equation (8) is valid when $T$ exceeds the broadening energy $\hbar/\tau^{ee}_3$ and when $2 \pi^2 T \gg \hbar \omega_c$, so the SdH oscillations are thermally averaged out.
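The bracket of Eq. (8) is simple to evaluate numerically. A minimal sketch (the Dingle factors $d$, $d_3$ are treated as given illustrative inputs; $\eta=0.2$, the subband separations, and $\varepsilon_3-\varepsilon_F=2.2$ meV are the values quoted in the text, while the GaAs effective mass $0.067\,m_e$ is assumed):

```python
import math

HBAR = 1.054571817e-34; KB = 1.380649e-23; QE = 1.602176634e-19
M_EFF = 0.067 * 9.1093837015e-31      # GaAs effective mass (assumed)
MEV = 1e-3 * QE
D12, D13, D23 = 1.40 * MEV, 19.36 * MEV, 17.96 * MEV  # subband separations
GAP3 = 2.2 * MEV                      # eps_3 - eps_F
ETA = 0.2                             # ratio nu13/nu_tr used in the fits

def rho_bracket(T, B, d, d3):
    """Oscillating bracket of Eq. (8); d and d3 are the Dingle factors,
    here treated as given inputs rather than computed from Eq. (1)."""
    hw = HBAR * QE * B / M_EFF                    # hbar*omega_c
    f3 = 1.0 / (1.0 + math.exp(GAP3 / (KB * T)))  # Fermi function at eps_3
    return (1.0 + ETA * f3
            + d * d * (1.0 + math.cos(2 * math.pi * D12 / hw))
            + ETA * f3 * d * d3 * (math.cos(2 * math.pi * D13 / hw)
                                   + math.cos(2 * math.pi * D23 / hw)))
```

The alignment condition $\hbar\omega_c=\Delta_{12}$ then places the first large-period MIS maximum near $B\approx 0.81$ T, consistent with the peak at $B\simeq 0.8$ T reported in Sec. II.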
The terms associated with the third subband in Eq. (8) describe small-period oscillations due to the large subband separation energies $\Delta_{13}$ and $\Delta_{23}$. The amplitude of these oscillations is governed, apart from the product of the Dingle factors $d d_3$, by the small factor $\eta f_{\varepsilon_3}$. The same factor determines a correction to the background (non-oscillating) resistivity. The relative contribution of this correction at $T < 30$ K does not exceed 6% and does not lead to an appreciable increase in the resistivity with temperature. The increase in the background resistivity observed in experiment, see Fig. \[fig2\], is described by the thermal enhancement of electron scattering by acoustic phonons. ![\[fig6\](Color online) Measured and calculated magnetoresistance for our WQW system. Two kinds of theoretical plots are shown: based on a numerical calculation (thick line, red) and on the analytical expression Eq. (8) (thin line, blue). The regions around 0.4 T are enlarged.](tfig6.eps){width="9.cm"} The comparison of the measured and calculated resistivity at two chosen temperatures, 10 and 15 K, is shown in Fig. 6. First of all, by comparing the periodicity of the small-period oscillations we determine the position of the third subband and find that it lies 2.2 meV above the Fermi energy (in other words, $\Delta_{13}=19.36$ meV and $\Delta_{23}=17.96$ meV). Below 0.6 T the theoretical and experimental plots show good agreement, while in the region of the first MIS peak ($B \sim 0.8$ T) the experiment shows a considerably smaller magnetoresistance than expected from the theory. This deviation occurs only in the temperature range we have studied in the present work and is absent at low temperatures ($T < 4.2$ K). We do not know the exact reason for this deviation; it is probably associated with some mechanism of Landau level broadening not taken into account, or with the influence of electron-phonon scattering on magnetotransport [@28].
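The periodicity analysis mentioned above can be sketched as follows: the term $\cos(2\pi\Delta_{13}/\hbar\omega_c)$ is periodic in $1/B$ with period $\hbar e/(m\Delta_{13})$, so two successive oscillation maxima determine the subband separation (the conversion $\hbar e/m = 1.728$ meV/T again assumes the GaAs effective mass):

```python
def subband_separation(B_peak_a, B_peak_b, hbar_e_over_m=1.728):
    """Infer a subband separation (meV) from two successive MIS maxima:
    cos(2*pi*Delta/(hbar*omega_c)) is periodic in 1/B with period
    hbar*e/(m*Delta). hbar_e_over_m = 1.728 meV/T assumes the GaAs mass."""
    return hbar_e_over_m / abs(1.0 / B_peak_a - 1.0 / B_peak_b)

# Illustrative check: synthetic peaks generated from Delta = 19.36 meV.
delta = 19.36
n1, n2 = 28, 29  # adjacent integer indices of Delta/(hbar*omega_c)
B1 = delta / (1.728 * n1)  # this index places the peak near B = 0.4 T
B2 = delta / (1.728 * n2)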
As concerns the small-period MIS oscillations, the best fit of their amplitudes to the experimental values is obtained in the whole region of magnetic fields by considering $\eta$ as a single adjustable parameter of our theory, set at a reasonable value $\eta=0.2$ (a calculation of $\eta$ is possible in general, but requires a detailed knowledge of the impurity nature and distribution). Notice that the results of the numerical and analytical methods of calculation are in reasonable agreement, which indicates the reliability of the analytical approach of Eq. (8) and the validity of the conditions of overlapping Landau levels. Similar calculations at the other temperatures also demonstrate good agreement. We also applied the results of our calculations to the description of the temperature dependence of the MIS oscillations of both kinds. As found experimentally, the small-period MIS oscillations exhibit thermally activated behavior at low temperatures and a weaker (compared to the large-period MIS oscillations) thermal suppression at higher temperatures. Although both these features are now qualitatively understood, it is instructive to compare directly the experimental data on peak-to-peak amplitudes shown in Fig. 3 with the corresponding theoretical results following from the analytical expression (8). Such theoretical plots are added to Fig. 3 and show good agreement with experiment concerning both the non-monotonic temperature dependence of the small-period MIS oscillation amplitudes in the region of $T=8-15$ K and the decrease in these amplitudes at elevated temperatures. Conclusions =========== We have designed a WQW structure with high electron density, where the two lowest, closely spaced subbands are occupied by electrons, while the third subband lies slightly above the Fermi energy and, therefore, is not occupied at low temperatures (Fig. 1).
By measuring the magnetoresistance of this system, we have detected thermally-activated MIS oscillations caused by elastic scattering of electrons between the third subband and the two lower subbands. These small-period MIS oscillations demonstrate an unusually slow suppression at higher temperatures as compared to the well-established temperature dependence of the large-period MIS oscillations caused by electron scattering between the lowest (occupied) subbands. Our theoretical study has uncovered the reasons for this behavior. The temperature dependence of both kinds of MIS oscillations is determined by the influence of electron-electron scattering on the Landau level broadening. However, the widely accepted $T^2$ scaling of the broadening energy cannot be applied to the case of an almost empty subband, where the carriers have kinetic energies on the order of $T$ and form a non-degenerate electron gas near the subband bottom. For such a case, a detailed calculation shows that the broadening energy scales with temperature as $T^{3/2}$ or slower, depending on the shape of the wave functions determined by the confinement potential. The theoretical dependence of the resistivity on the magnetic field and temperature explains all essential details of our experimental results. To the best of our knowledge, thermally-activated MIS oscillations have not been reported previously in magnetoresistance measurements. In our research, based on a particular structure design, we have demonstrated that the existence of such oscillations opens new possibilities for applying magnetotransport experiments to the study of the electron energy spectrum and scattering mechanisms in multi-subband systems. We expect that this research will stimulate further experimental and theoretical studies in this direction. This work was supported by COFECUB-USP (Project number U$_{c}$ 109/08), CNPq and FAPESP.
Quantum lifetime of electrons in the upper subband ================================================== We start our consideration of electron-electron scattering by neglecting the processes in which electrons are transferred between subbands, because such processes require large transferred momenta $\hbar {\bf q}$ and, for this reason, are strongly suppressed, especially in wide quantum wells. The remaining processes can be viewed as scattering of electrons in the subband $j$ by electrons residing either in the same subband or in the other subbands. The imaginary part of the self-energy due to electron-electron scattering is written, in the momentum representation, as $$\begin{aligned} {\rm Im} \Sigma^{(ee) A}_{\varepsilon j {\bf p}} = 2 \pi \sum_{j'} \int \frac{d {\bf p}'}{(2 \pi \hbar)^2} \int \frac{d {\bf q}}{(2 \pi)^2} U^2_{jj'}(q) \nonumber \\ \times \int d \varepsilon' \int d E \delta (\varepsilon +E - \varepsilon_{j {\bf p}+\hbar {\bf q}} ) \delta (\varepsilon' -E -\varepsilon_{j' {\bf p'}-\hbar {\bf q}} ) \nonumber \\ \times \delta (\varepsilon'-\varepsilon_{j'{\bf p'}} ) \left[f_{\varepsilon'} (1-f_{\varepsilon+E}-f_{\varepsilon'-E} ) + f_{\varepsilon+E} f_{\varepsilon'-E} \right],~~ %A1\end{aligned}$$ where $\varepsilon_{j {\bf p}}=\varepsilon_{j}+p^2/2m$, $E$ is the energy transferred in collisions, and $U_{jj'}(q)$ is the effective interaction potential: $$\begin{aligned} U_{jj'}(q) = \frac{2 \pi e^2}{\epsilon (q+q_0)} I_{jj'}(q),~~~~ \\ I_{jj'}(q)=\int d z \int dz' e^{-q|z-z'|} |\psi_{jz}|^2 |\psi_{j'z'}|^2. \nonumber %A2\end{aligned}$$ Here $\epsilon$ is the dielectric constant, $q_0=2 e^2 m/\hbar^2 \epsilon$ is the inverse screening length, and $\psi_{jz}$ is the envelope wave function for subband $j$. The overlap factor $I_{jj'}(q)$ is often set to unity in descriptions of scattering with small transferred momenta, which is the essential regime at $T \ll \varepsilon_F$.
However, in wide quantum wells (like those used in our experiment) this factor becomes important and leads to a significant (2-3 times) suppression of the Coulomb interaction, so we keep it in our consideration. To find $I_{jj'}(q)$, we use the eigenstates $\psi_{jz}$ obtained in the self-consistent calculation of the energy spectrum of our WQW (Fig. 1). Equation (A1) can be derived, for example, from the expressions for Green’s functions and self-energies presented in Ref. [@29]; a generalization of these expressions to multi-subband systems is straightforward. Notice that the Green’s functions of electrons in this derivation are taken in the free-particle approximation, which leads to the appearance of the $\delta$-functions of energies in Eq. (A1). The processes contributing to the self-energy of the third subband, $\Sigma^{(ee)A}_{\varepsilon 3 {\bf p}}$, include scattering between electrons in the same (third) subband, $j'=j=3$, as well as scattering between electrons in the third and in the lower subbands, $j'=1,2$. Since the occupation of the third subband by electrons is low ($f_{\varepsilon_3} \ll 1$, corresponding to a non-degenerate electron gas), the latter processes are more significant and are considered in the following. For this kind of scattering, the integrals over the variables $\varepsilon'$, ${\bf p}'$, and over the angle of the vector ${\bf q}$ in Eq. (A1) can be calculated analytically.
Further, expressing the quantum lifetime for the third subband according to $1/\tau^{ee}_{3 \varepsilon}= (2/\hbar) {\rm Im} \Sigma^{(ee)A}_{\varepsilon 3 {\bf p}} |_{\varepsilon_{3 {\bf p}}=\varepsilon}$, we get $$\begin{aligned} \frac{\hbar}{\tau^{ee}_{3 \varepsilon}}= \sum_{j=1,2} \frac{1}{\pi v_{Fj} \sqrt{2m} } \int_{-(\varepsilon-\varepsilon_3)}^{\infty} d E \int_{\varepsilon^{(-)}_q}^{\varepsilon^{(+)}_q} d \varepsilon_q \nonumber \\ \times \frac{E}{\sqrt{\varepsilon_q [4\varepsilon_q (\varepsilon-\varepsilon_3)-(E-\varepsilon_q)^2]}} \frac{I^2_{3j}(q)}{(1+q/q_0)^2} \nonumber \\ \times \left[f_{\varepsilon+E} + \frac{1}{e^{E/T} -1} \right], %A3\end{aligned}$$ where $v_{Fj}$ is the Fermi velocity in the lower subband $j$, $\varepsilon_q=\hbar^2q^2/2m$, and $\varepsilon^{(\pm)}_q=(\sqrt{\varepsilon-\varepsilon_3+E} \pm \sqrt{\varepsilon-\varepsilon_3})^2$. Under the conditions of our experiment, the difference between $v_{F1}$ and $v_{F2}$ is not essential, so we set $v_{F1}=v_{F2}=\sqrt{2 \varepsilon_F/m}$, where $\varepsilon_F=\hbar^2 \pi n_s/2m$. Next, since $f_{\varepsilon_3} \ll 1$, the term proportional to $f_{\varepsilon+E}$ can be neglected in Eq. (A3). As a result, the quantum lifetime is represented as $$\begin{aligned} \frac{\hbar}{\tau^{ee}_{3 \varepsilon}}= \kappa_{\varepsilon-\varepsilon_3}(T) \frac{T^{3/2}}{\sqrt{\varepsilon_F}}, %A4\end{aligned}$$ where $\kappa$ is a dimensionless function of energy and temperature. Numerical calculation of $\kappa$ shows that its energy dependence near the edge of the third subband ($\varepsilon_3 < \varepsilon < \varepsilon_3+ T$) appears to be weak in the important temperature interval from 10 to 30 K. Therefore, below we treat the quantum lifetime as an energy-independent quantity by taking $\kappa$ at the edge of the third subband, $\kappa_{\varepsilon-\varepsilon_3}(T) \simeq \kappa_{0}(T)$.
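Before specializing $\kappa_0(T)$ to the actual wide well, it is useful to check the magnitude of Eq. (A4) in the limit where the suppression factor discussed below is set to unity; $\kappa_0$ then reduces to $\int_0^{\infty} dx\, \sqrt{x}/(e^x-1) = \Gamma(3/2)\zeta(3/2) \approx 2.32$, consistent with the universal narrow-well value quoted in the text. A quick numerical sketch:

```python
import math

def kappa0_narrow_well(t_max=8.0, n=100000):
    """Midpoint-rule evaluation of the narrow-well limit of kappa_0,
    int_0^inf sqrt(x)/(e^x - 1) dx  (suppression factor set to unity).
    Substituting x = t^2 removes the integrable 1/sqrt(x) singularity:
    the integrand becomes 2*t^2/(e^{t^2} - 1), which tends to 2 at t = 0."""
    h = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += 2.0 * t * t / math.expm1(t * t)
    return total * h

# Exact value: Gamma(3/2)*zeta(3/2) ~ 2.315, i.e. kappa_0 ~ 2.3.
```

In this limit the broadening is simply $\hbar/\tau^{ee}_3 \simeq 2.3\, T^{3/2}/\sqrt{\varepsilon_F}$; the overlap factors of the wide well reduce $\kappa_0$ below this value and give it a temperature dependence.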
Then we obtain the expression $$\begin{aligned} \kappa_0(T) = \int_0^{\infty} dx \frac{\sqrt{x}}{e^x-1} F_T(x), \\ F_T(x)=\frac{I^2_{31}(q_x)+I^2_{32}(q_x)}{2 (1+q_x/q_0)^2},~~~q_x=\frac{\sqrt{2m T x}}{\hbar}, \nonumber %A5\end{aligned}$$ where we introduced the dimensionless variable $x=E/T$. The temperature dependence of $\kappa_0(T)$ arises mostly from the sensitivity of the squared overlap factors $I^2_{31}$ and $I^2_{32}$ to the transferred energy $E$ (which, near the edge of the third subband, is directly connected to the transferred momentum, $E \simeq \varepsilon_q$). This sensitivity is caused by the large width of our quantum well and is essentially determined by the form of the wave functions $\psi_{jz}$ shown in Fig. 1. The function $\kappa_0(T)$ calculated according to Eq. (A5) is presented in Fig. 4. In narrow wells, where the characteristic wave number $q$ is small in comparison with the inverse well width, the suppression factor $F_T(x)$ in Eq. (A5) can be approximated by unity, leading to the universal result $\kappa_0(T) \simeq 2.3$. Then we obtain the dependence $1/\tau^{ee}_{3} \propto T^{3/2}$, which is weaker than the $1/\tau^{ee} \propto T^{2}$ law characterizing the temperature dependence of the quantum lifetime in the lower subbands \[Eq. (1)\]. In wide wells, where the wave number $q$, limited by temperature, may exceed the inverse well width, an additional weakening takes place, so the temperature dependence of $1/\tau^{ee}_{3}$ calculated for our system and shown in Fig. 4 is different from $T^{3/2}$ and roughly resembles a linear function. [29]{} D. Shoenberg, *Magnetic Oscillations in Metals* (Cambridge University Press, 1984). V. Polyanovsky, Fiz. Tekh. Poluprovodn. [**22**]{}, 2230 (1988) \[Sov. Phys. - Semicond. [**22**]{}, 1408 (1988)\]. P. T. Coleridge, Semicond. Sci. Technol. [**5**]{}, 961 (1990). D. R. Leadley, R. Fletcher, R. J. Nicholas, F. Tao, C. T. Foxon, and J. J. Harris, Phys. Rev. B [**46**]{}, 12439 (1992). T. H. Sander, S. N.
Holmes, J. J. Harris, D. K. Maude, and J. C. Portal, Phys. Rev. B [**58**]{}, 13856 (1998). N. C. Mamani, G. M. Gusev, T. E. Lamas, A. K. Bakarov, and O. E. Raichev, Phys. Rev. B [**77**]{}, 205327 (2008). A. A. Bykov, D. P. Islamov, A. V. Goran, A. I. Toropov, JETP Lett. [**87**]{}, 477 (2008). A. V. Goran, A. A. Bykov, A. I. Toropov, and S. A. Vitkalov, Phys. Rev. B [**80**]{}, 193305 (2009). A. A. Bykov, A. V. Goran, and S. A. Vitkalov, Phys. Rev. B [**81**]{}, 155322 (2010). S. Wiedmann, N. C. Mamani, G. M. Gusev, O. E. Raichev, A. K. Bakarov, and J. C. Portal, Phys. Rev. B [**80**]{}, 245306 (2009). D. Konstantinov and K. Kono, Phys. Rev. Lett. [**103**]{}, 266808 (2009). N. S. Averkiev, L. E. Golub, S. A. Tarasenko, and M. Willander, J. Phys. - Cond. Matter [**13**]{}, 2517 (2001). K.-J. Friedland, R. Hey, H. Kostial, R. Klann, and K. Ploog, Phys. Rev. Lett. [**77**]{}, 4616 (1996). A. T. Hatke, M. A. Zudov, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. [**102**]{}, 066804 (2009). I. A. Dmitriev, M. G. Vavilov, I. L. Aleiner, A. D. Mirlin, and D. G. Polyakov, Phys. Rev. B [**71**]{}, 115316 (2005). I. A. Dmitriev, M. Khodas, A. D. Mirlin, D. G. Polyakov, and M. G. Vavilov, Phys. Rev. B [**80**]{}, 165327 (2009). G. W. Martin, D. L. Maslov, and M. Yu. Reizer, Phys. Rev. B [**68**]{}, 241309(R) (2003). S. Engelsberg and G. Simpson, Phys. Rev. B [**2**]{}, 1657 (1970). Y. Adamov, I. V. Gornyi, and A. D. Mirlin, Phys. Rev. B [**73**]{}, 045426 (2006). I. B. Berkutov, V. V. Andrievskii, Yu. F. Komnik, O. A. Mironov, M. Mironov, and D. R. Leadley, Low Temp. Phys. [**35**]{}, 141 (2009). S. Syed, M. J. Manfra, Y. J. Wang, R. J. Molnar, and H. L. Stormer, Appl. Phys. Lett. [**84**]{}, 1507 (2004). M. G. Vavilov and I. L. Aleiner, Phys. Rev. B [**69**]{}, 035303 (2004). I. A. Dmitriev, F. Evers, I. V. Gornyi, A. D. Mirlin, D. G. Polyakov, and P. Wolfle, Phys. Status Solidi B [**245**]{}, 239 (2008). O. E. Raichev, Phys. Rev. B [**78**]{}, 125304 (2008). O. E.
Raichev, Phys. Rev. B [**81**]{}, 195301 (2010). N. C. Mamani, G. M. Gusev, E. C. F. da Silva, O. E. Raichev, A. A. Quivy, and A. K. Bakarov, Phys. Rev. B [**80**]{}, 085304 (2009). N. C. Mamani, G. M. Gusev, O. E. Raichev, T. E. Lamas, and A. K. Bakarov, Phys. Rev. B [**80**]{}, 075308 (2009). Both theory and experiment show a considerable reduction of the phonon-assisted part of the resistivity in the region of magnetic fields $\omega_c > 2 k_F s_l$, where $k_F$ is the Fermi wave number ($k_F \simeq \sqrt{\pi n_s}$ for our two-subband system) and $s_l$ is the acoustic phonon velocity: see, e.g., O. E. Raichev, Phys. Rev. B [**80**]{}, 075318 (2009) and references therein. Although the first MIS peak in our experiment appears in this region of fields, the magnitude of the effect, according to our estimates, is still too weak for a satisfactory explanation of the deviation. O. E. Raichev, Phys. Rev. B [**81**]{}, 165319 (2010).
--- bibliography: - 'social-advertising-1.bib' --- *“When we boost posts, we see twice as much engagement, twice as much website traffic and often double our sales."* – Jaron Schneider, Features Editor, Fstoppers Introduction {#sec-intro} ============ Social advertising (or social promotion) is an effective approach that produces a significant cascade of adoption through influence in online social networks. During a typical social advertising campaign, advertisers attempt to persuade influential consumers to promote their products or services among their friends. With more people using social networking services, recent years have witnessed a boom of social networking sites that offer social advertising (SA) services. To name a few, the primary SA mechanisms adopted by Facebook are Facebook Ads, Promoted Posts and Boost Posts; Twitter allows businesses to promote their accounts and Tweets as well as promote “trends"; LinkedIn users can create an advert, sponsor content or use Sponsored InMail to launch an email marketing campaign. Taking Facebook as an example, *boosting posts* is considered an effective way to get more exposure for one’s posts, offers or special events. It allows businesses to pay for posts to be more prominently displayed on news feeds. Facebook users will see promoted posts labeled with “Sponsored" in the news feed (not in the right rail where Facebook ads live) both on desktop and mobile. Promoted posts have the same targeting ability that organic posts do, thus they can propagate across the network through “reposts" or “shares". Recent field studies [@bakshy2012social][@tucker2012social] find that social advertising is more effective than conventional demographically targeted or untargeted ads. The goal of this work is to optimize the ad allocation from the platform’s perspective.
We consider the cost per engagement (CPE) model: an advertiser buys a block of “engagements" (such as impressions or clicks) from the platform owner via a contract, and the advertiser pays the platform an amount $\alpha_i$ per engagement that is delivered from its ad $a_i$. Each advertiser also sets a budget $B_i$ that specifies the maximum amount of money he would like to pay. It is worth noting that this budget is fixed regardless of the actual amount of engagements that are received at the end of the campaign. Therefore, due to the uncertainty of virality, it is possible that an advertiser may receive more engagements than would be expected under his budget. Unfortunately, this uncertainty may destroy the truthfulness of the advertiser, i.e., the advertiser tends to declare a lower budget, hoping to obtain more engagements. It is then interesting to observe that, on the one hand, the platform would like to maximize the revenue earned from each advertiser by exposing their ads to as many people as possible; on the other hand, the platform wants to reduce free-riding to ensure the truthfulness of the advertisers. To assess this tradeoff, we adopt the concept of *regret* [@viral2015social] to measure the performance of an ad allocation scheme. In addition, as promoted posts are displayed along with organic posts, pushing too many promoted posts to one user may degrade the user experience. One way to mitigate this is to set a limit, called *attention constraint*, on the maximum number of promoted posts that can be pushed to each individual user as well as to the whole community. **Our Results:** In this paper, we propose and study two social advertising problems: the *budgeted social advertising problem* and the *unconstrained social advertising problem*.
In the first problem, we aim at selecting a set of seeds for each advertiser that minimizes the regret while setting budget constraints on the attention cost; in the second problem, we propose to optimize a linear combination of the regret and attention costs. We first prove that both problems are NP-hard by reductions from the traditional influence maximization problem. Then we develop constant-factor approximation algorithms for both problems. Preliminaries ============= Matroid ------- A matroid $M = (\Omega, \mathcal{I})$ is defined on a finite ground set $\Omega$, and $\mathcal{I}$ is a family of subsets of $\Omega$ which are called independent sets. For $M$ to be a matroid, $\mathcal{I}$ must satisfy two properties: - (I1) if $X \subseteq Y$ and $Y \in \mathcal{I}$ then $X \in \mathcal{I}$, - (I2) if $X \in \mathcal{I}$ and $Y \in \mathcal{I}$ and $|Y|>|X|$ then $\exists e \in Y\backslash X: X\cup\{e\}\in \mathcal{I}$. Property (I1) says that every subset of an independent set is independent. Property (I2), which is also called the *independent set exchange property*, says that if $X$ and $Y$ are independent sets and $Y$ has more elements than $X$, then there exists an element $e \in Y\backslash X$ such that adding $e$ to $X$ gives a larger independent set. According to Property (I2), one can verify that all maximal independent sets have the same cardinality. A maximal independent set is called a base of the matroid. Submodular function ------------------- Consider an arbitrary function $f(S)$ that maps subsets of a finite ground set $\Omega$ to non-negative real numbers. We say that $f$ is submodular if it satisfies a natural “diminishing returns" property: the marginal gain from adding an element to a set $S$ is at least as high as the marginal gain from adding the same element to a superset of $S$.
Formally, a submodular function satisfies the following property: For every $X, Y \subseteq \Omega$ with $X \subseteq Y$ and every $x \in \Omega \backslash Y$, we have that $$f(X\cup \{x\})-f(X)\geq f(Y\cup \{x\})-f(Y)$$ We say a submodular function $f$ is monotone if $f(X) \leq f(Y)$ whenever $X \subseteq Y$. Propagation Model ----------------- To capture the dynamics of ad propagation in social networks, one of the most widely used models, the *Independent Cascade Model*, has been investigated in the context of marketing [@Goldenberg1][@Goldenberg2][@kdd03]. To account for the heterogeneity of ad propagation under different ads, we adopt an extended propagation model, the topic-aware independent cascade model (TIC). Let $G_i=(\mathcal{V}, p_i(E))$ denote the diffusion graph under ad (or topic) $a_i$, where $\mathcal{V}$ represents the set of all users in the network and $p_i(u,v)$ represents the diffusion probability between $u$ and $v$ for ad $a_i$. TIC describes a spreading process comprising seed nodes and non-seed nodes. The process unfolds in discrete timesteps. In each timestep, when a user $u$ clicks an ad $a_i$, $u$ has one chance of influencing each inactive neighbor $v$ with success probability $p_i(u,v)$. More formally, the input to the independent cascade model is an initial set of seed nodes $S_i\subseteq \mathcal{V}$ for each ad $a_i$. Let $\sigma_i(S_i)$ denote the expected number of clicks (or engagements) received from ad $a_i$ under seed set $S_i$. Let $\alpha_i$ denote the cost per engagement for ad $a_i$; the expected revenue received from $a_i$ is then $\alpha_i\cdot \sigma_i(S_i)$. As proved in [@kdd03], $\alpha_i\cdot \sigma_i(S_i)$ is a submodular and monotone function. Problem Statement ================= Consider a hyper social graph $\mathcal{G}=(G_1, G_2, \cdots, G_{|\mathcal{A}|})$, where each graph $G_i$ represents the diffusion network under topic or ad $a_i$.
Assume $|\mathcal{A}|$ advertisers participate in the campaign, denoted by $\mathcal{A}=\{a_1, a_2, \cdots, a_{|\mathcal{A}|}\}$, where each advertiser $a_i$ has a finite budget $B_i$. The host needs to identify and allocate a set of users to each of the advertisers. On the one hand, the platform would like to maximize the revenue earned from each advertiser by exposing their ads to as many people as possible; on the other hand, the platform wants to reduce free-riding to ensure the truthfulness of the advertisers. To assess this tradeoff, we introduce the concept of regret to measure the performance of an ad allocation scheme. The regret under seed set $\mathcal{S}=\{S_1, S_2, \cdots, S_{|\mathcal{A}|}\}$ is defined as $\sum_{a_i\in \mathcal{A}}|\alpha_i\cdot \sigma_i(S_i)-B_i|$. Minimizing the regret is equivalent to maximizing the following utility function: $$U(\mathcal{S})=\sum_{a_i\in \mathcal{A}} U_i(\mathcal{S})$$ where $$\label{equ:23} U_i(\mathcal{S})= \begin{cases} \alpha_i\cdot \sigma_i(S_i) &\mbox{ if $\alpha_i\cdot\sigma_i(S_i)\leq B_i$}\\ 2\cdot B_i - \alpha_i\cdot \sigma_i(S_i) &\mbox{ if $\alpha_i\cdot\sigma_i(S_i)> B_i$} \end{cases}$$ Any ad allocation can be represented using a $|\mathcal{V}|\times|\mathcal{A}|$ matrix $\mathbf{X}$, called the allocation matrix, where $$X_{ij}=\begin{cases} 1 & \mbox{if user $v_i$ is assigned to ad $a_j$}\\ 0 & \mbox{otherwise} \end{cases}$$ Then the individual attention cost on user $v_i$ is $\sum_{j=1}^{|\mathcal{A}|} X_{ij}$ and the overall attention cost is $\sum_{i=1}^{|\mathcal{V}|} \sum_{j=1}^{|\mathcal{A}|} X_{ij}$. In the rest of this paper, we use $\mathcal{S}$ and $\mathbf{X}$ interchangeably to represent an ad allocation.
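The definitions above admit a direct computational sketch. The helpers below take the expected engagements $\sigma_i(S_i)$ as given numbers (in practice they would be estimated, e.g., by Monte-Carlo simulation of the TIC process); they also make explicit a useful identity: since $U_i + |\alpha_i\sigma_i(S_i) - B_i| = B_i$ in both branches of the definition, $U(\mathcal{S}) + \mathrm{regret}(\mathcal{S}) = \sum_i B_i$, so minimizing regret is indeed equivalent to maximizing $U$.

```python
def utility(sigma, alpha, budget):
    """U(S) from the piecewise definition above: per advertiser,
    alpha_i*sigma_i if the revenue stays within B_i, else 2*B_i - alpha_i*sigma_i.
    sigma[i] stands for sigma_i(S_i), supplied directly in this sketch."""
    total = 0.0
    for s, a, b in zip(sigma, alpha, budget):
        revenue = a * s
        total += revenue if revenue <= b else 2.0 * b - revenue
    return total

def regret(sigma, alpha, budget):
    """Regret = sum_i |alpha_i*sigma_i(S_i) - B_i|."""
    return sum(abs(a * s - b) for s, a, b in zip(sigma, alpha, budget))

# Two advertisers: the first under-delivers, the second over-delivers.
sigma, alpha, budget = [100.0, 300.0], [0.5, 0.5], [80.0, 100.0]
```

Note that $U_i$ decreases once the revenue overshoots $B_i$, which is exactly the penalty on free-riding described above.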
Selection of good seeds with high utility and small attention cost is a critical decision faced by every platform. In this paper, we propose and study two problems that combine the two objectives: the *Budgeted Social Advertising problem* and the *Unconstrained Social Advertising problem*. In the first problem, we aim at selecting a set of seeds for each advertiser that maximizes the utility while setting budget constraints on the attention cost; in the second problem, we propose to maximize a linear combination of the two measures. Budgeted Social Advertising Problem ----------------------------------- In the budgeted social advertising problem, we set hard constraints on the individual attention cost and the overall attention cost: let $\kappa_i$ denote the individual attention limit of user $v_i$ and $K$ denote the overall attention limit; then $\forall v_i \in \mathcal{V}:\mbox{ } \sum_{j=1}^{|\mathcal{A}|} X_{ij} \leq \kappa_i $ and $\sum_{i=1}^{|\mathcal{V}|} \sum_{j=1}^{|\mathcal{A}|} X_{ij}\leq K$. \[thm:np1\] The budgeted social advertising problem is NP-hard. *Proof:* We reduce our problem from the traditional influence maximization problem. Let $\kappa_i=\infty$ for each $v_i$; then the only constraint is the overall attention constraint $K$. Assume there is only one ad, i.e., $|\mathcal{A}|=1$. Our problem under this setting is equivalent to the traditional influence maximization problem [@kdd03], i.e., finding a set of seeds that maximizes the utility function $U(\mathcal{S})$ subject to the cardinality constraint $K$. $\Box$ Unconstrained Social Advertising Problem ---------------------------------------- Given a seed selection $\mathcal{S}$, we first define the cost function $C(\mathcal{S})$.
$$C(\mathcal{S}) = \lambda_1 \cdot \underbrace{\sum_{i=1}^{|\mathcal{V}|}\exp(\max\{0, \sum_{j=1}^{|\mathcal{A}|} X_{ij}-\kappa_i\})}_{\mathrm{Part I}}+ \lambda_2 \cdot \underbrace{\exp(\max\{0, \sum_{i=1}^{|\mathcal{V}|} \sum_{j=1}^{|\mathcal{A}|}X_{ij} - K\})}_{\mathrm{Part II}}$$ where Part I (resp. Part II) is the penalty resulting from exceeding the individual attention budget (resp. the overall attention budget), and $\lambda_1$ (resp. $\lambda_2$) is a parameter that determines how strictly we would like to penalize exceeding the individual budget (resp. the overall budget). Then the objective function is defined as $U(\mathcal{S})-C(\mathcal{S})$. The unconstrained social advertising problem is NP-hard. *Proof:* Similar to the proof of Theorem \[thm:np1\], we again reduce our problem from the traditional influence maximization problem. Consider the following problem setting: assume there is only one advertiser, and $\lambda_1=0$, $\lambda_2=\infty$. Then a necessary condition for the optimality of any solution is to strictly obey the overall attention budget $K$. This problem is equivalent to the traditional influence maximization problem [@kdd03], i.e., finding a set of seeds that maximizes the utility function $U(\mathcal{S})$ subject to the cardinality constraint $K$. $\Box$ Budgeted Social Advertising Problem =================================== We turn to an alternative approach by first introducing a new utility function $V(\mathcal{S})$. Let $$V(\mathcal{S})=\sum_{a_i\in \mathcal{A}} V_i(\mathcal{S})$$ where $$V_i(\mathcal{S})= \min \{\alpha_i\cdot \sigma_i(S_i), B_i\}$$ Recall that $B_i$ is the maximum amount advertiser $a_i$ is willing to pay regardless of the seed selection; thus $V_i(\mathcal{S})$ can be interpreted as the actual payoff from $a_i$ under seed set $\mathcal{S}$.
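The truncated objective can be sketched the same way, and the per-advertiser bound $V_i \geq U_i$ that the comparison between the two objectives rests on is easy to verify numerically (as before, expected engagements are supplied directly, and the piecewise $U_i$ is restated inline for comparison):

```python
def v_i(sigma_i, alpha_i, b_i):
    """Truncated revenue V_i = min(alpha_i*sigma_i, B_i): no advertiser
    pays more than its declared budget."""
    return min(alpha_i * sigma_i, b_i)

def u_i(sigma_i, alpha_i, b_i):
    """Regret-based utility U_i from the piecewise definition above."""
    rev = alpha_i * sigma_i
    return rev if rev <= b_i else 2.0 * b_i - rev

# V_i >= U_i pointwise: equality while the revenue stays within budget,
# and V_i = B_i >= 2*B_i - rev = U_i once the revenue exceeds it.
checks = [(float(s), 0.5, 100.0) for s in range(0, 500, 25)]
```

Summing the pointwise bound over advertisers is what allows the truncated objective to upper-bound the regret-based utility at any feasible allocation.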
Now, we are ready to introduce the following *revenue maximization problem* (RMP): In order to find a solution for **P.1**, we first work with RMP and develop an algorithm with a provable performance bound. Later, we slightly modify this solution, and obtain an approximation algorithm for **P.1**. Let $\widehat{\mathcal{S}^{\textbf{P.1}}} = \{\widehat{S^{\textbf{P.1}}_1}, \widehat{S^{\textbf{P.1}}_2}, \cdots, \widehat{S^{\textbf{P.1}}_{|\mathcal{A}|}}\}$ (resp. $\widehat{\mathcal{S}^{R}}=\{\widehat{S^{R}_1}, \widehat{S^{R}_2}, \cdots, \widehat{S^{R}_{|\mathcal{A}|}}\}$) denote the optimal solution of problem **P.1** (resp. RMP), i.e., $$\widehat{\mathcal{S}^{\textbf{P.1}}} = {\arg \max}_{\mathcal{S}} U(\mathcal{S}), \mbox{ }\widehat{\mathcal{S}^{R}}={\arg\max}_{\mathcal{S}} V(\mathcal{S})$$ subject to constraints $C1$ and $C2$. In the following lemma, we first establish a relation between **P.1** and RMP. \[lem:11\] $$V(\widehat{\mathcal{S}^{R}}) \geq U(\widehat{\mathcal{S}^{\textbf{P.1}}})$$ *Proof:* We prove this lemma through contradiction. First of all, since both $\widehat{\mathcal{S}^{\textbf{P.1}}}$ and $\widehat{\mathcal{S}^{R}}$ must satisfy constraints $C1$ and $C2$, they are feasible solutions to both **P.1** and RMP. Assume by contradiction that $V(\widehat{\mathcal{S}^{R}}) < U(\widehat{\mathcal{S}^{\textbf{P.1}}})$. For each $\widehat{S^{\textbf{P.1}}_i} \in \widehat{\mathcal{S}^{\textbf{P.1}}}$, one of the following holds: (1) $\alpha_i\cdot\sigma_i(\widehat{S^{\textbf{P.1}}_i} )\leq B_i$, or (2) $\alpha_i\cdot\sigma_i(\widehat{S^{\textbf{P.1}}_i} )> B_i$. According to the definition of $U(\mathcal{S})$ and $V(\mathcal{S})$: 1. If (1) holds, $U_i(\widehat{\mathcal{S}^{\textbf{P.1}}}) = \alpha_i\cdot\sigma_i(\widehat{S^{\textbf{P.1}}_i} )$ and $V_i(\widehat{\mathcal{S}^{\textbf{P.1}}}) = \alpha_i\cdot\sigma_i(\widehat{S^{\textbf{P.1}}_i} )$; 2.
Otherwise, if (2) holds, $U_i(\widehat{\mathcal{S}^{\textbf{P.1}}}) = 2\cdot B_i - \alpha_i\cdot \sigma_i(\widehat{S^{\textbf{P.1}}_i})$ and $V_i(\widehat{\mathcal{S}^{\textbf{P.1}}}) = B_i$. In either case, we have $$V_i(\widehat{\mathcal{S}^{\textbf{P.1}}})\geq U_i(\widehat{\mathcal{S}^{\textbf{P.1}}})$$ Then we have $$V(\widehat{\mathcal{S}^{\textbf{P.1}}})=\sum_{i=1}^{|\mathcal{A}|} V_i(\widehat{\mathcal{S}^{\textbf{P.1}}}) \geq \sum_{i=1}^{|\mathcal{A}|} U_i(\widehat{\mathcal{S}^{\textbf{P.1}}}) = U(\widehat{\mathcal{S}^{\textbf{P.1}}})$$ Together with the assumption that $V(\widehat{\mathcal{S}^{R}}) < U(\widehat{\mathcal{S}^{\textbf{P.1}}})$, we have $$V(\widehat{\mathcal{S}^{\textbf{P.1}}}) > V(\widehat{\mathcal{S}^{R}})$$ This contradicts the assumption that $\widehat{\mathcal{S}^{R}}$ is an optimal solution of RMP. $\Box$ **Algorithm \[alg:greedy-peak1\]** (greedy algorithm for RMP). **Input:** Social network $\mathcal{G}$, budget $\mathcal{B}$, individual attention constraints $\kappa_i$, overall attention constraint $K$. **Output:** Seed set $\mathcal{S}$. Start with $S_i = \emptyset$ for every $a_i$; while some user can be added to some seed set without violating constraints $C1\sim C2$, add the user $v_i$ to the seed set $S_j$ that gives the highest marginal value in $V(\mathcal{S})$; return $\mathcal{S}$. **Algorithm \[alg:greedy-peak\].** **Input:** Social network $\mathcal{G}$, budget $\mathcal{B}$, individual attention constraints $\kappa_i$, overall attention constraint $K$. **Output:** Seed set $\mathcal{S}$. Call Algorithm \[alg:greedy-peak1\] as a subroutine to find an initial seed set $\mathcal{S}$; for each $S_i$, let $u$ be the last seed that has been added to $S_i$ by Algorithm \[alg:greedy-peak1\] and set $S_i = S_i\backslash \{u\}$; return $\mathcal{S}$. \[lem:111\] $V(\cdot)$ is a monotone submodular function. *Proof:* It is easy to prove that $V(\cdot)$ is a monotone function. We next prove the submodularity of $V(\cdot)$. The main idea is to first prove that $V_i(\cdot)$ is a submodular function; the lemma then follows from the fact that the sum of submodular functions is submodular.
Given two seed sets $X$ and $Y$ such that $X\subseteq Y$, and a user $v\notin Y$, consider the marginal gains $$V_i(X \cup\{v\})- V_i(X)=\min \{\alpha_i\cdot \sigma_i(X\cup\{v\}), B_i\}- \min \{\alpha_i\cdot \sigma_i(X), B_i\}$$ $$V_i(Y \cup\{v\})- V_i(Y)=\min \{\alpha_i\cdot \sigma_i(Y\cup\{v\}), B_i\}- \min \{\alpha_i\cdot \sigma_i(Y), B_i\}$$ For ease of notation, let $\Delta_X = V_i(X \cup\{v\})- V_i(X)$ and $\Delta_Y = V_i(Y \cup\{v\})- V_i(Y)$. - Case 1: $\alpha_i\cdot \sigma_i(X\cup\{v\})\geq B_i$ and $\alpha_i\cdot \sigma_i(X) \geq B_i$. This case is trivial: $\Delta_X=0$, and since $\sigma_i(\cdot)$ is monotone and $X\subseteq Y$, $\Delta_Y=0$ as well. - Case 2: $\alpha_i\cdot \sigma_i(X\cup\{v\})\geq B_i$ and $\alpha_i\cdot \sigma_i(X) < B_i$. It follows that $\Delta_X= B_i - \alpha_i\cdot \sigma_i(X)$ and, since $\alpha_i\cdot \sigma_i(Y\cup\{v\})\geq \alpha_i\cdot \sigma_i(X\cup\{v\})\geq B_i$, $\Delta_Y = B_i - \min\{\alpha_i\cdot \sigma_i(Y), B_i\} \leq B_i - \alpha_i\cdot \sigma_i(X) = \Delta_X$. - Case 3: $\alpha_i\cdot \sigma_i(X\cup\{v\}) < B_i$ and $\alpha_i\cdot \sigma_i(X) < B_i$. (Note that the case $\alpha_i\cdot \sigma_i(Y\cup\{v\})< B_i$ with $\alpha_i\cdot \sigma_i(Y)\geq B_i$ is impossible by monotonicity.) - Case 3.1: $\alpha_i\cdot \sigma_i(Y\cup\{v\})< B_i$ and $\alpha_i\cdot \sigma_i(Y)< B_i$. We have $\Delta_X = \alpha_i\cdot \sigma_i(X\cup\{v\}) - \alpha_i\cdot \sigma_i(X)$ and $\Delta_Y = \alpha_i\cdot \sigma_i(Y\cup\{v\}) - \alpha_i\cdot \sigma_i(Y)$. Based on the fact that $\sigma_i(\cdot)$ is a submodular function [@kdd03], we have $\Delta_X \geq \Delta_Y$. - Case 3.2: $\alpha_i\cdot \sigma_i(Y\cup\{v\})\geq B_i$ and $\alpha_i\cdot \sigma_i(Y)< B_i$. We have $\Delta_X = \alpha_i\cdot \sigma_i(X\cup\{v\}) - \alpha_i\cdot \sigma_i(X)$ and $\Delta_Y = B_i - \alpha_i\cdot \sigma_i(Y)\leq \alpha_i\cdot \sigma_i(Y\cup\{v\}) - \alpha_i\cdot \sigma_i(Y)$. Based on the fact that $\sigma_i(\cdot)$ is a submodular function, we have $\Delta_X \geq \alpha_i\cdot \sigma_i(Y\cup\{v\}) - \alpha_i\cdot \sigma_i(Y) \geq \Delta_Y$. - Case 3.3: $\alpha_i\cdot \sigma_i(Y\cup\{v\})\geq B_i$ and $\alpha_i\cdot \sigma_i(Y)\geq B_i$. We have $\Delta_X = \alpha_i\cdot \sigma_i(X\cup\{v\}) - \alpha_i\cdot \sigma_i(X)$ and $\Delta_Y=B_i-B_i=0$. 
Since $\Delta_X \geq 0$ by the monotonicity of $\sigma_i(\cdot)$, we again have $\Delta_X \geq \Delta_Y$. Therefore $\Delta_X \geq \Delta_Y$ in every case, which establishes the submodularity of $V_i(\cdot)$. $\Box$ \[lem:matroid\] Given the finite ground set $\mathcal{X} = \{X_{ij}: 1\leq i \leq |\mathcal{A}|;\ 1\leq j \leq |\mathcal{V}|\}$, with the independent sets $\mathcal{I}$ defined as $$\mathcal{I} = \{T \subseteq \mathcal{X}: C1\mbox{ and }C2 \mbox{ are satisfied when } X_{ij}=1 \mbox{ (resp. $X_{ij}=0$) for each } X_{ij}\in T \mbox{ (resp. $X_{ij}\notin T$)}\}$$ then $(\mathcal{X}, \mathcal{I})$ is a matroid. *Proof:* It is easy to prove that property (I1) is satisfied, i.e., if $A \in \mathcal{I}$ and $B \subseteq A$, then $B\in \mathcal{I}$, since removing allocations can violate neither $C1$ nor $C2$. We next prove that the exchange property (I2) also holds. Let $\mathcal{X}_{*j}=\{X_{1j}, X_{2j}, \cdots, X_{|\mathcal{A}|j}\}$ denote the allocations involving user $v_j$. If $A ,B \in \mathcal{I}$ and $|B|> |A|$, then by the pigeonhole principle there must exist $j$ such that $|\mathcal{X}_{*j}\cap B| > |\mathcal{X}_{*j}\cap A|$. Since $B$ satisfies $C1$, $|\mathcal{X}_{*j}\cap A| < |\mathcal{X}_{*j}\cap B| \leq \kappa_j$, so adding any seed in $\mathcal{X}_{*j}\cap (B\backslash A)$ to $A$ preserves $C1$; together with the fact that $|A|+1\leq|B|\leq K$, this means that independence is maintained, i.e., both $C1$ and $C2$ still hold. $\Box$ \[lem:22\] Algorithm \[alg:greedy-peak1\] provides a 1/2-factor approximation for RMP. *Proof:* According to [@fisher1978analysis], the greedy algorithm achieves a 1/2-factor approximation for monotone submodular maximization subject to one matroid constraint, so this lemma follows immediately from Lemma \[lem:111\] and Lemma \[lem:matroid\]. $\Box$ \[thm:main1\] Assume that for each $a_i\in \mathcal{A}$ the minimum number of seeds needed to reach budget $B_i$ is at least 2. Then Algorithm \[alg:greedy-peak\] provides a 1/4-factor approximation for **P.1**. *Proof:* Let $\mathcal{S}^{\mathrm{Alg}_1}=\{S^{\mathrm{Alg}_1}_1, S^{\mathrm{Alg}_1}_2, \cdots, S^{\mathrm{Alg}_1}_{|\mathcal{A}|}\}$ denote the seed set returned by Algorithm \[alg:greedy-peak1\]; Lemma \[lem:22\] indicates that $V(\mathcal{S}^{\mathrm{Alg}_1}) \geq \frac{V(\widehat{\mathcal{S}^{R}})}{2}$. 
Notice that in Algorithm \[alg:greedy-peak\], we remove the last added seed $u$ from $S^{\mathrm{Alg}_1}_i$ if and only if $V_i(S^{\mathrm{Alg}_1}_i)=B_i$ and $U_i(S^{\mathrm{Alg}_1}_i\backslash \{u\})\geq U_i(S^{\mathrm{Alg}_1}_i)$. We next prove that the total loss caused by removing all those seeds can be bounded. Since the minimum number of seeds needed to reach budget $B_i$ is at least 2 for any ad $a_i$, we have $|S^{\mathrm{Alg}_1}_i|\geq 2$. Then, based on the submodularity of $V_i(\cdot)$ and the greedy manner of Algorithm \[alg:greedy-peak1\] (the marginal gain of the last added seed is at most that of the first added seed, which in turn is at most $V_i(S^{\mathrm{Alg}_1}_i\backslash \{u\})$), we have $$V_i(S^{\mathrm{Alg}_1}_i\backslash \{u\}) \geq V_i(S^{\mathrm{Alg}_1}_i) - V_i(S^{\mathrm{Alg}_1}_i\backslash \{u\})$$ $$\Rightarrow V_i(S^{\mathrm{Alg}_1}_i\backslash \{u\}) \geq \frac{V_i(S^{\mathrm{Alg}_1}_i)}{2}$$ It follows that $$V_i(S^{\mathrm{Alg}_1}_i\backslash \{u\}) = U_i(S^{\mathrm{Alg}_1}_i\backslash \{u\})\geq \frac{V_i(S^{\mathrm{Alg}_1}_i)}{2}$$ Let $\mathcal{S}^{\mathrm{Alg}_2}=\{S^{\mathrm{Alg}_2}_1, S^{\mathrm{Alg}_2}_2, \cdots, S^{\mathrm{Alg}_2}_{|\mathcal{A}|}\}$ denote the seed set returned by Algorithm \[alg:greedy-peak\], and let $H$ denote the set of ads from which a seed has been removed. Together with Lemma \[lem:22\], we have $$\begin{aligned} U(\mathcal{S}^{\mathrm{Alg}_2}) = \sum_{i=1}^{|\mathcal{A}|} U_i(S^{\mathrm{Alg}_2}_i) &=& \sum_{i\in H} U_i(S^{\mathrm{Alg}_1}_i\backslash \{u\}) + \sum_{i\notin H} U_i(S^{\mathrm{Alg}_1}_i)\\ &\geq& \sum_{i\in H} \frac{V_i(S^{\mathrm{Alg}_1}_i)}{2} + \sum_{i\notin H} V_i(S^{\mathrm{Alg}_1}_i)\\ &\geq& \frac{V(\mathcal{S}^{\mathrm{Alg}_1})}{2} \geq \frac{V(\widehat{\mathcal{S}^{R}})}{4}\end{aligned}$$ Based on Lemma \[lem:11\], we have $$U(\mathcal{S}^{\mathrm{Alg}_2}) \geq \frac{V(\widehat{\mathcal{S}^{R}})}{4} \geq \frac{U(\widehat{\mathcal{S}^{\textbf{P.1}}})}{4}$$ This finishes the proof of the theorem. 
$\Box$ Unconstrained Social Advertising ================================ In this section, we study the unconstrained social advertising problem. Notice that the original objective function $U(\mathcal{S})-C(\mathcal{S})$ may take negative values, which makes a multiplicative approximation guarantee meaningless. Therefore, instead of directly maximizing the original objective function, we equivalently maximize the following *shifted* objective function $$f(\mathcal{S})=U(\mathcal{S})-C(\mathcal{S})+ \phi$$ where $\phi$ is a constant chosen to ensure $f(\mathcal{S}) \geq 0$ for any $\mathcal{S}$. In practice, we may choose $\phi$ as the maximum cost that can be incurred when allocating every ad to all users. In the rest of this section, we use $C^+(\mathcal{S})$ to denote $(C(\mathcal{S})+ \phi)$ for ease of notation. **Input:** Social network $\mathcal{G}$, individual attention constraint $\kappa_i$, overall attention constraint $K$.\ **Output:** Ad allocation $\mathcal{S}$. For each ad $a_t$, initialize $O_t = \emptyset$ and $Q_t = \mathcal{V}$; for each user $v_j$, compute $a_j \leftarrow f'(O_t \cup \{v_j\})-f'(O_t)$ and $b_j \leftarrow f'(Q_t \backslash \{v_j\})-f'(Q_t)$, set $a'_j = \max\{0,a_j\}$ and $b'_j=\max\{0,b_j\}$, and **with probability** $a_j'/(a_j'+b_j')$ **do** $O_t \leftarrow O_t \cup \{v_j\}$ **else** $Q_t \leftarrow Q_t \backslash \{v_j\}$; return $\mathcal{S}=\{O_1, O_2, \cdots, O_{|\mathcal{A}|}\}$. **Input:** Social network $\mathcal{G}$, individual attention constraint $\kappa_i$, overall attention constraint $K$.\ **Output:** Seed set $\mathcal{S}$. Call Algorithm \[alg:greedy-peak2\] as a subroutine to find an initial seed set $\mathcal{S}$; for each $S_i$, sort the users in $S_i$ according to their marginal gains in terms of $V_i(\cdot)$, let $u$ be the user with the smallest marginal gain, and (under the same removal condition as in Algorithm \[alg:greedy-peak\]) set $S_i = S_i\backslash \{u\}$; return $\mathcal{S}$. 
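As an illustration, the randomized double-greedy subroutine described above can be sketched as follows for a single ground set (one ad slot). This is a minimal sketch, not the authors' implementation; the function and variable names are ours, and `f` stands for the restriction of the shifted objective to that slot.

```python
import random

def double_greedy(elements, f, rng=random.Random(0)):
    """Randomized double greedy (Buchbinder et al. 2012) for maximizing a
    non-negative submodular set function f; 1/2-approximation in expectation."""
    O, Q = set(), set(elements)          # O grows from below, Q shrinks from above
    for v in elements:
        a = f(O | {v}) - f(O)            # marginal gain of adding v to O
        b = f(Q - {v}) - f(Q)            # marginal gain of dropping v from Q
        a_pos, b_pos = max(a, 0.0), max(b, 0.0)
        # keep v with probability a'/(a'+b'); by convention keep it if both are 0
        if a_pos + b_pos == 0 or rng.random() < a_pos / (a_pos + b_pos):
            O.add(v)
        else:
            Q.discard(v)
    return O                             # O == Q when the loop ends
```

In the setting of Algorithm \[alg:greedy-peak2\], one such pass would be run for each ad slot $O_t$, with $f$ replaced by the corresponding restriction of $f'$.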
Similar to the approach used in the previous section, we first introduce the unconstrained revenue maximization problem (U-RMP) with the following objective function: $$f'(\mathcal{S})=V(\mathcal{S})-C(\mathcal{S})+ \phi$$ Let $\widehat{\mathcal{S}^{\textbf{P.2}}}$ (resp. $\widehat{\mathcal{S}^{U}}$) denote the optimal solution of problem **P.2** (resp. U-RMP), i.e., $$\widehat{\mathcal{S}^{\textbf{P.2}}} = {\arg \max}_{\mathcal{S}} f(\mathcal{S}), \mbox{ }\widehat{\mathcal{S}^{U}}={\arg\max}_{\mathcal{S}} f'(\mathcal{S})$$ Similar to Lemma \[lem:11\], we first prove that \[lem:33\] $$f'(\widehat{\mathcal{S}^{U}}) \geq f(\widehat{\mathcal{S}^{\textbf{P.2}}})$$ The proof of Lemma \[lem:33\] is similar to that of Lemma \[lem:11\] and is thus omitted here. \[lem:222\] The function $f'(\mathcal{S})$ is submodular. *Proof:* Since adding a constant does not affect modularity, it suffices to show that $V(\mathcal{S})$ is submodular and $C^+(\mathcal{S})$ is supermodular. The first part immediately follows from Lemma \[lem:111\]. Now we focus on proving that $C^+(\mathcal{S})$ is supermodular. We expand $C^+(\mathcal{S})$ as follows $$C^+(\mathcal{S}) = \lambda_1 \cdot \underbrace{\sum_{i=1}^{|\mathcal{V}|}\exp(\max\{0, \sum_{j=1}^{|\mathcal{A}|} X_{ij}-\kappa_i\})}_{\mathrm{Part I}}+ \lambda_2 \cdot \underbrace{\exp(\max\{0, \sum_{i=1}^{|\mathcal{V}|} \sum_{j=1}^{|\mathcal{A}|}X_{ij} - K\})}_{\mathrm{Part II}}+\phi$$ It is easy to verify that both Part I and Part II are supermodular. Since a nonnegative linear combination of supermodular functions is supermodular, and $\phi$ is a constant, $C^+(\mathcal{S})$ is supermodular. This finishes the proof of this lemma. $\Box$ Now we are ready to present a linear-time 1/2-approximation algorithm for U-RMP, which is adapted from [@buchbinder2012tight]. The detailed description can be found in Algorithm \[alg:greedy-peak2\]. The algorithm maintains two candidate sets $O_t$ and $Q_t$ for each $S_t$. Initially, we set $O_t = \emptyset$ and $Q_t = \mathcal{V}$. 
In each iteration we either add $v_j$ to $O_t$ or remove it from $Q_t$. The decision is made randomly, with probability derived from the marginal gains of the two options, i.e., $a'_j$ and $b'_j$. The algorithm terminates when $O_t$ and $Q_t$ become equal. Algorithm \[alg:greedy-peak2\] provides a 1/2-factor approximation for U-RMP. *Proof:* According to Theorem I.2 in [@buchbinder2012tight], the randomized double greedy algorithm achieves a 1/2-factor approximation (in expectation) for unconstrained submodular maximization, so this lemma follows immediately from Lemma \[lem:222\]. $\Box$ Assume that for each $a_i\in \mathcal{A}$ the minimum number of seeds needed to reach budget $B_i$ is at least 2. Then Algorithm \[alg:greedy-peak222\] provides a 1/4-factor approximation for **P.2**. *Proof:* Similar to Theorem \[thm:main1\], we prove that the total loss can be bounded after removing some users from the output of Algorithm \[alg:greedy-peak2\]. Assume $u$ has been removed from $S_i$ in Algorithm \[alg:greedy-peak222\]. Based on the submodularity of $V_i(\cdot)$ and the fact that $u$ has the smallest marginal gain, we have $$V_i(S^{\mathrm{Alg}_3}_i\backslash \{u\}) \geq V_i(S^{\mathrm{Alg}_3}_i) - V_i(S^{\mathrm{Alg}_3}_i\backslash \{u\})$$ $$\Rightarrow V_i(S^{\mathrm{Alg}_3}_i\backslash \{u\}) \geq \frac{V_i(S^{\mathrm{Alg}_3}_i)}{2}$$ It follows that $$V_i(S^{\mathrm{Alg}_3}_i\backslash \{u\}) = U_i(S^{\mathrm{Alg}_3}_i\backslash \{u\}) = U_i(S^{\mathrm{Alg}_4}_i) \geq \frac{V_i(S^{\mathrm{Alg}_3}_i)}{2}$$ On the other hand, removing any user can only decrease the cost $C(\mathcal{S})$. 
Then we have $$f(\mathcal{S}^{\mathrm{Alg}_4})=U(\mathcal{S}^{\mathrm{Alg}_4}) - C(\mathcal{S}^{\mathrm{Alg}_4}) + \phi \geq \frac{V(\mathcal{S}^{\mathrm{Alg}_3})}{2}+ \phi - C(\mathcal{S}^{\mathrm{Alg}_3}) \geq \frac{f'(\mathcal{S}^{\mathrm{Alg}_3})}{2}\geq \frac{f'(\widehat{\mathcal{S}^{U}})}{4}$$ where the first inequality uses the bound above together with the fact that removing users cannot increase the cost, and the second inequality uses $\phi - C(\cdot) \geq 0$, which holds by the choice of $\phi$. Then, based on Lemma \[lem:33\], we have $$f(\mathcal{S}^{\mathrm{Alg}_4})\geq \frac{f(\widehat{\mathcal{S}^{\textbf{P.2}}})}{4}$$ $\Box$ Conclusion ========== In this paper, we propose and study two social advertising problems: the *budgeted social advertising problem* and the *unconstrained social advertising problem*. We first prove that both problems are NP-hard, by reduction from the classical influence maximization problem, and then develop constant-factor approximation algorithms for both of them.
[**Stringy Sphalerons and Gauss–Bonnet Term**]{}\ \ [*Laboratory of High Energies,\ Joint Institute for Nuclear Research, 141980 Dubna, Russia,\ e–mail: donets@lhe26.jinr.dubna.su* ]{}\ and\ [**Dmitri V. Gal’tsov,**]{}\ [*Department of Theoretical Physics, Physics Faculty,\ Moscow State University, 119899 Moscow, Russia,\ e–mail: galtsov@grg.phys.msu.su* ]{}\ [**Abstract**]{} The effect of the Gauss–Bonnet term on the $SU(2)$ non–Abelian regular stringy sphaleron solutions is studied within a non–perturbative treatment. It is found that the existence of regular solutions depends crucially on the value of the numerical factor $\beta$ in front of the Gauss–Bonnet term in the four–dimensional effective action. Numerical solutions are constructed in the $N=1, 2, 3$ cases for different $\beta$ below certain critical values $\beta_N$, which decrease with growing $N$ ($N$ being the number of nodes of the Yang–Mills function). It is proved that for any static spherically symmetric asymptotically flat regular solution the ADM mass is exactly equal to the dilaton charge. No solutions were found for $\beta$ above the critical values, in particular for $\beta=1$. Since Bartnik and McKinnon’s discovery [@bm] of the regular particle–like solutions to the coupled system of the Einstein–Yang–Mills (EYM) equations, there has been growing interest in revealing their possible physical significance. It was shown that these solutions could play, at ultramicroscopic distances, a role analogous to that of electroweak sphalerons [@gv]. The sphaleron interpretation is supported by the existence of the odd–parity YM negative modes [@onm] (apart from the previously known even–parity ones [@sz]), as well as by the fermion zero modes and the level–crossing phenomenon [@fer]. A natural question arises whether the EYM sphalerons survive in the more sophisticated field models suggested by the theory of superstrings. 
It was shown recently that regular sphaleron solutions exist within the context of the Einstein–Yang–Mills–Dilaton (EYMD) theory [@dg1], [@lm], [@tm], [@dg2], [@oneill]. Remarkably, they have a dilaton charge exactly equal to the ADM mass. This property is similar to that of the extremal dilaton black holes, which are likely (at least some of them) to represent exact solutions of string theory. To further investigate the possible relevance of the EYMD sphalerons to string theory, we study here the EYMD system with the Gauss–Bonnet (GB) term, which is typically present in stringy gravity as the lowest–order curvature correction. A similar problem for Abelian dilatonic black holes was studied recently within a perturbative approach [@msm]. For regular solutions, however, one needs a more precise treatment. In order to see in a continuous way how the EYMD solutions are modified by the GB term, we introduce into the Lagrangian a numerical factor $\beta$, so that $\beta=0$ corresponds to the pure EYMD system. It turns out that the series expansion of the regular solution near the origin is essentially $\beta$–dependent. Also, computing the GB contribution to the effective energy density on the background EYMD solutions, one can observe that the GB effect is no longer small for $\beta$ of the order of unity. For this reason we avoid any perturbative treatment of the GB term and attack the problem numerically. Starting with $\beta=0$, we increase gradually the value of this parameter and search (using a shooting strategy) for solutions interpolating smoothly between the regular asymptotic expansion near the origin and an asymptotically flat expansion at infinity. Although the leading terms of the expansions near infinity are not modified by the GB corrections, those near the origin [*are affected*]{} substantially. 
We construct numerical solutions for $N=1, 2, 3$ and some $\beta \neq 0$, and show that regular solutions cease to exist above certain critical values $\beta_N$ depending on the number of nodes $N$ of the YM function. For all solutions found within the domains of existence, the modifications due to the GB term are relatively small, and all characteristic functions still preserve the typical behaviour they have in the pure EYMD case. We also prove analytically that the dilaton charge of any regular solution (with an exact account of the GB term) [*is equal*]{} to its ADM mass, independently of the value of $\beta$. For $\beta=0$ a stronger relation between $g_{00}$ and the dilaton factor holds everywhere. We start with the following bosonic part of the heterotic string effective action in four dimensions in the Einstein frame: $$S = \frac{1}{16\pi} \, \int \;\left\{(-{\it R} + \, 2\partial_{\mu} \Phi \partial^{\mu} \Phi ) - \alpha'\exp (-2 \Phi) (F_{a\mu\nu} \, F_a^{\mu\nu} - \beta {G})\right\}\, \sqrt{-g} d^4x \; ,$$ where $\Phi$ is the dilaton, $F$ is the Yang-Mills field strength and $G$ is the Gauss–Bonnet term, which can be presented as the divergence of the topological current $$G = R_{\mu\nu\lambda\tau} R^{\mu\nu\lambda\tau} - 4 R_{\mu\nu} R^{\mu\nu} + R^2 = \nabla_{\mu} K^{\mu} \; .$$ Integrating the GB term in (1) by parts, one can rewrite the action in a somewhat simpler form (both in (1) and (3) we ignore surface terms, which are not relevant for the present analysis) $$S = \frac{1}{16\pi} \, \int \, \,\left\{(-{\it R} + \, 2\partial_{\mu} \Phi \partial^{\mu} \Phi ) - \alpha' e^{-2 \Phi} (F_{a\mu\nu} \, F_a^{\mu\nu} - 2 \beta (\partial_{\mu} \Phi) { K^{\mu}})\right\}\,\sqrt{-g} d^4x \; .$$ We parametrize the metric of the static spherically symmetric spacetime as $$ds^2 = W dt^2 - \frac{dr^2}{w} - R^2 (d\theta^2 + \sin^2 \theta d\phi^2)\; ,$$ where $ W = w \sigma^2$ and all functions depend on the single variable $r$. 
In this case only the radial component of the topological current is relevant: $$K^r = \frac{4 (w \sigma^2)' (wR'^2 - 1)}{R^2\sigma^2} \;.$$ Here and below, primes denote derivatives with respect to $r$. The magnetic part of the static spherically symmetric $SU(2)$ Yang–Mills connection can be expressed in terms of a single function $f(r)$ of the radial variable: $$A^a_{\mu} T_a dx^\mu = (f-1)(L_\phi d\theta - L_\theta\sin \theta d\phi)\, ,$$ where $L_r =T_a n^a$, $L_\theta=\partial_\theta L_r$, $L_\phi= (\sin \theta)^{-1}\partial_\phi L_r\;,$ $n^a = (\sin \theta \cos \phi, \sin \theta \sin \phi, \cos \theta)$ is the unit vector and $T_a$ are the normalized Hermitian generators of the $SU(2)$ group. Integrating out the angular variables in (3) and eliminating some total derivatives, one obtains the following reduced effective action $$S = \int dt dr \left(L_{g} + L_{m} + L_{ GB} \right)\; ,$$ where $$L_{g} = \frac{\sigma}{2} \left( R'(wR)' + 1\right) + w \sigma'RR'\; ,$$ is the gravitational part, $$L_{m} = - \frac{1}{2} wR^2 \Phi'^2 - \frac{\alpha'}{2} { F} e^{-2\Phi}\; ,$$ is the matter part, $$L_{GB} = 2 \alpha' \beta \sigma^{-1} \Phi' W' (wR'^2 - 1) e^{-2\Phi}\; ,$$ is the Gauss–Bonnet contribution, and $${ F} = 2wf'^2 + \frac{(1 - f^2)^2}{R^2}\; .$$ Note that an arbitrary rescaling of the slope parameter $\alpha' \rightarrow k \alpha'$, together with the corresponding rescaling of the radial variable $r \rightarrow \sqrt{k} r$, is a symmetry transformation of the effective action (7). Choosing Planck units, $\alpha'=1$, we are left with the only dimensionless parameter $\beta$. The equations of motion (including an Einstein constraint) can be obtained by direct variation of (7) over $\sigma, w, R, f, \Phi$. 
Then, fixing the gravitational gauge as $R=r$, one finds the following set of equations $$\frac{\sigma'}{\sigma} = r\Phi'^2 + \frac{2f'^2 e^{-2\Phi}}{r} + \frac{4\beta}{r}\Big((\frac{\Phi'(w-1) e^{-2\Phi}}{\sigma})' \sigma - \frac{W' \Phi'}{\sigma^2} e^{-2\Phi} \Big)\; ,$$ $$w'\left(1 - \frac{4 \beta \Phi' (1 - 3w) e^{-2\Phi}}{r}\right) + \frac{{ F}}{r}e^{-2\Phi} + rw \Phi'^2 = \frac{(1 - w)}{r} \left(1 - 4 \beta w (e^{-2\Phi})''\right)\; ,$$ $$\frac{1}{2}\left(\frac{W'}{\sigma}\right)' r + \left(\frac{W}{\sigma}\right)' + \sigma \left(wr\Phi'^2 -\frac{(1 - f^2)^2}{r^3} e^{-2\Phi}\right) + 4 \beta \left(\frac{W'w \Phi'e^{-2\Phi}}{\sigma}\right)' = 0 \; ,$$ $$\left(w \sigma f' e^{-2\Phi}\right)' + \frac{\sigma f (1 - f^2) e^{-2\Phi}}{r^2} = 0\; ,$$ $$\left(\sigma r^2 w \Phi'\right)' + \sigma { F} e^{-2\Phi} + 2 \beta \left(\frac{W'(1 - w)}{\sigma}\right)' e^{-2\Phi} = 0\; .$$ It is also useful to compute the effective energy density as it enters the standard Einstein equations with account for the GB term $$2 T^0_0 = w \Phi'^2 + \frac{{ F}}{r^2} e^{-2\Phi} + \frac{4 \beta}{r^2}\left\{2w(w -1)(\Phi'' - 2\Phi'^2) + \Phi' w'(3w - 1)\right\} e^{-2\Phi}\; .$$ For $\beta=0$ the system reduces to that of [@dg1], and the corresponding solutions exhibit the typical BK structure of the YM function: solutions start from $f=\pm 1$ and go asymptotically to $\mp 1$, either monotonically $(N=1)$ or after $N-1$ oscillations around zero. As a first step of the analysis we calculate the GB term and the corresponding density $\sqrt{-g}G$ by substituting the sphaleron solutions found without an account for the GB term. Numerical results are shown in Fig. 1. One can see that the value of the GB term increases with the growing number of nodes of the YM function. It can be anticipated that its influence on the sphaleron solutions will increase for higher $N$. We have also calculated the effective energy density (17) for the background EYMD solutions. Fig. 
2 clearly shows that the relative contribution of the GB term for $\beta=1$ is not small. This presumably invalidates any attempt to treat the GB term perturbatively, so we are faced with the problem of constructing numerical solutions to the system (12)–(16). To define the ADM mass $M$ and the dilaton charge $D$, one writes the asymptotic expansions for $W$ $$W = 1 - \frac{2M}{r} - \frac{2D^2 M}{r^3} + O(\frac{1}{r^4})\; ,$$ and the dilaton $$\Phi = \Phi_{\infty} + \frac{D}{r} + \frac{DM}{r^2} + \frac{8M^2 D - D^3}{6r^3} + O(\frac{1}{r^4})\; .$$ The corresponding expansion of $\sigma$ reads $$\sigma = 1 - \frac{D^2}{2r^2} - \frac{4D^2 M}{3r^3} + O(\frac{1}{r^4})\; .$$ To ensure asymptotic flatness it is sufficient (as it is for $\beta=0$) to have for the Yang–Mills function $f$ $$f = \pm 1 + O(\frac{1}{r})\; .$$ Clearly, the GB–induced terms do not influence the leading behaviour of solutions near infinity. On the contrary, the expansion of regular solutions near the origin [*is*]{} affected by the curvature terms. From the system (12)–(16) one finds $$\begin{aligned} f&=& -1 + br^2 + O(r^4)\,,\\ \Phi&=& \Phi_0 + \Phi_2 r^2 + O(r^4)\,,\\ \sigma&=&\sigma_0 + \sigma_2 r^2 + O(r^4)\,,\\ W&=&W_0 + W_2 r^2 + O(r^4)\,, \end{aligned}$$ or in terms of $w$: $$w = 1 + w_2 r^2 + O(r^4)\,,$$ where the following relations hold $$W_0 = \sigma_0^2, \quad W_2 = 2 \sigma_0 \sigma_2 + w_2 \sigma_0^2 .$$ Let us prove that for any regular solution to the system (12)–(16) (if it exists), the ADM mass $M$ is exactly equal to the dilaton charge $D$. Combining Eqs. 
(12), (14) and (16), after some rearrangement one can find the following identity $$\left(2 \sigma r^2 w \Phi' + \frac{W' r^2}{\sigma}\right)' = 4 \beta Q'\;,$$ where $$Q = \sigma^{-1} \left\{(w - 1)(W' + 2W \Phi') - 2r \Phi w W\right\}\; .$$ Integrating this relation over the semiaxis with account for (18)–(20), one gets $$\left.\left(\frac{W}{\sigma}r^2 \Phi' + \frac{W' r^2}{2\sigma}\right)\right|^{\infty}_{0}\equiv M - D = 2 \beta \left[Q(\infty) - Q(0)\right]\; .$$ Now, from the expansions (18)–(21) and (22)–(26) it can be found that both of the above boundary values of $Q$ are equal to zero, which proves the exact equality $M=D$. Remarkably, this property of regular EYMD solutions, observed first in [@dg1], remains true with account for the GB term for any value of $\beta$. There is an important difference, however. In the case $\beta=0$ a stronger identity $$W = \exp (-2\Phi)$$ holds, which is similar to the well–known relation for the extremal magnetic dilatonic black holes, where it ensures regularity of the metric in the string frame. When the GB term is taken into account this is no longer true, while the relation $M=D$, expressing the validity of (31) in the asymptotic region, still holds. Similarly to the system of Einstein–Yang–Mills–Dilaton equations [@dg1], [@lm], [@tm], [@oneill] without the Gauss–Bonnet term, there are three independent parameters in the series solutions of the system (12)–(16) near the origin: $b$, $\Phi_0$ and $\sigma_0$. Of these, the quantity $\Phi_0$ is somewhat trivial because of the symmetry of the system under a dilaton shift accompanied by a suitable rescaling of the radial coordinate (if desired, $\exp (-2\Phi_0)$ may be absorbed into a redefinition of the parameters in (22)–(26)). However, there is a substantial complication as compared with the pure EYMD theory. 
In order to satisfy the system (12)–(16) at leading order, the coefficient $\Phi_2$ has to be one of the real roots of the following fourth–order algebraic equation $$\left(\Phi_2 + 2 b^2 e^{-2\Phi_0}\right) \left(1 + 16 \beta \Phi_2 e^{-2\Phi_0}\right)^3 + 32 \beta b^4 e^{-6\Phi_0} \left(1 + 8 \beta \Phi_2 e^{-2\Phi_0}\right) = 0 .$$ Once $\Phi_2$ is found, the two other coefficients $w_2$ and $\sigma_2$ can be obtained as $$\begin{aligned} w_2 &=& - \frac{4b^2 e^{-2\Phi_0}}{1 + 16 \beta \Phi_2 e^{-2\Phi_0}}\,,\\ \sigma_2 &=& \frac{\sigma_0 e^{-2\Phi_0} (4b^2 + 4 \beta w_2 \Phi_2)}{1 + 16 \beta \Phi_2 e^{-2\Phi_0}}\; . \end{aligned}$$ It is convenient to regard Eq. (32) as giving the value of $\Phi_2$ as a function of $b$, while $\Phi_0$ is fixed. In fact, a dilaton shift $$\Phi_0 \rightarrow \Phi_0 + \delta \Phi_0$$ leads to a solution related to the initial one by a radial rescaling. Physically, the normalization $\Phi_{\infty}=0$ is preferable, since it ensures a unique mass scale for all solutions. But technically it is convenient to solve the system first by fixing $\Phi_0$ arbitrarily, say, $\Phi_0=0$. The rescaled solution then results from $$b \rightarrow b \exp (2\delta \Phi_0) ,\quad \Phi_2 \rightarrow \Phi_2 \exp (2\delta \Phi_0) ,\quad \sigma_0 \rightarrow \sigma_0 .$$ At the final stage of the calculation we rescaled the solutions, imposing the condition $\Phi_{\infty}=0$ in order to fix a unique mass scale for all of them. The numerical strategy consists in solving the system (12)–(16) starting from the series solution (22)–(26) near the origin. A crucial role is played by the parameter $b$, which should take a discrete sequence of values. For $\beta=0$ the solution of Eq. (32) reads $\Phi_2=-2b^2\exp (-2\Phi_0)$, and clearly this does not impose any restriction on this parameter. But for $\beta\neq 0$ it turns out that real solutions for $\Phi_2$ do not exist in some region of $b$. 
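Since Eq. (32) is a quartic in $\Phi_2$, its real roots for given $b$, $\beta$ and $\Phi_0$ can be inspected numerically. The following sketch (the function name is ours; this is not the authors' code) builds the polynomial and filters out the real roots; for instance, for the $\beta=0.1$ entry of Table 1 ($b=1.026$, $\Phi_0=0.9199$) it recovers roots close to the tabulated $\Phi_2^{max}=-3.390$ and $\Phi_2^{min}=-0.3523$.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def phi2_roots(b, beta, phi0=0.0):
    """Real roots Phi_2 of the quartic Eq. (32):
    (Phi_2 + 2 b^2 e)(1 + 16 beta e Phi_2)^3
        + 32 beta b^4 e^3 (1 + 8 beta e Phi_2) = 0,   e = exp(-2 Phi_0)."""
    e = np.exp(-2.0 * phi0)
    poly = (P([2.0 * b**2 * e, 1.0]) * P([1.0, 16.0 * beta * e]) ** 3
            + 32.0 * beta * b**4 * e**3 * P([1.0, 8.0 * beta * e]))
    r = poly.roots()
    # keep only (numerically) real roots, sorted in ascending order
    return np.sort(r[np.abs(r.imag) < 1e-8].real)
```

For $\beta=0$ the polynomial degenerates to a linear one and the function returns the single root $\Phi_2=-2b^2\exp(-2\Phi_0)$, in agreement with the text; scanning $b$ at fixed $\beta$ exhibits the window where real roots disappear.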
Hence, in addition to the problem of “quantization” of $b$, one has to ensure that $b$ belongs to the region where real roots of Eq. (32) exist. It happens that if $\beta$ is greater than some ($N$–dependent) critical value $\beta_N$, the allowed region of $b$ does not contain those quantized values for which regular solutions exist. Only for $\beta < \beta_N$ do regular solutions exist, and they exhibit behaviour similar to that of the EYMD solutions. The real roots of the algebraic equation (32) form two branches, as shown in Fig. 3a,b in terms of the quantities ${\tilde b}=b\exp (-2\Phi_0),\, {\tilde \Phi_2}=\Phi_2 \exp (-2\Phi_0)$. For roots from the second branch (Fig. 3b) we did not find any solutions; they seem to correspond to $b$ outside the above quantization domain. Note that this branch does not contain the EYMD root corresponding to $\beta=0$. The first branch (Fig. 3a) [*does*]{} have a solution for $\beta=0$, while for any $\beta\neq 0$ there are two negative roots, $\Phi_2^{max}(b)$ and $\Phi_2^{min}(b)$, labelled by their absolute values. Of these two, it is the second one, $\Phi_2^{min}(b)$, which has the limiting value $\Phi_2=-2b^2\exp (-2\Phi_0)$ as $\beta \rightarrow 0$. No regular solutions to the system (12)–(16) corresponding to $\Phi_2^{max}(b)$ were found either. Starting with the known $\beta=0$, $N=1$ EYMD solution [@dg1], we increased gradually the value of $\beta$, searching for the desired quantized $b$ related to $\Phi_2^{min}(b)$. Numerical integration of the system (12)–(16) was done using the fourth–order Runge–Kutta scheme. The values of the parameters for the $N=1$ case, found numerically for several $\beta$, together with the corresponding ADM mass $M=D$, are given in Table 1. The solutions were rescaled to ensure $\Phi_{\infty}=0$.

**Table 1. N=1.**

  $\beta$   $b$      $\Phi_0$   $\sigma _0$   $\Phi_2^{min}$   $\Phi_2^{max}$   $M=D$   $w_2$
  --------- -------- ---------- ------------- ---------------- ---------------- ------- ---------
  0.        1.073    0.9311     0.3936        -0.3576          —                0.578   -0.7153
  0.1000    1.026    0.9199     0.3840        -0.3523          -3.390           0.573   -0.7344
  0.2000    0.9866   0.9122     0.3744        -0.3566          -1.475           0.568   -0.7697
  0.3000    0.9619   0.9120     0.3597        -0.3833          -0.8376          0.563   -0.8496
  0.3700    0.9657   0.9231     0.3421        -0.4938          -0.5198          0.560   -1.0933

One can observe that with increasing $\beta$ the two real roots $\Phi_2^{min}(b)$ and $\Phi_2^{max}(b)$ converge and merge at a limiting value $\beta_1$ approximately equal to $0.37$. For $\beta>\beta_1$ there is no $b$ which could generate an asymptotically flat solution with $N=1$ compatible with the existence of a real root $\Phi_2(b)$ of Eq. (32). A similar situation was encountered for the higher–$N$ solutions. Numerical results for $N=2$ and $N=3$ are presented in Tables 2 and 3. Figures 4–8 depict the corresponding numerical curves for some values of $\beta$ and $N$.

**Table 2. N=2.**

  $\beta$   $b$      $\Phi_0$   $\sigma _0$   $\Phi_2^{min}$   $\Phi_2^{max}$   $M=D$   $w_2$
  --------- -------- ---------- ------------- ---------------- ---------------- ------- --------
  0.        8.3612   1.7923     0.1665        -3.8796          —                0.685   -7.760
  0.1000    7.1902   1.7481     0.1529        -3.5165          -15.982          0.673   -7.558
  0.2000    6.4017   1.7297     0.1370        -3.6597          -5.7461          0.660   -8.161
  0.2208    6.3344   1.7343     0.1320        -4.2904          -4.3127          0.657   -9.478

**Table 3. N=3.**

  $\beta$   $b$       $\Phi_0$   $\sigma _0$   $\Phi_2^{min}$   $\Phi_2^{max}$   $M=D$    $w_2$
  --------- --------- ---------- ------------- ---------------- ---------------- -------- ---------
  0.        53.8351   2.6920     0.0678        -26.600          —                0.7042   -53.202
  0.2000    36.5453   2.5930     0.0504        -22.348          -30.543          0.6744   -49.817
  0.21117   36.2683   2.5956     0.0492        -25.151          -25.212          0.6726   -55.558

Note that the numerical values of the ADM mass/dilaton charge monotonically decrease with growing $\beta$ for each fixed $N$. Also, it can be observed that the limiting values $\beta_{N}$ decrease with increasing $N$: ($\beta_{1}=0.37, \beta_{2}=0.22, \beta_{3}=0.21, ...$). 
It can be anticipated that $\beta_{N}$ has a limiting value $\beta_{\infty}$ as $N \rightarrow \infty$, which would presumably give an absolute bound for the existence of static spherically symmetric regular EYMD–Gauss–Bonnet solutions. It is also interesting to note that, although the contribution of the GB terms to the energy density for $\beta$ close to $\beta_N$ is not small (as shown in Fig. 5), the behaviour of $f$ and the metric functions is very similar to that of the pure EYMD solutions. It has also to be noted that, when the limiting value $\beta_{N}$ is approached, neither singularities nor other numerical problems arise; so the only reason for the absence of solutions when $\beta$ exceeds the above critical value is an intrinsic incompatibility of the series expansion near the origin. We conclude with the following remarks. When the Gauss–Bonnet term is included, the total number of derivatives in the system of equations increases, as well as the degree of its non–linearity. However, in a limited region of the numerical factor $\beta$, the behaviour of solutions remains qualitatively the same as in the pure EYMD case. Moreover, the remarkable equality of the ADM mass to the dilaton charge remains unaffected by the GB term for any $\beta$. However, it is likely that the EYMD sphalerons are destroyed by the Gauss–Bonnet term for sufficiently large values of $\beta$. The most persistent is the $N=1$ solution, which exists up to $\beta=0.37$. Higher–$N$ solutions cease to exist for lower $\beta$; the limiting value is likely to be of the order of 0.2. This work was supported in part by the ISF Grant M79000 and by the Russian Foundation for Fundamental Research Grant 93–02–16977. [91]{} R. Bartnik and J. McKinnon, Phys. Rev. Lett. [**61**]{} (1988) 141. D.V. Gal’tsov, M.S. Volkov, Phys. Lett. [**B 273**]{} (1991) 255. M.S. Volkov and D.V. Gal’tsov, Phys. Lett. [**B 341**]{} (1995) 279. N. Straumann and Z. Zhou, Phys. Lett. [**B 237**]{} (1990) 35; [**B 243**]{} (1990), 33. 
G.W. Gibbons and A.R. Steif, Phys. Lett. [**B 314**]{} (1993) 13; M.S. Volkov, Phys. Lett. [**B 328**]{} (1994) 89; [**B 334**]{} (1994) 40.
E.E. Donets and D.V. Gal’tsov, Phys. Lett. [**B 302**]{} (1993) 411.
G. Lavrelashvili and D. Maison, Nucl. Phys. [**B 410**]{} (1993) 407.
T. Torii and K. Maeda, Phys. Rev. [**D 48**]{} (1993) 1643.
C.M. O’Neill, Phys. Rev. [**D 50**]{} (1994) 865.
E.E. Donets and D.V. Gal’tsov, Phys. Lett. [**B 312**]{} (1993) 392.
S. Mignemi and N.R. Stewart, Charged Black Holes in Effective String Theory, Preprint CNRS/URA 769 (1992); M. Natsuume, Higher Order Corrections to the GHS String Black Holes, Preprint NSF–ITP–94–66, hep-th/9406079.
Fig. 1. GB term (B) and GB density (A), calculated for the pure EYMD $N=1$ solution [@dg1]; curves (C) and (D): GB density for the $N=2,3$ EYMD solutions.
Fig. 2. Contributions to the energy density $r^2 T^0_0$ from the YMD ($\beta=0$) and GB ($\beta=1$) parts, calculated using EYMD solutions: (A) $N=1$, YMD; (B) $N=1$, GB; (C) $N=2$, YMD; (D) $N=2$, GB.
Fig. 3a,b. Real roots of Eq. 32 (two different branches) for $\beta=0.1, 0.2, 0.37, 0.5, 1$, in terms of ${\tilde b}=b\exp(-2\Phi_0)$ and ${\tilde \Phi_2}=\Phi_2 \exp(-2\Phi_0)$.
Fig. 4. “Gauss–Bonnet” mass distribution (contribution to the ADM mass from the $\beta$-dependent terms) for solutions with $\beta=0.2$, $N=1,2,3$.
Fig. 5. Energy density for $N=3$, $\beta=0.2$. (A): total energy density; (B): contribution from the $\beta$-independent terms; (C): GB contribution.
Fig. 6. Yang–Mills function $f$ for $N=1,2,3$. Solid lines: solutions with the GB term ($\beta=0.2$); dashed lines: purely EYMD solutions.
Fig. 7. Metric function $W=g_{00}$ (dashed lines) and $\exp(-2\Phi)$ (solid lines) for $N=1,2,3$, $\beta=0.2$.
Fig. 8. Metric function $\sigma$ for $N=1,2,3$. Solid lines: solutions with the GB term ($\beta=0.2$); dashed lines: purely EYMD solutions ($\beta=0$).
--- abstract: 'We report results on multiband observations from radio to $\gamma$-rays of the two radio-loud narrow-line Seyfert 1 (NLSy1) galaxies PKS2004$-$447 and J1548$+$3511. Both sources show a core-jet structure on parsec scale, while they are unresolved at the arcsecond scale. The high core dominance and the high variability brightness temperature make these NLSy1 galaxies good $\gamma$-ray source candidates. [*Fermi*]{}-LAT detected $\gamma$-ray emission only from PKS2004$-$447, with a $\gamma$-ray luminosity comparable to that observed in blazars. No $\gamma$-ray emission is observed for J1548$+$3511. Both sources are variable in X-rays. J1548$+$3511 shows a hardening of the spectrum during high activity states, while PKS2004$-$447 has no spectral variability. A spectral steepening likely related to the soft excess is hinted below 2 keV for J1548$+$3511, while the X-ray spectra of PKS2004$-$447 collected by [*XMM-Newton*]{} in 2012 are described by a single power-law without significant soft excess. No additional absorption above the Galactic column density or the presence of an Fe line is detected in the X-ray spectra of both sources.' date: 'Received ; accepted ?' title: 'Investigating powerful jets in radio-loud Narrow Line Seyfert 1s' --- \[firstpage\] galaxies: active – gamma-rays: general – radio continuum: general – galaxies: Seyfert Introduction ============ Narrow Line Seyfert 1 (NLSy1) galaxies represent a rare type of classical Seyfert galaxies. The strong featureless X-ray continuum and the strong high ionization lines shown by NLSy1 are common in Seyfert 1. However, the optical permitted emission lines are narrow, i.e. more similar to Seyfert 2 galaxies, indicating a combination of properties from both types. 
Their optical spectra are characterized by narrow permitted lines (full width at half-maximum, FWHM $\leq$ 2000 km s$^{-1}$), a weak \[O$_{\rm III}$\]$\lambda$5007 emission line (\[O$_{\rm III}$\]/H$\beta < 3$) and usually strong Fe$_{\rm II}$ emission lines [@osterbrock85].\
NLSy1 are usually hosted in spiral galaxies, although some objects are associated with early-type S0 galaxies (see e.g. Mrk705 and Mrk1239; Markarian et al. 1989).\
Extreme characteristics are observed in the X-ray band, where strong and rapid variability is observed more frequently than in classical Seyfert 1. The X-ray spectrum of NLSy1 is usually described by a power law, which dominates in the 2–10 keV energy range, and a soft X-ray excess at lower energies. The X-ray spectrum between 0.3 and 10 keV is steeper than in Seyfert 1 [@grupe10], while the photon index of the hard X-ray spectrum is similar [@panessa11]. The complex X-ray spectrum is interpreted in terms of either relativistically blurred disc reflection, or ionized/neutral absorption covering the X-ray source [@fabian09; @miller10].\
About 7 per cent of NLSy1 are radio-loud (RL), with a smaller fraction ($\sim$2.5 per cent) exhibiting a high radio-loudness parameter[^1] (${\rm R} >100$; Komossa et al. 2006). In the radio band, RL-NLSy1 usually show a compact morphology, with a one-sided jet emerging from the bright core and extending up to a few parsecs. In some objects the radio emission extends to kpc scales [@richards15; @doi12; @anton08]. The high values of both brightness temperature and core dominance suggest the presence of non-thermal emission from relativistic jets [@doi11]. The measurement of superluminal motion in the RL-NLSy1 SBS0846$+$513 indicates Doppler boosting effects in relativistic jets [@dammando13b]. It is worth noting that some evidence of a pc-scale jet-like structure is also found in some radio-quiet (RQ) NLSy1, but the nature of the outflow is still under debate [e.g.
@doi13].\
Strong evidence in favour of highly relativistic jets in RL-NLSy1 is the detection by the Large Area Telescope (LAT) on board the [*Fermi*]{} satellite of $\gamma$-ray emission from 6 RL-NLSy1: PMNJ0948$+$0022, 1H0323$+$342, PKS1502$+$036, PKS2004$-$447 [@abdo09], SBS0846$+$513 [@dammando12], and FBQSJ1644$+$2619 [@dammando15b]. The $\gamma$-ray emission is variable, showing flaring activity accompanied by a moderate hardening of the spectrum. The peculiar multiwavelength properties, together with the $\gamma$-ray flares, make the RL-NLSy1 more similar to blazars than to classical Seyfert galaxies, at least at high energies.\
Relativistic jets produced by nuclear objects that are thought to be hosted mainly in spiral galaxies are somewhat puzzling. RL-NLSy1 are usually found at higher redshift than RQ-NLSy1 and no optical morphological studies have been carried out so far, with the exception of 1H0323$+$342. The optical morphology of its host galaxy is compatible with either a one-armed spiral [@zhou07] or a ring-like structure produced by a recent merger [@leon14; @anton08].\
In this paper we present the results of a multiwavelength study, from radio to $\gamma$-rays, of the RL-NLSy1 J1548$+$3511 and PKS2004$-$447. J1548$+$3511 is a NLSy1 at redshift $z=0.478$ with a radio loudness R$\sim$110 and an estimated black hole mass $M_{\rm BH} = 10^{7.9} M_{\odot}$ [@yuan08], while PKS2004$-$447, at redshift $z=0.24$, has $1710< {\rm R} < 6320$ and an estimated black hole mass of 10$^{6.7} M_{\odot}$ [@oshlack01]. These sources have been selected on the basis of their high variability brightness temperature, $T^{\prime}_{\rm B, var} = 10^{13}$–$10^{15}$ K, which is considered a good indicator of Doppler boosting in relativistic jets. Among the RL-NLSy1 presented in the sample by @yuan08, J1548$+$3511 is the only source with $T^{\prime}_{\rm B, var}$ as high as those found in $\gamma$-ray-loud NLSy1.
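The cube-root scaling that links a variability brightness temperature to a Doppler factor can be sketched explicitly. This is an illustrative sketch, not a calculation from the paper: the intrinsic (equipartition) brightness temperature of $\sim5\times10^{10}$ K assumed below is a commonly adopted literature value, and `doppler_factor` is a hypothetical helper.

```python
# Illustrative sketch: an observed variability brightness temperature
# exceeding the intrinsic (equipartition) limit T_int is commonly read as
# Doppler boosting, with delta_var = (T_B,var / T_int)**(1/3).
T_INT = 5e10  # K, assumed equipartition value (order of magnitude only)

def doppler_factor(t_b_var, t_int=T_INT):
    """Variability Doppler factor implied by an observed T_B,var (in K)."""
    return (t_b_var / t_int) ** (1.0 / 3.0)

# For the quoted range T'_B,var = 1e13 - 1e15 K:
delta_lo = doppler_factor(1e13)   # ~6
delta_hi = doppler_factor(1e15)   # ~27
```

Even the lower end of the quoted range implies a Doppler factor well above unity under this assumption, which is why such temperatures are taken as a boosting indicator.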
However, no $\gamma$-ray emission has been detected from this source so far. In this paper we aim at investigating differences and similarities between the $\gamma$-ray emitter PKS2004$-$447 and the $\gamma$-ray silent J1548$+$3511. The results for these two sources are then compared to what has been found for the other $\gamma$-ray emitting RL-NLSy1 in the literature, in order to investigate the peculiarity of this sub-class of objects. The information from the multiwavelength data of J1548$+$3511 and PKS2004$-$447 is then used to model the spectral energy distribution (SED) of these two sources.\
The paper is organized as follows: in Sections 2, 3, 4 and 5 we report the radio, $\gamma$-ray [*Fermi*]{}-LAT, X-ray [*Swift*]{} and [*XMM-Newton*]{} data analysis. In Section 6 we present the results from the multiwavelength analysis. The discussion and the presentation of the SED modelling are given in Section 7, while a brief summary is in Section 8.\
Throughout this paper, we assume the following cosmology: $H_{0} = 71\; {\rm km \; s^{-1} \; Mpc^{-1}}$, $\Omega_{\rm M} = 0.27$ and $\Omega_{\rm \Lambda} = 0.73$, in a flat Universe.\

Radio data
==========

VLBA observations {#vlba_obs}
-----------------

A multifrequency Very Long Baseline Array (VLBA) observation (project code BO045) of J1548$+$3511 was carried out at 5, 8.4, and 15 GHz on 2013 January 2. The observation was performed in phase-reference mode with a recording bandwidth of 16 MHz per channel in dual polarization at a 512 Mbps data rate. The target source was observed for 45 min at 5 and 8.4 GHz, and for 75 min at 15 GHz, spread over several scans and cycling through frequencies and calibrators in order to improve the [*uv*]{}-coverage. The strong source 3C345 was used as fringe finder and bandpass calibrator. The source J1602$+$3326 was used as phase calibrator.\
Data reduction was performed using the NRAO’s Astronomical Image Processing System (`AIPS`).
A priori amplitude calibration was derived using measurements of the system temperature and the antenna gains. The uncertainties on the amplitude calibration ($\sigma_{c}$) were found to be approximately 5 per cent at 5 and 8.4 GHz, and about 7 per cent at 15 GHz. Final images were produced after a number of phase self-calibration iterations (Fig. \[1548\_vlba\]). The 1$\sigma$ noise (rms) level measured on the image plane is about 0.08 mJy beam$^{-1}$ at 5 and 8.4 GHz, and about 0.12 mJy beam$^{-1}$ at 15 GHz. The restoring beam is 3.4$\times$1.3 mas$^{2}$, 2.1$\times$0.8 mas$^{2}$, and 1.1$\times$0.4 mas$^{2}$ at 5, 8.4 and 15 GHz, respectively. In addition to the full-resolution images, at 5 and 8.4 GHz we produced “low-resolution” images using natural grid weighting and a maximum baseline of 100 M$\lambda$. The low-resolution image at 8.4 GHz is presented in Fig. \[1548\_natural\]. The restoring beam is 2.6$\times$1.8 mas$^{2}$.\
To study the parsec-scale structure of PKS2004$-$447 we retrieved archival VLBA data at 1.4 GHz (project code BD050). The observation was performed on 1998 October 13 with a recording bandwidth of 8 MHz per channel in dual polarization at a 256 Mbps data rate. In this observing run PKS2004$-$447 was observed as a phase calibrator, for a total time of about 30 minutes. Seven VLBA antennas participated in the observing run. Due to the southern declination of the source, the restoring beam is highly elongated in the North–South direction and is 16.0$\times$4.1 mas$^{2}$.\
The data were reduced following the same procedure described for J1548$+$3511. The uncertainties on the amplitude calibration are $\sigma_{c} \sim 10$ per cent. Final images were produced after a number of phase self-calibration iterations. Amplitude self-calibration was applied, using a solution interval longer than the scan length, to remove residual systematic errors at the end of the self-calibration process. The final image is presented in Fig. \[2004\_vlba\].
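The relation between the 100 M$\lambda$ baseline cut and the quoted low-resolution beam follows from the usual rule of thumb that an interferometer resolves $\approx 1/B$ radians for a maximum baseline of $B$ wavelengths. A minimal sketch (illustrative only, not part of the `AIPS` reduction; `resolution_mas` is a hypothetical helper):

```python
import math

# Rule-of-thumb angular resolution of an interferometer: ~1/B radians,
# with the maximum baseline B expressed in units of the observing wavelength.
MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0  # milliarcseconds per radian

def resolution_mas(baseline_in_wavelengths):
    """Approximate synthesized-beam size (mas) for a given baseline cut."""
    return MAS_PER_RAD / baseline_in_wavelengths

# Tapering to a maximum baseline of 100 Mlambda:
theta_mas = resolution_mas(100e6)  # ~2 mas
```

The $\sim$2 mas estimate is indeed comparable to the 2.6$\times$1.8 mas$^{2}$ restoring beam quoted for the low-resolution images.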
The 1$\sigma$ noise level measured on the image plane is $\sim$0.3 mJy beam$^{-1}$.\
The total flux density of each source was measured using the `AIPS` verb TVSTAT, which performs an aperture integration over a selected region of the image plane. In the case of bright and compact components, like the core, we used the task JMFIT, which performs a Gaussian fit to the source component on the image plane. For more extended sub-components, like jets, the flux density was measured with TVSTAT. The uncertainty in the flux density arises from both the calibration error $\sigma_{c}$ and the measurement error $\sigma_{m}$. The latter is the off-source rms noise level measured on the image plane, scaled by the source size $\theta_{\rm obs}$ normalized to the beam size $\theta_{\rm beam}$: $\sigma_{m} = {\rm rms} \times (\theta_{\rm obs}/\theta_{\rm beam})^{1/2}$. The flux density error $\sigma_{\rm S}$ reported in Table \[flux\_vlba\] takes both uncertainties into account and corresponds to $\sigma_{\rm S} = \sqrt{\sigma_{c}^{2} + \sigma_{m}^{2}}$.\

Archival VLA data
-----------------

To investigate possible flux density variability we retrieved Very Large Array (VLA) archival data for both J1548$+$3511 and PKS2004$-$447 (see Table \[vla\_flux\]). Data were reduced following the standard procedure implemented in the `AIPS` package. The primary calibrators are 3C286 and 3C48, and the uncertainties on the amplitude calibration are between 3 and 5 per cent. VLA images were obtained after a few phase-only self-calibration iterations. Both sources are unresolved on the arcsecond scale.\
The errors on the VLA flux densities are dominated by the uncertainties on the amplitude calibration, the measurement error being only $\sigma_{m} \sim$0.1 mJy beam$^{-1}$.\

--------------- ------- --------------- -------------- --------------- --------------
Source          Comp.   $S_{\rm 1.4}$   $S_{\rm 5}$    $S_{\rm 8.4}$   $S_{\rm 15}$
                        (mJy)           (mJy)          (mJy)           (mJy)
--------------- ------- --------------- -------------- --------------- --------------
J1548$+$3511    C       -               12.9$\pm$0.6   16.0$\pm$0.8    20.8$\pm$1.5
                J       -               3.8$\pm$0.2    0.8$\pm$0.1     -
                J0      -               -              1.5$\pm$0.1     -
                J1      -               9.7$\pm$0.6    5.1$\pm$0.3     -
                Ext     -               11.0$\pm$0.8   4.4$\pm$0.5     -
                Tot     -               37.4$\pm$2.0   27.8$\pm$1.4    20.8$\pm$1.5
PKS2004$-$447   C       224$\pm$22      -              -               -
                J       100$\pm$10      -              -               -
                J1      210$\pm$21      -              -               -
                Tot     534$\pm$53      -              -               -
--------------- ------- --------------- -------------- --------------- --------------

\[flux\_vlba\]

------------- ------- ------------ ------- -------------------- ----------- ------------
Source        Freq.   Date         Code    Beam                 Obs. time   Flux
              (GHz)                        (arcsec$^{2}$)       (min)       (mJy)
------------- ------- ------------ ------- -------------------- ----------- ------------
J1548+3511    1.4     1999-04-13   AL485   49.31$\times$46.17   1           136$\pm$4
              1.4     2003-09-05   AL595   1.46$\times$1.17     1.2         124$\pm$6
              4.8     1994-05-07   AK360   0.74$\times$0.39     1.0         74$\pm$2
              8.4     1995-08-14   AM484   0.24$\times$0.22     0.6         76$\pm$2
              8.4     1999-04-13   AL485   8.46$\times$7.65     0.8         54$\pm$2
PKS2004-447   8.4     1988-12-23   AP001   1.29$\times$0.19     1           330$\pm$17
              8.4     1995-07-15   AK394   1.15$\times$0.19     4.5         270$\pm$11
              8.4     2005-03-16   AK583   4.63$\times$0.56     3           219$\pm$11
              8.4     2005-04-21   AK583   3.65$\times$0.60     6           440$\pm$18
------------- ------- ------------ ------- -------------------- ----------- ------------

\[vla\_flux\]

[*Fermi*]{}-LAT Data: Selection and Analysis {#FermiData}
============================================

The LAT on board the [*Fermi*]{} satellite is a $\gamma$-ray telescope operating from 20 MeV to $>$300 GeV, with a large peak effective area ($\sim$8000 cm$^2$ for 1 GeV photons), an energy resolution of typically $\sim$10 per cent, and a field of view of about 2.4 sr, with a single-photon angular resolution (68 per cent containment radius) of 0.6$^{\circ}$ at $E = 1$ GeV on-axis. Details about the LAT are given by @atwood09. The LAT data reported in this paper for PKS2004$-$447 and J1548$+$3511 were collected over the first 6 years of [*Fermi*]{} operation, from 2008 August 4 (MJD 54682) to 2014 August 4 (MJD 56873).
During this time, the [*Fermi*]{} observatory operated almost entirely in survey mode. The analysis was performed with the `ScienceTools` software package version v9r33p0[^2]. Only events belonging to the ‘Source’ class were used. The time intervals when the rocking angle of the LAT was greater than 52$^{\circ}$ were rejected. In addition, a cut on the zenith angle ($< 100^{\circ}$) was applied to reduce contamination from the Earth limb $\gamma$ rays, which are produced by cosmic rays interacting with the upper atmosphere. The spectral analysis was performed with the instrument response functions `P7REP_SOURCE_V15` using an unbinned maximum-likelihood method implemented in the tool `gtlike`. Isotropic (‘iso\_source\_v05.txt’) and Galactic diffuse emission (‘gll\_iem\_v05\_rev1.fit’) components were used to model the background[^3]. The normalizations of both components were allowed to vary freely during the spectral fitting.\ We analysed a region of interest of $10^{\circ}$ radius centred at the location of our two targets. We evaluated the significance of the $\gamma$-ray signal from the source by means of the maximum-likelihood test statistic ${\rm TS} = 2 \times({\rm log}L_{1} - {\rm log}L_{0}$), where $L$ is the likelihood of the data given the model with ($L_1$) or without ($L_0$) a point source at the position of our target [e.g., @mattox96]. The source model used in `gtlike` includes all the point sources from the third [*Fermi*]{}-LAT catalogue [3FGL; @acero15] that fall within $15^{\circ}$ of the target. The spectra of these sources were parametrized by power-law, log-parabola, or exponential cut-off power-law model, as in the 3FGL catalogue.\ A first maximum-likelihood analysis was performed to remove from the model the sources having ${\rm TS} < 10$ and/or a predicted number of counts based on the fitted model $N_{\rm pred} < 1 $. A second maximum-likelihood analysis was performed on the updated source model. 
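Because TS is built from a likelihood ratio, the detection significance associated with one additional free parameter is roughly $\sqrt{\rm TS}$. A minimal sketch of this rule of thumb (illustrative only; `approx_significance` is a hypothetical helper, and the exact conversion depends on the number of free parameters):

```python
import math

# Asymptotically, TS for nested models differing by one free parameter is
# distributed as chi^2 with 1 d.o.f., so significance ~ sqrt(TS).
def approx_significance(ts):
    """Approximate Gaussian-equivalent significance of a likelihood TS."""
    return math.sqrt(ts)

sigma_detection = approx_significance(25.0)  # TS = 25 -> ~5 sigma
```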
In the fitting procedure, the normalization factors and the photon indices of the sources lying within 10$^{\circ}$ of the target were left as free parameters. For the sources located between 10$^{\circ}$ and 15$^{\circ}$ from the target, we kept the normalization and the photon index fixed to the values from the 3FGL catalogue.\
For PKS2004$-$447, the fit with a power-law model to the data integrated over 72 months of [*Fermi*]{}-LAT operation in the 0.1–100 GeV energy range results in ${\rm TS} = 164$, with an average flux of ($1.59 \pm 0.16$) $\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$ and a photon index $\Gamma_{\gamma} = 2.39 \pm 0.06$, corresponding to an energy flux of (1.09$\pm$0.11)$\times$10$^{-12}$ erg cm$^{-2}$ s$^{-1}$. Figure \[LAT\_2004\] shows the $\gamma$-ray light curve of PKS2004$-$447 for the period 2008 August 4–2014 August 4 using 3-month time bins. For each time bin, the photon index was frozen to the value resulting from the likelihood analysis over the whole period. The systematic uncertainty in the effective area [@ackermann12] amounts to 10 per cent below 100 MeV, decreasing linearly with the logarithm of energy to 5 per cent between 316 MeV and 10 GeV, and increasing linearly with the logarithm of energy up to 15 per cent at 1 TeV [^4]. Statistical errors dominate over the systematic uncertainty. All errors relative to the $\gamma$-ray data reported throughout the paper are statistical only.\
For J1548$+$3511, the fit with a power-law model over 6 years of [*Fermi*]{}-LAT operation results in ${\rm TS} = 1$.
The 2$\sigma$ upper limit is 3.35$\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$ in the 0.1–100 GeV energy range (assuming a photon index $\Gamma_{\gamma} = 2.4$), corresponding to an energy flux $<$5.3$\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$.\

Swift data: observations and analysis {#SwiftData}
=====================================

The [*Swift*]{} satellite [@gehrels04] performed twenty-seven observations of PKS2004$-$447 between 2011 May 15 and 2014 March 16. J1548$+$3511 has not been observed by [*Swift*]{} so far.\
The observations of PKS2004$-$447 were carried out with all three instruments on board: the X-ray Telescope [XRT; @burrows05 0.2–10.0 keV], the Ultraviolet/Optical Telescope [UVOT; @roming05 170–600 nm] and the Burst Alert Telescope [BAT; @barthelmy05 15–150 keV]. The source was not present in the [*Swift*]{} BAT 70-month hard X-ray catalogue [@baumgartner13]. The XRT data of PKS2004$-$447 were processed with standard procedures (`xrtpipeline v0.13.0`), filtering, and screening criteria using the `HEAsoft` package (v6.15). The data were collected in photon counting mode for all the observations. The source count rate was low ($< 0.5$ counts s$^{-1}$); thus pile-up correction was not required. Source events were extracted from a circular region with a radius of 20 pixels (1 ${\rm pixel} = 2.36$ arcsec), while background events were extracted from a circular region with a radius of 50 pixels, away from the source region and from other bright sources. Ancillary response files were generated with `xrtmkarf`, and account for different extraction regions, vignetting and point-spread function corrections. We used the spectral redistribution matrices in the Calibration data base maintained by HEASARC[^5]. Short observations performed during the same month were summed in order to have enough statistics to obtain a good spectral fit.
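The source and background extraction radii enter the background subtraction through a simple area scaling. An illustrative sketch (in the actual pipeline this correction is carried by the spectral `BACKSCAL` keywords; `net_rate` is a hypothetical helper, with radii in pixels as quoted in the text):

```python
# Illustrative sketch of aperture-area scaling in background subtraction.
def net_rate(src_counts, bkg_counts, exposure_s, r_src=20.0, r_bkg=50.0):
    """Background-subtracted source count rate (counts/s) for circular regions."""
    area_ratio = (r_src / r_bkg) ** 2   # ratio of circular-aperture areas
    return (src_counts - bkg_counts * area_ratio) / exposure_s
```

With the 20- and 50-pixel radii above, the background counts are down-weighted by a factor $(20/50)^2 = 0.16$ before subtraction.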
The spectra with a low number of collected photons ($< 200$ counts) were rebinned with a minimum of 1 count per bin and the Cash statistic [@cash79] was used. We fitted the spectra with an absorbed power law, using the photoelectric absorption model `tbabs` [@wilms00] with a neutral hydrogen column density fixed to its Galactic value [3.17$\times$10$^{20}$ cm$^{-2}$; @kalberla05]. The fit results are reported in Table \[XRT\].\
UVOT data of PKS2004$-$447 in the $v$, $b$, $u$, $w1$, $m2$, and $w2$ filters were reduced with the task `uvotsource` included in the `HEAsoft` package v6.15 and the 20130118 CALDB-UVOTA release. We extracted the source counts from a circle with a 5 arcsec radius centred on the source and the background counts from a circle with a 10 arcsec radius in a nearby empty region. The observed magnitudes are reported in Table \[uvot\]. We converted the magnitudes into de-reddened flux densities by using the E(B$-$V) value of 0.029 from @schlafly11, the extinction laws by @cardelli89 and the magnitude–flux calibrations by @bessell98 and @breeveld11.\

------------------------- --------------------- ---------- ------------------ -------------------------
MJD                       Obs. date             Exposure   $\Gamma_{\rm X}$   $F_{\rm 0.3-10\,keV}$
                                                (s)                           (10$^{-13}$ erg cm$^{-2}$ s$^{-1}$)
------------------------- --------------------- ---------- ------------------ -------------------------
55686                     2011-05-05            6778       1.65 $\pm$ 0.25    $5.6 \pm 0.9$
55756/55761/55765         2011-07-14/19/23      7101       1.62 $\pm$ 0.29    $5.5 \pm 0.7$
55783/55786               2011-08-10/13         2258       1.15 $\pm$ 0.49    $6.4 \pm 2.1$
55809/55821               2011-09-05/17         8239       1.38 $\pm$ 0.19    $11.8 \pm 1.3$
55880                     2011-11-15            7095       2.02 $\pm$ 0.30    $4.4 \pm 0.7$
56000                     2012-03-14            7306       1.52 $\pm$ 0.30    $4.5 \pm 0.8$
56111/56120               2012-07-03/12         7120       1.65 $\pm$ 0.24    $6.5 \pm 0.9$
56182/56192/56200         2012-09-12/22/30      17016      1.63 $\pm$ 0.18    $5.9 \pm 0.6$
56480/56487               2013-07-07/14         22963      1.42 $\pm$ 0.12    $10.4 \pm 0.7$
56562                     2013-09-27            8361       1.45 $\pm$ 0.20    $14.6 \pm 1.5$
56578/56585/56592/56594   2013-10-13/20/27/29   12699      1.41 $\pm$ 0.24    $8.7 \pm 1.2$
56599/56618               2013-11-03/19         8241       1.58 $\pm$ 0.18    $6.8 \pm 0.6$
56619                     2013-11-20            12179      1.43 $\pm$ 0.16    $15.5 \pm 1.2$
56730                     2014-03-14            7507       1.68 $\pm$ 0.26    $6.7 \pm 1.0$
56732                     2014-03-16            9642       1.48 $\pm$ 0.19    $9.0 \pm 1.0$
------------------------- --------------------- ---------- ------------------ -------------------------

\[XRT\]

------- ------------ ---------------- ---------------- ---------------- ---------------- ---------------- ----------------
MJD     Obs. date    $v$              $b$              $u$              $w1$             $m2$             $w2$
------- ------------ ---------------- ---------------- ---------------- ---------------- ---------------- ----------------
55686   2011-05-05   18.48$\pm$0.18   20.08$\pm$0.28   19.17$\pm$0.19   19.77$\pm$0.32   $>$19.86         $>$20.19
55756   2011-07-14   $>$18.98         19.69$\pm$0.30   19.31$\pm$0.28   19.29$\pm$0.29   $>$19.40         $>$19.85
55761   2011-07-19   18.85$\pm$0.35   –                $>$18.89         $>$19.35         $>$19.19         $>$19.61
55765   2011-07-23   $>$19.33         20.25$\pm$0.36   19.25$\pm$0.20   19.53$\pm$0.28   $>$19.70         $>$20.01
55783   2011-08-10   –                –                –                19.41$\pm$0.18   –                –
55786   2011-08-13   18.48$\pm$0.28   19.56$\pm$0.31   19.29$\pm$0.33   19.67$\pm$0.35   $>$19.87         20.37$\pm$0.35
55809   2011-09-05   $>$18.24         $>$19.12         18.59$\pm$0.32   $>$19.06         $>$18.98         19.74$\pm$0.34
55821   2011-09-17   18.28$\pm$0.25   18.99$\pm$0.16   18.77$\pm$0.18   18.84$\pm$0.24   $>$19.30         $>$19.50
55880   2011-11-15   $>$18.61         19.63$\pm$0.32   $>$19.25         $>$19.33         $>$19.40         $>$19.52
56000   2012-03-14   $>$18.94         $>$19.76         $>$19.41         $>$19.59         $>$19.70         19.55$\pm$0.25
56111   2012-07-03   –                –                –                19.48$\pm$0.11   –                –
56120   2012-07-12   –                –                –                –                19.94$\pm$0.35   –
56182   2012-09-12   18.45$\pm$0.24   19.67$\pm$0.30   18.91$\pm$0.22   19.46$\pm$0.28   19.62$\pm$0.14   20.00$\pm$0.23
56192   2012-09-22   $>$18.90         19.36$\pm$0.23   18.72$\pm$0.19   19.37$\pm$0.32   19.70$\pm$0.27   20.27$\pm$0.30
56200   2012-09-30   –                –                19.10$\pm$0.09   –                –                –
56480   2013-07-07   –                –                18.87$\pm$0.09   –                –                19.91$\pm$0.20
56487   2013-07-14   –                –                18.56$\pm$0.07   19.36$\pm$0.10   –                –
56562   2013-09-27   –                –                –                19.03$\pm$0.09   19.28$\pm$0.11   –
56578   2013-10-13   –                –                –                –                19.81$\pm$0.23   –
56585   2013-10-20   –                –                –                –                –                19.77$\pm$0.15
56592   2013-10-27   –                –                18.69$\pm$0.21   –                –                19.54$\pm$0.25
56594   2013-10-29   –                –                –                –                19.91$\pm$0.23   –
56599   2013-11-03   –                –                –                19.28$\pm$0.27   –                –
56618   2013-11-19   –                –                –                18.85$\pm$0.15   –                –
56619   2013-11-20   –                –                18.30$\pm$0.06   –                –                –
56730   2014-03-14   –                –                –                19.67$\pm$0.16   $>$19.56         –
56732   2014-03-16   –                –                18.59$\pm$0.11   –                –                –
------- ------------ ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

\[uvot\]

[*XMM-Newton*]{} data: observations and analysis {#XMM}
================================================

EPIC Observations and data reduction {#observations}
------------------------------------

J1548$+$3511 was observed by [*XMM-Newton*]{} [@jansen01] on 2011 August 8 and 20, for a total observing time of 28 ks in both cases (observation IDs 0674320301 and 0674320401). For this analysis we focus on the data from the EPIC pn, which was operated in Full Window mode, with net exposure times of 19.4 ks and 22.9 ks for the two observations.\
[*XMM-Newton*]{} observed PKS2004$-$447 on 2012 May 1 and October 18, for a total duration of 36 ks in both cases (observation IDs 0694530101 and 0694530201). The EPIC pn and the EPIC MOS cameras (MOS1 and MOS2) were operated in full frame mode. The first observation has net exposure times of 18, 20, and 20 ks for the pn, MOS1 and MOS2, respectively; the second observation has net exposure times of 29, 35, and 35 ks. Unlike in the first observation, in the second observation adding the data collected by MOS1 and MOS2 to the pn data yields a significant increase of counts ($\sim$70 per cent). Therefore, for the first observation of PKS2004$-$447 we focus on the data from the EPIC pn only, while for the second observation both the pn and MOS data were analysed.\
The data were reduced using the [*XMM-Newton*]{} Science Analysis System ([SAS v13.5.0]{}), applying standard event selection and filtering[^6]. Neither observation was affected by background flaring. The source spectra were extracted from a circular region of radius 30 arcsec centred on the source, and the background from a nearby region on the same chip. To allow for $\chi^2$ fitting, the spectra were binned to contain at least 20 counts per bin.
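The 20-counts-per-bin requirement is normally enforced with standard tools such as `grppha`; the greedy pass below is only an illustrative stand-in for what such grouping does (`group_min_counts` is a hypothetical helper, not the actual tool):

```python
# Greedy grouping of spectral channels so that every bin holds at least
# min_counts counts, the condition needed for chi^2 fitting.
def group_min_counts(channel_counts, min_counts=20):
    bins, acc = [], 0
    for c in channel_counts:
        acc += c
        if acc >= min_counts:   # close the current bin once it is full enough
            bins.append(acc)
            acc = 0
    if acc:                     # fold any undersized remainder into the last bin
        if bins:
            bins[-1] += acc
        else:
            bins.append(acc)
    return bins

grouped = group_min_counts([5, 7, 12, 30, 2, 2, 40, 1], min_counts=20)
# grouped -> [24, 30, 45]; total counts are preserved
```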
All errors are quoted at the 90 per cent confidence level for the parameter of interest (corresponding to $\Delta \chi^2 = 2.7$). X-ray spectral analysis ----------------------- ### J1548+3511 The spectral fits were performed over the 0.3–10 keV energy range using XSPEC v.12.8.1. Although we present only the fits to the EPIC pn, the results were cross-checked for consistency with the MOS spectra. Galactic absorption of $2.19\times 10^{20}~\rm{cm^{-2}}$ was included in all fits using the [*tbabs*]{} model. In the two weeks between the observations the unabsorbed flux increased from $F_{\rm 0.3-10~keV} = (4.2 \pm 0.2) \times 10^{-13} {\rm erg\ cm^{-2}\ s^{-1}}$ to $F_{\rm 0.3-10~keV} = (5.3 \pm 0.2) \times 10^{-13} {\rm erg\ cm^{-2}\ s^{-1}}$. No strong variability was seen during the individual observations.\ The results of fitting a single power law and a broken power-law to the two spectra are summarised in Table \[j1548\_table\] and shown in Fig. \[j1548\_fits\]. While the single power-law model leaves positive residuals above 3 keV, the broken power-law is a good fit to the data. The improvement between the models is more significant in the second observation, when the source was brighter. In this model the spectral shape changes from a soft slope of $\Gamma_1 = 2.6\pm0.1$ ($2.5\pm0.1$) below $E_{\rm{break}} = 2.0^{+0.8}_{-0.5}~\rm{keV}$ ($1.7\pm 0.2 ~\rm{keV}$) to $\Gamma_2 = 1.9^{+0.3}_{-0.4}$ ($\Gamma_{2} = 1.7\pm 0.2$) above $E_{\rm{break}}$ for the first (second) observation. If we add to the models another neutral absorber at the redshift of the source the fits do not improve, showing that no intrinsic absorption is required. Furthermore, there is no detection of an Fe line, with 90 per cent upper limits on a narrow line at 6.4 keV of $EW< 0.26$ keV and $EW< 0.13$ keV for the first and second observation, respectively. Unfortunately, the data quality is not sufficient to obtain meaningful constraints on more complex models for the emission. 
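Whether the extra spectral break is statistically warranted can be gauged with an F-test on the $\chi^2$ values quoted above; a minimal sketch (illustrative; the significance would then be read off the F-distribution, e.g. with `scipy.stats.f.sf`, and `f_stat` is a hypothetical helper):

```python
# F-statistic for the chi^2 improvement of a broken power law (the more
# complex model) over a single power law, using the fit statistics from
# Table [j1548_table].
def f_stat(chi2_simple, dof_simple, chi2_complex, dof_complex):
    extra = dof_simple - dof_complex                 # number of added parameters
    return ((chi2_simple - chi2_complex) / extra) / (chi2_complex / dof_complex)

f_obs1 = f_stat(129.0, 129, 110.0, 127)   # first observation
f_obs2 = f_stat(210.0, 164, 154.0, 162)   # second observation
```

The larger F value for the second observation reflects the statement above that the improvement is more significant when the source was brighter.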
------------------ ----------------------- ------------------------------------ ------------------------------------
Model              Parameter               Value Obs 1                          Value Obs 2
------------------ ----------------------- ------------------------------------ ------------------------------------
Power law          $\Gamma$                $2.47\pm0.06$                        $2.31\pm0.05$
                   Norm                    $(7.8\pm 0.2) \times10^{-5}$         $(9.0\pm 0.2) \times 10^{-5}$
                   $\chi^2$/d.o.f.         129/129                              210/164
Broken power law   $\Gamma_1$              $2.57^{+0.09}_{-0.08}$               $2.49^{+0.09}_{-0.08}$
                   $E_{\rm break}$ (keV)   $2.0^{+0.8}_{-0.5}$                  $1.7^{+0.5}_{-0.3}$
                   $\Gamma_2$              $1.9^{+0.3}_{-0.4}$                  $1.7\pm 0.2$
                   Norm                    $(7.5^{+0.3}_{-0.4})\times10^{-5}$   $(8.4^{+0.3}_{-0.4})\times10^{-5}$
                   $\chi^2$/d.o.f.         110/127                              154/162
------------------ ----------------------- ------------------------------------ ------------------------------------

\[j1548\_table\]

### PKS2004$-$447

As for J1548$+$3511, the spectral fits were performed over the 0.3–10 keV energy range using XSPEC v.12.8.1. Although for the first observation we present only the fits to the EPIC pn, the results were cross-checked for consistency with the MOS spectra. Galactic absorption of $3.17\times 10^{20}~\rm{cm^{-2}}$ was included in all fits using the [*tbabs*]{} model. In the five months between the observations the unabsorbed flux increased from $F_{0.3-10~\rm{keV}} = (4.5 \pm 0.1) \times 10^{-13}~\rm{erg\ cm^{-2}\ s^{-1}}$ to $F_{0.3-10~\rm{keV}} = (6.8 \pm 0.2) \times 10^{-13}~\rm{erg\ cm^{-2}\ s^{-1}}$. No strong variability was seen during the individual observations.\
The results of fitting a single power law and a broken power law to the two spectra are summarised in Table \[2004\_XMM\]. No significant soft X-ray excess is observed below 2 keV. A simple power-law model (Fig. \[2004\_fits\]) is sufficient to describe the data of the first observation. For the second observation the power law is not an optimal fit ($\chi^{2} = 273/241$, Table \[2004\_XMM\]), but no significant improvement was obtained using a broken power-law model. We note a dip in the residuals of the second observation at $\sim$0.7 keV.
Adding an absorption edge to the model, the fit is slightly better ($\chi^{2} = 263/239$), with a threshold energy $E_{\rm c} = 0.77^{+0.05}_{-0.11}$ keV, compatible with O$_{\rm VII}$ absorption, and a maximum optical depth $\tau = 0.21^{+0.14}_{-0.11}$. However, observations with better statistics are required to confirm this feature.\
In both observations the photon index is $\Gamma_{\rm X} \sim 1.7$, consistent with a jet emission component. Similarly to J1548$+$3511, if we add to the models another neutral absorber at the redshift of the source the fits do not improve, showing that no intrinsic absorption is required. Moreover, there is no detection of an Fe line, with 90 per cent upper limits on a narrow line at 6.4 keV of $EW< 0.12$ keV and $EW< 0.05$ keV for the first and second observation, respectively.\

------------------ ----------------------- ------------------------------------ ------------------------------------
Model              Parameter               Value Obs 1                          Value Obs 2
------------------ ----------------------- ------------------------------------ ------------------------------------
Power law          $\Gamma$                $1.72\pm0.05$                        $1.69\pm0.03$
                   Norm                    $(7.0\pm 0.2) \times10^{-5}$         $(10.0\pm 0.3) \times 10^{-5}$
                   $\chi^2$/d.o.f.         97/102                               273/241
Broken power law   $\Gamma_1$              $1.76\pm 0.07$                       $1.71\pm 0.03$
                   $E_{\rm break}$ (keV)   $2.0^{+2.0}_{-0.9}$                  $3.3^{+1.2}_{-0.6}$
                   $\Gamma_2$              $1.63\pm 0.19$                       $1.51\pm 0.14$
                   Norm                    $7.0\times10^{-5}$                   $(10.4\pm 0.3)\times10^{-5}$
                   $\chi^2$/d.o.f.         96/100                               268/239
------------------ ----------------------- ------------------------------------ ------------------------------------

\[2004\_XMM\]

Optical Monitor data
--------------------

The Optical Monitor [OM; @mason01] on board [*XMM-Newton*]{} is a 30 cm telescope carrying six optical/UV filters and two grisms. We used the SAS task `omichain` to reduce the data and the tasks `omsource` and `omphotom` to derive the source magnitudes.\
The OM was operated during all the observations described in Section 5.1. J1548$+$3511 was observed twice in the optical and UV bands. The average observed magnitudes for J1548$+$3511 are reported in Table \[OM\_1548\]. No significant change of activity was observed for J1548$+$3511 between the two OM observations.
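Magnitude changes from OM (and UVOT) photometry translate into relative fluxes through the Pogson relation; a one-line sketch (illustrative; absolute flux densities additionally require the filter zero points, and `flux_ratio` is a hypothetical helper):

```python
# Pogson relation: a magnitude difference dm corresponds to a flux ratio
# of 10**(0.4*dm) (zero points are needed for absolute flux densities).
def flux_ratio(delta_mag):
    return 10.0 ** (0.4 * delta_mag)

ratio = flux_ratio(0.3)  # a 0.3 mag change is ~30 per cent in flux
```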
PKS2004$-$447 was observed by OM in $u$ band, with observed magnitudes $u = 19.20\pm0.03$ and $u = 18.92\pm0.03$ for 2012 May 1 and October 18, respectively. Therefore, on a half-year time-scale a difference of $\sim$0.3 mag was observed in optical for PKS2004$-$447.

  MJD     Date         Average observed magnitudes in the six OM filters
  ------  -----------  ---------------------------------------------------------------------------------------------------
  55781   2011-08-08   18.10$\pm$0.10   18.30$\pm$0.03   17.37$\pm$0.02   16.99$\pm$0.02   16.62$\pm$0.08   16.77$\pm$0.07
  55793   2011-08-20   17.99$\pm$0.07   –                17.28$\pm$0.02   16.99$\pm$0.02   16.68$\pm$0.08   16.78$\pm$0.09

\[OM\_1548\]

Results
=======

The NLSy1 J1548$+$3511
----------------------

### Radio properties

In the radio band, the NLSy1 J1548$+$3511 shows a core-jet structure with a total angular size of about 70 mas, corresponding to a linear size of 420 pc (Fig. \[1548\_vlba\]). Component C accounts for the majority of the VLBA flux density, from about 34 per cent at 5 GHz up to 100 per cent at 15 GHz. Its inverted spectrum ($\alpha \sim -0.4$; $S_{\nu} \propto \nu^{- \alpha}$) indicates that it is the source core. From the core region a one-sided jet emerges with a position angle of about 10$^{\circ}$ and bends to the east (position angle of about 30$^{\circ}$) at a distance of $\sim$40 mas (240 pc) from the core, where the component J1 is observed. The higher resolution provided by the 8.4-GHz data allows us to resolve the innermost part of the jet into two compact components (labelled J and J0 in Fig. \[1548\_vlba\]), which are likely jet knots. An extended low-surface-brightness structure (labelled Ext in Figs. \[1548\_vlba\] and \[1548\_natural\]) is observed at $\sim$60 mas (360 pc) from the core. At 15 GHz only the core component is detected.

The source is unresolved on the arcsecond scale sampled by the VLA images.
The similar flux density ($\sim$141 mJy) reported in the NVSS [@condon98] and the FIRST [@becker95] indicates that no extended emission on arcsecond scales is present. On the other hand, VLBA observations at 5 and 8.4 GHz could recover only about 50 per cent or less of the VLA flux density. This may be due to a combination of variability and the presence of extended jet structure that cannot be imaged by the short baselines of the VLBA. The spectral index computed using VLA data between 1.4 and 8.4 GHz results in a moderately flat spectrum with $\alpha \sim 0.3$–0.4. However, the spectral index values are strongly affected by the flux density variability observed in this source.

A small part of the flux density from the extended low-surface-brightness structure is recovered in the low-resolution image at 8.4 GHz (Fig. \[1548\_natural\]), where the total flux density is $\sim$28 mJy instead of the 21 mJy measured on the high-resolution image. No significant difference between the flux density measured in low- and high-resolution images is found at 15 GHz.

We investigated the source variability by analysing archival VLA data. Although the data sets considered are not homogeneous (i.e. different VLA configurations), the lack of extended emission on arcsecond scales implies that variations in the flux density should be intrinsic to the source and not related to the lack of short baselines in the extended VLA configurations.
Evidence for intrinsic variability comes from observations at 8.4 GHz, where the highest flux density was measured when the VLA was in the most extended configuration (Table \[vla\_flux\]).\ For each frequency we computed the variability index $V$ following @hovatta08: $$V = \frac{(S_{\rm max} - \sigma_{\rm max}) - (S_{\rm min} + \sigma_{\rm min})}{(S_{\rm max} - \sigma_{\rm max}) + (S_{\rm min} + \sigma_{\rm min})} \label{var_index}$$ where $S_{\rm max}$ and $S_{\rm min}$ are the maximum and minimum flux density, whereas $\sigma_{\rm max}$ and $\sigma_{\rm min}$ are their associated errors, respectively.\ In addition to the VLA flux density reported in Table \[vla\_flux\], at 1.4 GHz we considered the values from the FIRST and NVSS, while at 5 GHz we considered the values reported in the 87GB catalogue [@gregory91] and in the second MIT-Green Bank survey [@langston90]. VLBA flux densities were not taken into account due to the possible missing flux from extended jet structures on parsec scales. From Eq. \[var\_index\] we found that $V$ is 1 per cent, 11 per cent and 14 per cent, at 1.4, 5, and 8.4 GHz respectively, indicating larger variability at higher frequencies as usually found in blazars. The low variability ($V=0.01$) estimated at 1.4 GHz is comparable to the uncertainties. We note that the flux density variability may be underestimated due to the poor time sampling of the observations.\ ### X-ray properties In the two weeks between the observations the source brightened by about 25 per cent. The [*XMM-Newton*]{} spectra of J1548$+$3511 are well fitted by a broken power-law model, with a possible Seyfert component below $\sim$2 keV and a jet component dominating at higher energies. The low-energy component, with a steep photon index of $\sim$2.5, may be associated with the soft X-ray excess. 
On the other hand, the relatively hard X-ray spectrum above the energy break ($\Gamma_{2} = 1.7$–1.9) may suggest a significant contribution of inverse Compton radiation from a relativistic jet.\ ### Optical and UV properties No significant change of activity was observed for J1548$+$3511 in the optical and UV bands between the two [*XMM*]{}-OM observations performed two weeks apart (Table \[OM\_1548\]). The NLSy1 PKS2004$-$447 ----------------------- ### Radio properties The radio source PKS2004$-$447 has a core-jet structure with a total angular size of about 40 mas, which corresponds to $\sim$150 pc at the redshift of the source. The radio emission is dominated by the source core, labelled C in Fig. \[2004\_vlba\], which accounts for $\sim$42 per cent of the total flux density at 1.4 GHz. The jet structure emerges from the core component with a position angle of about $-$90$^{\circ}$, then at 20 mas ($\sim$75 pc) it bends to a position angle of about $-$60$^{\circ}$. The jet structure is resolved into two subcomponents, J and J1, which are enshrouded by diffuse emission. The lack of multifrequency observations does not allow us to study the spectral index distribution across the source.\ The source is unresolved in VLA images at 8.4 GHz, in agreement with previous studies at arcsecond-scale resolution by @gallo06.\ Archival VLA observations at 8.4 GHz point out some level of flux density variability (Table \[vla\_flux\]). From Eq. \[var\_index\] we computed the variability index for PKS2004$-$447, which turns out to be 27 per cent, consistent with the flux density variability derived from the Ceduna observations at 6.65 GHz [@gallo06].\ ### $\gamma$-ray properties During the first six years of [*Fermi*]{}-LAT observations, the 0.1–100 GeV averaged flux is $\sim$1.6$\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$. 
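As a rough cross-check, the average photon flux quoted above can be converted into an apparent isotropic luminosity. The sketch below is a minimal, hedged version of that conversion: the photon index, the assumption of an unbroken power law over the full band, and the luminosity distance for $z = 0.24$ are all illustrative assumptions (the K-correction is neglected), not the values used in the LAT analysis itself.

```python
import math

ERG_PER_GEV = 1.602e-3   # 1 GeV in erg
MPC_CM = 3.0857e24       # 1 Mpc in cm

def apparent_gamma_luminosity(f_ph, gamma, e_min_gev, e_max_gev, d_l_mpc):
    """Apparent isotropic gamma-ray luminosity from an integrated photon
    flux f_ph [ph cm^-2 s^-1], assuming an unbroken power law
    dN/dE ~ E^-gamma over (e_min_gev, e_max_gev); K-correction neglected."""
    # mean photon energy of the power law over the band [GeV]
    num = (e_max_gev ** (2 - gamma) - e_min_gev ** (2 - gamma)) / (2 - gamma)
    den = (e_max_gev ** (1 - gamma) - e_min_gev ** (1 - gamma)) / (1 - gamma)
    f_energy = f_ph * (num / den) * ERG_PER_GEV      # erg cm^-2 s^-1
    return 4 * math.pi * (d_l_mpc * MPC_CM) ** 2 * f_energy

# Average LAT flux of PKS2004-447; photon index and D_L(z=0.24) are assumed:
L = apparent_gamma_luminosity(1.6e-8, gamma=2.6, e_min_gev=0.1,
                              e_max_gev=100.0, d_l_mpc=1215.0)
print(f"L_gamma ~ {L:.1e} erg/s")   # of order 1e45 erg/s
```

With these assumptions the result is of order $10^{45}$ erg s$^{-1}$, consistent with the apparent luminosity range quoted in the text; the exact published value relies on the measured LAT spectrum.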
The LAT light curve indicates variable $\gamma$-ray emission, with the flux ranging between (1.3–4.2)$\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$, interspersed with periods of low activity during which the source is not detected by [*Fermi*]{}-LAT (Fig. \[LAT\_2004\]). No $\gamma$-ray flares from this source have been detected so far.

### X-ray properties

The X-ray light curve collected by [*Swift*]{}-XRT indicates significant flux variability between 2011 and 2014, ranging between (5–16)$\times$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ (Fig. \[MWL\]). In 2011 September the increase of the X-ray flux occurs when the $\gamma$-ray emission is at a maximum, suggesting a possible correlation between the emission in these two energy bands. On the other hand, between 2013 September and November the X-ray light curve shows two high-activity episodes, in which the flux is F$_{\rm 0.3-10\,keV} \sim$15$\times$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$, separated by a low-activity state. During the X-ray high-activity state in 2013 November, the $\gamma$-ray flux is roughly 2.6 times the average value, whereas in 2013 August–October the source is not detected in $\gamma$-rays. No significant X-ray photon index variability is observed (Fig. \[MWL\]).

In the five months between the [*XMM-Newton*]{} observations the source brightened by about 35 per cent. The photon index derived for the two [*XMM-Newton*]{} observations is in good agreement with the [*Swift*]{}-XRT results, and is slightly softer than the value obtained in 2001 by @gallo06. This may be due to the higher flux observed in 2001 (F$_{0.3-10\,keV} = 1.5 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$) compared with that observed in the 2012 [*XMM-Newton*]{} observations. In the 2012 [*XMM-Newton*]{} observations there is no evidence of a soft X-ray excess.

### Optical and UV properties

On monthly time-scales a difference of $\sim$0.3 mag was observed for PKS2004$-$447 in $u$ band by [*XMM*]{}-OM.
During the [*Swift*]{}-UVOT observations the magnitude changes in the various bands are about 0.6, 1.3, 1.0, 0.9, 0.7, and 0.8, going from the $v$ to the $w2$ filter (Table \[uvot\]). A first peak of the optical/UV activity was observed on 2011 September 5, during a period of high X-ray and $\gamma$-ray activity. A second peak of activity was observed in the $u$ and $w1$ bands on 2013 November 19 and 20. At that time, the maximum X-ray flux was observed together with a high $\gamma$-ray flux level. A contemporaneous increase of activity in the optical-UV, X-ray, and $\gamma$-ray bands indicates that the jet emission is dominant also in the optical and UV, in agreement with the lack of a significant disc emission suggested by @abdo09.

Discussion
==========

Relativistic jets in RL-NLSy1
-----------------------------

High-energy emission has been detected in RL-NLSy1. On the other hand, no $\gamma$-ray emission has been found in RQ-NLSy1, suggesting a possible intrinsically different nature of these two sub-populations. Different $\gamma$-ray properties have been observed in the six RL-NLSy1 detected by [*Fermi*]{}-LAT. Three objects, PMNJ0948$+$0022, SBS0846$+$513, and 1H0323$+$342, show $\gamma$-ray flares, reaching an apparent isotropic luminosity as high as 10$^{48}$ erg s$^{-1}$ [@dammando15; @dammando12], i.e. comparable to those shown by FSRQ [e.g. @ackermann11]. On the other hand, PKS2004$-$447, PKS1502$+$036, and FBQSJ1644$+$2619 have not shown strong flares so far.

During the first 6 years of [*Fermi*]{} operation, the apparent luminosity of PKS2004$-$447 ranges between $L_{\gamma} = ($1.3–4.2)$\times$10$^{45}$ erg s$^{-1}$. A similar behaviour was found for PKS1502$+$036, with a slightly larger luminosity [$L_{\gamma} \sim 10^{46}$ erg s$^{-1}$, @dammando13].
FBQSJ1644+2619 had an intermediate behaviour with high activity episodes interleaved by long quiescent periods [@dammando15b].\ The RL-NLSy1 J1548$+$3511 is not detected in $\gamma$-rays and the upper limit to the apparent isotropic luminosity is $L_{\gamma} < 1.7 \times 10^{46}$ erg s$^{-1}$.\ The detection of $\gamma$-ray emission in a handful of RL-NLSy1 proves the presence of relativistic jets in this peculiar sub-class of active galactic nuclei (AGN). In the photon index versus $\gamma$-ray luminosity plane, the $\gamma$-ray loud NLSy1 are located in the low-luminosity tail of FSRQ distribution, where also the $\gamma$-ray emitting steep-spectrum radio quasars are found [@ackermann15; @abdo10].\ A jet component contribution is likely observed above 2 keV in the RL-NLSy1 J1548$+$3511, where the X-ray photon index is $\Gamma_{\rm X} \sim 1.7$–1.9. Below 2 keV the spectrum is softer, compatible with the presence of soft X-ray excess usually observed in NLSy1 [@grupe10], as well as in the $\gamma$-ray NLSy1 PMNJ0948$+$0022 [@dammando14] and 1H0323$+$342 [@paliya14]. An indication of a weak soft excess was reported by @gallo06 for PKS2004$-$447 in 2004. The X-ray spectra of PKS2004$-$447 observed in 2012 are well reproduced by a single power law with hard photon index $\Gamma_{\rm X} \sim 1.7$ and no significant soft X-ray excess is needed below 2 keV. This may be due to the relatively low statistics of the 2012 observations. The study of the multiwavelength variability of PKS2004$-$447 indicates that during the high activity state observed in $\gamma$-rays between 2011 August and October, when the $\gamma$-ray flux was a factor of 2.6 times the average value, also the X-ray, UV and optical emission reached a maximum, suggesting a common origin for the multiband variability. It is worth mentioning that not all the episodes of flux increase in X-rays/UV are associated with a high activity state in $\gamma$-rays (Fig. 
\[MWL\]), like in the case of the high X-ray activity observed in 2013 September. Such X-ray/UV flares with no obvious counterpart in other bands were observed in the RL-NLSy1 PMNJ0948$+$0022 [e.g. @foschini12], as well as in many blazars [e.g., 3C279, @abdo10b].\ No systematic studies of complete samples of NLSy1 have been carried out at Very High Energy (VHE) so far. PKS2004$-$447 was observed by the High Energy Stereoscopic System [H.E.S.S., @aharonian06]. No detection was obtained and the estimated upper limit was 0.9 per cent of the Crab Units [@abramowski14]. Upper limits of 10 per cent and 1.9 per cent of the Crab Units at VHE were obtained for the NLSy1 1H0323$+$342 [@falcone04] and PMNJ0948$+$0022 [@dammando15], respectively. The lack of VHE detection may be due either to an intrinsically soft $\gamma$-ray spectrum, or to $\gamma$-$\gamma$ absorption within the source, or to pair production from $\gamma$-ray photons of the source and the infrared photons from the extragalactic background light (EBL), although the latter scenario is disfavoured by the fact that the redshift of the most distant FSRQ detected at VHE, PKS1441$+$25 [@mirzoyan15; @mukherjee15], is higher ($z=0.939$) than the redshifts of the RL-NLSy1 investigated so far.\ Physical properties ------------------- Milliarcsecond resolution observations are a fundamental requirement for describing the morphology and understanding the physical properties of RL-NLSy1. 
Given their compactness on arcsecond scales, the high angular resolution provided by VLBA observations allows us to investigate the presence of a jet structure emerging from the core region and to constrain the physical characteristics of the core emission without substantial contamination from the jet.

We computed the brightness temperature $T_{\rm B}$ of the core component by using: $$T_{\rm B} = \frac{1}{2 k} \frac{S(\nu)}{\Omega} \left( \frac{c}{\nu} \right)^{2} \label{eq_brightness}$$ where $k$ is the Boltzmann constant, $\Omega$ is the solid angle of the emitting region, $S(\nu)$ is the source-frame flux density at the observed frequency, and $c$ is the speed of light. We computed the source-frame flux density as $S(\nu) = S_{\rm obs}(\nu) \times (1+z)^{1-\alpha}$, where $S_{\rm obs}(\nu)$ is the observer-frame flux density, $z$ is the source redshift, and $\alpha$ is the radio spectral index, which we assume to be equal to 0 for the self-absorbed core component. The solid angle is given by: $$\Omega = \frac{\pi}{4} \theta_{\rm min} \theta_{\rm maj} \label{omega}$$ where $\theta_{\rm maj}$ and $\theta_{\rm min}$ are the major and minor axes, respectively. In both sources the core region is unresolved by our VLBA observations and the major and minor axes must be considered as upper limits. This translates into a lower limit on the brightness temperature.

If in Eq. \[eq\_brightness\] we consider the values derived from the VLBA image at 15 GHz, we obtain a brightness temperature $T_{\rm B} > 4.4\times 10^{9}$ K for J1548$+$3511. In the case of PKS2004$-$447 we computed the core brightness temperature using the values derived from the VLBA image at 1.4 GHz, and we obtained $T_{\rm B} > 2\times 10^{10}$ K.

For PKS2004$-$447, the availability of two VLA observations separated by about one month allowed us to estimate the rest-frame variability brightness temperature, $T'_{\rm B, var}$, from the flux density variability.
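Eqs. \[eq\_brightness\] and \[omega\] can be sketched numerically. The core flux density and beam axes below are illustrative placeholders (the measured component parameters are not reproduced in this section), so the code demonstrates the order of magnitude rather than the quoted limits:

```python
import math

K_B = 1.380649e-23                      # Boltzmann constant [J/K]
C = 2.99792458e8                        # speed of light [m/s]
JY = 1e-26                              # 1 Jy in W m^-2 Hz^-1
MAS = math.pi / (180 * 3600 * 1000)     # 1 milliarcsecond in radians

def brightness_temperature(s_obs_jy, nu_hz, theta_maj_mas, theta_min_mas,
                           z, alpha=0.0):
    """Lower limit on T_B for an unresolved Gaussian core.
    The observed flux density is K-corrected to the source frame,
    S(nu) = S_obs(nu) * (1+z)**(1-alpha)."""
    s_src = s_obs_jy * JY * (1 + z) ** (1 - alpha)
    omega = math.pi / 4 * (theta_maj_mas * MAS) * (theta_min_mas * MAS)
    return s_src / (2 * K_B * omega) * (C / nu_hz) ** 2

# Illustrative (NOT the measured) core parameters at 15 GHz for z = 0.478:
tb = brightness_temperature(0.030, 15e9, 0.5, 0.5, 0.478)
print(f"T_B > {tb:.1e} K")   # of order 1e9 K for these assumed values
```

A 30 mJy core confined within half a milliarcsecond already implies $T_{\rm B} \gtrsim 10^{9}$ K, illustrating why unresolved VLBI cores yield only lower limits.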
Following @dammando13 we computed $T'_{\rm B, var}$ by: $$T'_{\rm B, var} = \frac{2}{\pi k} \frac{|\Delta S| D_{\rm L}^{2}}{\Delta t^{2} \nu^{2} (1+z)^{1 + \alpha}} \label{var_bright}$$ where $|\Delta S|$ is the flux density variation, $\Delta t$ is the time lag between the observations, and $D_{\rm L}$ is the luminosity distance. If in Eq. \[var\_bright\] we consider $|\Delta S|=221$ mJy and $\Delta t = 36$ days, i.e. the flux density variation measured between the last two VLA observations, we obtain $T'_{\rm B, var} \sim 1.7 \times 10^{14}$ K. This high variability brightness temperature is similar to the value derived by @gallo06 on the basis of the Ceduna monitoring campaign. In the case of J1548$+$3511, @yuan08 estimated a variability brightness temperature $T'_{\rm B, var} = 10^{13}$ K on the basis of the flux density variation measured in 207 days at 4.9 GHz. Assuming that such high values are due to Doppler boosting, we can estimate the variability Doppler factor $\delta_{\rm var}$ by using: $$\delta_{\rm var} = \left( \frac{T'_{\rm B, var}}{T_{\rm int}} \right)^{\frac{1}{3+\alpha}} \label{doppler}$$ where $T_{\rm int}$ is the intrinsic brightness temperature. Assuming a typical value $T_{\rm int}=5 \times 10^{10}$ K, as derived by e.g. @readhead94 and @hovatta09, and a flat spectrum with $\alpha=0$, we obtain $\delta_{\rm var} = 15$ and 5.8 for PKS2004$-$447 and J1548$+$3511, respectively. These values are in agreement with the range of variability Doppler factors found for the RL-NLSy1 SBS0846$+$513 [@dammando13b] and PKS1502$+$036 [@dammando13], as well as for blazars [@savolainen10].

We estimated the ranges of viewing angles $\theta$ and of the bulk velocity in units of the speed of light, $\beta$, from the jet/counter-jet brightness ratio.
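Eqs. \[var\_bright\] and \[doppler\] can be verified numerically with the numbers quoted above. The luminosity distance for $z = 0.24$ used below ($\sim$1215 Mpc) is an assumed flat ΛCDM value, since the adopted cosmology is not restated in this section:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
JY = 1e-26           # 1 Jy in W m^-2 Hz^-1
MPC = 3.0857e22      # 1 Mpc in metres
DAY = 86400.0        # seconds per day

def t_b_var(delta_s_jy, dt_days, nu_hz, z, d_l_mpc, alpha=0.0):
    """Rest-frame variability brightness temperature (Eq. var_bright)."""
    num = abs(delta_s_jy) * JY * (d_l_mpc * MPC) ** 2
    den = (dt_days * DAY) ** 2 * nu_hz ** 2 * (1 + z) ** (1 + alpha)
    return 2 / (math.pi * K_B) * num / den

def doppler_var(t_var, t_int=5e10, alpha=0.0):
    """Variability Doppler factor for intrinsic temperature t_int (Eq. doppler)."""
    return (t_var / t_int) ** (1 / (3 + alpha))

# PKS2004-447: |dS| = 221 mJy in 36 days at 8.4 GHz; D_L(z=0.24) assumed
t_pks = t_b_var(0.221, 36, 8.4e9, 0.24, 1215)
print(f"T'_B,var ~ {t_pks:.1e} K, delta_var ~ {doppler_var(t_pks):.0f}")
# -> ~1.7e+14 K and delta_var ~ 15, as quoted in the text

# J1548+3511: T'_B,var = 1e13 K (Yuan et al. 2008)
print(f"delta_var ~ {doppler_var(1e13):.1f}")   # -> ~5.8
```

With these inputs the sketch reproduces both quoted Doppler factors, confirming the internal consistency of the two equations.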
Assuming that the source has two symmetrical jets of the same intrinsic power, we used the equation: $$\frac{B_{\rm j}}{B_{\rm cj}} = \left( \frac{1 + \beta {\rm cos}\theta}{1 - \beta {\rm cos}\theta} \right)^{2 + \alpha} \label{brightness}$$ where $B_{\rm j}$ and $B_{\rm cj}$ are the jet and counter-jet brightness, respectively. We prefer to compare the surface brightness instead of the flux density because the jet has a smooth structure without clear knots. The jet brightness for J1548$+$3511 and PKS2004$-$447 is 0.93 mJy beam$^{-1}$ and 14.0 mJy beam$^{-1}$, respectively. In the case of the counter-jet, which is not visible, we assumed an upper limit for the surface brightness corresponding to the 1$\sigma$ noise level measured on the image, i.e. 0.2 mJy beam$^{-1}$ and 0.6 mJy beam$^{-1}$ for J1548$+$3511 and PKS2004$-$447, respectively. From the brightness ratio estimated from Eq. \[brightness\] we obtain $\beta$cos$\theta > 0.61$ for J1548$+$3511 and 0.52 for PKS2004$-$447, implying minimum velocities of $\beta > 0.6$ and 0.5 and maximum viewing angles of $\theta = 52^{\circ}$ and $59^{\circ}$, respectively. These limits do not provide a tight constraint on the physical parameters of these objects.

SED modelling
-------------

We created an average SED for each of the two NLSy1 studied here. The SED of PKS2004$-$447 includes the 6-year average [*Fermi*]{}-LAT spectrum, the [*XMM-Newton*]{} EPIC-pn data collected on 2012 October 18, and the [*Swift*]{}-UVOT data collected on 2012 September 12. In addition, we included the IR data collected by WISE on 2010 April 14 and by the Siding Spring Observatory on 2004 April 10–12, the radio VLA and VLBA data presented in this paper, and the ATCA data from @gallo06.
The SED of J1548$+$3511 includes the upper limit estimated over 6 years of [*Fermi*]{} observations, the [*XMM-Newton*]{} (EPIC-pn and OM) data collected on 2011 August 8, the IR data collected by WISE on 2010 January 30 and by 2MASS on 1998 April 3, and the radio VLA data presented here. The multiwavelength data are not simultaneous.

We modelled the two SEDs with a combination of synchrotron, synchrotron self-Compton (SSC), and external Compton (EC) non-thermal emission. The synchrotron component considered is self-absorbed below $10^{11}$ Hz and thus cannot reproduce the radio emission, which is likely the superposition of multiple self-absorbed jet components [@konigl81]. We also included thermal emission from an accretion disc and a dust torus. The modelling details can be found in @finke08 and @dermer09. Additionally, a soft excess was observed in the X-ray spectrum of J1548$+$3511. We note that the origin of this soft X-ray emission is uncertain. To account for this feature we included, as a possible origin, emission from the disc that is Compton scattered by an optically thin thermal plasma near the accretion disc (i.e., a corona). This was done using the routine “[SIMPL]{}” [@steiner09]. This routine has two free parameters: the fraction of disc photons scattered by the corona ($f_{\rm sc}$) and the power-law photon index of the scattered coronal emission ($\Gamma_{\rm sc}$). The mass of the BH was chosen to be the same as the one reported in @foschini15.

The results of the modelling are reported in Table \[table\_fit\] and Figure \[SED\_fig\] [for a description of the model parameters see @dermer09]. This model assumes that the emitting region is outside the Broad Line Region, where dust torus photons are likely the seed photon source.
This seed photon source was modelled as being an isotropic, monochromatic radiation source with dust parameters chosen to be consistent with the relation between inner radius, disc luminosity, and dust temperature from @nenkova08.\ The Compton component of the SED of PKS2004$-$447 and J1548$+$3511 is modelled with an external Compton scattering of dust torus seed photons, as for the SED of SBS0846$+$513 [@dammando13] and PMNJ0948$+$0022 [@dammando15]. In PKS2004$-$447 the IR data are not well fitted by the model. This may be due to the fact that the data considered in the SED are not simultaneous and some variability may be present (note that these data are not well-fitted by the modelling done by @paliya13 either). We note that the disc luminosity of PKS2004$-$447 is particularly weak. This value is similar to the disc luminosity of SBS0846$+$513, while it is significantly lower than that of PMNJ0948$+$0022. The disc luminosity of PKS2004$-$447 is consistent with the value estimated by @foschini15 on the basis of the optical spectrum, and the lack of a blue bump. This is in contrast to the modelling of PKS2004$-$447 by @paliya13 who used a much brighter dust component, which was not consistent with the disc luminosity estimated by @foschini15. It is worth mentioning that the model parameters shown here are not unique, and other model parameters could reproduce the SED equally well. 
This is particularly true for J1548$+$3511, since without a LAT detection, its Compton component is not well-constrained.

  Parameter                                           Symbol                        PKS2004$-$447          J1548$+$3511
  --------------------------------------------------  ----------------------------  ---------------------  --------------------
  Redshift                                            $z$                           0.24                   0.478
  Bulk Lorentz factor                                 $\Gamma$                      30                     30
  Doppler factor                                      $\delta_D$                    30                     30
  Magnetic field \[G\]                                $B$                           0.5                    2.0
  Variability time-scale \[s\]                        $t_v$                         $10^5$                 $10^5$
  Comoving radius of blob \[cm\]                      $R^{\prime}_b$                $7.3\times10^{16}$     $6.1\times10^{16}$
  Low-energy electron spectral index                  $p_1$                         2.5                    2.5
  High-energy electron spectral index                 $p_2$                         3.8                    3.6
  Minimum electron Lorentz factor                     $\gamma^{\prime}_{\rm min}$   1.0                    3.0
  Break electron Lorentz factor                       $\gamma^{\prime}_{\rm brk}$   $6.0\times10^2$        $1.0\times10^2$
  Maximum electron Lorentz factor                     $\gamma^{\prime}_{\rm max}$   $4.0\times10^3$        $1.0\times10^4$
  Black hole mass \[$M_\odot$\]                       $M_{\rm BH}$                  $4.3\times10^6$        $8.3\times10^7$
  Disc luminosity \[erg s$^{-1}$\]                    $L_{\rm disc}$                $1.8\times10^{42}$     $1.4\times10^{45}$
  Inner disc radius \[$R_g$\]                         $R_{\rm in}$                  6.0                    2.0
  Outer disc radius \[$R_g$\]                         $R_{\rm out}$                 200                    200
  Fraction of disc photons scattered by corona        $f_{\rm sc}$                  0                      0.08
  Corona spectral index                               ${\Gamma}_{\rm sc}$           N/A                    2.6
  Seed photon source energy density \[erg cm$^{-3}$\] $u_{\rm seed}$                $8.3\times10^{-6}$     $2.7\times10^{-5}$
  Seed photon source photon energy                    ${\epsilon}_{\rm seed}$       $5\times10^{-7}$       $5\times10^{-7}$
  Dust torus luminosity \[erg s$^{-1}$\]              $L_{\rm dust}$                $7.5\times10^{40}$     $1.9\times10^{44}$
  Dust torus radius \[cm\]                            $R_{\rm dust}$                $6.5\times10^{15}$     $4.1\times10^{18}$
  Dust temperature \[K\]                              $T_{\rm dust}$                1000                   1000
  Jet power in magnetic field \[erg s$^{-1}$\]        $P_{\rm j,B}$                 $8.9\times10^{45}$     $1.0\times10^{47}$
  Jet power in electrons \[erg s$^{-1}$\]             $P_{\rm j,e}$                 $2.7\times10^{44}$     $6.6\times10^{43}$

\[table\_fit\]

RL-NLSy1 and the young radio source population
----------------------------------------------

The low black hole mass and the high accretion rate commonly estimated in NLSy1 suggested that these objects may be in an early evolutionary stage [@grupe99; @mathur00]. Some RL-NLSy1 have been proposed to be compact steep spectrum (CSS) radio sources, on the basis of their compactness and flat/inverted spectrum which turns over at a few hundred MHz or around one GHz [@yuan08]. CSS are powerful radio sources whose linear size is $<20$ kpc. Due to their intrinsically compact size they are considered radio sources in an early evolutionary stage (see e.g. O'Dea 1998 for a review). Kinematic and radiative studies provided ages of 10$^{3-5}$ years, strongly favouring the youth scenario [see e.g. @polatidis03; @murgia03].

Many CSS sources are hosted in galaxies that recently underwent major mergers, which may have triggered the onset of the radio emission. A similar scenario was suggested by @leon14 for the RL-NLSy1 1H0323$+$342. A link between NLSy1 and CSS was also proposed by @caccianiga14 for the RL-NLSy1 SDSSJ143244.91$+$301435.3 on the basis of its compact size, absence of variability, and steep radio spectrum. In this context the variable and $\gamma$-ray emitting RL-NLSy1 may be the aligned population of RL-NLSy1, where the CSS properties are hidden by dominant boosting effects. However, at least three (1H0323$+$342, PMNJ0948$+$0022, and FBQSJ1644$+$2619) out of the six RL-NLSy1 detected in $\gamma$-rays have extended structures with linear sizes of 20–50 kpc [@doi12], challenging this interpretation. Among the remaining three objects, PKS2004$-$447 was suggested as a possible CSS source by @gallo06.
However, the X-ray spectra of CSS sources are typically highly obscured, with column density N$_{\rm H} \gtrsim 10^{22}$ cm$^{-2}$ [@tengstrand09], while no absorber in addition to the Galactic one is needed for modelling the X-ray spectrum of PKS2004$-$447.

The information available so far is not enough to firmly link the RL-NLSy1 to the young radio source population. Statistical multiwavelength studies of a large sample of RL-NLSy1 are required for investigating a possible connection between this sub-class of radio-loud AGN and young radio sources.

Conclusions
===========

We presented the results of a multiwavelength study, from radio to $\gamma$-rays, of the RL-NLSy1 J1548$+$3511 and PKS2004$-$447. The conclusions from this investigation can be summarized as follows:

- PKS2004$-$447 is detected in $\gamma$-rays by LAT with an average flux between 0.1 and 100 GeV of $\sim$1.6$\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$, corresponding to a luminosity of $\sim$1.6$\times$10$^{45}$ erg s$^{-1}$, which is comparable to the values found in the other $\gamma$-ray emitting NLSy1. No strong flares have been detected so far.

- J1548$+$3511 has not been detected in $\gamma$-rays by LAT during the first 6 years of observations. The upper limit to the luminosity is 1.7$\times$10$^{46}$ erg s$^{-1}$.

- Both sources have a clear core-jet structure on parsec scales. The majority of the radio emission comes from the core component. On arcsecond scales the radio structure is unresolved.

- PKS2004$-$447 shows significant variability from radio to $\gamma$-rays. In particular, at the end of 2011 and at the end of 2013 the high-activity state observed in $\gamma$-rays is simultaneous with a local maximum in the X-ray, UV, and optical bands, suggesting a common origin. The X-ray spectra collected by [*XMM-Newton*]{} in 2012 from 0.3 to 10 keV are fitted by a single power law.
No significant X-ray photon index variability is observed in the period considered in this paper.

- The X-ray spectrum of J1548$+$3511 is well fitted by a soft component at low energies and by a hard component above 2 keV. The X-ray flux increased by about 25 per cent in the two weeks between the [*XMM-Newton*]{} observations. The brightening is accompanied by a hardening of the spectrum, in particular above the energy break. This is a further indication that the emission in the high-energy part of the X-ray spectrum is dominated by the non-thermal jet emission, while a Seyfert component may be present in the low-energy part of the spectrum.

- The broadband SED of both sources can be reproduced with synchrotron, SSC, and external Compton scattering of the IR seed photons from the dust torus. The soft excess in J1548$+$3511 can be modelled as emission from a thermal corona. The disc luminosity of PKS2004$-$447 turns out to be very weak.

- The variability brightness temperatures and the Doppler factors derived for both sources are similar to those found in $\gamma$-ray blazars. These characteristics, together with the high radio-loudness, are a good proxy for the presence of a relativistic jet. However, they are not good tools for selecting $\gamma$-ray emitting NLSy1.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank the referee Dirk Grupe for helpful and valuable comments that improved the manuscript. The VLBA and VLA are operated by the US National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under a cooperative agreement by Associated Universities, Inc.

The [*Fermi*]{} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat à l’Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucléaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d’Études Spatiales in France.\ This research has made use of the NASA/IPAC Extragalactic Database NED which is operated by the JPL, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Abdo, A. A., et al. 2009, ApJ, 707, L142 Abdo, A.A., et al. 2010a, ApJ, 720, 912 Abdo, A.A., et al. 2010b, Nature, 463, 919 Abramowski, A., et al. 2014 (H.E.S.S. 
Collaboration), A&A, 564, 9 Acero, F., et al. 2015, ApJS submitted, arXiv:1501.02003 Ackermann, M., et al. 2011, ApJ, 743, 171 Ackermann, M., et al. 2012, ApJS, 203, 4 Ackermann, M., et al. 2015, ApJS submitted Aharonian, F., et al. 2006 (H.E.S.S. Collaboration), A&A, 457, 899 Antón, S., Browne, I.W.A., Marcha, M.J. 2008, A&A, 490, 583 Atwood, W.B., et al. 2009, ApJ, 697, 1071 Barthelmy, S.D., et al. 2005, SSRv, 120, 143 Baumgartner, W.H., et al. 2013, ApJS, 207, 19 Becker, R.H., White, R.L., Helfand, D.J. 1995, ApJ, 450, 559 Bessell, M. S., Castelli, F., & Plez, B. 1998, A&A, 333, 231 Boller, Th., et al. 2002, MNRAS, 329, L1 Breeveld, A., et al. 2011, AIP Conf. Ser. Vol. 1358, Gamma Ray Bursts 2010. McEnery J. E., Racusin J. L., Gehrels N., editors. New York: Am. Inst. Phys.; 2011. p. 373 Burrows, D.N., et al. 2005, SSRv, 120, 165 Caccianiga, A., et al. 2014, MNRAS, 441, 172 Cardelli, J.A., Clayton, G.C., Mathis, J.S. 1989, ApJ, 345, 245 Cash, W. 1979, ApJ, 228, 939 Condon, J.J., Cotton, W.D., Greisen, E.W., Yin, Q.F., Perley, R.A., Taylor, G.B., Broderick, J.J. 1998, AJ, 115, 1693 D’Ammando, F., et al. 2012, MNRAS, 426, 317 D’Ammando, F., et al. 2013a, MNRAS, 436, 191 D’Ammando, F., et al. 2013b, MNRAS, 433, 952 D’Ammando, F., et al. 2014, MNRAS, 438, 3521 D’Ammando, F., Orienti, M., Larsson, J., Giroletti, M. 2015a, MNRAS, 452, 520 D’Ammando, F., et al. 2015b, MNRAS, 446, 2456 Dermer, C.D., Finke, J.D., Krug, H., Böttcher, M. 2009, ApJ, 692, 32 Doi, A., Asada, K., Nagai, H. 2011, ApJ, 738, 126 Doi, A., Nagira, H., Kawakatu, N., Kino, M., Nagai, H., Asada, K. 2012, ApJ, 760, 41 Doi, A., Asada, K., Fujisawa, K., Nagai, H., Hagiwara, Y., Wajima, K., Inoue, M. 2013, ApJ, 765, 69 Fabian, A.C., et al. 2009, Nature, 459, 540 Falcone, A.D., et al. 2004, ApJ, 613, 717 Finke, J.D., Dermer, C.D., Böttcher, M. 2008, ApJ, 686, 181 Foschini, L., et al. 2012, A&A, 548, 106 Foschini, L., et al. 2015, A&A, 575, 13 Gallo, L.C., et al. 2006, MNRAS, 370, 245 Gehrels, N., et al.
2004, ApJ, 611, 1005 Gregory, P.C., Condon, J.J. 1991, ApJS, 75, 1011 Grupe, D., Beuermann, K., Mannheim, K., Thomas, H-C. 1999, A&A, 350, 805 Grupe, D., Komossa, S., Leighly, K.M., Page, K.L. 2010, ApJS, 187, 64 Hovatta, T., Nieppola, E., Tornikoski, M., Valtaoja, E., Aller, M.F., Aller, H.D. 2008, A&A, 485, 51 Hovatta, T., Valtaoja, E., Tornikoski, M., Lähteenmäki, A. 2009, A&A, 494, 527 Jansen, F., et al. 2001, A&A, 365, L1 Kalberla, P. M. W., Burton, W. B., Hartmann, D., Arnal, E.M., Bajaja, E., Morras, R., Pöppel, W.G.L. 2005, A&A, 440, 775 Kataoka, J., et al. 2008, ApJ, 672, 787 Kellermann, K.I., Sramek, R., Schmidt, M., Shaffer, D.B., Green, R. 1989, AJ, 98, 1195 Komossa, S., et al. 2006, AJ, 132, 531 Königl, A. 1981, ApJ, 243, 700 Langston, G.I., Heflin, M.B., Conner, S.R., Lehár, J., Carrilli, C.L., Burke, B.F. 1990, ApJS, 72, 621 León-Tavares, J., et al. 2014, ApJ, 795, 58 Markarian, B.E., Lipovetsky, V.A., Stepanian, J.A., Erastova, L.K., Shapovalova, A.I. 1989, SoSAO, 62, 5 Mason, K.O., et al. 2001, A&A, 365, 36 Mathur, S. 2000, MNRAS, 314, 17 Mattox, J.R., et al. 1996, ApJ, 461, 396 Miller, L., Turner, T.J., Reeves, J.N., Braito, V. 2010, MNRAS, 408, 1928 Mirzoyan, R. 2015, ATel, 7416 Mukherjee, R. 2015, ATel, 7433 Murgia, M. 2003, PASA, 20, 19 Nenkova, M., Sirocky, M.M., Nikutta, R., Ivezić, Ž., Elitzur, M. 2008, ApJ, 685, 160 O’Dea, C.P. 1998, PASP, 110, 493 Oshlack, A.Y.K.N., Webster, R.L., Whiting, M.T. 2001, ApJ, 558, 578 Osterbrock, D.E., Pogge, R.W. 1985, ApJ, 297, 166 Paliya, V.S., Stalin, C.S., Shukla, A., Sahayanathan, S. 2013, ApJ, 768, 52 Paliya, V.S., Sahayanathan, S., Parker, M.L., Fabian, A.C., Stalin, C.S., Anjum, A., Pandey, S.B. 2014, ApJ, 789, 143 Panessa, F., et al. 2011, MNRAS, 417, 2426 Polatidis, A.G., Conway, J.E. 2003, PASA, 20, 69 Readhead, A.C.S. 1994, ApJ, 426, 51 Richards, J.L., Lister, M.L. 2015, ApJ, 800, 8 Roming, P. W. A., et al.
2005, SSRv, 120, 95 Savolainen, T., Homan, D.C., Hovatta, T., Kadler, M., Kovalev, Y.Y., Lister, M.L., Ros, E., Zensus, J.A. 2010, A&A, 512, 24 Schlafly, E.F. & Finkbeiner, D.P. 2011, ApJ, 737, 103 Steiner, J.F., Narayan, R., McClintock, J.E., Ebisawa, K. 2009, PASP, 121, 1279 Tengstrand, O., Guainazzi, M., Siemiginowska, A., Fonseca Bonilla, N., Labiano, A., Worrall, D.M., Grandi, P., Piconcelli, E. 2009, A&A, 89, 102 Wilms, J., Allen, A., McCray, R. 2000, ApJ, 542, 914 Yuan, W., et al. 2008, ApJ, 685, 801 Zhou, H., et al. 2007, ApJL, 658, L13 [^1]: The radio-loudness is defined as the ratio between the radio flux density at 6 cm and the optical B-band flux [@kellermann89]. [^2]: <http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/> [^3]: <http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html> [^4]: <http://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html> [^5]: <http://heasarc.nasa.gov/> [^6]: <http://xmm.esac.esa.int/external/xmm_user_support/documentation/sas_usg/USG/>
--- abstract: 'Let $C$ be a smooth elliptic curve embedded in a smooth complex surface $X$ such that $C$ is a leaf of a suitable holomorphic foliation of $X$. We investigate complex analytic properties of a neighborhood of $C$ under some assumptions on the complex dynamical properties of the holonomy function. As an application, we give an example of $(C, X)$ in which the line bundle $[C]$ is formally flat along $C$, but it does not admit a $C^\infty$ Hermitian metric with semi-positive curvature.' address: 'Graduate School of Mathematical Sciences, The University of Tokyo 3-8-1 Komaba, Meguro-ku, Tokyo, 153-8914 Japan ' author: - Takayuki Koike title: On a neighborhood of a torus leaf of a certain class of holomorphic foliations on complex surfaces --- Introduction ============ Let $X$ be a smooth complex surface and $C$ be a smooth elliptic curve embedded in $X$. Our aim is to investigate complex analytic properties of a neighborhood of $C$ when there exists a non-singular holomorphic foliation $\mathcal{F}$ on a neighborhood of $C$ in $X$ such that $C$ is a leaf of $\mathcal{F}$. For technical reasons, we always assume the following two conditions: $(1)$ there exists a neighborhood $W$ of $C$ and a holomorphic submersion $\pi\colon W\to C$ such that $\pi|_C$ is the identity map, and $(2)$ there exists a generator $\gamma_1$ of the fundamental group $\pi_1(C, *)$ of $C$ such that the holonomy of $\mathcal{F}$ on $C$ along $\gamma_1$ is trivial: i.e. the holonomy morphism ${\rm Hol}_C\colon\pi_1(C, *)\to \mathcal{O}_{\mathbb{C}, 0}$ satisfies $${\rm Hol}_C(\gamma_1)(\xi)=\xi,\ {\rm Hol}_C(\gamma_2)(\xi)=f(\xi)$$ for (a germ of) some holomorphic function $f\in\mathcal{O}_{\mathbb{C}, 0}$ with $f(0)=0$ and $f'(0)\not=0$, where $\gamma_1$ and $\gamma_2$ are the generators of $\pi_1(C, *)$. The main result in the present paper is the following: \[thm:main\] Let $X$ be a smooth complex surface and $C$ be a smooth elliptic curve embedded in $X$.
Assume that there exists a non-singular holomorphic foliation $\mathcal{F}$ on a neighborhood of $C$ in $X$ such that $C$ is a leaf of $\mathcal{F}$ and the conditions $(1)$ and $(2)$ above hold for some $f\in\mathcal{O}_{\mathbb{C}, 0}$ with $f(0)=0$ and $f'(0)\not=0$. Then the following holds:\ $(i)$ Assume that $f$ is a rational function and that $0$ is a repelling fixed point (i.e. $|f'(0)|>1$), an attracting fixed point (i.e. $|f'(0)|<1$), or a Siegel fixed point of $f$ (i.e. $f'(0)=e^{2\mathrm{i}\pi\theta}$ for some irrational number $\theta$ and $0$ lies in the Fatou set of $f$). Then there exists a neighborhood $W'$ of $C$ and a harmonic function $\Phi$ defined on $W'\setminus C$ such that $\Phi(p)=-\log{\rm dist}\,(p, C)+O(1)$ as $p\to C$, where ${\rm dist}\,(p, C)$ is a local Euclidean distance from $p$ to $C$. In particular, $C$ admits a pseudoflat neighborhood system.\ $(ii)$ Assume that $0$ is a rationally indifferent fixed point of $f$ (i.e. $f'(0)=e^{2\mathrm{i}\pi\theta}$ for some rational number $\theta$) and that the $n$-th iterate $f^n$ of $f$ is not equal to the identity map around $0$ for each integer $n$. Then $C$ admits a strongly pseudoconcave neighborhood system.\ $(iii)$ Assume that $f$ is a polynomial and that, for each neighborhood $\Omega$ of $0$, there exists a periodic cycle $\{f(\eta), f^2(\eta), \dots, f^m(\eta)=\eta\}$ of $f$ included in $\Omega\setminus\{0\}$. Then, for any neighborhood $W'$ of $C$ and any continuous function $\psi\colon W'\to(-\infty, \infty]$ whose restriction $\psi|_{W'\setminus C}$ is psh (plurisubharmonic), $\psi$ is bounded from above on a neighborhood of $C$. Note that there actually exist examples of $(C, X)$ satisfying the assumptions in Theorem \[thm:main\] for each of the statements $(i), (ii)$, and $(iii)$.
It is because, as we will see in §\[subsection:constr\], we can construct a pair $(C, X)$ which satisfies the conditions $(1)$ and $(2)$ for any fixed elliptic curve $C$ and any holonomy function $f\in\mathcal{O}_{\mathbb{C}, 0}$ (here we also need the facts on the existence of $f$ with $0$ as a Siegel fixed point [@Si] [@Be Theorem 6.6.4], or a fixed point as in $(iii)$ [@U83 §5.4]). Theorem \[thm:main\] $(i)$ can be shown by a concrete construction of such a harmonic function $\Phi$, and Theorem \[thm:main\] $(ii)$ is obtained as a simple application of Ueda theory [@U83] (note that the normal bundle $N_{C/X}$ is topologically trivial, which follows directly from the Camacho–Sad formula [@CS] [@S]). Thus, our main interest here is the case where $0$ is a Cremer fixed point of $f$ (i.e. $0$ is an irrationally indifferent fixed point lying in the Julia set of $f$), of which the situation of Theorem \[thm:main\] $(iii)$ is a special case. We will show Theorem \[thm:main\] $(iii)$ by the same technique as that used in the proof of [@U83 Theorem 2] (by constructing a leafwise harmonic psh function on a neighborhood of $C$ instead of the function whose complex Hessian has a negative eigenvalue). Here let us explain our motivation. Our original interest is the singularity of [*minimal singular metrics*]{} on a topologically trivial line bundle on a surface which is defined by a smooth embedded curve. Minimal singular metrics of a line bundle $L$ are metrics of $L$ with the mildest singularities among singular Hermitian metrics of $L$ whose local weights are psh. Minimal singular metrics were introduced in [@DPS00 1.4] as a (weak) analytic analogue of the Zariski decomposition. Let $X$ be a surface and $C$ be a smooth embedded curve with topologically trivial normal bundle, and denote by $[C]$ the line bundle defined by the divisor $C$.
Ueda classified such a pair $(C, X)$ into three types: $(\alpha)$ when $[C]$ is not formally flat along $C$, $(\beta)$ when $[C]$ is flat around $C$, and $(\gamma)$ when $[C]$ is formally flat along $C$ but is not flat around $C$. In [@K3], we determined a minimal singular metric of $[C]$ when the pair $(C, X)$ is of type $(\alpha)$. From the argument in the proof of [@K2 Corollary 3.4], it can be shown that $[C]$ is semi-positive (i.e. $[C]$ admits a $C^\infty$ Hermitian metric with semi-positive curvature) when the pair $(C, X)$ is of type $(\beta)$. We are now interested in the case of type $(\gamma)$, especially in the example of $(C, X)$ of type $(\gamma)$ constructed in [@U83 §5.4], which is the motivation of this paper (here we note that the setting of $(C, X)$ in Theorem \[thm:main\] $(iii)$ is a modest generalization of this example of Ueda). As an application of Theorem \[thm:main\], we show the following: \[cor:main\] Let $(C, X, \mathcal{F}, f)$ be as in Theorem \[thm:main\]. Assume that $f$ is a polynomial of degree $d$ with $0$ as an irrationally indifferent fixed point. Denote by $\tau$ the number $f'(0)$. Then the following holds:\ $(i)$ If there exist positive numbers $M$ and $k$ such that $|\tau^n-1|^{-1}\leq M\cdot n^k$ holds for each integer $n$, then $[C]$ is semi-positive.\ $(ii)$ If there is a number $A >1$ such that $\liminf_{\ell\to\infty}A^\ell\cdot|1-\tau^{\ell}|^{\frac{1}{d^\ell-1}}= 0$, then the singular Hermitian metric $|f_C|^{-2}$ is a metric on $[C]$ with the mildest singularities among singular Hermitian metrics $h$ on $[C]$ with semi-positive curvature such that $|f_C|_h$ is continuous around $C$, where $f_C\in H^0(X, [C])$ is a section whose zero divisor is $C$. In particular, there exists a pair $(C, X)$ of type $(\gamma)$ such that $[C]$ is not semi-positive.
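The arithmetic condition in Corollary \[cor:main\] $(i)$ can be illustrated numerically. For $\tau=e^{2\pi\mathrm{i}\theta}$ with $\theta$ the golden mean (a badly approximable irrational), the quantity $n\cdot|\tau^n-1|$ stays bounded away from zero, so $|\tau^n-1|^{-1}\leq M\cdot n$ holds with $k=1$. The following sketch checks this; the choice of $\theta$ and the range of $n$ are illustrative and not taken from the paper.

```python
import cmath
import math

# Check |tau^n - 1|^{-1} <= M * n^k with k = 1 for tau = e^{2 pi i theta},
# theta the golden mean (badly approximable irrational).
theta = (math.sqrt(5) - 1) / 2
tau = cmath.exp(2j * math.pi * theta)

# n * |tau^n - 1| should stay bounded away from 0; its infimum over
# n <= 2000 then yields M = 1/worst in the corollary's inequality.
worst = min(n * abs(tau ** n - 1) for n in range(1, 2001))
assert worst > 1.0   # Hurwitz-type bound: no near-resonance occurs
```

For a Liouville-type $\theta$, by contrast, $n\cdot|\tau^n-1|$ drops arbitrarily close to zero along the denominators of the good rational approximations, which is the regime of Corollary \[cor:main\] $(ii)$.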
As an application of Corollary \[cor:main\], we construct a family of pairs of a surface and a line bundle defined on the surface whose semi-positivity varies pathologically depending on the parameter (see Example \[ex:cor\]). The organization of the paper is as follows. In §2, we prove the existence and uniqueness (up to shrinking $X$) of the pair $(C, X)$ for a fixed holonomy function $f\in\mathcal{O}_{\mathbb{C}, 0}$. In §3, we prove Theorem \[thm:main\] and Corollary \[cor:main\]. In §4, we give some examples. The author would like to give heartfelt thanks to Prof. Shigeharu Takayama and Prof. Tetsuo Ueda, whose comments and suggestions were of inestimable value. He also thanks Prof. Masanori Adachi and Dr. Ryosuke Nomura for helpful comments and warm encouragement. He is supported by the Grant-in-Aid for Scientific Research (KAKENHI No.25-2869). Construction and uniqueness of $(C, X)$ ======================================= Construction of $(C, X)$ {#subsection:constr} ------------------------ Let $f$ be a holomorphic function defined on a neighborhood $\Omega$ of $0$ in $\mathbb{C}$ such that $f(0)=0$ and $f'(0)\not=0$, and $C=\mathbb{C}^*/\sim_\lambda$ be a smooth elliptic curve, where $0<\lambda<1$ is a constant and $\sim_\lambda$ is the relation on $\mathbb{C}^*:=\mathbb{C}\setminus\{0\}$ generated by $z\sim_\lambda \lambda\cdot z$. We denote by $p$ the natural map $\mathbb{C}^*\to C$. In this subsection, we construct a smooth complex surface $X$ as in Theorem \[thm:main\]: i.e.
the surface $X$ which includes $C$ as a submanifold, admits a non-singular holomorphic foliation $\mathcal{F}$ on a neighborhood of $C$ in $X$ such that $C$ is a leaf of $\mathcal{F}$, and satisfies the following two conditions $(1)$ and $(2)$: $(1)$ there exists a neighborhood $W$ of $C$ and a holomorphic submersion $\pi\colon W\to C$ such that $\pi|_C$ is the identity map, and $(2)$ the holonomy morphism ${\rm Hol}_C\colon\pi_1(C, *)\to \mathcal{O}_{\mathbb{C}, 0}$ satisfies $${\rm Hol}_C(\gamma_1)(\xi)=\xi,\ {\rm Hol}_C(\gamma_2)(\xi)=f(\xi),$$ where $\gamma_1:=[p(\{|z|=1\})]\in\pi_1(C, *)$ and $\gamma_2:=[p(\{z\in\mathbb{R}\mid \lambda\leq z\leq 1\})]\in\pi_1(C, *)$. First, fix a sufficiently small neighborhood $U_0$ of $0$ in $\Omega$ such that $f|_{U_0}$ is injective. Denoting by $V_0$ the image $f(U_0)\subset \mathbb{C}$, let us consider the sets $V_1:=U_0\cap V_0$ and $U_1:=(f|_{U_0})^{-1}(V_1)$. In what follows, we regard $f$ as an isomorphism from $U_1$ to $V_1$. Fixing a sufficiently small positive constant $\varepsilon_0$, define the constants $\lambda_1$ and $\lambda_2$ by $\lambda_1:=1-\varepsilon_0, \lambda_2:=1+\varepsilon_0$. Denote by $X_1$ the set $p(\{z\in\mathbb{C}\mid \lambda<|z|<1\})\times U_0$, and by $X_2$ the set $p(\{z\in\mathbb{C}\mid \lambda_1<|z|<\lambda_2\})\times U_1$. Next, we construct $X$ by gluing $X_1$ and $X_2$ as follows. Let us denote by $X^-_1$ the subset $p(\{z\in\mathbb{C}\mid \lambda_1<|z|<1\})\times U_1$ of $X_1$, and by $X^-_2$ the subset $p(\{z\in\mathbb{C}\mid \lambda_1<|z|<1\})\times U_1$ of $X_2$. We glue them by the isomorphism $i^-\colon X^-_2\to X^-_1$ defined by $i^-(p(z), \xi):=(p(z), \xi)$. Denote by $X^+_1$ the subset $p(\{z\in\mathbb{C}\mid 1<|z|<\lambda_2\})\times V_1$ of $X_1$, and by $X^+_2$ the subset $p(\{z\in\mathbb{C}\mid 1<|z|<\lambda_2\})\times U_1$ of $X_2$. We glue them by the isomorphism $i^+\colon X^+_2\to X^+_1$ defined by $i^+(p(z), \xi):=(p(z), f(\xi))$.
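The holonomy along $\gamma_2$ can be read off directly from this gluing: transporting the transverse coordinate $\xi$ once around $\gamma_2$ crosses the seam glued by $i^-$ (which leaves $\xi$ unchanged) and the seam glued by $i^+$ (which applies $f$), so the first-return map is $f$ itself, as condition $(2)$ requires. A minimal numerical sketch, with an illustrative germ $f$ not taken from the paper:

```python
# Transition maps of the gluing, acting on the transverse coordinate xi.
tau = 0.5 + 0.1j

def f(xi):
    # illustrative holonomy germ with f(0) = 0 and f'(0) = tau
    return tau * xi + xi ** 2

def cross_i_minus(xi):
    # seam over lambda_1 < |z| < 1: i^- leaves xi unchanged
    return xi

def cross_i_plus(xi):
    # seam over 1 < |z| < lambda_2: i^+ sends xi to f(xi)
    return f(xi)

def holonomy_gamma2(xi):
    # one full turn around gamma_2 crosses both seams exactly once
    return cross_i_plus(cross_i_minus(xi))

xi0 = 0.01 + 0.02j
assert holonomy_gamma2(xi0) == f(xi0)   # Hol_C(gamma_2) = f
assert holonomy_gamma2(0.0) == 0.0      # the leaf xi = 0, i.e. C, is fixed
```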
Then $X_1$ and $X_2$ are glued together by the morphisms $i^+$ and $i^-$ above to define a smooth complex surface, by which we define $X$. Finally, we will check that this $X$ satisfies the conditions above. Note that the first projections $X_1\to p(\{z\in\mathbb{C}\mid \lambda<|z|<1\})$ and $X_2\to p(\{z\in\mathbb{C}\mid \lambda_1<|z|<\lambda_2\})$ glue together to define a global map $\pi\colon X\to C$. As this morphism $\pi$ is a holomorphic submersion, we can check the condition $(1)$ (the condition on $\pi|_C$ can be easily checked by the following construction of the submanifold $C\subset X$. Note also that, in this construction, $W$ can be taken as $X$ itself). Next, we will define a foliation $\mathcal{F}$ on $X$. Let $\mathcal{F}_1$ be the foliation on $X_1$ whose leaves are $\{\{(p(z), \xi)\in X_1\mid \xi=c\}\}_{c\in U_0}$, and $\mathcal{F}_2$ be the foliation on $X_2$ whose leaves are $\{\{(p(z), \xi)\in X_2\mid \xi=c\}\}_{c\in U_1}$. These two foliations are glued by the morphisms $i^+$ and $i^-$ above to define the foliation on $X$, which we denote by $\mathcal{F}$. As $f(0)=0$, the leaves $\{(p(z), \xi)\in X_1\mid \xi=0\}$ and $\{(p(z), \xi)\in X_2\mid \xi=0\}$ glue together to define a compact connected leaf of $\mathcal{F}$, which is naturally isomorphic to $C$. We regard this compact leaf as a submanifold $C\subset X$. From this construction, one can easily check the condition $(2)$. Uniqueness of $(C, X)$ ---------------------- Here we will show the following: \[prop:uniq\] Let $f$ be an element of $\mathcal{O}_{\mathbb{C}, 0}$ such that $f(0)=0$ and $f'(0)\not=0$ and $C$ be a smooth elliptic curve. Let $X'$ be a surface as in Theorem \[thm:main\]: i.e.
$X'$ includes $C$ as a submanifold, admits a non-singular holomorphic foliation $\mathcal{F}'$ on a neighborhood of $C$ in $X'$ such that $C$ is a leaf of $\mathcal{F}'$, and satisfies the following two conditions $(1)$ and $(2)$: $(1)$ there exists a neighborhood $W'$ of $C$ and a holomorphic submersion $\pi'\colon W'\to C$ such that $\pi'|_C$ is the identity map, and $(2)$ the holonomy morphism ${\rm Hol}'_C\colon\pi_1(C, *)\to \mathcal{O}_{\mathbb{C}, 0}$ satisfies $${\rm Hol}'_C(\gamma_1)(\xi)=\xi,\ {\rm Hol}'_C(\gamma_2)(\xi)=f(\xi),$$ where $\gamma_1$ and $\gamma_2$ are those in §\[subsection:constr\]. Then, by shrinking $U_0$ in §\[subsection:constr\] if necessary, there exists a holomorphic map $j\colon X\to W'$ such that $j$ is an isomorphism onto its image, $j(C)=C\subset X'$, $j$ preserves the foliation structures, and $\pi'\circ j=\pi$ holds, where $(X, \pi, \mathcal{F})$ are those in §\[subsection:constr\]. By shrinking $X'$, we may assume that $W'=X'$. Denote by $p$ the point $p((1+\lambda_2)/2)\in C$, and fix an embedding $\pi'^{-1}(p)\to\mathbb{C}$ (by shrinking $W'$ if necessary). We may also assume that $U_0$ is small enough so that we can regard it as a subset of $\pi'^{-1}(p)$: $U_0\subset \pi'^{-1}(p)$. Let $h_1\colon \pi'^{-1}(p(\{\lambda<|z|<1\}))\to \mathbb{C}$ be the leafwise constant holomorphic extension of the inclusion $\pi'^{-1}(p)\to\mathbb{C}$, and $h_2\colon \pi'^{-1}(p(\{\lambda_1<|z|<\lambda_2\}))\to \mathbb{C}$ be the leafwise constant holomorphic extension of the inclusion $\pi'^{-1}(p)\to\mathbb{C}$ (note that the condition on the holonomy along $\gamma_1$ is needed for the existence of $h_1$ and $h_2$).
Letting $X_1':=\{q\in \pi'^{-1}(p(\{\lambda<|z|<1\}))\mid h_1(q)\in U_0\}$ and $X_2':=\{q\in \pi'^{-1}(p(\{\lambda_1<|z|<\lambda_2\}))\mid h_2(q)\in V_1\}$, consider the maps $$X_1'\to X_1\colon q\to (\pi'(q), h_1(q)),\ X_2'\to X_2\colon q\to (\pi'(q), (f|_{U_1})^{-1}(h_2(q))).$$ As these maps are holomorphic and bijective, and as $\pi'$ is a submersion, we can conclude that both of these two maps are isomorphisms. By using the condition on the holonomy along $\gamma_2$, we can easily show that the subset $X_1'\cup X_2'$ of $X'$ is isomorphic to $X$, and the proposition holds. Proof ===== Proof of Theorem \[thm:main\] {#proof-of-theorem-thmmain .unnumbered} ----------------------------- Let $f\in\mathcal{O}_{\mathbb{C}, 0}$ be an element such that $f(0)=0$ and $\tau:=f'(0)\not=0$. From Proposition \[prop:uniq\], it is sufficient to show Theorem \[thm:main\] for the $(C, X, \mathcal{F}, \pi)$ we constructed in §\[subsection:constr\]. ### Proof of Theorem \[thm:main\] $(i)$ {#proof-of-theorem-thmmain-i .unnumbered} Assume that $0$ is a repelling fixed point, an attracting fixed point, or a Siegel fixed point of $f$. According to [@Be Theorem 6.3.2, 6.6.2] and the comment near the proof of [@Be Theorem 6.4.1], there exists an element $h\in\mathcal{O}_{\mathbb{C}, 0}$ such that $h(0)=0$, $h'(0)=1$, and $f(h(\xi))= h(\tau\cdot\xi)$ holds. Thus, without loss of generality, we may assume that $f$ is linear: $f(\xi)=\tau\cdot \xi$. First we define a function $\Phi$ on $X$ as follows: Define the function $\Phi_1$ on $X_1$ by $$\Phi_1(p(z), \xi) := \log|\xi|+a\cdot \log|z|,$$ where $a:=\frac{-\log |\tau|}{\log\lambda}$ and $z$ is a complex number such that $\lambda<|z|<1$ and $\xi\in U_0$. Also define the function $\Phi_2$ on $X_2$ by $$\Phi_2(p(z), \xi) := \log|\xi|+a\cdot \log|z|$$ for each $z$ such that $\lambda_1<|z|<\lambda_2$ and $\xi\in U_1$. It is clear that $(i^-)^*\Phi_1=\Phi_2|_{X^-_2}$ holds.
The equality $(i^+)^*\Phi_1=\Phi_2|_{X^+_2}$ can also be shown by the following calculation: for each $z$ such that $1<|z|<\lambda_2$, $$(i^+)^*\Phi_1(p(z), \xi)=\Phi_1(p(z), f(\xi))= \log|f(\xi)|+a\log|\lambda\cdot z|= \Phi_2(p(z), \xi).$$ We have now shown that the functions $\Phi_1$ and $\Phi_2$ glue together to define a function, which we denote by $\Phi$. Clearly $\Phi$ is harmonic with $\Phi(p)=-\log{\rm dist}\,(p, C)+O(1)$ as $p\to C$, which shows Theorem \[thm:main\] $(i)$. ### Proof of Theorem \[thm:main\] $(ii)$ {#proof-of-theorem-thmmain-ii .unnumbered} Assume that $f$ is non-linear and that $0$ is a rationally indifferent fixed point of $f$. Then we may assume that $f$ has the expansion $f(\xi)=\tau\cdot\xi+A\cdot \xi^{n+1}+O(\xi^{n+2})$ for some integer $n$ $(A\not=0)$. Let us consider the cohomology class $$u_n:=[\{(X^-_2\cap C, 0), (X^+_2\cap C, \tau^{-1}\cdot A)\}]\in H^1(C, N_{C/X}^{-n}).$$ If $u_n=0$ holds, then there exists a $0$-cochain $\{(X_1\cap C, h_1), (X_2\cap C, h_2)\}\in C^0(C, N_{C/X}^{-n})$ such that $\delta\{(X_1\cap C, h_1), (X_2\cap C, h_2)\}=\{(X^-_2\cap C, 0), (X^+_2\cap C, \tau^{-1}\cdot A)\}$: i.e. $$h_1= \begin{cases} h_2 & ({\rm on}\ X^-_2\cap C) \\ \tau^{n}h_2-\tau^{-1}\cdot A & ({\rm on}\ X^+_2\cap C) \end{cases}$$ holds. Without loss of generality, we may assume that both $h_1$ and $h_2$ are constant functions (this is because, if $\tau^n\not=1$, then we can use the constant functions $h_1=h_2=(\tau^n-1)^{-1}\cdot \tau^{-1}\cdot A$, and if $\tau^n=1$, then the holomorphic functions $\exp(2\textrm{i}\pi\tau A^{-1}\cdot h_1)$ and $\exp(2\textrm{i}\pi\tau A^{-1}\cdot h_2)$ glue up to define a global non-vanishing holomorphic function on $C$, which shows that $h_1$ and $h_2$ are constants). Note that one can deduce directly from the above that the constant $h_1$ satisfies $h_1= \tau^{n}h_1-\tau^{-1}\cdot A$.
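The relation $h_1=\tau^{n}h_1-\tau^{-1}\cdot A$ is precisely the condition under which the coordinate change $h(\xi)=\xi+h_1\cdot\xi^{n+1}$ removes the degree-$(n+1)$ term of $f$. This cancellation can be checked numerically; the following sketch takes $n=1$ and illustrative values of $\tau$ and $A$ (any $\tau$ with $\tau^n\not=1$ works):

```python
# Check that h(xi) = xi + h1*xi^{n+1}, with h1 solving
# h1 = tau^n h1 - tau^{-1} A, kills the degree-(n+1) term of
# f(xi) = tau*xi + A*xi^{n+1} + ...  (sketch with n = 1).
tau = 0.3 + 0.4j          # illustrative, tau^1 != 1
A = 2.0 - 1.0j            # illustrative nonzero coefficient
h1 = A / (tau * (tau - 1))   # = (tau^n - 1)^{-1} tau^{-1} A for n = 1

# f(h(xi)) modulo xi^3 has coefficients (c1, c2):
c1, c2 = tau, tau * h1 + A
# h^{-1}(eta) = eta - h1*eta^2 + O(eta^3); compose modulo xi^3:
new_c2 = c2 - h1 * c1 ** 2
assert abs(new_c2) < 1e-12      # the xi^2 term is gone
assert abs(c1 - tau) < 1e-12    # the linear term stays tau
```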
Let us set $$h(\xi):=\xi+h_1\cdot\xi^{n+1}.$$ Then $$\begin{aligned} h^{-1}f(h(\xi))&=&h^{-1}(\tau\cdot(\xi+h_1\cdot\xi^{n+1})+ A\cdot\xi^{n+1}+O(\xi^{n+2}))\nonumber \\ &=& h^{-1}(\tau\cdot\xi+(\tau\cdot h_1+A)\cdot\xi^{n+1}+O(\xi^{n+2}))\nonumber \\ &=& (\tau\cdot\xi+(\tau\cdot h_1+A)\cdot\xi^{n+1})-h_1\cdot(\tau\cdot\xi)^{n+1}+O(\xi^{n+2})\nonumber \\ &=& \tau\cdot\xi+\tau\cdot((1-\tau^{n})\cdot h_1+\tau^{-1}A)\cdot\xi^{n+1}+O(\xi^{n+2})\nonumber \\ &=& \tau\cdot\xi+O(\xi^{n+2})\nonumber\end{aligned}$$ holds. Thus, by using the new coordinate $\xi ':=h(\xi)$ instead of $\xi$, we can expand $f$ as $f(\xi')=\tau\cdot\xi'+O(\xi'^{n+2})$, and thus we can define the cohomology class $u_{n+1}$ in the same manner. Note that the class $u_n$ defined as above is equal to the trivial element $0\in H^1(C, N_{C/X}^{-n})$ if and only if the obstruction class $u_n(C, X)$ is trivial (this is clear from the definition of the obstruction class $u_n(C, X)$, see [@U83 §2.1], see also [@K3 §3]). Assume that $u_n\not=0$ holds for some integer $n$. In this case, as the pair $(C, X)$ is of type $(\alpha)$ in the classification by Ueda [@U83 §5], we can apply [@U83 Theorem 1] and its corollary, which proves Theorem \[thm:main\] $(ii)$. Therefore all we have to do is to show that there actually exists an integer $n$ such that $u_n\not=0$ holds. Assume that $u_n=0$ for all $n$. Let $m\geq 1$ be an integer such that $\tau^m=1$. Then, from the assumption, $f^m$ can be expanded as below: $$f^m(\xi)=\xi+A\cdot \xi^{n+1}+O(\xi^{n+2}),$$ where $A$ is a non-zero constant. Since $u_1=u_2=\dots=u_n=0$, we can choose a suitable polynomial $h(\xi)=\xi+O(\xi^2)$ such that $h^{-1}\circ f\circ h(\xi)=\tau\cdot\xi+O(\xi^{n+2})$. Let $$h^{-1}(\eta)=\eta+\sum_{\nu=2}^\infty b_\nu\cdot\eta^\nu$$ be the expansion of the inverse function $h^{-1}$ of $h$ around $0$.
Then we can calculate that $$\begin{aligned} h^{-1}\circ f^m\circ h(\xi) &=& h^{-1}(h(\xi)+A\cdot (h(\xi))^{n+1}+O(\xi^{n+2}))\nonumber \\ &=& h^{-1}(h(\xi)+A\cdot \xi^{n+1}+O(\xi^{n+2}))\nonumber \\ &=& (h(\xi)+A\cdot \xi^{n+1}+O(\xi^{n+2}))+\sum_{\nu=2}^\infty b_\nu\cdot (h(\xi)+A\cdot \xi^{n+1}+O(\xi^{n+2}))^\nu\nonumber \end{aligned}$$ $$\begin{aligned} &=& (h(\xi)+A\cdot \xi^{n+1}+O(\xi^{n+2}))+\sum_{\nu=2}^\infty b_\nu\cdot (h(\xi)^\nu+O(\xi^{n+2}))\nonumber \\ &=& h^{-1}(h(\xi))+A\cdot \xi^{n+1}+O(\xi^{n+2})=\xi+A\cdot \xi^{n+1}+O(\xi^{n+2}), \nonumber \end{aligned}$$ and also that $$\begin{aligned} h^{-1}\circ f^m\circ h(\xi) &=& (h^{-1}\circ f\circ h)^m(\xi)\nonumber \\ &=& (h^{-1}\circ f\circ h)^{m-1}(\xi+O(\xi^{n+2})) = \dots = \xi+O(\xi^{n+2}), \nonumber \end{aligned}$$ which is a contradiction. ### Proof of Theorem \[thm:main\] $(iii)$ {#proof-of-theorem-thmmain-iii .unnumbered} Assume that $f$ is a polynomial of degree $d$ and that, for each neighborhood $\Omega$ of $0$, there exists a periodic cycle of $f$ included in $\Omega\setminus\{0\}$. Let $g$ be the Green function of the filled Julia set $K(f)$. Note that $g|_{K(f)}\equiv 0$, $f^*g=d\cdot g$, and that $g|_{I(f)}$ is a harmonic function valued in $\mathbb{R}_{>0}$, where $I(f):=\mathbb{C}\setminus K(f)=\{\xi\in\mathbb{C}\mid f^n(\xi)\to\infty\ {\rm as}\ n\to \infty\}$. Note also that $g$ is Hölder continuous (see [@CG §VIII, Theorem 3.2]). \[lem:cremer\] The point $0$ lies in the boundary of the set $\{\xi\in\mathbb{C}\mid g(\xi)=0\}$. As the total number of non-repelling cycles of $f$ is finite (see [@CG §III Theorem 2.7]) and every repelling cycle of $f$ lies in $J(f)$ (see [@Be Theorem 6.4.1]), we can conclude from the assumption that $0\in J(f)$. Thus the lemma follows from the fact that $J(f)$ coincides with the boundary $\partial K(f)$ of $K(f)$ (see [@CG §III.4]).
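The properties of the Green function $g$ used here ($g|_{K(f)}\equiv 0$ and $f^*g=d\cdot g$) can be checked via the standard escape-rate formula $g(\xi)=\lim_{n\to\infty} d^{-n}\log^+|f^n(\xi)|$. A numerical sketch for the illustrative polynomial $f(\xi)=\xi^2$, where $d=2$, $K(f)$ is the closed unit disc, and $g(\xi)=\log^+|\xi|$ (the iteration bounds are illustrative choices, not from the paper):

```python
import math

def f(xi):
    # illustrative degree-2 polynomial; K(f) is the closed unit disc
    return xi * xi

def green(xi, d=2, n_iter=60, escape=1e8):
    # escape-rate approximation g(xi) = lim_n d^{-n} log^+ |f^n(xi)|
    for n in range(n_iter):
        if abs(xi) > escape:
            return math.log(abs(xi)) / d ** n
        xi = f(xi)
    return 0.0       # never escaped: xi lies in K(f), so g(xi) = 0

assert green(0.5) == 0.0                               # g vanishes on K(f)
assert abs(green(2.0) - math.log(2.0)) < 1e-9          # g = log|xi| outside
assert abs(green(f(1.5)) - 2 * green(1.5)) < 1e-9      # f^* g = d * g
```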
Let $W'$ be a neighborhood of $C$ and $\psi\colon W'\to (-\infty, \infty]$ be a continuous function whose restriction $\psi|_{W'\setminus C}$ is psh. By shrinking $W'$, we may assume that $\psi$ is bounded from above on a neighborhood of $\partial W'$. Assuming that $\psi$ is not bounded from above, we will derive a contradiction. By shrinking $U_0$ in §\[subsection:constr\] if necessary, we will assume that $W'=X$. First we construct the function $G$ on $X$ as follows: Setting $a:=\frac{-\log d}{\log\lambda}$, define the function $G_1$ on $X_1$ by $$G_1(p(z), \xi) := g(\xi)\cdot |z|^a$$ for $z$ such that $\lambda<|z|<1$ and $\xi\in U_0$. Similarly, define the function $G_2$ on $X_2$ by $$G_2(p(z), \xi) := g(\xi)\cdot |z|^a$$ for $z$ such that $\lambda_1<|z|<\lambda_2$ and $\xi\in U_1$. Note that $(i^-)^*G_1=G_2|_{X^-_2}$. Note also that $(i^+)^*G_1=G_2|_{X^+_2}$ holds, since $$(i^+)^*G_1(p(z), \xi)=G_1(p(z), f(\xi))=g(f(\xi))\cdot |\lambda\cdot z|^a=d\lambda^a\cdot g(\xi) \cdot|z|^a=g(\xi) \cdot|z|^a$$ holds for each $z$ such that $1<|z|<\lambda_2$. Thus $G_1$ and $G_2$ glue together to define a function on $X$, by which we define $G$. By shrinking $U_0$ if necessary, we may assume that $G< 1$ holds on $X$. We also assume that $\psi<0$ holds on a neighborhood of the boundary $\partial X$ of $X$ by replacing $\psi$ with $\psi-M$ for a sufficiently large real number $M$ if necessary. Then the following lemma holds: \[lemma:key\] There exists a connected leaf $L$ of $\mathcal{F}$ such that $\overline{L}\cap \partial X\not=\emptyset$ and there exists an interior point $p\in L$ such that $p$ attains the maximum value $B$ of the function $H:=\frac{\psi}{-\log G}$ on $L$. Moreover, $B$ is a positive real number. Consider the function $$H^*(q):=\limsup_{\zeta\to q}H(\zeta),$$ which is an upper semi-continuous extension of $H$.
Then, as the function $\psi$ is locally bounded from above on $X\setminus C$, it is clear that $H^*(q)=0$ holds for each point $q\in\{G=0\}\setminus C$. From the assumption, there exists a point $p_0\in C$ such that $\psi(p_0)=\infty$ holds. Fix a sufficiently small neighborhood $U_2$ of $p_0$ in $\pi^{-1}(\pi(p_0))$ such that $\psi|_{U_2}>0$ and regard it as a subset of $U_1(\subset U_0)$ (here the continuity assumption for $\psi$ is needed). From the assumption and the fact that the total number of non-repelling cycles of $f$ is finite ([@CG §III Theorem 2.7]), there exists a repelling cycle $\{f(\eta), f^2(\eta), \dots, f^m(\eta)=\eta\}\subset U_2\setminus \{0\}$. Fix a sequence $\{\eta_n\}_{n\geq 0}\subset U_1\setminus K(f)$ such that $f(\eta_{n+1})=\eta_n$ and the set of all accumulation points of $\{\eta_n\}_n$ coincides with the cycle $\{f^n(\eta)\}_n$ (see Lemma \[lem:leaf\] for the existence of such a sequence). Clearly there is a connected leaf $L$ of $\mathcal{F}$ such that, for sufficiently large $n$, the point $\eta_n\in U_2\subset\pi^{-1}(\pi(p_0))$ lies in $L$. Note that $\overline{L}\cap \partial X\not=\emptyset$ follows immediately from $\eta_0\in U_1\setminus K(f)\subset I(f)$. Since the function $\psi|_L$ is negative around $\overline{L}\cap \partial X$ and $H^*(f^n(\eta))=0$ holds for each $n$, the set $\{q\in L\mid H^*(q)>0\}$ is a relatively compact subset of $L$ (note that, as $\psi|_{U_2}>0$ holds, the set $\{q\in L\mid H^*(q)>0\}$ is not empty). Thus it follows from the upper semi-continuity of the function $H^*|_L$ that there exists a point $p\in\{q\in L\mid H^*(q)>0\}$ which attains the maximum $B>0$ of the function $H^*|_L$. \[lem:leaf\] Let $\eta\in J(f)\cap U_1$ be a point included in a repelling cycle of $f$. Then there exists a sequence $\{\eta_n\}_{n\geq 0}\subset U_1\setminus K(f)$ such that $f(\eta_{n+1})=\eta_n$ and the set of all accumulation points of $\{\eta_n\}_n$ coincides with the cycle $\{f^n(\eta)\}_n$.
Let $m$ be the minimum positive integer which satisfies $f^m(\eta)=\eta$. Then $\eta$ is a repelling fixed point of the polynomial $f^m\colon \mathbb{C}\to\mathbb{C}$. Thus, by choosing a suitable coordinate $\xi'$ of $\mathbb{C}$ such that $\xi'(\eta)=0$, we may assume that $f^m(\xi')=\tau_\eta\cdot \xi'$ for some complex number $\tau_\eta$ such that $|\tau_\eta|>1$ (see the comment near the proof of [@Be Theorem 6.4.1]). Fix a point $\xi_0\in U_1\setminus K(f)$ which is sufficiently close to the point $\eta$ and define the points $\eta_{j\cdot m}$ by $\xi'(\eta_{j\cdot m})=\tau_\eta^{-j}\cdot\xi'(\xi_0)$ for each integer $j$. For each integer $j\geq1$ and each $n\in\{0, 1, \dots, m-1\}$, set $\eta_{m\cdot j-n}:=f^n(\eta_{m\cdot j})$. Then one can easily check the equation $f(\eta_{n+1})=\eta_n$. Let $\xi_1$ be an accumulation point of the sequence $\{\eta_n\}$. Then there exists a subsequence $\{n_k\}_k\subset\mathbb{Z}_{\geq 0}$ such that $\lim_{k\to\infty}\eta_{n_k}=\xi_1$ holds. Fix an integer $\ell\in\{0, 1, \dots, m-1\}$ such that the subsequence $\{k_j\}_j:=\{k\mid n_k\equiv -\ell\ {\rm mod}\ m\}$ is infinite. Letting $n_{k_j}=\nu_j\cdot m-\ell$, we can calculate $$\xi_1=\lim_{j\to\infty}\eta_{n_{k_j}} =\lim_{j\to\infty}\eta_{\nu_j\cdot m-\ell} =\lim_{j\to\infty}f^{\ell}\left(\eta_{\nu_j\cdot m}\right) =f^{\ell}\left(\lim_{j\to\infty}\eta_{\nu_j\cdot m}\right) =f^\ell(\eta),$$ which shows the lemma. Let $L, p, B$ be those in Lemma \[lemma:key\]. As $B$ is the maximum of the function $H=\frac{\psi}{-\log G}$ on $L$, the inequality $\psi+B\cdot\log G\leq 0$ holds on $L$. As the function $\psi$ is psh and the function $(\log G)|_L$ is harmonic, we can conclude that $(\psi+B\cdot\log G)|_L$ is a subharmonic function on $L$. Since $\psi(p)+B\cdot\log G(p)=0$ holds and $p$ is an interior point of $L$, one can use the maximum principle to show that $(\psi+B\cdot\log G)|_L\equiv 0$ holds.
Thus $$\psi|_L\equiv(-B\cdot\log G)|_L$$ holds, which leads to a contradiction, since $\psi|_L<0$ holds around $\overline{L}\cap\partial X$ and $-B\cdot\log G>0$ holds at every point of $X$. Here we give another (simplified) proof of Theorem \[thm:main\] $(iii)$, which was communicated to us by Professor Tetsuo Ueda. Let $\psi$ be a psh function defined on $W'\setminus C$. Fix a neighborhood $Y$ of $C$ in $W'$ such that $\overline{Y}\subset W'$ and $Y\subset W$. Set $M := \sup_{\partial Y} \psi$ and fix a compact leaf $\Gamma$ of $\mathcal{F}$ such that $\Gamma\cap\pi^{-1}(p(1))\subset U_1$ is a repelling cycle of $f$. Let $L$ be a leaf of $\mathcal{F}$ which accumulates to $\Gamma$ (this $L$ can be constructed in the same manner as in the above proof of Theorem \[thm:main\] $(iii)$). Fix a holomorphic map $g \colon \{ z\in\mathbb{C}\mid 0 < |z| \leq R\} \to L$ for some $R>0$ which is an isomorphism onto its image such that $g(\{|z|=R\}) \subset Y$ and $g(z)\to \Gamma$ as $z\to 0$. As the function $\psi\circ g$ is a psh function defined on $\{z\in\mathbb{C}\mid 0<|z|<R\}$ bounded from above, we can extend it and regard it as a psh function defined on $\{z\in\mathbb{C}\mid |z|<R\}$. Thus we can use the maximum principle to conclude that $\psi(g(z)) \leq M$ holds for each $z$ such that $|z|<R$. Therefore we obtain the inequality $\psi |_{L\cap Y} \leq M$. From the assumption, such a leaf $\Gamma$ exists in any neighborhood of $C$ in $X$. Thus we can conclude from the above inequality that $\liminf_{q\to p}\psi(q) \leq M$ holds for each point $p \in C$, which shows Theorem \[thm:main\] $(iii)$. Proof of Corollary \[cor:main\] {#proof-of-corollary-cormain .unnumbered} ------------------------------- ### Proof of Corollary \[cor:main\] $(i)$ {#proof-of-corollary-cormain-i .unnumbered} In this case, $0$ is a Siegel fixed point of $f$ and thus there exists a function $\Phi$ as in the proof of Theorem \[thm:main\] $(i)$.
By using this function $\Phi$, we can conclude that there exists a flat metric on $[C]|_W$ for some neighborhood $W$ of $C$, which clearly has semi-positive curvature. By using this flat metric, we can construct a smooth Hermitian metric on $[C]$ with semi-positive curvature by the same arguments as in the proof of [@K2 Corollary 3.4]. ### Proof of Corollary \[cor:main\] $(ii)$ {#proof-of-corollary-cormain-ii .unnumbered} In this case, there exists a periodic cycle of $f$ included in $\Omega\setminus\{0\}$ for each neighborhood $\Omega$ of $0$ (see [@U83 §5.4]). Let $h$ be a singular Hermitian metric on $[C]$ with semi-positive curvature such that $|f_C|_h$ is continuous around $C$, where $f_C$ is a global holomorphic section of $[C]$ whose zero divisor coincides with the divisor $C$. Then, as the function $\psi:=-\log |f_C|_h^2$ can be regarded as a continuous function defined on a neighborhood of $C$ which is psh outside of $C$, we can conclude from Theorem \[thm:main\] $(iii)$ that there exists a positive constant $M$ such that $\psi<M$ holds on $X$, which shows the corollary (see also the proof of [@K3 Theorem 1.1]). Some examples ============= Let $p\colon \mathbb{C}^*\to C$ and $0<\lambda<1$ be those in §\[subsection:constr\]. Consider the rank-$2$ vector bundle $E\to C$ defined by $E:=(\mathbb{C}^*\times\mathbb{C}^2)/\sim$, where $\sim$ is the relation generated by $(z, x, y)\sim (\lambda\cdot z, x, x+y)$. Let $X$ be the ruled surface associated to $E$: $X:=\mathbb{P}(E)$. $X$ admits a non-singular holomorphic foliation $\mathcal{F}$ whose leaves are either $\{\{[(z, x, y)]\in X\mid y=(c+n)\cdot x\ \text{for some}\ n\in\mathbb{Z}\}\}_{c\in\mathbb{C}/\mathbb{Z}}$ or $\{[(z, x, y)]\in X\mid x=0\}$. As $\{[(z, x, y)]\in X\mid x=0\}$ is naturally isomorphic to $C$, let us regard it as $C$ embedded in $X$: $C\subset X$. Then $(X, C, \mathcal{F})$ enjoys the conditions in Theorem \[thm:main\].
In this case, the holonomy map $f(\xi)$ can be calculated as $f(\xi)=\frac{\xi}{1+\xi}$, which has $0$ as a rationally indifferent fixed point. Note that $E$ is a rank-$2$ degree-$0$ vector bundle which is the non-trivial extension of $\mathbb{I}_C$ by $\mathbb{I}_C$, where $\mathbb{I}_C$ is the trivial line bundle on $C$. According to [@N §6], this $X$ is essentially the unique example of a smooth projective surface in which a smooth elliptic curve can be embedded as a curve of type $(\alpha)$ in Ueda’s classification. Note also that this example is the same one as [@DPS94 Example 1.7] (see also [@E §4.1]). Let $C_0$ be a smooth elliptic curve embedded in the projective plane $\mathbb{P}^2$. Fix nine points $p_1, p_2, \dots, p_9$ on $C_0$ different from each other. Let us denote by $X$ the blow-up of $\mathbb{P}^2$ at $\{p_j\}_{j=1}^9$, and by $C$ the strict transform of $C_0$. A simple calculation shows that the degree of the normal bundle $N_{C/X}$ is equal to $0$, and thus $N_{C/X}$ is topologically trivial. By choosing $\{p_j\}_{j=1}^9$ in sufficiently general position, we may assume that $N_{C/X}$ is a non-torsion element of ${\rm Pic}^0(C)$: i.e. there does not exist an integer $\ell$ such that $N_{C/X}^\ell=\mathbb{I}_C$. We here remark that the neighborhood structure of $C$ and the semi-positivity of $[C]$ in this example have been deeply investigated [@Br] [@U83] (see also [@D §2]). In order to determine a minimal singular metric of $[C]$ in this example by using Corollary \[cor:main\], we are interested in whether there exists a configuration of nine points $\{p_j\}_{j=1}^9$ such that there exist a foliation $\mathcal{F}$ and a submersion $\pi$ which satisfy the conditions $(1)$ and $(2)$. Unfortunately, we cannot give any answer to this question (here we remark that the question on the existence of a holomorphic foliation on this $X$ has already been posed in [@DPS96]). \[ex:cor\] Let $C$ be a smooth elliptic curve.
Here we construct a holomorphic submersion $\widetilde{X}\to\Omega$ from a smooth complex manifold $\widetilde{X}$ of dimension three to a neighborhood $\Omega$ of $U(1):=\{\tau\in\mathbb{C}\mid |\tau|=1\}$ in $\mathbb{C}$ which satisfies the following conditions: $(a)$ there exists a submanifold $\widetilde{C}$ of $\widetilde{X}$ of dimension two such that the restriction $\widetilde{C}\to \Omega$ is a proper submersion and each fiber of this restricted map is isomorphic to $C$, $(b)$ $[\widetilde{C}]|_{X_\tau}$ is semi-positive for each $\tau\in\Omega\setminus U(1)$, where $X_\tau$ is the fiber over $\tau$, $(c)$ $[\widetilde{C}]|_{X_\tau}$ is also semi-positive for almost all $\tau\in U(1)$ in the sense of Lebesgue measure, and $(d)$ there exist uncountably many elements $\tau\in U(1)$ such that $[\widetilde{C}]|_{X_\tau}$ is not semi-positive. Fix a sufficiently small open neighborhood $\Omega$ of $U(1)$ in $\mathbb{C}^*$ and consider the function $F\colon\mathbb{C}\times \Omega\to \mathbb{C}\times \Omega$ defined by $F(\xi, \tau):=(\tau\cdot\xi+\xi^2, \tau)$. Fix also a sufficiently small neighborhood $\widetilde{U}_0$ of $\{0\}\times U(1)$ in $\mathbb{C}\times \Omega$ such that $F|_{\widetilde{U}_0}$ is locally isomorphic (note that the Jacobian determinant of $F$ at $(0, \tau)$ is $\tau$, which is non-zero for each $\tau\in \Omega$). We may assume that $\widetilde{U}_0=U_0\times \Omega$ holds for some neighborhood $U_0\subset \mathbb{C}$ of $0$. By shrinking $U_0$, we may assume that $F|_{\widetilde{U}_0}$ is injective and thus it is an isomorphism onto its image. Denoting by $\widetilde{V}_0$ the image $F(\widetilde{U}_0)\subset \mathbb{C}\times \Omega$, let us consider the sets $\widetilde{V}_1:=\widetilde{U}_0\cap \widetilde{V}_0$ and $\widetilde{U}_1:=(F|_{\widetilde{U}_0})^{-1}(\widetilde{V}_1)$. In what follows, we regard $F$ as an isomorphism from $\widetilde{U}_1$ to $\widetilde{V}_1$.
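The local claim used above, that the Jacobian determinant of $F$ at $(0,\tau)$ equals $\tau$ (and is hence non-vanishing on $\Omega\subset\mathbb{C}^*$), can be spot-checked numerically. This is an illustration only; the sample value of $\tau$ is ours.

```python
def F(xi, tau):
    # the gluing map of the example: F(xi, tau) = (tau*xi + xi**2, tau)
    return (tau * xi + xi ** 2, tau)

tau = complex(0.8, 0.6)   # a point of U(1), so tau != 0
h = 1e-6
# complex derivative dF_1/dxi at xi = 0 via a finite difference;
# the exact value is tau + 2*xi evaluated at xi = 0, i.e. tau
d = (F(h, tau)[0] - F(0.0, tau)[0]) / h
assert abs(d - tau) < 1e-5
# the second component of F is the identity in tau, so the Jacobian
# determinant at (0, tau) equals d, which is non-zero on Omega
```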
Let $p, \lambda, \lambda_1, \lambda_2$ be those in §\[subsection:constr\]. Denote by $\widetilde{X}_1$ the set $p(\{z\in\mathbb{C}\mid \lambda<|z|<1\})\times \widetilde{U}_0$, and by $\widetilde{X}_2$ the set $p(\{z\in\mathbb{C}\mid \lambda_1<|z|<\lambda_2\})\times \widetilde{U}_1$. We define a complex manifold $\widetilde{X}$ by gluing $\widetilde{X}_1$ and $\widetilde{X}_2$ in the same manner as in §\[subsection:constr\] by using the function $F$. Denote by $\widetilde{C}$ the submanifold defined by $C\times (\{0\}\times\Omega)\subset\widetilde{X}$ (here we are regarding $\{0\}\times\Omega$ as a subset of $\widetilde{U}_0$ and $\widetilde{U}_1$). The second projection $\widetilde{U}_0\to \Omega$ (and the restriction $\widetilde{U}_1\to\Omega$ of this map) induces the submersion $\widetilde{X}\to \Omega$. Clearly the fiber $X_\tau$ and the submanifold $\widetilde{C}\cap X_\tau\subset X_\tau$ satisfy the conditions $(1)$ and $(2)$ with the holonomy function $f(\xi)=\tau\cdot \xi+\xi^2$. Now we can easily check the conditions $(a)$, $(b)$, $(c)$, and $(d)$ by applying Corollary \[cor:main\] (here we also used [@Be Theorem 6.6.5] and the fact that there exist uncountably many elements $\tau\in U(1)$ which satisfy the condition as in Corollary \[cor:main\] [@C p. 155], see also [@U83 §5.4]). [99]{} <span style="font-variant:small-caps;">A. F. Beardon</span>, Iteration of Rational Functions, Graduate Texts in Mathematics, Springer-Verlag (1991), no. 132. <span style="font-variant:small-caps;">S. Boucksom</span>, Divisorial Zariski decompositions on compact complex manifolds, Ann. Sci. École Norm. Sup. (4) [**37**]{} (2004), no. 1, 45–76. <span style="font-variant:small-caps;">M. Brunella</span>, On Kähler surfaces with semipositive Ricci curvature, Riv. Mat. Univ. Parma, [**1**]{} (2010), 441–450. <span style="font-variant:small-caps;">C. Camacho, P. Sad</span>, Invariant varieties through singularities of holomorphic vector fields, Ann. of Math., [**115**]{} (1982), 579–595.
<span style="font-variant:small-caps;">L. Carleson, T.W. Gamelin</span>, Complex dynamics, Universitext: Tracts in Mathematics, Springer-Verlag, New York (1993). <span style="font-variant:small-caps;">H. Cremer</span>, Zum Zentrumproblem, Math. Ann., [**98**]{} (1928), 151–163. <span style="font-variant:small-caps;">J.-P. Demailly</span>, Structure Theorems for Compact Kähler Manifolds with Nef Anticanonical Bundles, Complex Analysis and Geometry, Springer Proceedings in Mathematics & Statistics, [**144**]{} (2015), 119–133. <span style="font-variant:small-caps;">J.-P. Demailly, T. Peternell, M. Schneider</span>, Compact complex manifolds with numerically effective tangent bundles, J. Alg. Geom., [**3**]{} (1994), 295–345. <span style="font-variant:small-caps;">J.-P. Demailly, T. Peternell, M. Schneider</span>, Compact Kähler manifolds with hermitian semipositive anticanonical bundle, Comp. Math., [**101**]{} (1996), 217–224. <span style="font-variant:small-caps;">J.-P. Demailly, T. Peternell, M. Schneider</span>, Pseudo-effective line bundles on compact Kähler manifolds, Internat. J. Math. [**12**]{} (2001), 689–741. <span style="font-variant:small-caps;">T. Eckl</span>, Numerically trivial foliations, Annales de l’Institut Fourier, [**54**]{}, Issue 4 (2004), 887–938. <span style="font-variant:small-caps;">T. Koike</span>, On minimal singular metrics of certain class of line bundles whose section ring is not finitely generated, to appear in Ann. Inst. Fourier (Grenoble), arXiv:1312.6402. <span style="font-variant:small-caps;">T. Koike</span>, On the minimality of canonically attached singular Hermitian metrics on certain nef line bundles, to appear in Kyoto J. Math., arXiv:1405.4698. <span style="font-variant:small-caps;">A. Neeman</span>, Ueda theory: theorems and problems, Mem. Amer. Math. Soc. [**81**]{} (1989), no. 415. <span style="font-variant:small-caps;">C. L. Siegel</span>, Iteration of analytic functions, Ann. of Math., [**43**]{} (1942), 607–612. <span style="font-variant:small-caps;">T. 
Suwa</span>, Indices of vector fields and residues of holomorphic singular foliations, Hermann (1998). <span style="font-variant:small-caps;">T. Ueda</span>, On the neighborhood of a compact complex curve with topologically trivial normal bundle, J. Math. Kyoto Univ., [**22**]{} (1983), 583–607.
--- author: - | \ Department of Theoretical Physics and IFIC,\ University of Valencia-CSIC,\ E-46100, Valencia, Spain.\ E-mail: - | David Ibanez\ Department of Theoretical Physics and IFIC,\ University of Valencia-CSIC,\ E-46100, Valencia, Spain.\ E-mail: title: | The effective gluon mass\ and its dynamical equation. --- Introduction ============ It is well-established by now that the dynamical generation of an effective gluon mass [@Cornwall:1981zr] explains in a natural and self-consistent way the infrared finiteness of the (Landau gauge) gluon propagator and ghost dressing function, observed in large-volume lattice simulations for both $SU(2)$ [@Cucchieri:2007md] and $SU(3)$ gauge groups [@Bogolubsky:2007ud]. Given the nonperturbative nature of the mass generation mechanism, the Schwinger-Dyson equations (SDEs) constitute the most natural framework for studying this phenomenon in the continuum [@Binosi:2007pi; @Alkofer:2000wg; @Fischer:2006ub; @Aguilar:2004sw]. Specifically, we will work in the framework provided by the synthesis of the pinch technique (PT) [@Cornwall:1981zr; @Cornwall:1989gv; @Binosi:2002ft] with the background field method (BFM) [@Abbott:1980hw], known in the literature as the PT-BFM scheme [@Binosi:2002ft; @Aguilar:2006gr]. Probably the most crucial theoretical ingredient for obtaining an infrared-finite gluon propagator out of the SDEs, *without* interfering with the gauge invariance of the theory, encoded in the BRST symmetry, is the existence of a set of special vertices, to be generically denoted by $V$ and called *pole vertices*. These vertices contain massless, longitudinally coupled poles, and must be added to the usual (fully dressed) vertices of the theory. They capture the underlying mass generation mechanism, which is none other than a non-Abelian realization of the Schwinger mechanism.
In addition to triggering the Schwinger mechanism, the massless poles contained in the pole vertices act as composite, longitudinally coupled Nambu-Goldstone bosons, maintaining gauge invariance and preserving the form of the Ward identities (WIs) and the Slavnov-Taylor identities (STIs) of the theory in the presence of a dynamically generated gluon mass. In fact, recent studies indicate that the QCD dynamics can indeed generate longitudinally coupled composite (bound-state) massless poles, which subsequently give rise to the required vertices $V$ [@Aguilar:2011xe; @Ibanez:2012zk]. At the level of the SDEs, the analysis finally boils down to the derivation of an integral equation, to be referred to as the *mass equation*, that governs the evolution of the dynamical gluon mass, $m^2(q^2)$, as a function of the momentum $q^2$. The main purpose of this presentation is to report on recent work [@Binosi:2012sj], where the complete mass equation has been obtained in the Landau gauge employing the *full* SDE of the gluon propagator and using as a guiding principle the special properties of the aforementioned vertices $V$ (for related studies in the Coulomb gauge see, *e.g.*, [@Szczepaniak:2001rg; @Epple:2007ut]). In this context, the detailed numerical solution of the full mass equation (for arbitrary values of the physical momentum) reveals the existence of *positive-definite* and *monotonically decreasing* solutions. The SDE of the gluon propagator =============================== The full gluon propagator $\Delta^{ab}_{\mu\nu}(q)=\delta^{ab}\Delta_{\mu\nu}(q)$ in the Landau gauge is given by the expression $$\label{prop} \Delta_{\mu\nu}(q)=-iP_{\mu\nu}(q)\Delta(q^2); \quad P_{\mu\nu}(q)=g_{\mu\nu}-\frac{q_\mu q_\nu}{q^2},$$ and the gluon dressing function, $J(q^2)$, of its *inverse* is defined through $$\label{gldressing} \Delta^{-1}(q^2)=q^2 J(q^2).$$ The usual starting point of our dynamical analysis is the SDE governing the gluon propagator.
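The transverse projector defined in Eq. (\[prop\]) satisfies $q^\mu P_{\mu\nu}(q)=0$ and $P_{\mu}{}^{\alpha}P_{\alpha\nu}=P_{\mu\nu}$; these properties underlie the transversality pattern discussed below. A quick numeric check (illustrative only, with an arbitrarily chosen off-shell momentum):

```python
g = [1.0, -1.0, -1.0, -1.0]   # diagonal of the Minkowski metric g_{mu nu}
q_up = [2.0, 0.3, -0.7, 1.1]   # an arbitrary (off-shell) momentum q^mu
q_dn = [g[m] * q_up[m] for m in range(4)]        # q_mu = g_{mu nu} q^nu
q2 = sum(q_up[m] * q_dn[m] for m in range(4))    # q^2 = q^mu q_mu

def P(m, n):
    # transverse projector P_{mu nu}(q) = g_{mu nu} - q_mu q_nu / q^2
    return (g[m] if m == n else 0.0) - q_dn[m] * q_dn[n] / q2

# transversality: q^mu P_{mu nu}(q) = 0 for every nu
for n in range(4):
    assert abs(sum(q_up[m] * P(m, n) for m in range(4))) < 1e-12

# idempotence: P_{mu}^{alpha} P_{alpha nu} = P_{mu nu}
# (for a diagonal metric with entries +-1, raising an index uses g^{aa} = g_{aa})
for m in range(4):
    for n in range(4):
        contracted = sum(g[a] * P(m, a) * P(a, n) for a in range(4))
        assert abs(contracted - P(m, n)) < 1e-12
```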
Specifically, within the PT-BFM formalism, one can consider the propagator connecting a quantum ($Q$) with a background ($B$) gluon, to be referred to as the $QB$ propagator and denoted by $\widetilde{\Delta}(q^2)$. The SDE of the above propagator is shown in Fig. \[QB-SDE\], and it may be related to the conventional $QQ$ propagator, $\Delta(q^2)$, connecting two quantum gluons, through the powerful background-quantum identity [@Binosi:2002ft; @Grassi:1999tp] $$\label{propBQI} \Delta(q^2) = [1+G(q^2)]\widetilde{\Delta}(q^2).$$ In this identity, the function $G(q^2)$ corresponds to the $g_{\mu\nu}$ form factor of a well-known two-point function [@Binosi:2007pi; @Binosi:2002ft; @Grassi:1999tp]. Then, the corresponding version of the SDE for the conventional gluon propagator (in the Landau gauge) reads [@Binosi:2007pi; @Aguilar:2006gr] $$\label{glSDE} \Delta^{-1}(q^2)P_{\mu\nu}(q) = \frac{q^2P_{\mu\nu}(q) + i\sum_{i=1}^6(a_i)_{\mu\nu}}{1 + G(q^2)},$$ where the diagrams $(a_i)$ are shown in Fig. \[QB-SDE\]. The relevant point to recognize here is that the transversality of the gluon self-energy is realized according to the pattern highlighted by the boxes of Fig. \[QB-SDE\], namely, $$\label{pattern} q^\mu[(a_1)+(a_2)]_{\mu\nu}=0; \quad q^\mu[(a_3)+(a_4)]_{\mu\nu}=0; \quad q^\mu[(a_5)+(a_6)]_{\mu\nu}=0.$$ Derivation of the gluon mass equation ===================================== As has been explained in detail in the recent literature [@Aguilar:2011xe; @Ibanez:2012zk], the Schwinger mechanism allows for the emergence of massive solutions out of the SDE, preserving, at the same time, the gauge invariance intact. At this level, the triggering of this mechanism proceeds through the inclusion of the pole vertices $V$ in the SDE Eq. (\[glSDE\]).
From the kinematic point of view, we will describe the transition from a massless to a massive gluon propagator by carrying out the replacement (Minkowski space) $$\label{massiveprop} \Delta^{-1}(q^2)=q^2 J(q^2) \longrightarrow \Delta_m^{-1}(q^2) = q^2 J_m(q^2) - m^2(q^2).$$ Notice that the subscript “m” indicates that effectively one has now a mass inside the corresponding expressions: for example, whereas perturbatively $J(q^2)\sim \ln q^2$, after dynamical gluon mass generation has taken place, one has $J_m(q^2)\sim \ln(q^2 + m^2)$. Then, gauge invariance requires that the replacement given in Eq. (\[massiveprop\]) be accompanied by the following simultaneous replacement of all relevant vertices $$\label{replacever} \Gamma \longrightarrow \Gamma' = \Gamma_m + V,$$ where $V$ must be such that the new vertex $\Gamma'$ satisfies the same formal WIs (or STIs) as $\Gamma$ before. The most familiar case is that of the $BQ^2$ vertex $\widetilde{\Gamma}'_{\alpha\mu\nu}$, whose pole part must satisfy the WI [@Aguilar:2011ux] $$\label{WIthreepole} q^\alpha \widetilde{V}_{\alpha\mu\nu}(q,r,p) = m^2(r^2) P_{\mu\nu}(r) - m^2(p^2) P_{\mu\nu}(p),$$ when contracted with respect to the momentum of the background gluon. In complete analogy with the above case, one may use the WI satisfied by the conventional $BQ^3$ vertex, namely, $$\begin{aligned} \label{WIfour} q^\alpha \widetilde{\Gamma}_{\alpha\mu\nu\rho}^{abcd}(q,r,p,t) &=& ig^2[f^{abx}f^{xcd}\Gamma_{\nu\rho\mu}(p,t,q+r) + f^{acx}f^{xdb}\Gamma_{\rho\mu\nu}(t,r,q+p) \nonumber \\ &+& f^{adx}f^{xbc}\Gamma_{\mu\nu\rho}(r,p,q+t)],\end{aligned}$$ in order to deduce that, after the replacement Eq. 
(\[replacever\]), its $\widetilde{V}$ part satisfies [@Binosi:2012sj] $$\begin{aligned} \label{WIfourpole} q^\alpha \widetilde{V}_{\alpha\mu\nu\rho}^{abcd}(q,r,p,t) &=& ig^2[f^{abx}f^{xcd}V_{\nu\rho\mu}(p,t,q+r) + f^{acx}f^{xdb}V_{\rho\mu\nu}(t,r,q+p) \nonumber \\ &+& f^{adx}f^{xbc}V_{\mu\nu\rho}(r,p,q+t)].\end{aligned}$$ Finally, as a large variety of lattice simulations and analytic studies suggest, we will take for granted that the ghost propagator $D(q^2)$ remains massless in the Landau gauge. The main implication of this property for the case at hand is that the (fully-dressed) BFM gluon-ghost vertex, appearing in graph $(a_3)$, does *not* need to be modified by the presence of $\widetilde{V}$-type vertices. Quite remarkably, the above WIs, supplemented by the totally longitudinal nature of the pole vertices, are the only properties that one needs for deriving the mass equation; in particular, the closed form of the pole vertices is not needed. According to the previous discussion, after the inclusion of the pole vertices, the gluon SDE Eq. (\[glSDE\]) becomes in the Landau gauge $$\label{glSDEprime} [q^2J_m(q^2) - m^2(q^2)]P_{\mu\nu}(q) = \frac{q^2P_{\mu\nu}(q) + i\sum_{i=1}^6(a'_i)_{\mu\nu}}{1 + G(q^2)},$$ where the *prime* indicates that (in general) one must perform the simultaneous replacements Eq. (\[massiveprop\]) and Eq. (\[replacever\]) inside the corresponding diagrams. Evidently, the lhs of Eq. (\[glSDEprime\]) involves two unknown quantities, $J_m(q^2)$ and $m^2(q^2)$, which will eventually satisfy two separate, but *coupled*, integral equations of the generic type $$\begin{aligned} && J_m(q^2) = 1 + \int_k {\cal K}_1(q^2,m^2,\Delta_m), \nonumber \\ && m^2(q^2) = \int_k {\cal K}_2(q^2,m^2,\Delta_m), \label{meq}\end{aligned}$$ such that ${\cal K}_1, {\cal K}_2\neq 0$, as $q^2\rightarrow 0$. In order to derive the closed form of the mass equation Eq. 
(\[meq\]), one must identify all mass-related contributions coming from the $\widetilde{V}$ vertices that are contained in the Feynman graphs comprising the rhs of Eq. (\[glSDEprime\]). With the transversality of both sides of Eq. (\[glSDEprime\]) guaranteed by the presence of the pole vertices, it is far more economical to derive the mass equation by isolating the appropriate cofactors of $q_\mu q_\nu/q^2$, to be denoted by $a_i^{\widetilde{V}}(q^2)$, on both sides. Notice that selecting the $g_{\mu\nu}$ component, or taking the trace in Eq. (\[glSDEprime\]), would entail the use of the special *seagull identity* [@Aguilar:2009ke; @Aguilar:2011ux]. The most important steps of this construction may be summarized as follows: $({\bf i})$ From the previous comments about the BFM gluon-ghost vertex, graph $(a'_3)$ has no $\widetilde{V}$-component. $({\bf ii})$ The WI Eq. (\[WIfourpole\]) and the longitudinality condition for the $BQ^3$ pole vertex may be used to demonstrate that the $\widetilde{V}$-component of graph $(a'_5)$ vanishes in the Landau gauge. $({\bf iii})$ The contribution $a_6^{\widetilde{V}}(q^2)$ stems solely from the combination $\Gamma_m\widetilde{V}$ of the product $\Gamma'\widetilde{\Gamma}'$ appearing in graph $(a'_6)$. Thus, one concludes that the *complete* mass equation can be written as $$\label{masscompact} m^2(q^2) = \frac{i[a_1^{\widetilde{V}}(q^2) + a_6^{\widetilde{V}}(q^2)]}{1 + G(q^2)}.$$ Interestingly enough, the entire procedure may be pictorially summarized, in a rather concise way, as shown in Fig. \[diagrammaticmass\]. Complete mass equation and numerical results ============================================ The final equation obtained from Eq.
(\[masscompact\]) reads (Euclidean space) $$\begin{aligned} \label{masscomplete} m^2(q^2) &=& -\frac{g^2C_A}{1+G(q^2)}\frac{1}{q^2}\int_k m^2(k^2)[(k+q)^2-k^2] \Delta^{\alpha\rho}(k)\Delta_{\alpha\rho}(k+q)\bigg\lbrace 1 - C\,[Y(k+q)+Y(k)]\bigg\rbrace \nonumber \\ &+& \frac{g^2C_A}{1+G(q^2)}\frac{1}{q^2}(q^2g_{\delta\gamma}-2q_\delta q_\gamma)\int_k m^2(k^2)\, C\, [Y(k+q)-Y(k)] \Delta_\epsilon^\delta(k)\Delta^{\gamma\epsilon}(k+q),\end{aligned}$$ with $C=3\pi C_A\alpha_s$, and $$\label{Yintegral} Y(k^2)=\frac{1}{3}\frac{k_\alpha}{k^2}\int_l \Delta^{\alpha\rho}(l)\Delta^{\beta\sigma}(l+k)\Gamma_{\sigma\rho\beta},$$ corresponding to the subdiagram on the upper left corner of $(a_6)$ \[see also Fig. \[diagrammaticmass\]\]. Even though Eq. (\[masscomplete\]) forms part of a system of coupled equations, see Eq. (\[meq\]), in what follows we will study it in isolation, given that the corresponding equation for $J_m (q^2)$ is unknown. To that end, we will treat the gluon propagators appearing in the mass equation as external quantities, using lattice results for their form [@Bogolubsky:2007ud]. In addition, the rhs of Eq. (\[Yintegral\]) depends on the full three-gluon vertex $\Gamma_{\sigma\rho\beta}$, whose exact form is not known. We will therefore approximate $Y(k^2)$ by its one-loop expression, obtained by substituting tree-level values for all quantities appearing in the integral; a lengthy but straightforward calculation yields \[Euclidean space, momentum subtraction (MOM) scheme\] $$\label{Yrenor} Y_R(k^2)=-\frac{1}{(4\pi)^2}\frac{5}{4}\log\frac{k^2}{{\mu}^2}\,.$$ After these considerations, and using spherical coordinates $x=q^2$ and $y=k^2$, let us study the deep infrared limit $x\rightarrow 0$ of Eq. 
(\[masscomplete\]), given by $$\begin{aligned} \label{deepIR} && m^2(0) = -\frac{3\alpha_S C_A}{8\pi [1+G(0)]}\int_0^\infty dy m^2(y){\cal K}_2(y); \nonumber \\ && {\cal K}_2(y) = \lbrace[1-2CY(y)]Z^2(y)\rbrace', \quad Z(y)=y\Delta(y).\end{aligned}$$ Even though the value of $C$ is fixed (see above), in what follows we will treat it as a free parameter, in order to study what happens to the gluon mass equation when one varies $\alpha_S$ and $C$ independently. The reason for doing this is that, whereas Eq. (\[Yrenor\]) furnishes a concrete form for the two-loop *dressed* correction, by no means does it exhaust it; thus, by varying $C$, one basically tries to mimic further corrections that may be added to the *skeleton* provided by the $Y_R(k^2)$ of Eq. (\[Yrenor\]) (for a fixed value of $\alpha_S$); indeed, rescaling $C$ is equivalent to rescaling $Y_R(k^2)$. Of course, the real value of $C$ will emerge as a special case of this general two-parameter study. Finally, it is convenient to define the “reduced” $C_r=C/(3\pi C_A)$, and drop the suffix “r”. Let us first set $C=0$, thus turning off the two-loop dressed contributions. Then, integrating (\[deepIR\]) by parts, one obtains $$m^2(0) = \frac{3\alpha_S C_A}{8\pi [1+G(0)]}\int_0^\infty dy [m^2(y)]' Z^2(y)\,.$$ Given that $1+G(0)>0$, it is clear that a monotonically decreasing gluon mass, namely $[m^2(y)]'<0$, expected on physical grounds, would give rise to a negative value for $m^2(0)$, which is physically wrong. Thus, the only way to reconcile a positive-definite and monotonically decreasing gluon mass is to obtain an effective reversal of sign from the two-loop dressed contributions; as we will see, this is indeed what happens. To study this crucial point in detail, let us consider how the shape of the kernel ${\cal K}_2$ changes as $C$ is varied. In Fig.
\[kernel\_mass\], one observes that, as $C$ increases, ${\cal K}_2$ displays a less pronounced positive (respectively negative) peak in the small (respectively large) momenta region. Next, for $C\gtrsim 0.37$, a small negative region starts to appear in the infrared, which rapidly becomes a deep negative well for $y\lesssim 0.6$, with ${\cal K}_2$ becoming positive for higher momenta. Therefore we observe that the addition of the two-loop dressed contributions counteracts the effect of the overall minus sign of Eq. (\[deepIR\]), by effectively achieving a sign reversal of the kernel. Indeed, one concludes that there exists a critical value $\overline{C}\approx 0.56$ such that, if $C>\overline{C}$, Eq. (\[deepIR\]) will display at least one physical monotonically decreasing solution for a suitable value of the strong coupling $\alpha_S$. Finally, to see if the picture sketched above is confirmed when $x\neq 0$, one can study numerically the solutions of Eq. (\[masscomplete\]) following the algorithm described in [@Binosi:2012sj]. In this case the absence of solutions persists until the critical value $\overline{C}$ is reached, after which one finds exactly one monotonically decreasing solution. Specifically, in Fig. \[kernel\_mass\] we plot the solutions for the most representative $C$ values. The value $C=\alpha_S\approx 0.88$ corresponds to the case in which $Y$ is kept at its lowest order perturbative value, whereas $C=1.85$ corresponds to the standard MOM value $\alpha_S=0.22$ at $\mu=4.3$ GeV [@Boucaud:2005rm]. As can be readily appreciated, the masses obtained display the basic qualitative features expected on general field-theoretic considerations and employed in numerous phenomenological studies; in particular, they are monotonically decreasing functions of the momentum and vanish rather rapidly in the ultraviolet. 
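The integration-by-parts step behind the $C=0$ discussion above can be checked numerically with toy profiles. The specific forms of $m^2(y)$ and $Z(y)$ below are ours, chosen only to be smooth, monotonically decreasing (for the mass), and decaying; they are not the lattice inputs used in the actual analysis.

```python
# toy, monotonically decreasing mass profile and a toy dressing Z(y) = y*Delta(y)
m2 = lambda y: 1.0 / (1.0 + y)
dm2 = lambda y: -1.0 / (1.0 + y) ** 2        # derivative of m2
Z = lambda y: y / (1.0 + y)
dZ2 = lambda y: 2.0 * y / (1.0 + y) ** 3     # derivative of Z(y)**2

def trapz(f, a, b, n=100000):
    # composite trapezoidal rule
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

Y = 50.0
# with C = 0 the kernel is K2(y) = (Z^2)'(y); integrating by parts moves the
# derivative onto m2(y) (the boundary term of the finite interval is kept)
lhs = trapz(lambda y: m2(y) * dZ2(y), 0.0, Y)
boundary = m2(Y) * Z(Y) ** 2 - m2(0.0) * Z(0.0) ** 2
rhs = boundary - trapz(lambda y: dm2(y) * Z(y) ** 2, 0.0, Y)
assert abs(lhs - rhs) < 1e-6
# a decreasing mass makes the parts-integrated term negative, reproducing the
# sign problem of the C = 0 equation discussed in the text
assert trapz(lambda y: dm2(y) * Z(y) ** 2, 0.0, Y) < 0.0
```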
Conclusions =========== In this presentation we have reported recent progress [@Binosi:2012sj] on the study of the nonperturbative equation that governs the momentum evolution of the dynamically generated gluon mass. By appealing to the existence of the special nonperturbative vertices $V$ associated with the Schwinger mechanism, we have outlined the methodology that allows for a systematic and expeditious identification of the parts of the SDE that contribute to the mass equation. The numerical analysis of the resulting mass equation reveals that the inclusion of two-loop dressed contributions has a profound impact on the nature of the mass equation, already at the qualitative level. Indeed, they are crucial in order to obtain physically meaningful solutions out of the mass equation, *i.e.*, positive-definite and monotonically decreasing solutions for the effective gluon mass. In the future, given the importance of the term $Y(k^2)$ for this entire construction, it would be particularly important to determine its structure beyond the perturbative one-loop approximation used here. Nevertheless, even with the approximate version of $Y(k^2)$, the full mass equation provides a natural starting point for calculating reliably the effect that the inclusion of light quark flavors might have on the form of the gluon propagator, and for complementing recent studies based on the SDEs [@Aguilar:2012rz] as well as lattice simulations [@Ayala:2012pb]. This research is supported by the European FEDER and Spanish MICINN under grant FPA2008-02878. [99]{} J. M. Cornwall, Phys. Rev. D [**26**]{} (1982) 1453. A. Cucchieri and T. Mendes, PoS [**LAT2007**]{}, 297 (2007); Phys. Rev. Lett.  [**100**]{}, 241601 (2008); Phys. Rev.  D [**81**]{}, 016005 (2010); PoS [**LATTICE2010**]{}, 280 (2010); AIP Conf. Proc.  [**1343**]{}, 185 (2011). I. L. Bogolubsky, E. M. Ilgenfritz, M. Muller-Preussker and A. Sternbeck, PoS [**LAT2007**]{}, 290 (2007); Phys. Lett.  B [**676**]{}, 69 (2009). D. Binosi and J.
Papavassiliou, Phys. Rev. D [**77**]{} (2008) 061702; JHEP [**0811**]{} (2008) 063 R. Alkofer and L. von Smekal, Phys. Rept.  [**353**]{} (2001) 281. C. S. Fischer, J. Phys. G [**32**]{} (2006) R253. A. C. Aguilar and A. A. Natale, JHEP [**0408**]{}, 057 (2004). J. M. Cornwall and J. Papavassiliou, Phys. Rev.  D [**40**]{}, 3474 (1989). D. Binosi and J. Papavassiliou, Phys. Rev.  D [**66**]{}(R), 111901 (2002); J. Phys. G [**30**]{}, 203 (2004); Phys. Rept.  [**479**]{}, 1 (2009). See, e.g., L. F. Abbott, Nucl. Phys.  B [**185**]{}, 189 (1981), and references therein. A. C. Aguilar and J. Papavassiliou, JHEP [**0612**]{} (2006) 012. A. C. Aguilar, D. Ibanez, V. Mathieu and J. Papavassiliou, Phys. Rev. D [**85**]{} (2012) 014018. D. Ibanez and J. Papavassiliou, arXiv:1211.5314 \[hep-ph\]. D. Binosi, D. Ibanez and J. Papavassiliou, Phys. Rev. D [**86**]{} (2012) 085033. A. P. Szczepaniak and E. S. Swanson, Phys. Rev. D [**65**]{}, 025012 (2002). D. Epple, H. Reinhardt, W. Schleifenbaum and A. P. Szczepaniak, Phys. Rev. D [**77**]{} (2008) 085007. P. A. Grassi, T. Hurth and M. Steinhauser, Annals Phys.  [**288**]{} (2001) 197. A. C. Aguilar, D. Binosi and J. Papavassiliou, JHEP [**0911**]{} (2009) 066. A. C. Aguilar and J. Papavassiliou, Phys. Rev. D [**81**]{} (2010) 034003. A. C. Aguilar, D. Binosi and J. Papavassiliou, Phys. Rev. D [**84**]{} (2011) 085026. P. Boucaud, F. de Soto, J. P. Leroy, A. Le Yaouanc, J. Micheli, H. Moutarde, O. Pene and J. Rodriguez-Quintero, Phys. Rev. D [**74**]{} (2006) 034505. A. C. Aguilar, D. Binosi and J. Papavassiliou, Phys. Rev. D [**86**]{} (2012) 014032. A. Ayala, A. Bashir, D. Binosi, M. Cristoforetti and J. Rodriguez-Quintero, Phys. Rev. D [**86**]{} (2012) 074512.
--- abstract: 'An important problem for HCI researchers is to estimate the parameter values of a cognitive model from behavioral data. This is a difficult problem, because of the substantial complexity and variety in human behavioral strategies. We report an investigation into a new approach using approximate Bayesian computation (ABC) to condition model parameters on data and prior knowledge. As a case study, we examine menu interaction, where only click-time data is available for inferring a cognitive model that implements a search behaviour with parameters such as fixation duration and recall probability. Our results demonstrate that ABC (i) improves estimates of model parameter values, (ii) enables meaningful comparisons between model variants, and (iii) supports fitting models to individual users. ABC provides ample opportunities for theoretical HCI research by allowing principled inference of model parameter values and their uncertainty.' author: - | Antti Kangasrääsiö$^{1}$, Kumaripaba Athukorala$^{1}$, Andrew Howes$^{2}$,\ Jukka Corander$^{3}$, Samuel Kaski$^{1}$, Antti Oulasvirta$^{4}$\ \ \ \ \ \ bibliography: - 'references.bib' --- Introduction ============ It has become relatively easy to collect large amounts of data about complex user behaviour. This provides an exciting opportunity, as the data has the potential to help HCI researchers understand and possibly predict such user behavior. Yet, unfortunately, it has remained difficult to explain what users are doing and why in a given data set. The difficulty lies in two problems: modeling and inference. The *modeling problem* consists of building models that are sufficiently general to capture a broad range of behaviors. Any model attempting to explain real-world observations must cover a complex interplay of factors, including what users are interested in, their individual capacities, and how they choose to process information (strategies).
Recent research has shown progress in the direction of creating models for complex behavior [@bailly2014model; @chen2015emergence; @Cockburn2007; @fu2007snif; @Halverson2011; @Hornof2004; @Kieras2014; @kieras2000modern; @miller2004; @payne2013adaptive]. After constructing the model, we are then faced with the *inference problem*: how to set the parameter values of the model such that they agree with the literature and prior knowledge, and such that the resulting predictions match the observations we have (Figure \[fig:overview1\]). Unfortunately, this problem has been less systematically studied in HCI. To this end, the goal of this paper is to report an investigation into a flexible and powerful method for inferring model parameter values, called *approximate Bayesian computation* (ABC) [@sunnaaker2013approximate]. ABC has been applied to many scientific problems [@beck2004model; @csillery2010approximate; @sunnaaker2013approximate]. For example, in climatology the goal is to infer a model of climate from sensor readings, and in infectious disease epidemiology an epidemic model from reports of an infection spread. Inference is of great use both in applications and in theory-formation, in particular when testing models, identifying anomalies, and finding explanations for observations. However, to our knowledge, neither ABC nor any other principled inference method has been applied to complex cognitive models in HCI[^1]. We are interested in principled methods for inferring parameter values because they would be especially useful for process models of behavior. This is because such models are usually defined as simulators, and thus the inference is very difficult to perform using direct analytical means[^2].
Such process models in HCI have been created, for example, based on cognitive science [@anderson1997act; @byrne2001act; @card1983psychology; @fu2007snif; @kieras1997overview; @rumelhart1982simulating], control theory [@jagacinski2003control], biomechanics [@bachynskyi2015informing], game theory [@camerer2003behavioral], foraging [@pirolli1999information; @pirolli2005rational], economic choice [@azzopardi2014modelling], and computational rationality [@chen2015emergence]. In the absence of principled inference methods for such models, approaches have included: (1) simplifying models until traditional inference methods become applicable; (2) using values adopted from the literature or adjusting them without studying their effect on behavior; or (3) manually iterating to find values that lead to acceptable performance. Compared to these, principled inference methods might reduce the potential for ambiguity, miscalculation, and bias, because model parameter values could be properly conditioned on both the literature and prior knowledge, as well as the observation data. ABC is particularly promising for inferring the values of process model parameters from naturalistic data—a problem that is known to be difficult in cognitive science [@myung2016model]. The reason is that ABC does not make any further assumptions about the model, apart from the researcher being able to repeatedly simulate data from it using different parameter values. ABC performs inference by systematically simulating user behavior with different parameter configurations. Based on the simulations, ABC estimates which parameter values lead to behavior that is similar to the observations, while also being reasonable considering our prior knowledge of plausible parameter values.
As a challenging and representative example, this paper looks at a recent HCI process model class in which behavioral strategies are learned using reinforcement learning [@chen2015emergence; @fu2007snif; @gershman2015computational; @payne2013adaptive]. These models assume that users behave (approximately) so as to maximize utility given limits on their own capacity. The models predict how a user will behave in situations constrained by (1) the environment, such as the physical structure of a user interface (UI); (2) goals, such as the trade-off between time and effort; and (3) the user’s cognitive and perceptual capabilities, such as memory capacity or fixation duration. This class of models, called *computational rationality* (CR) models, has been explored previously in HCI, for example in SNIF-ACT [@fu2007snif], economic models of search [@azzopardi2014modelling], foraging theory [@pirolli1999information], and adaptive interaction [@payne2013adaptive]. The recent interest in this class is due to the benefit that, when compared with classic cognitive models, it requires no predefined specification of the user’s task solution, only the objectives. Given those, and the constraints of the situation, we can use machine learning to infer the optimal behavior policy. However, achieving the inverse, that is, inferring the constraints assuming that the behavior is optimal, is exceedingly difficult. The assumptions about data quality and granularity made by previously explored methods for this inverse reinforcement learning problem [@ng2000algorithms; @ramachandran2007bayesian; @ziebart2008maximum] tend to be unreasonable when only noisy or aggregate-level data exist, as is often the case in HCI studies. Our application case is a recent model of *menu interaction* [@chen2015emergence]. The model studied here has previously captured adaptation of search behavior, and consequently changes to task completion times, in various situations [@chen2015emergence].
The model makes parametric assumptions about the user, for example about the visual system (e.g., fixation durations), and uses reinforcement learning to obtain a behavioral strategy suitable for a particular menu. The inverse problem we study is how to obtain estimates of the properties of the user’s visual system from selection time data only (click times of menu items). However, due to the complexity of the model, its parameter values were originally tuned based on the literature. Later, in Study 1, we demonstrate that we are able to infer the parameter values of this model based on observation data, such that the predictions improve over the baseline while the parameter values still agree with the literature. To the best of our knowledge, this is also the first time such an inverse reinforcement learning problem has been solved based on aggregate-level data. We also aim to demonstrate the applicability of ABC, and inference in general, in two situations: model development and modeling of individuals. In Study 2, we demonstrate how ABC allows us to make meaningful comparisons between multiple model variants, and their comparable parameters, after they all have been fit to the same dataset. This presents a method for speeding up the development of this kind of complex model, through automatic inference of model parameter values. In Study 3, we demonstrate how ABC allows us to infer model parameter values for individual users. We discover that overall these individual models outperform a population-level model fit to a larger set of data, thus demonstrating the benefit of individual models. By comparison, it would not be possible to fit individual models based on the literature alone, as such information generally only applies at the population level.

Overview of Approach
====================

This paper is concerned with inference of model parameter values from data, which is also called *inverse modeling*.
Inverse modeling answers the question: “what were the parameter values of the model, assuming the observed data was generated from the model?” Our goal is to assess the usefulness of approximate Bayesian computation (ABC) [@sunnaaker2013approximate] to this end. We now give a short overview of inverse modeling in HCI, after which we review ABC and explain its applicability. We finally provide a short overview of the particular ABC algorithm, BOLFI [@gutmann2016bayesian], that we use in this study.

Inverse Modeling Approaches for Cognitive Models
------------------------------------------------

For models that have simple algebraic forms, such as linear regression, inverse modeling is simple, as we can explicitly write down the formula for the most likely parameter values given data. For complex models, such a formula might not exist, but it is often possible to write down an explicit *likelihood function*, $L(\theta|Y_{obs})$, which evaluates the likelihood of the parameters $\theta$ given the observed data $Y_{obs}$. When this likelihood function can be evaluated efficiently, inverse modeling can be done, even for reinforcement learning (RL) models [@ng2000algorithms; @ramachandran2007bayesian; @ziebart2008maximum]. However, this inverse reinforcement learning has only been possible when precise observations are available of the environment states and of the actions the agent took, which in HCI applications is rarely the case. When the likelihood function of the model cannot be evaluated efficiently, there are generally two options left. The traditional way in HCI has been to set the model parameters based on past models and existing literature. If this has not led to acceptable predicted behavior, the researcher might have further tuned the parameters by hand until the predictions were satisfactory. However, this process generally offers no guarantees that the final parameters will be close to the most likely values.
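To illustrate the simple closed-form case, the sketch below fits a hypothetical linear model (the data, slope, and intercept values are invented for illustration; this is not one of the models in this paper). No simulation is needed because the most likely parameter values have an explicit least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from y = a*x + b + noise, with hypothetical
# true values a = 2.0 and b = 1.0.
x = rng.uniform(0.0, 1.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 100)

# For linear regression the most likely parameter values follow from
# an explicit formula (ordinary least squares), so no simulation or
# likelihood-free machinery is needed.
X = np.column_stack([x, np.ones_like(x)])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(a_hat, b_hat)  # close to the true values 2.0 and 1.0
```

For a simulator-defined cognitive model no such formula exists, which is what motivates the likelihood-free methods discussed next.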
An alternative solution, which we have not seen used in the HCI context before, is to use likelihood-free inference methods, which allow the model parameters to be estimated without requiring the likelihood function to be evaluated directly. These methods are derived from mathematical principles, and thus offer performance guarantees, at least in the asymptotic case. ABC is one such method [@sunnaaker2013approximate], and we explain it next in more detail.

Approximate Bayesian Computation (ABC)
--------------------------------------

ABC is a principled method for finding parameter values for complex HCI models, including simulators, based on observed data and prior knowledge. It repeatedly simulates data using different parameter values, in order to find regions of the parameter space that lead to simulated data that is similar to the observed data. ABC algorithms differ, for example, in the way in which they choose the parameter values. The main benefit of ABC for HCI is its generality: the only assumption needed is that the researcher is able to repeatedly simulate observations with different parameter values. Therefore, while in this paper we examine only a particular simulator, the approach is of more general value. To be precise, ABC can be used in the following recurring scenario in HCI:

- **Inputs:** A model $M$ with unknown parameters $\theta$; prior knowledge of reasonable values for $\theta$ (for example from literature); observations $Y_{obs}$ of interactive behavior (for example from user study logs)

- **Outputs:** Estimates of likely values for parameters $\theta$ and their uncertainty. Likely values of $\theta$ should produce a close simulated replication of observed data: $M(\theta) \approx Y_{obs}$, while still being plausible given prior knowledge.

The process of using ABC is depicted in Figure \[fig:improcess\]. First the researcher implements her model as an executable simulator.
Values for well-known parameters of the model are set by hand. For the inferred parameters $\theta$, a *prior probability distribution* $P(\theta)$ is defined by the researcher based on her prior knowledge of plausible values. The researcher then defines the set of observations $Y_{obs}$ that $\theta$ will be conditioned on. Next, the researcher defines a *discrepancy function* $d(Y_{obs}, Y_{sim}) \to [0,\infty)$, which quantifies the similarity of the observed and simulated data in a way meaningful for the researcher. Finally, an ABC algorithm is run; it selects the parameter values $\{\theta_i\}$ at which the simulator will be run, and determines how the conditional distribution of the parameter values, also known as the *posterior* $P(\theta|Y_{obs})$, is constructed based on the simulations.

BOLFI: An ABC Variant Used in This Paper
----------------------------------------

This paper employs a recent variant of ABC called BOLFI [@gutmann2016bayesian], which reduces the number of simulations[^3] while still being able to obtain adequate estimates for $\theta$. An overview of the method is shown in Figure \[fig:overview2\]. The main idea of BOLFI is to learn a statistical regression model—called a Gaussian process—for estimating the discrepancy values over the feasible domain of $\theta$ from a smaller number of samples that do not densely cover the whole parameter space. This is justified when small changes in $\theta$ do not yield large changes in the discrepancy. Additionally, as we are most interested in finding regions where the discrepancy is small, BOLFI uses a modern optimization method called *Bayesian optimization* for selecting the locations at which to simulate. This way we can concentrate the samples in parameter regions that are more likely to lead to low-discrepancy simulated data. This approach has resulted in 3–4 orders of magnitude faster inference compared with the state-of-the-art ABC algorithms.
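For intuition about the generic ABC loop that BOLFI accelerates, the basic idea can be sketched as plain rejection sampling. The one-parameter simulator, uniform prior, and tolerance below are hypothetical stand-ins for illustration only, not the menu model or the BOLFI algorithm used in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=50):
    """Hypothetical one-parameter simulator: completion times with
    unknown mean `theta` (a stand-in for a cognitive model)."""
    return rng.normal(theta, 0.3, size=n)

def discrepancy(y_obs, y_sim):
    # Compare a summary statistic (here: the mean) of observed
    # vs simulated data.
    return abs(y_obs.mean() - y_sim.mean())

# "Observed" data generated with a true theta of 1.2 (illustration only).
y_obs = rng.normal(1.2, 0.3, size=50)

# Rejection ABC: draw theta from the prior, simulate, and keep values
# whose simulated data lie within tolerance `eps` of the observations.
prior_samples = rng.uniform(0.0, 3.0, size=5000)
eps = 0.05
posterior = [t for t in prior_samples
             if discrepancy(y_obs, simulator(t)) < eps]

print(np.mean(posterior))  # posterior mean, close to 1.2
```

Rejection ABC wastes most simulations on implausible parameter values; BOLFI replaces the blind prior sampling with a Gaussian-process surrogate and Bayesian optimization, so far fewer simulator runs are needed.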
Details of the method are given in the paper by Gutmann and Corander [@gutmann2016bayesian].

Case: Model of Menu Selection
=============================

![ Case model: A simulation process figure of the computational rationality model of user interaction with drop-down menus, adapted from Chen et al. [@chen2015emergence]. The Q-table is constructed in the training phase; in the simulation phase it is kept fixed. []{data-label="fig:CR-menu-model"}](figures/menu_search_model_v4.png){width="0.9\columnwidth"}

Our case looks at a recent model for visual search of menus, introduced by Chen et al. [@chen2015emergence]. The purpose of this model is to predict the visual search behavior (how eyes fixate and move) and task completion times of a person searching for an item in a vertical menu. This model presents a particularly challenging problem for inference of parameter values because substantial computation is required by the reinforcement learning algorithm to calculate the search behavior policy given a particular parameter set. The parameters of Chen et al.’s model [@chen2015emergence] describe cognitive characteristics of a user, such as the duration of a saccade when searching through the menu. In contrast to Chen et al. [@chen2015emergence], where parameters were largely set to values from the literature[^4], the inference problem that we study here is to estimate parameter values based on limited behavioral data: click times for menu items. Across the studies, we condition the parameter values of this model, and its variants, on this type of data in different settings.

Introduction to Computational Rationality
-----------------------------------------

An important property of the model we examine [@chen2015emergence] is that it computes computationally rational policies—behavior patterns optimized to maximize the utility of an agent given its bounds and goals [@lewis2014computational].
The bounds include limitations on the observation functions and on the actions that the agent can perform. These bounds define a space of possible policies. The use of computationally rational agents to model cognition has been heavily influenced by *rational analysis*, a method for explaining behavior in terms of utility [@anderson1991human; @chater1999ten; @oaksford1994rational], an idea used for example in information foraging theory and economic models of search [@azzopardi2014modelling; @pirolli1999information]. Computationally rational agents have been used to model a number of phenomena in HCI [@payne2013adaptive]. Applications relevant to this paper include menu interaction [@chen2015emergence] and visual search [@hayhoe2014modeling; @Myers2013; @nunez2013models; @Tseng2015]. CR models use *reinforcement learning* (RL) methods to compute the optimal policies [@sutton1998reinforcement]. Applying RL has two prerequisites. First, an environment is needed, which has a state that the RL agent can observe, and actions that the agent can perform to change the state. The environment is commonly a Markov decision process (MDP), designed to approximate the real-world situation the real user faces. Second, a reward function is required—a mapping from the states of the environment to real numbers—which defines what kind of states are valuable for the RL agent (higher rewards being favorable). The RL algorithm finds the (approximately) optimal policy by experimenting in the environment and updating the policy until (approximate) convergence. The resulting policy—and thus the predicted behavior—naturally depends on the parameters of the environment and of the reward function, which have been set by the researcher.

Overview of Menu Selection Model
--------------------------------

We summarize here the key details of the original model of Chen et al. [@chen2015emergence] (Fig \[fig:CR-menu-model\]).
The environment is a menu composed of eight items, arranged into two semantic groups of four items each, where the items in each group share some semantic similarity. There are two conditions for the menu: the target item is either present in or absent from the menu. At the beginning of an episode the agent is shown a target item. The task of the agent is to select the target item in the menu if it is present, or otherwise to declare that the menu does not contain the target item. The agent has ten possible actions: fixate on any of the 8 items, select the fixated item, or declare that the item is not present in the menu (quit). Fixating on an item reveals its semantic relevance to the agent, whereas selecting an item or quitting ends the episode. After each action, the agent observes the state of the environment, represented with two variables: the semantic relevances of the observed menu items and the current fixation location. The agent receives a reward after each action. After a fixation action, the agent gets a penalty that corresponds to the time spent performing the saccade from the previous location and the fixation on the new item. If the agent performs the correct end action, a large reward is given—otherwise an end action results in a large penalty. The RL agent selects actions based on the expected cumulative reward the action allows the agent to receive starting from the current state—known as the Q-value of the state-action pair in RL terminology. These Q-values are learned in the training phase, over 20 million training episodes, using the Q-learning algorithm. To select an action, the agent compares the Q-values of each action in that state (see the Q-table in Fig \[fig:CR-menu-model\]) and chooses the action with the highest value.
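The training loop can be illustrated with a minimal tabular Q-learning sketch. This is a heavy simplification of the actual Chen et al. model: semantic relevances are dropped, fixating the target ends the episode, and the reward values and skewed target-position distribution are invented for illustration only:

```python
import random
from collections import defaultdict

N_ITEMS = 8
# Hypothetical target-position weights (not from the paper): the
# target is usually the first menu item.
WEIGHTS = [20, 1, 1, 1, 1, 1, 1, 1]

Q = defaultdict(float)      # Q-table: (state, action) -> value
visits = defaultdict(int)   # visit counts, for a decaying learning rate

def run_episode(epsilon=0.2, gamma=0.9):
    """One Q-learning episode on a toy menu-search task: each action
    fixates a not-yet-visited item, fixations cost time, and fixating
    the target ends the episode."""
    target = random.choices(range(N_ITEMS), weights=WEIGHTS)[0]
    state = ()  # the set of already-fixated items
    while True:
        actions = [a for a in range(N_ITEMS) if a not in state]
        if random.random() < epsilon:
            a = random.choice(actions)                     # explore
        else:
            a = max(actions, key=lambda x: Q[(state, x)])  # exploit
        reward, done = (10.0, True) if a == target else (-1.0, False)
        nxt = tuple(sorted(state + (a,)))
        future = 0.0 if done else max(
            Q[(nxt, b)] for b in range(N_ITEMS) if b not in nxt)
        # Standard Q-learning update with a 1/n learning rate.
        visits[(state, a)] += 1
        alpha = 1.0 / visits[(state, a)]
        Q[(state, a)] += alpha * (reward + gamma * future - Q[(state, a)])
        if done:
            return
        state = nxt

random.seed(2)
for _ in range(20000):
    run_episode()

# Greedy policy in the initial state: fixate the most likely position.
first_fixation = max(range(N_ITEMS), key=lambda a: Q[((), a)])
print(first_fixation)  # 0, the most likely position under these weights
```

Even in this toy version, the learned Q-table encodes a search strategy adapted to the statistics of the environment, which is the mechanism the full model relies on.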
  [**Parameter**]{}   [**Description**]{}
  ------------------- --------------------------------------------------------------------------------------------------------------------------
  $f_{dur}$           Fixation duration
  $d_{sel}$           Time cost for selecting an item (added to the duration of the last fixation of the episode if the user made a selection)
  $p_{rec}$           Probability of recalling the semantic relevances of all of the menu items during the first fixation of the episode
  $p_{sem}$           Probability of perceiving the semantic relevance of menu items above and below the fixated item

  : Parameters inferred with ABC in Studies 1-3.[]{data-label="tab:parameters"}

Variants
--------

Above we described one model variant reported in Chen et al. [@chen2015emergence]. According to the description of the observation data, no items in the menus differed by more than 3 letters in length [@bailly2014model]. To comply with this and to reduce the complexity of the state space, we assumed that there is no detectable difference in the length of the items. Thus we used the model variant from Chen et al. [@chen2015emergence] where the only detectable feature is the semantic similarity to the target item. In Study 2, reported below, we explore three additions to the model and their effect on the predictions. All model parameters inferred with ABC, across the studies, are listed in Table \[tab:parameters\].

Experiments and Results
=======================

In the rest of the paper, we show with three case studies how ABC can be used to improve current modeling practices. All studies use the Chen et al. model [@chen2015emergence], and the core problem in all of them is *inverse modeling*: given aggregate observation data (task completion times), find the most likely parameter values $\theta$ and their distribution, such that the predictions made by the model agree with the observations.

1. **Study 1.
ABC compared to manual tuning**: We demonstrate that ABC can improve model fit by inferring parameter values from data, compared to the common practice of setting them manually based on the literature.

2. **Study 2. ABC in model development**: We demonstrate how ABC helps in improving models, by fitting multiple models to the same data, exposing differences and anomalies.

3. **Study 3. ABC in modeling individual differences**: We demonstrate how individual models can be fit with ABC, by conditioning the model on individual data.

We use the same dataset as Chen et al. [@chen2015emergence], which is a subset of the data from a study reported by Bailly et al. [@bailly2014model], based on the study design of Nilsen [@nilsen1999exploring]. In the study, a label is shown and the user must click the corresponding item in a menu with 8 elements as quickly as possible. Items were repeated multiple times to understand practice effects. Multiple menus were used, and target position and absence/presence of the target were systematically varied. Eye movement data were collected and processed for fixation and saccade durations. Twenty-one paid participants took part in the study. Further details of the study that produced the data are reported in [@bailly2014model]. We implemented the BOLFI algorithm in Python. Parts of the source code were later published within an open-source library for likelihood-free inference [@kangasraasio2016engine]. Running the experiments took around one day each on a cluster computer. Further technical details of the experiments and implementation are described in the Appendix.

Study 1. ABC Compared to Manual Tuning
--------------------------------------

Our aim in the first study was to analyze how much we can improve the predictions made by the model by conditioning the values of key parameters on observation data, instead of the standard practice of choosing all of the parameter values manually.
The case study was chosen to represent the common setting in HCI research where only aggregate data may be available. We used the model of Chen et al. [@chen2015emergence], and compared the parameter values inferred by ABC to those set based on the literature in the original paper [@chen2015emergence]. We predicted task completion times (TCT) and fixation durations with both models, and compared them with observation data from [@bailly2014model]. For simplicity, we inferred the value of only one parameter $\theta$ with ABC, the *fixation duration* $f_{dur}$. The rest of the model parameter values were set to be identical with the baseline model. The value of this parameter was conditioned on the observed aggregate task completion times (TCT; combined observations from both menu conditions: target absent—referred to as *abs*, target present—referred to as *pre*). Chen et al. [@chen2015emergence] set the value of this parameter to 400 ms based on a study by Brumby et al. [@Brumby:2014].

### Results

As shown in Figure \[fig:exp1-results-1\], the parameter value inferred with ABC led to model predictions that matched better with observation data not used for the modeling. This holds both for TCT and fixation duration. In detail, the ground truth aggregated TCT was 0.92 s (std 0.38 s). The manually fit model predicted 1.49 s (std 0.68 s), whereas the ABC-fit model predicted 0.93 s (std 0.40 s). For predictions, we used the maximum a posteriori (MAP) value estimated by ABC, which was 244 ms for fixation duration (detail not shown). This corresponds to values often encountered in, e.g., reading tasks [@rayner1998eye]. In summary, inferring the fixation duration parameter value using ABC led to improved predictions, compared to setting the parameter value manually based on the literature. The inferred parameter value was also reasonable based on the literature.
### Observations on the Resulting Models

A closer inspection of the predictions made by the models exposed two problematic issues, which led to improvements in Study 2. The first issue is that while the aggregate TCT predictions were accurate, and all predictions with ABC were better compared to manual tuning, even the ABC-fitted predictions were not reasonable when split into sub-cases according to whether the target was present in the menu or not. This is clearly visible in Figure \[fig:exp1-results-1\] (rows two and three), where we notice that the predicted TCT when the target is absent is around four to six times as long as in the actual user behavior. The second issue concerns the search strategies predicted by the model. Chen et al. [@chen2015emergence] showed that their model was able to learn a behavior strategy where the agent would look first at the topmost item, and second at the 5th item, which was the first item of the second semantic group. This was seen as a clear spike on the fifth item in the “proportion of gazes to target” feature. However, not every attempt to replicate this result succeeded (Fig. \[fig:exp1-results-3\]), and similar variation in predicted strategies was observed with the ABC-fitted model as well[^5]. Our conclusion is that there likely exist multiple behavior strategies that are almost equally optimal, and the RL algorithm may then find different local optima in different realizations. This is possible, as Q-learning is guaranteed to find the globally optimal strategy only given an infinite number of learning samples; with finite samples, this is not guaranteed. Because of this issue with the inference of the behavioral strategies, we do not discuss the inferred strategies in detail, but only report results that we were able to repeat reliably.

Study 2. ABC in Model Development
---------------------------------

We next demonstrate how ABC can be used in the model improvement cycle, where new models are proposed and compared.
As a baseline, we start with the model introduced in Study 1, to which we add features to fix the issues we observed in Study 1. We show that with ABC multiple different models can be conditioned on the same observation data, in order to compare their predictions and (compatible) parameter estimates. Doing the same manually would be very laborious. The model variants we propose are as follows:

- **Variant 1: Chen et al. [@chen2015emergence] model $+$ selection latency**: The agent incurs a delay $d_{sel}$ when selecting an item.

- **Variant 2: Variant 1 $+$ immediate menu recognition**: The agent is able to recognize the menu based on the first item with probability $p_{rec}$.

- **Variant 3: Variant 2 $+$ larger foveated area**: The agent can perceive the semantic relevance of the neighboring items (above and below the fixated item) through peripheral vision with probability $p_{sem}$.

**Variant 1:** We first observed that both the TCT and the recorded fixation duration are longer when the target item is present. We hypothesized that the user might have had to spend some time confirming her judgment of the target item and physically making the selection using the pointer. To allow the model to capture this behavior, we added an additional delay, $d_{sel}$, for the selection action. For example, the mathematical model of Bailly et al. [@bailly2014model] implements a similar selection latency. **Variant 2:** We observed that some of the users were able to decide that the target item was not present in the menu using just one fixation on the menu. Our hypothesis was that the users were able to memorize some of the menus, allowing them to naturally finish the task much faster when they recalled the menu layout. To capture this behavior, we allowed the agent to instantly observe the full menu during the first fixation, with probability $p_{rec}$.
**Variant 3:** We also observed in Study 1 that the inferred number of fixations was in both cases larger than in the observation data. The models predicted on average 6.0 fixations when the target was absent (ground truth was 1.9) and 3.1 when the target was present (ground truth was 2.2). Our hypothesis was that the user might have observed the semantic relevance of neighboring items using peripheral vision, allowing her to finish the task with a smaller number of fixations. The model of Chen et al. [@chen2015emergence] had a peripheral vision component, but it only applied to size-related information (shape relevance). Our hypothesis is also justified by the experiment setup of Bailly et al. [@bailly2014model], where the neighboring items do fall within the fovea (2 degrees), thus making it physiologically possible for the user to observe the semantic relevance of the neighboring items. To capture this behavior, we allowed the agent to observe the semantic relevance of the items above and below the fixated item in the menu with probability $p_{sem}$ (independently for the item above and below). *Implementation:* In order to be able to do inference on these new parameters, we only needed to make small additions to the simulator code: add an interface for setting the values of these new parameters and implement the described changes in the model. On the ABC side, we only described the names and priors of the new parameters, and increased the number of locations at which to simulate. More locations are justified, as each new parameter increases the size of the parameter space that needs to be searched. We also noticed in Study 1 that the models were not able to replicate the behavior well in both menu conditions (target present, absent) at the same time. For this reason, we made a small adjustment to the discrepancy function, so that the TCT is compared in the two menu conditions separately. This should allow the models to better replicate the full observed behavior.
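One simple way such a per-condition discrepancy could be written is sketched below. The combination of mean and standard deviation differences is an illustrative choice on our part; the exact summary statistics and weighting used in the experiments are described in the Appendix:

```python
import numpy as np

def condition_discrepancy(tct_obs, tct_sim):
    """Distance between observed and simulated task completion times
    (TCTs) within one menu condition: difference in mean plus
    difference in standard deviation."""
    return (abs(np.mean(tct_obs) - np.mean(tct_sim))
            + abs(np.std(tct_obs) - np.std(tct_sim)))

def discrepancy(obs_absent, obs_present, sim_absent, sim_present):
    # Compare the target-absent and target-present conditions
    # separately, then combine, so the model cannot trade accuracy
    # in one condition against the other.
    return (condition_discrepancy(obs_absent, sim_absent)
            + condition_discrepancy(obs_present, sim_present))

# Toy check: identical data give zero discrepancy; shifted data do not.
obs_a, obs_p = [0.4, 0.5, 0.6], [0.9, 1.0, 1.1]
print(discrepancy(obs_a, obs_p, obs_a, obs_p))                     # 0.0
print(discrepancy(obs_a, obs_p, [t + 0.3 for t in obs_a], obs_p))  # ~0.3
```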
Further details are provided in the Appendix.

### Results

The predictions made by the different models, compared to the observation data, are visualized in Figure \[fig:exp2-results-1\]. With increased model complexity, we also see increasing agreement of the predictions with the observation data. This is partly expected, as more complex models are in general able to fit any dataset better. However, with the use of priors, we are able to regularize the parameter values to reasonable ranges, and thus avoid over-fitting the models to the data. The baseline model was not able to predict the behavior of the user very well on many of the variables. The MAP value for $f_{dur}$ (fixation duration) was 210 ms. The TCTs predicted by the baseline model were \[1500 ms (abs), 770 ms (pre)\], whereas the ground truth was \[490 ms (abs), 970 ms (pre)\]. The predicted fixation duration was 210 ms, which is still reasonable, although on the low side, compared to the observed means \[230 ms (abs), 420 ms (pre)\]. Furthermore, the predicted number of fixations on menu items was \[6.0 (abs), 3.1 (pre)\], whereas the users only performed \[1.9 (abs), 2.2 (pre)\] fixations. Variant 1 improved predictions over the baseline. The MAP value for the normal $f_{dur}$ was 170 ms and for $d_{sel}$ (selection delay) 320 ms. The predicted TCTs were \[1300 ms (abs), 1000 ms (pre)\], which is already a very reasonable estimate when the target is present, although still far from the truth when the target is absent. The predicted fixation durations (now with the selection delay factored in) were \[170 ms (abs), 270 ms (pre)\], which is an improvement over the baseline in the target-present condition, but not in the target-absent condition. The predicted numbers of fixations were nearly identical to the baseline. Variant 2 again improved predictions over both the baseline and Variant 1. The MAP value for $f_{dur}$ was 290 ms, for $d_{sel}$ 300 ms, and for $p_{rec}$ (probability of recall) 87 %.
The predicted TCTs were \[570 ms (abs), 980 ms (pre)\], which is the first time we were able to predict a lower TCT for the target-absent case. However, the variation in TCT when the target is absent is quite large; the predicted standard deviation was 660 ms, whereas the ground truth was 300 ms. The predicted fixation durations were \[290 ms (abs), 430 ms (pre)\], which is already close to the ground truth in the target-present condition. The predicted numbers of fixations were \[1.8 (abs), 2.1 (pre)\], a considerable improvement over the previous estimates. Variant 3 provided further, slight improvements over the previous results. The MAP value for $f_{dur}$ was 280 ms, for $d_{sel}$ 290 ms, for $p_{rec}$ 69 %, and for $p_{sem}$ (the probability of observing semantic similarity with peripheral vision) 93 %. The predicted TCTs were \[640 ms (abs), 1000 ms (pre)\], which is slightly further from the observations than with Variant 2. However, the variation in the distributions is closer to the observed values than with Variant 2 (the discrepancy measure led ABC to minimize both the difference in mean and in standard deviation at the same time; details in the Appendix). The predicted fixation durations were similar to those of Variant 2. The predicted numbers of fixations were \[2.0 (abs), 2.2 (pre)\], which is slightly better than with Variant 2. We conclude that we were able to fit multiple model variants to the same observation data and, because of this, make meaningful comparisons between the different models. The quality of the predictions increased as we added our additional assumptions to the model; this was expected, as the models became more flexible, but it also provides evidence that these features probably reflect actual user behavior.
Furthermore, ABC proved useful in hypothesis comparison, as we avoided manually trying out a large number of different parameter values to find ones that lead to reasonable predictions.

Study 3. ABC and Individual Differences
---------------------------------------

Most modeling research in HCI aims at understanding general patterns of user behavior. However, understanding how individuals differ is important for both theoretical and practical reasons. On the one hand, even seemingly simple interfaces like input devices show large variability in user behavior. On the other hand, adaptive user interfaces and ability-based design rely on differentiating users based on their knowledge and capabilities. Our final case looks at the problem of individual differences in inverse modeling. In Study 3 we select a group of users and fit an individual model for each of them. We then compare how accurate the predictions of these individual models are, relative to the same model fitted with the data from all of the users in the dataset (a population-level model). We selected a representative set of 5 users for Study 3. We first selected all users from the dataset of whom there were 15 or more observations in each menu condition (target absent, present), leaving 11 users. We then ordered the users by their difference in TCT from the population mean, summed over both menu conditions. To get a good distribution of different users, we selected the users who were the farthest (S8), third-farthest (S5), and fifth-farthest (S23) from the population mean, as well as those closest (S19) and third-closest (S18) to it. The model we used in this study, for both individual and population-level modeling, corresponded to Variant 3 from the previous section. To simplify the analysis, here we only infer the values of two of the parameters for each user, keeping the rest fixed.
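The selection procedure above can be sketched as follows (the data layout — per-user and population mean TCTs keyed by menu condition — is an assumption for illustration):

```python
def rank_users_by_deviation(user_tcts, population_tcts):
    # Order users by their distance from the population mean TCT,
    # summed over both menu conditions ('abs' = target absent,
    # 'pre' = target present), farthest first.
    def deviation(user):
        return sum(abs(user_tcts[user][c] - population_tcts[c])
                   for c in ('abs', 'pre'))
    return sorted(user_tcts, key=deviation, reverse=True)

def select_representative(ranked):
    # Farthest, third- and fifth-farthest, plus the closest and
    # third-closest users from the ranking.
    return [ranked[0], ranked[2], ranked[4], ranked[-1], ranked[-3]]
```

Ranking once and indexing into both ends of the sorted list gives a spread of users without a second pass over the data.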
The inferred parameters were $p_{rec}$ and $p_{sem}$. Based on Study 2, it seemed to us that there was less variation in $f_{dur}$ and $d_{sel}$, whereas the use of memory and the acuity of peripheral vision could plausibly vary more between individuals. We fixed the value of $f_{dur}$ to 280 ms and $d_{sel}$ to 290 ms, according to the MAP estimates from Study 2. For each of the selected users, we collected all of the observations of that user from the dataset, and conditioned the parameter values of the individual model for that user on that small dataset. The parameter values of the population-level model were the same as inferred in Study 2 for Variant 3. The accuracy of the predictions made for each user by their individual model was compared with the predictions made by the population-level model. In the comparison, we considered the predicted TCTs and numbers of fixations in each condition against the observed values, and report the magnitude (absolute value) of the prediction errors.

### Results

The predicted MAP parameter values are collected in Table \[tab:exp3-table\]. The individual model parameter values deviate around $\pm$10 percentage points from the population-level model parameter values, which is a reasonable magnitude for individual variation. We calculated the magnitude of the prediction errors for all of the models by taking the absolute difference between the model-predicted means and the observed data means for each feature. The prediction errors of the population-level model on the population data and on individual user data are shown in Figure \[fig:exp3-results-1\]. Overall, the prediction errors of a population-level model tend to be larger for individual users than for the whole population. This shows that population-level models that are good at explaining population-level dynamics may perform badly when used to explain subject-level dynamics.
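The error metric described above reduces to a per-feature absolute difference of means (feature names below are illustrative):

```python
def prediction_errors(predicted_means, observed_means):
    # Magnitude (absolute value) of the difference between the
    # model-predicted mean and the observed mean, per feature.
    return {feature: abs(predicted_means[feature] - observed_means[feature])
            for feature in observed_means}
```

Applied to the baseline TCTs from Study 2 (predicted 1500/770 ms vs observed 490/970 ms), this yields errors of 1010 ms and 200 ms for the absent and present conditions respectively.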
Furthermore, as could be expected, the prediction errors of a population-level model tend to be larger for users who differ more from the population mean. This presents a clear motivation for developing individual models, as they could help to understand subject-level dynamics, especially for users who differ from the population mean. The prediction errors of the individual models on individual user data are shown in Figure \[fig:exp3-results-2\]. Overall we observe a rather consistent quality in the predictions made by the individual user models. The only exception is user S8, who was the farthest away from the mean. User S8 likely performed the task in a very different way from the rest of the users. For example, the number of fixations taken by this user when the target was absent was 3.1, but only 2.7 when the target was present. This could indicate that the user was unusually careful in examining the menu before declaring that the target was not present. Improvements in prediction error magnitude when changing from the population-level model to an individual model are shown in Figure \[fig:exp3-results-3\]. The overall trend is that individual user models improve prediction quality, although not always in every respect. With most users the prediction errors decreased in at least three of the four predicted features. We conclude that by using ABC we were able to fit CR models to data from individual users, and that the resulting individual models produced better predictions than a population-level model fitted to the whole participant pool. Performing this modeling task would not have been possible by simply choosing values from the literature, as such information tends to apply only to population-level models. On the other hand, choosing the parameter values manually for each user would have required a considerable amount of manual labour, which ABC was able to automate.
Moreover, inverse modeling helped us expose a behavioral pattern that was not well explained by the model (user S8).

  [**Model**]{}   [**$p_{rec}$**]{}   [**$p_{sem}$**]{}
  --------------- ------------------- -------------------
  S5              61 %                89 %
  S8              54 %                87 %
  S18             70 %                96 %
  S19             76 %                91 %
  S23             73 %                92 %
  POP             69 %                93 %

  : MAP estimates of parameter values for individual models (S5, S8, S18, S19, S23) and the population level model (POP) in Study 3.[]{data-label="tab:exp3-table"}

Discussion and Conclusion
=========================

We have demonstrated that ABC is applicable to the inverse modeling of computationally rational models of complex human behavior based on aggregate behavioural data. We highlighted the advantages ABC has over alternative methods for inferring model parameter values in HCI. First, the method is applicable to a wide range of models, as it relies on only a few assumptions. Second, the parameter value estimates are conditioned on both the observation data and any prior knowledge the researcher might have of the situation. This helps avoid over-fitting the model to the observation data, which could happen if we only tried to maximize the model's ability to replicate the data. Third, the inference process produces a full posterior distribution over the parameter space, instead of only a point estimate, allowing for a better analysis of the reliability of the estimates. In Study 1 we demonstrated that ABC was able to achieve a better model fit than setting the model parameter value based on the literature and manual tuning. We also identified problems with the existing state-of-the-art model for visual search [@chen2015emergence], related to both the quality of the predictions and convergence issues. In Study 2 we demonstrated the applicability of ABC in model comparison by fitting four different models to the same dataset and comparing the resulting predictions and inferred model parameter values.
We also proposed improvements to the existing state-of-the-art model, and demonstrated that they resulted in improved quality of predictions. In Study 3 we demonstrated that with ABC it is possible to fit one of the models from Study 2 to data collected from a single individual, thus creating an individual model. We further demonstrated that the predictions made by the individual models were better than those of a model fitted to a large amount of population-level data. Together, these contributions help address a substantial problem in understanding interactive behaviour that has been evident in HCI and Human Factors for more than 15 years [@kieras2000modern]: how to estimate model parameter values given the strategic flexibility of the human cognitive system [@Howes2009; @kieras2000modern; @lewis2014computational]. One consequence of strategic flexibility has been to make it difficult to test theories of the underlying information-processing architecture, because behaviour that is merely strategic can be mistakenly taken as evidence for one or another architectural theory or set of architectural parameters [@Howes2009]. ABC, and inverse modeling methods in general, address this problem by establishing a principled mathematical relationship between the observed behaviour and the model parameter values. In the future, inverse modeling might provide a general framework for implementing adaptive interfaces that are able to interpret user behavior so as to determine individual preferences, capabilities, and intentions, rather than merely mapping actions directly to effects. In summary, we consider ABC to provide ample opportunities for widespread research activity, both in HCI applications and as a core inference methodology for solving the inverse problems arising in research.
Acknowledgments
===============

This work has been supported by the Academy of Finland (Finnish Centre of Excellence in Computational Inference Research COIN, and grants 294238, 292334) and TEKES (Re:Know). AH was supported by the EU-funded SPEEDD project (FP7-ICT 619435). AO was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 637991). Computational resources were provided by the Aalto Science IT project.

Appendix: ABC BOLFI Implementation
==================================

We implemented BOLFI in Python with the following details. We used a Gaussian process (GP) model from the GPy Python library to model the discrepancy. The kernel was Matern 3/2 with variance 0.01, scale 0.1, and noise variance 0.05. The first $N_{init}$ sample locations were drawn from the quasi-random Sobol sequence, with $N_{init}$ equal to the number of CPU cores allocated for the job. The remaining sample locations were decided as follows. We created a function that computed the lower confidence bound (LCB) for the GP: $LCB(x) = \mu_{GP}(x) - b\,\sigma_{GP}(x)$, with $b = 1.0$. For asynchronous parallel sampling, we needed a way to acquire multiple locations that were reasonable, but also sufficiently well apart. For this purpose we created a function that calculated the sum of radial-basis function kernels centered at the locations $P$ currently being sampled: $R(x) = \sum_{p \in P} a \exp\left(-(x-p)^2/l\right)$, with $a = 0.04$ and $l = 0.04$. The next sample location was then chosen by minimizing the acquisition function, $x_{next} = \arg\min_x \left[LCB(x) + R(x)\right]$. Additionally, there was a 10 % chance of the location being drawn from the prior instead of the acquisition function. **Study 1:** The model was trained with Q-learning for 20 million training episodes, after which we simulated 10,000 episodes for visualizing the behavior predicted by the trained model.
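A one-dimensional sketch of this acquisition scheme (the actual implementation operates on the multi-dimensional GP posterior from GPy; `gp_mean` and `gp_std` below stand in for the surrogate model):

```python
import math

def lcb(mu, sigma, b=1.0):
    # Lower confidence bound of the GP surrogate at a point.
    return mu - b * sigma

def repulsion(x, pending, a=0.04, l=0.04):
    # Sum of RBF bumps centred on the locations currently being
    # sampled; note the negative exponent, so each bump decays away
    # from its centre and penalises re-sampling nearby points.
    return sum(a * math.exp(-(x - p) ** 2 / l) for p in pending)

def acquire(candidates, gp_mean, gp_std, pending, b=1.0):
    # Next sample location: the candidate minimising LCB plus the
    # repulsion penalty (a grid search stands in for a continuous
    # optimiser here).
    return min(candidates,
               key=lambda x: lcb(gp_mean(x), gp_std(x), b) + repulsion(x, pending))
```

With an empty `pending` set this reduces to plain LCB minimisation; the repulsion term only matters while parallel workers are still evaluating their locations.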
The observation data has the target item absent in 10 % of the sessions, but in the Chen et al. paper [@chen2015emergence] it was assumed that it was absent in 50 % of the cases. We tried both splits in the training data (10 % and 50 %), but did not find a large overall difference in the results. In the subsequent experiments, we also used the 10 % split, as it might remove a possible source of bias. Our prior for $f_{dur}$ was a truncated Gaussian distribution with mean 300 ms, std 100 ms, min 0 ms, max 600 ms. The prior was set with the intuition that values between 200 ms and 400 ms should be likely ($\pm$ 1 std), whereas values between 100 ms and 500 ms could still be accepted if the data really supported those values ($\pm$ 2 std). BOLFI computed the discrepancy at 100 locations using 40 CPU cores. Of the 10,000 simulated episodes, we only used the first 2,500 for calculating the discrepancy, as it is more sensible to compare datasets of similar size. Altogether the model fitting took 20 h (in wall-clock time), each individual sample taking 6 h. The discrepancy was based on the mean and standard deviation of the aggregate task completion time. It was constructed so that it would fit the mean accurately (L2 penalty) and the standard deviation with lower priority (L1 penalty). The formula was: $d = a\,(\mathrm{mean}_{obs} - \mathrm{mean}_{sim})^2 + b\,|\mathrm{std}_{obs} - \mathrm{std}_{sim}|$, where we used $a = b = 10^{-6}$ to achieve a reasonable scale; the feature used was the aggregate TCT. **Study 2:** Our prior for $d_{sel}$ was a truncated Gaussian distribution with mean 300 ms, std 300 ms, min 0 ms, max 1000 ms. 300 ms was selected as our initial best guess for the delay, as the second peak in the observed fixation duration when the target was present (Figure \[fig:exp1-results-1\]) was around 600 ms, and we thought it likely that the normal fixation duration was around 300 ms. However, as we had relatively high uncertainty about this, we chose a quite flat prior.
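The discrepancy formula and the truncated-Gaussian prior can be sketched as follows (rejection sampling is our stand-in for the prior; the paper does not state its sampler, and in practice SciPy's `truncnorm` would be the usual choice):

```python
import random

def discrepancy(mean_obs, std_obs, mean_sim, std_sim, a=1e-6, b=1e-6):
    # L2 penalty on the mean (fit accurately), L1 penalty on the
    # standard deviation (lower priority), as described above.
    return a * (mean_obs - mean_sim) ** 2 + b * abs(std_obs - std_sim)

def sample_truncated_gaussian(mean, std, lo, hi, rng=random):
    # Draw from a Gaussian truncated to [lo, hi] by simple rejection
    # sampling (fine here, since the bulk of the mass lies inside the
    # bounds for the priors used in this paper).
    while True:
        x = rng.gauss(mean, std)
        if lo <= x <= hi:
            return x
```

For example, the baseline's Study 2 numbers (predicted mean 1500 ms / std 660 ms against observed 490 ms / 300 ms) give a discrepancy dominated by the squared mean term.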
Our priors for $p_{rec}$ and $p_{sem}$ were uniform distributions with \[min 0, max 1\]. Uninformative priors were used as we were uncertain about the possible true values of these parameters. The discrepancy was the average of $d(TCT_{pre})$ and $d(TCT_{abs})$. As the parameter space sizes varied, we chose the number of samples and CPUs for each case separately. Baseline: 100 samples, 40 CPUs (20 h). Variant 1: 200 samples, 80 CPUs (16 h). Variant 2: 400 samples, 80 CPUs (30 h). Variant 3: 600 samples, 100 CPUs (37 h). **Study 3:** The prior for $p_{rec}$ was a truncated Gaussian distribution with \[mean 69 %, std 20 %, min 0 %, max 100 %\]. The prior for $p_{sem}$ was similar but with mean 93 %. The priors were based on the knowledge gained from Study 2, and thus centered on the MAP estimate of Variant 3, but were reasonably flat to allow for individual variation. The discrepancy was the same as in Study 2. Out of the total 10,000 simulated sessions, we used the first 200 for calculating the discrepancy to match the individual dataset sizes. For each of the users, we computed 200 samples using 60 CPUs (22 h each).

[^1]: For simpler models, such as regression models (e.g., Fitts’ law), there exist well-known methods for finding parameter values, such as ordinary least squares.

[^2]: In technical terms, such models generally do not have a *likelihood function*—defining the likelihood of parameter values given the observations—that could be written in closed form.

[^3]: The naive way to use ABC would be to simulate a large number of samples densely covering the parameter space and keep those that have the lowest discrepancy values. This method is also known as Rejection ABC. However, as in our case the simulations take multiple hours each, this approach has an infeasible total computation time.

[^4]: For example, the saccade duration parameters were set based on a study by Baloh et al.
[@baloh1975quantitative] and the fixation duration parameters based on a study by Brumby et al. [@Brumby:2014].

[^5]: The only technical difference between the original and our implementation was that in the original [@chen2015emergence] Q-learning was performed on a predetermined set of 10,000 menu realizations, whereas we generated a new menu for every training session. The original implementation thus converged slightly faster, as it explored a smaller part of the state space.
---
abstract: |
    We report the detection of the Pb [I]{} 4057.8Å line in the very metal-poor (\[Fe/H\]$=-2.7$), carbon-rich star, LP 625-44. We determine the abundance of Pb (\[Pb/Fe\] $ = 2.65$) and 15 other neutron-capture elements. The abundance pattern between Ba and Pb agrees well with a scaled solar system [*s*]{}-process component, while the lighter elements (Sr-Zr) are less abundant than Ba. The enhancement of [*s*]{}-process elements is interpreted as a result of mass transfer in a binary system from a previous AGB companion, an interpretation strongly supported by radial velocity variations of this system. The detection of Pb makes it possible, for the first time, to compare model predictions of [*s*]{}-process nucleosynthesis in AGB stars with observations of elements between Sr and Pb. The Pb abundance is significantly [*lower*]{} than the prediction of recent models [e.g., @gallino98], which succeeded in explaining the metallicity dependence of the abundance ratios of light [*s*]{}-elements (Sr-Zr) to heavy ones (Ba-Dy) found in previously observed [*s*]{}-process-enhanced stars. This suggests that one should either (a) reconsider the underlying assumptions concerning the $^{13}$C-rich [*s*]{}-processing site ($^{13}$C-pocket) in the present models, or (b) investigate alternative sites of [*s*]{}-process nucleosynthesis in very metal-poor AGB stars.
author:
- 'Wako Aoki, John E. Norris, Sean G. Ryan, Timothy C. Beers, Hiroyasu Ando'
title: |
    Detection of Lead in the Carbon-Rich, Very Metal-Poor Star LP 625-44:\
    A Strong Constraint on [*s*]{}-Process Nucleosynthesis at Low Metallicity
---

Introduction {#sec:intro}
============

The slow neutron-capture process ([*s*]{}-process) is considered one of the major pathways for the creation of nuclei heavier than iron, and the asymptotic giant-branch (AGB) phase of low- and intermediate-mass stars has been studied as its most likely astrophysical site.
One important component in understanding [*s*]{}-process nucleosynthesis is the correct identification of the neutron sources involved. Two reactions – $^{22}$Ne($\alpha,n$)$^{25}$Mg and $^{13}$C($\alpha,n$)$^{16}$O – have received most attention. Recent models of AGB stars prefer $^{13}$C as the main source, because the temperature of the He burning shell hardly reaches the $3\times 10^{8}$ K required for the $^{22}$Ne($\alpha,n$)$^{25}$Mg reaction [e.g., @gallino98]. This is supported by the observed metallicity dependence of the abundance ratios of heavier [*s*]{}-process elements (e.g., Ba, Nd) to lighter ones (e.g., Sr, Zr) found in [*s*]{}-process-enhanced objects such as MS- and S-type stars [@smith90], barium stars [@luck91] and CH stars [@vanture92; @norris97a]. While the seed nuclei for the [*s*]{}-process, such as iron, are secondary (i.e., their abundances are proportional to metallicity), the production of $^{13}$C in AGB stars is primary, in contrast to that of $^{22}$Ne. Therefore, higher neutron exposure, and thus larger enhancement of the heavier elements, is expected from $^{13}$C in stars of lower metallicity. Models of nucleosynthesis in AGB stars by @gallino98, followed by @busso99, successfully reproduced the observed trend for lighter elements (Sr-Zr), as well as for heavier ones (Ba-Gd). For stars of very low metallicity, according to these models, a large excess of lead (Pb) is expected. For instance, the enhancement of Pb by two or three orders of magnitude relative to that expected for solar-abundance stars is predicted for AGB stars with \[Fe/H\] $=-2.0$ [^1], while that of Ba is at most one order of magnitude [@busso99]. Thus Pb in metal-poor, [*s*]{}-process-enhanced stars should provide an excellent diagnostic for models of [*s*]{}-process nucleosynthesis in AGB stars. However, the abundance of Pb is difficult to measure in most stars.
Lead abundances for the metal-poor stars HD 115444 and HD 126238 were derived from [*Hubble Space Telescope*]{} ultraviolet spectra [@sneden98], but Pb has not yet been detected in the optical spectra of these objects. The @sneden00 analysis of a high-S/N Keck HIRES spectrum of the [*r*]{}-process-rich star CS 22892-052 detected Pb [I]{} lines in the visual spectrum of this star, and derived its abundance. We note that the Pb observed in all three of these stars is attributed primarily to the [*r*]{}-, rather than the [*s*]{}-process, due to the strong enhancements of other [*r*]{}-process-dominated nuclei, such as Eu. One study of the [*s*]{}-process in a solar-metallicity star, by @gonzalez98, reports the Pb abundance of the post-AGB star FG-Sge, based on an analysis of the Pb [I]{} 7229Å line. In this [*Letter*]{} we report the detection of Pb [I]{} 4057.8Å and derive a Pb abundance in the carbon-rich, very metal-poor star LP 625-44. This object was shown by @norris97a to exhibit very large excesses of carbon, nitrogen, and neutron-capture elements. Their interpretation was that the large excesses of heavy elements were likely to have originated from [*s*]{}-process nucleosynthesis in an AGB binary companion which provided LP 625-44 with carbon-rich material by mass transfer. The updated abundance pattern (see §\[sec:ana\]) and variation of radial velocity (see §\[sec:obs\]) reported in the present work make this interpretation quite convincing. The determination of a Pb abundance for this star (§\[sec:ana\]) provides the opportunity, for the first time, to test models of nucleosynthesis in AGB stars for [*s*]{}-process elements between Sr and Pb (§\[sec:disc\]).

Observations and Measurements {#sec:obs}
=============================

A high-resolution spectrum of LP 625-44 was obtained with the University College London coudé échelle spectrograph (UCLES) and Tektronix 1024$\times$1024 CCD at the Anglo-Australian Telescope on August 5, 1996.
The wavelength range 3700–4720 Å was covered with resolving power $R \sim 40,000$. We also obtained a red spectrum (5015–8500 Å) with the same instrument on June 16, 1994. The numbers of detected photons are 12000 per 0.04Å pixel at 4300Å ($S/N \sim 150$ per resolution element) and 2000 per 0.06Å pixel at 6000Å ($S/N \sim 60$ per resolution element). Data reduction was performed in the standard way within the IRAF[^2] environment. Equivalent widths were measured by fitting Gaussian profiles to the absorption lines, and will be reported in @aoki00. The error for lines weaker than 60 mÅ, determined from the comparison of two measurements of lines which appear on adjacent échelle orders, was about 4 mÅ and 6 mÅ in the blue and red spectra, respectively. There is no systematic difference between the equivalent widths of Fe [I]{} lines measured in this work and those by @norris96, even though our $S/N$ is substantially higher. Additional spectra were obtained on 1998 August 11 and 2000 January 26, the former using UCLES, and the latter with the Utrecht échelle spectrograph (UES) on the William Herschel Telescope (WHT). Both have lower S/N than is necessary for an abundance analysis, the sole aim being to measure radial velocities. In each case, HD 140283 was also observed to provide a template for cross-correlation. That star has a similar metallicity but is free of the CH blends that affect many lines in LP 625-44. Radial velocities for LP 625-44 were obtained relative to HD 140283 by cross-correlation, and by measuring the radial velocity of HD 140283 from the central wavelengths of 175 (1998) and 122 (2000) unblended lines. Error estimates were based on the variation in velocity from different échelle orders and from the standard error in the measurement of HD 140283. 
The heliocentric values, which extend those presented by @norris97a, are: HJD 2451037.00: $v_{\rm rad}$ = 33.5$\;\pm\;$0.2 (1$\sigma$) km s$^{-1}$; and HJD 2451569.80: $v_{\rm rad}$ = 30.0$\;\pm\;$0.3 (1$\sigma$) km s$^{-1}$. @ryan99 estimated external errors of 0.3 km s$^{-1}$ for a similar procedure; this has been added to the internal errors for Fig. \[fig:rv\]. The data confirm that LP 625-44 is a binary with a period of at least 12 years.

Abundance Analysis and Results {#sec:ana}
==============================

In the region near the Pb [I]{} line at 4058Å, line-blending is so severe that the Pb abundance was derived by spectrum synthesis. This method was also applied to lines which are affected by blending and/or hyperfine splitting. The standard analysis, based on the equivalent widths, was applied to single (unblended) lines. The abundance analysis used model atmospheres in the ATLAS9 grid of @kurucz93a. We adopted an effective temperature $T_{\rm eff}=5500$K, determined by @norris97a from the $R-I$ color. This color is not severely affected by strong carbon and nitrogen features in stars of this temperature and abundance [@aoki00]. Surface gravity ($\log g$), metallicity, and microturbulent velocity ($\xi$) were re-determined in the present work. The surface gravity was obtained from the ionization balance between Fe [I]{} and Fe [II]{}, the metallicity was estimated from the abundance analysis of those lines, and $\xi$ was determined from the Fe [I]{} lines by demanding no dependence of the derived abundance on equivalent widths. The results are: $\log g=2.8$, $\xi=1.6$ km s$^{-1}$, and \[Fe/H\] $=-2.72$. The agreement with the results of @norris97a is good, with the exception of the microturbulent velocity, for which they derived 1.0 km s$^{-1}$. Pb lines are difficult to measure in optical stellar spectra. Even for the sun, only four Pb [I]{} lines (3639Å, 3683Å, 3739Å, and 4057Å) have been studied in the visual region.
From these, @youssef89 determined the abundance $\log\epsilon_{\rm Pb}=2.0$, which agrees fairly well with meteoritic measurements [$\log\epsilon_{\rm Pb}=2.06;$ @grevesse96]. Our spectra covered the lines at 3739.9Å and 4057.8Å, listed in Table \[tab:line\]. They are expected to be weak, and no clear absorption appears in the solar spectrum. However, Pb [I]{} 4057.8Å was clearly detected in LP 625-44, as shown in the upper panel of Fig. \[fig:pb\], where the synthetic spectra fitted to the observed data are also shown. In spite of the presence of other lines, the contribution of Pb [I]{} is remarkable. To check possible contamination of the Pb region by other elements, we examined the spectrum of HD 140283, a very metal-poor subgiant ($T_{\rm eff}=5750$ K, $\log g=3.4$ and \[Fe/H\] $=-$2.54, Ryan et al. 1996). We found no distinct absorption feature at 4057.8Å [see Fig. 1b in @norris96]. As a further check for contamination due to CH and CN, the observed and synthetic spectra of CS 22957-027 are shown in the lower panel of Fig. \[fig:pb\]. @norris97b showed that this very metal-poor giant ($T_{\rm eff}=4850$K, $\log g=1.9$ and \[Fe/H\] $=-3.38$) has very large excesses of $^{12}$C, $^{13}$C and N but no excess of heavy elements. This spectrum indicates that the absorption feature at 4057.8Å in LP 625-44 is [*not*]{} due to CH and CN lines. To check our procedure for the determination of the Pb abundance, we also analyzed the solar spectrum [@kurucz93b] using a solar photospheric model [@holweger74]. Our result agrees very well with that of @youssef89 for Pb [I]{} 3683Å, which is the clearest Pb [I]{} line in the optical range. This demonstrates the reliability of the basic data (e.g., partition functions) and the software used in our analysis. (Line contamination is so severe at 4057.8Å in the solar spectrum that the exact abundance cannot be derived from this line.)
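As a reminder of the bracket notation used throughout (a standard definition, not specific to this work), for any element X,

$$[\mathrm{X/H}] = \log\epsilon_{\rm X}^{\rm star} - \log\epsilon_{\rm X}^{\odot}, \qquad [\mathrm{X/Fe}] = [\mathrm{X/H}] - [\mathrm{Fe/H}].$$

With the solar $\log\epsilon_{\rm Pb}=2.0$ quoted above and \[Fe/H\] $=-2.72$ for LP 625-44, an abundance ratio of \[Pb/Fe\] $=2.65$ corresponds to a stellar $\log\epsilon_{\rm Pb} = 2.0 - 2.72 + 2.65 = 1.93$.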
An abundance ratio \[Pb/Fe\] $= 2.65$ was derived for LP 625-44 from a comparison between the synthetic spectra and the observed one. The other Pb [I]{} line covered by our spectrum is at 3739.9Å, but there is no distinct feature at this wavelength. Hence, we derive an upper limit on the abundance ratio, \[Pb/Fe\] $<+3.2$, from this non-detection, which supports the Pb [I]{} 4057.8Å result (\[Pb/Fe\] $=2.65$). This upper limit is important, as in the next section we show that this Pb abundance is [*lower*]{} than predicted by some models of $s$-process nucleosynthesis in very metal-poor AGB stars. Our derived abundances for the heavy elements are similar to the results presented by @norris97a. Abundances of Er, Tm and Hf, not previously known in this star, could also be determined due to the better quality of the new spectra. All new results are given in Table \[tab:res\]. The line data used in the analysis will be compiled in @aoki00. Errors in our estimated abundances were obtained as follows. Errors arising from uncertainties in the atmospheric parameters were evaluated by adding in quadrature the individual errors on the parameters – $\Delta T_{\rm eff}=100$K, $\Delta \log g=0.3$, and $\Delta \xi=0.5$km s$^{-1}$. The internal errors were estimated by assuming the random error in the equivalent width measurements to be 4 mÅ (and 6 mÅ for Ba [II]{} in the red region; see §\[sec:obs\]), and taking the random error in less-certain $gf$ values to be 0.1 dex.

Discussion and Concluding Remarks {#sec:disc}
=================================

Fig. \[fig:abundance\] presents derived abundances as a function of atomic species for LP 625-44. The thick solid line indicates the abundance pattern of the main [*s*]{}-process component determined by @arlandini99, while the thin line indicates the [*r*]{}-process component. The dotted line is the total solar abundance adopted by @arlandini99.
We see good agreement between the observed abundances of elements heavier than Ba and the scaled [*s*]{}-process component. This fact, found by @norris97a for Ba to Dy, is now extended to heavier elements and made even more compelling. The excesses of these elements, and their [*s*]{}-process nature, are interpreted as a result of the transfer of material rich in [*s*]{}-process elements within a binary system containing an AGB star. Our new radial velocity measurements confirm the binarity and strengthen this interpretation. Since the excess of heavy elements is very large (e.g., \[Ba/Fe\] $=2.7$), the material from the AGB star should dominate the surface abundances of LP 625-44. Thus, the relative abundances of the heavy elements in this star should provide an almost pure representation of the nucleosynthesis products of the previously existing very metal-poor (\[Fe/H\] $=-2.7$) AGB companion. With the adoption of this interpretation, the abundances in LP 625-44 can be compared with theoretical predictions of nucleosynthesis in AGB stars. @gallino98 and @busso99 showed that, at low metallicity, the metallicity effect on [*s*]{}-process yields favors the production of heavier elements. As found in Fig. \[fig:abundance\], the Sr-Zr enhancement relative to the solar [*s*]{}-component is much smaller than that of the heavy elements (Ba-Hf) in LP 625-44, a result in qualitative agreement with the expected metallicity dependence. The metallicity effect is essentially due to the level of neutron exposure, which is expected to be higher at lower metallicity (see §\[sec:intro\]). Higher neutron exposure necessarily implies larger production of the heaviest [*s*]{}-process element, Pb, in very metal-poor stars.
@busso99 explicitly showed the metallicity effect on the enhancement factors of [*s*]{}-process elements relative to solar abundances for $-3.2 < $ \[Fe/H\] $ < 0.4$ in their Figure 12, where the enhancement factor of Pb is larger by about two orders of magnitude than that of Ba at \[Fe/H\] $= -2.7$. However, the enhancement of Pb in LP 625-44 (\[Pb/Fe\] = 2.65) is [*nearly the same*]{} as that of Ba (\[Ba/Fe\] = 2.74). If the observed Pb abundance of LP 625-44 generally represents the yields from very metal-poor AGB stars, their models of nucleosynthesis in AGB stars may overestimate its production by about two orders of magnitude at very low metallicity. This conflict might be resolved by tuning the models of @gallino98 or @busso99. For instance, the neutron flux can be changed by modifying the extension or chemical profile of the $^{13}$C-pocket, which is a free parameter in their models. Another parameter is the mass of the AGB star, upon which the number of thermal pulses (and hence episodes of neutron exposure) is strongly dependent. Another possibility is that the [*s*]{}-process production mechanism in very metal-poor (e.g., \[Fe/H\]$<-2.5$) AGB stars is quite different from that in more metal-rich stars. The calculation of low-mass stellar evolution in metal-deficient stars by @fujimoto00 showed that hydrogen mixing occurs during the helium shell flash for 1$-$3.5 M$_{\odot}$ stars with \[Fe/H\]$<-2.5$ (their case II’), contrary to the situation for stars with higher metallicity (their case IV). Their result suggests that the production of $^{13}$C, and subsequent [*s*]{}-process nucleosynthesis, is possible in the helium convective region during thermal pulses in these very metal-poor stars. Our detection of Pb in the very metal-poor, carbon- and [*s*]{}-process-rich star, LP 625-44 provides a strong constraint on models of nucleosynthesis in AGB stars. 
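The discrepancy discussed above can be restated in the bracket notation defined in the notes to Table \[tab:res\]. A back-of-envelope sketch, treating "about two orders of magnitude" as a model prediction of \[Pb/Ba\] $\approx +2$ (an assumed round number for illustration, not a value quoted from @busso99):

```python
# Observed ratios for LP 625-44 quoted in the text.
pb_fe = 2.65   # [Pb/Fe]
ba_fe = 2.74   # [Ba/Fe]

# Bracket abundances are logarithmic, so ratios of ratios subtract:
# [Pb/Ba] = [Pb/Fe] - [Ba/Fe]
pb_ba_observed = pb_fe - ba_fe

# "About two orders of magnitude" of extra Pb enhancement over Ba is
# treated here as [Pb/Ba] ~ +2; a rough placeholder for the model value.
pb_ba_model = 2.0
discrepancy_dex = pb_ba_model - pb_ba_observed
print(round(pb_ba_observed, 2), round(discrepancy_dex, 2))  # → -0.09 2.09
```

The observed \[Pb/Ba\] is essentially zero, so under this reading the models overpredict Pb relative to Ba by roughly 2 dex.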
The observed abundance patterns for heavier elements (Ba-Pb) agree well with the solar main [*s*]{}-process component, rather than with nucleosynthesis models for very metal-poor AGB stars. Further observation of [*s*]{}-process elements, including Pb, for objects similar to LP 625-44, and revisions of the theoretical approach to the nucleosynthesis in very metal-poor environments, will impact on our understanding of the evolution of low- and intermediate-mass stars, as well as of the enrichment of heavy elements in the early Galaxy. In this context, we note that we have also measured Pb in the star LP 706-7, which is similar to LP 625-44 in many respects [@norris97a]. That object, whose [*s*]{}-process abundances nevertheless differ from those of LP 625-44, will be discussed separately in a future paper.

We are grateful to the Director and staff of the Anglo-Australian Observatory, and the Australian Time Allocation Committee for providing the observational facilities used in this study. W.A. would like to acknowledge fruitful discussions with T. Kajino. T.C.B. acknowledges partial support of this work from grant AST 95-29454 awarded by the (US) National Science Foundation.

Aoki, W., Norris, J. E., Ryan, S. G., Beers, T. C., Ando, H., 2000, in preparation

Arlandini, C., Käppeler, F., Wisshak, K., Gallino, R., Lugaro, M., Busso, M., Straniero, O., 1999, , 525, 886

Busso, M., Gallino, R., Wasserburg, G. J., 1999, , 37, 239

Fujimoto, M. Y., Ikeda, Y., Iben, I., 2000, , 529, L25

Gallino, R., Arlandini, C., Busso, M., Lugaro, M., Travaglio, C., Straniero, O., Chieffi, A., Limongi, M., 1998, , 497, 388

Gonzalez, G., Lambert, D. L., Wallerstein, G., Rao, N. K., Smith, V. V., McCarthy, J. K., 1998, , 114, 133

Grevesse, N., Noels, A., Sauval, A. J., 1996, in ASP Conf. Ser., 99, Cosmic Abundances, ed. S. S. Holt & G. Sonneborn (Cambridge Univ. Press), 117

Holweger, H., Müller, E. A., 1974, , 39, 19

Käppeler, F., Beer, H., Wisshak, K., 1989, Rep. Prog. Phys., 52, 945

Kurucz, R. L., 1993, CD-ROM 13, ATLAS9 Stellar Atmospheres Programs and 2km/s Grid (Cambridge: Smithsonian Astrophys. Obs.)

Kurucz, R. L., 1993, CD-ROM 18, SYNTHE Spectrum Synthesis Programs and Line Data (Cambridge: Smithsonian Astrophys. Obs.)

Luck, R. E., Bond, H. E., 1991, , 77, 515

Norris, J. E., Ryan, S. G., Beers, T. C., 1996, , 107, 391

Norris, J. E., Ryan, S. G., Beers, T. C., 1997a, , 488, 350

Norris, J. E., Ryan, S. G., Beers, T. C., 1997b, , 489, L169

Ryan, S. G., Norris, J. E., Beers, T. C., 1996, , 471, 254

Ryan, S. G., Norris, J. E., Beers, T. C., 1999, , 523, 654

Smith, V. V., Lambert, D. L., 1990, , 72, 387

Sneden, C., Cowan, J. J., Burris, D. L., Truran, J. W., 1998, , 496, 235

Sneden, C., Cowan, J. J., Ivans, I. I., Fuller, G. M., Burles, S., Beers, T. C., Lawler, J. E., 2000, preprint

Vanture, A. D., 1992, , 104, 1997

Youssef, N. H., Khalil, N. M., 1989, , 208, 271

[cccc]{} Wavelength (Å) & $\chi$ (eV) & $\log gf$ & \[Pb/Fe\]\
3739.940 & 2.66 & $-0.12$ & $<+3.2$\
4057.815 & 1.32 & $-0.20$ & +2.7\

[ccccc]{} Element & \[X/Fe\] & $\log\epsilon_{\rm el}$ & n & $\sigma$\
Fe I (\[Fe/H\]) & $-$2.71 & 4.78 & 34 & 0.13\
Fe II (\[Fe/H\]) & $-$2.70 & 4.79 & 3 & 0.18\
Sr II & +1.15 & 1.37 & 3 & 0.16\
Y II & +0.92 & 0.45 & 2 & 0.12\
Zr II & +1.31 & 1.22 & 4 & 0.12\
Ba II & +2.74 & 2.26 & 3 & 0.20\
La II & +2.50 & 1.02 & 5 & 0.13\
Ce II & +2.27 & 1.20 & 26 & 0.12\
Pr II & +2.45 & 0.55 & 5 & 0.12\
Nd II & +2.22 & 1.00 & 16 & 0.12\
Sm II & +2.20 & 0.48 & 16 & 0.12\
Eu II & +1.97 & $-$0.2 & 2 & 0.20\
Gd II & +2.31 & 0.70 & 6 & 0.13\
Dy II & +1.98 & 0.1 & 4 & 0.2\
Er II & +2.04 & 0.3 & 2 & 0.2\
Tm II & +1.96 & $-0.6$ & 1 & 0.2\
Hf II & +2.76 & 0.8 & 2 & 0.2\
Pb I & +2.65 & 2.0 & 1 & 0.2\

[^1]: \[A/B\]$\equiv\log(N_{\rm A}/N_{\rm B})$ $-\log(N_{\rm A}/N_{\rm B})_{\odot}$, and $\log\epsilon_{\rm A}$ $\equiv\log(N_{\rm A}/N_{\rm H})+12$ for elements A and B

[^2]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation
---
abstract: 'We use the McKendrick equation with variable ageing rate and randomly distributed maturation time to derive a state dependent distributed delay differential equation. We show that the resulting delay differential equation preserves non-negativity of initial conditions and we characterise local stability of equilibria. By specifying the distribution of maturation age, we recover state dependent discrete, uniform and gamma distributed delay differential equations. We show how to reduce the uniform case to a system of state dependent discrete delay equations and the gamma distributed case to a system of ordinary differential equations. To illustrate the benefits of these reductions, we convert previously published transit compartment models into equivalent distributed delay differential equations.'
author:
- |
  Tyler Cassidy$^1$, Morgan Craig$^{2,3}$ and Antony R. Humphries$^{1,3}$\
  $^1$ Department of Mathematics and Statistics, McGill University,\
  805 Sherbrooke Street West, Montreal, H3A 0B9, Canada.\
  $^2$ Département de mathématiques et de statistique, Université de Montréal,\
  2920 chemin de la Tour, Montréal, H3T 1J4, Canada\
  $^3$ Department of Physiology, McGill University,\
  3655 Promenade Sir-William-Osler, Montreal, H3G 1Y6, Canada.
title: A Recipe for State Dependent Distributed Delay Differential Equations
---

Introduction {#Sec:Introduction}
============

Age structured population models have been used extensively in mathematical biology throughout the past 90 years [@Mckendrick1925; @Trucco1965] (see [@Metz1986] for a review). These age structured models describe the progression of individuals through an ageing process using partial differential equations (PDEs) that can, in certain cases, be reduced to a delay differential equation (DDE) [@Metz1986; @Smith1993; @Craig2016].
When individuals exit the ageing process in a deterministic manner upon reaching a threshold maturation age, the age structured model is typically reduced to a discrete DDE. In many populations, the speed at which an individual matures is often only weakly coupled to chronological time and is dynamically controlled by the availability of resources. Consequently, when considering the age of an individual in a population, it is the biological age – and not the chronological age – that is of interest. It is possible to allow for this dynamic accumulation of biological age by including a variable ageing rate in an age structured PDE model. PDE models with variable ageing rates and threshold maturation ages can be reduced to state dependent discrete DDEs. State dependent delays considerably complicate the study of these models, but incorporate external control of the maturation process and increase physiological relevance. However, imposing a threshold maturation age does not account for population heterogeneity and implicitly assumes a homogeneous maturation age. Given the importance of individual differences in a population, intraspecies heterogeneity should be included in mathematical models. In light of these observations, we develop a technique to explicitly incorporate maturation age heterogeneity and external control of age accumulation by providing a framework for state dependent distributed DDEs. State dependent distributed DDEs account for a measure of population heterogeneity not present in discrete DDE models while retaining external control of the ageing process. Therefore, distributed DDEs offer a more physiologically realistic way to model ageing processes in populations [@Cassidy2018]. To derive a state dependent distributed DDE, we consider a general age structured model with a variable ageing rate.
We eschew a deterministic maturation process (which would lead to state dependent discrete DDEs), and instead utilise a randomly distributed maturation age $A$. This random variable defines a density function $K_A(t)$ through $$K_A(t) = \lim \limits_{\Delta t \to 0} \frac{\mathbb{P}\left[t{\leqslant}A {\leqslant}t+\Delta t \right]}{\Delta t}, \label{Eq:DensityDefinition}$$ which satisfies $$\int_0^{\infty} K_A(t) {\mathrm{d}}t = 1 \quad \textrm{and} \quad K_A(t) {\geqslant}0 \quad \forall t {\geqslant}0.$$ As shown by @Craig2016 [@Otto2017] and @Bernard2016, replacing existing discrete delays with state dependent delays requires careful attention to how solutions pass across the maturation boundary. @Craig2016 derived a “correction” factor to ensure that individuals are not spuriously created or destroyed during maturation. Our work generalises the correction factor derived by @Craig2016 for state dependent discrete DDEs to any state dependent DDE. Specifically, our derivation does not rely on a smoothness argument, but arises naturally from the age structured PDE after a careful derivation of the maturation rate. We show how the age structured PDE can be reduced to a state dependent distributed DDE. For specific densities $K_A(t)$, we show equivalence between the state dependent distributed DDE and state-dependent discrete DDEs with one or two delays, or finite dimensional systems of ordinary differential equations (ODEs). These equivalences arise from the explicit consideration of the ageing process modelled by the distributed DDEs. By applying the linear chain technique to the age variable, instead of the time variable, we are able to establish the desired equivalences. As no all-purpose numerical method for solving distributed DDEs is available, these equivalences allow the model to be analysed as a DDE and simulated using the highly efficient established techniques for discrete DDEs or ODEs.
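The normalisation and non-negativity conditions on $K_A(t)$ above can be checked numerically for any candidate density. A minimal sketch for a gamma density (the family treated later for gamma distributed DDEs); the shape and rate values are arbitrary illustrative choices:

```python
import math

def gamma_density(t, shape, rate):
    """Gamma density: rate^shape * t^(shape-1) * exp(-rate*t) / Gamma(shape)."""
    if t <= 0.0:
        return 0.0
    return rate ** shape * t ** (shape - 1) * math.exp(-rate * t) / math.gamma(shape)

# Illustrative parameter choices (not taken from the paper):
shape, rate = 4.0, 2.0
dt, T = 1e-3, 40.0
ts = [i * dt for i in range(int(T / dt) + 1)]
vals = [gamma_density(t, shape, rate) for t in ts]

# Trapezoid rule for the normalisation and the first moment of K_A.
integral = sum(0.5 * (vals[i] + vals[i + 1]) * dt for i in range(len(vals) - 1))
mean = sum(0.5 * (ts[i] * vals[i] + ts[i + 1] * vals[i + 1]) * dt
           for i in range(len(vals) - 1))

assert all(v >= 0.0 for v in vals)           # K_A(t) >= 0
print(round(integral, 4), round(mean, 4))    # → 1.0 2.0
```

The mean recovered here, shape/rate, is the expected maturation age for this density.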
To illustrate the benefits of the techniques developed here, we consider two previously published models of hematopoietic cell production and show how using distributed DDEs can simplify the analysis of the resulting models. The structure of the article is as follows. In Section \[Sec:AgeStructuredPDEReduction\], we study the McKendrick equation for a generic population with a variable ageing rate and random maturation time. By solving the PDE using the method of characteristics, we derive a state-dependent distributed DDE for the general density $K_A(t)$ in Theorem \[Theorem:GeneralDDE\]. We discuss the naturally arising “correction” factor in Section \[Sec:CorrectionFactor\]. To illustrate the benefits of reducing age structured models to DDEs, we show that the resulting DDE preserves non-negativity of initial conditions and characterise the local stability of equilibria in Section \[Sec:AnalysisofDDE\]. By specifying $K_A(t)$ to be the degenerate distribution, we recover a state-dependent discrete DDE in Section \[Sec:StateDependentDDE\]. Next, we consider uniform distributions and the equivalent two delay DDE in Section \[Sec:CompactSupportDistribution\]. In Section \[Sec:GammaDistributedDDE\], we study a gamma distributed DDE. Through a generalization of the linear chain technique to include a variable transit rate, we show how this gamma distributed DDE can be reduced to a finite dimensional system of transit compartment ODEs in Section \[Sec:FiniteDimensionalRepresentation\]. In Section \[Sec:Examples\], we formalize the link between variable transit rate compartment models and state dependent delayed processes by converting two previously published transit compartment models to the corresponding distributed DDEs. Finally, we summarize our results with a brief conclusion.
From McKendrick Type Equations to State Dependent Delays {#Sec:AgeStructuredPDEReduction}
========================================================

Consider a population divided into immature and mature compartments in which only mature individuals reproduce. Let $n(t,a)$ denote the number of immature individuals at time $t$ with age $a$ and $x(t)$ denote the number of mature members of the population at time $t$. The purpose of this section is to establish a state dependent distributed DDE model for $x(t)$. We begin with an age structured PDE for the immature population, $n(t,a)$. Immature individuals progress through maturation with a variable ageing rate $V_a(t)$, where $V_a(t)$ satisfies $$0< V_a^{min} {\leqslant}V_a(t) {\leqslant}V_a^{max} < \infty.$$ Following @Mckendrick1925, the PDE describing $n(t,a)$ is $$\left. \begin{aligned} \partial_t n(t,a) + V_a(t)\partial_a n(t,a) & = -\left[\mu(x(t))+h(a)\right]n(t,a)\\ V_a(t) n(t,0) = \beta x(t) \quad t {\geqslant}t_0; & \quad n(t_0,a) = f(a) {\geqslant}0 \quad \forall a \in (0, \infty ). \end{aligned} \right \} \label{Eq:McKendrickAgePDE}$$ The boundary condition $V_a(t) n(t,0) = \beta x(t)$ that we impose links the creation of immature individuals $n(t,0)$ with the birth rate $\beta x(t)$. The presence of $V_a(t)$ in this boundary term can be understood from the conveyor belt analogy [@Mahaffy1998; @Bernard2016]. In the following, we assume $\beta >0$. The initial condition $n(t_0,a) = f(a) {\geqslant}0$ describes immature individuals with non-zero age at time $t_0$. The death rate of immature individuals is given by $\mu(x(t))$, while the transition from the immature state to the mature state is modelled by $h(a)$. It is important to note that the transition rate is a function of the age of individuals at time $t$. Since we expect a link between time and physiological age, we will write $a(t)$.
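Since biological age accumulates at the rate $V_a(t)$, it is the integral of the ageing rate rather than elapsed time. A minimal numerical sketch, with a hypothetical oscillating ageing rate (an arbitrary illustrative choice):

```python
import math

def biological_age(V_a, t0, t, n=10_000):
    """Trapezoid approximation of a(t) = integral of V_a(s) from t0 to t."""
    h = (t - t0) / n
    f = [V_a(t0 + i * h) for i in range(n + 1)]
    return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

# Hypothetical ageing rate oscillating around 1, e.g. due to varying resources:
V_a = lambda s: 1.0 + 0.5 * math.sin(s)

# Over a full period the oscillation averages out, so biological and
# chronological age coincide; over a half period they differ.
full = biological_age(V_a, 0.0, 2.0 * math.pi)
half = biological_age(V_a, 0.0, math.pi)
print(round(full, 4), round(half, 4))  # → 6.2832 4.1416
```

With the bounds $0 < V_a^{min} \leqslant V_a(t) \leqslant V_a^{max}$, biological age is strictly increasing in chronological time, which is what allows the characteristics to be followed uniquely.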
Later, we formalize the weakly coupled relationship between biological and chronological age and justify this notation by finding the characteristics of the age structured PDE \[Eq:McKendrickAgePDE\]. We begin by deriving the transition rate from immaturity to maturity, $h(a(t))$. As mentioned, we assume that the age at which an individual matures is a non-negative random variable $A$ with density function $K_A(t)$. The transition rate, $h(a(t))$, is the instantaneous change in probability that an individual matures at age $a(t+\Delta t)$, given that the individual has not matured at age $a(t)$. Formally, using the definition of conditional probability, $$h(a(t)) = \lim \limits_{\Delta t \to 0} \frac{\mathbb{P}\left[a(t){\leqslant}A {\leqslant}a(t+\Delta t)| A {\geqslant}a(t) \right]}{\Delta t} = \lim \limits_{\Delta t \to 0} \frac{\mathbb{P}\left[a(t){\leqslant}A {\leqslant}a(t+\Delta t) \right]}{ \mathbb{P}[A {\geqslant}a(t)] \Delta t}.$$ Multiplying by unity gives $$h(a(t)) = \frac{1}{ \mathbb{P}[A {\geqslant}a(t)] }\lim \limits_{\Delta t \to 0} \frac{\mathbb{P}\left[a(t){\leqslant}A {\leqslant}a(t+\Delta t) \right]}{ (a(t+\Delta t)-a(t)) } \frac{a(t+\Delta t)-a(t)}{\Delta t}.$$ By the definition of the density \[Eq:DensityDefinition\] and the derivative of $a(t)$, we obtain $$h(a(t)) = \frac{K_A(a(t))}{1-\int_{0}^{a(t)} K_A(\sigma){\mathrm{d}}\sigma} {\frac{\textrm{d}}{\textrm{dt}}}a(t). \label{Eq:HazardRateDefinition}$$ The transition (or maturation) rate, $h(a(t))$, is known as the hazard rate of the random variable $A$ and has applications in modelling failure rates [@Cox1972; @Kaplan1958]. @Metz1986 derived an identical expression for $h(a(t))$ without considering the conditional maturation probability.
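A quick numerical check of the hazard rate formula: for an exponentially distributed maturation age the hazard is constant in age, and scales linearly with the current ageing speed ${\frac{\textrm{d}}{\textrm{dt}}}a(t)$. The rate value below is an arbitrary illustrative choice:

```python
import math

def hazard(K, cdf, a, dadt):
    """h(a(t)) = K_A(a) / (1 - CDF_A(a)) * da/dt, the hazard rate formula."""
    return K(a) / (1.0 - cdf(a)) * dadt

# Exponentially distributed maturation age with rate lam: the hazard is
# lam * (ageing speed), independent of the current age a.
lam = 0.7
K = lambda t: lam * math.exp(-lam * t)
cdf = lambda t: 1.0 - math.exp(-lam * t)

rates = [hazard(K, cdf, a, speed)
         for a, speed in [(0.5, 1.0), (2.0, 1.0), (2.0, 1.5)]]
print([round(r, 6) for r in rates])  # → [0.7, 0.7, 1.05]
```

The memoryless property of the exponential distribution is exactly the statement that this hazard does not depend on $a$; any other density gives an age-dependent maturation rate.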
It is possible that immature individuals create multiple mature individuals upon transitioning to the mature compartment (i.e., mitosis), so we model the influx rate into the mature compartment as a function $$F\left( x(t),\int_{0}^{\infty} h(s)n(t,s){\mathrm{d}}s \right),$$ where the integral term $$\int_{0}^{\infty} h(s)n(t,s){\mathrm{d}}s \label{Eq:NumberOfIndividualsMaturing}$$ is the number of immature individuals that reach maturity at time $t$. If mature individuals are cleared at a population dependent rate $\gamma(x(t))$, then the mature population satisfies $$\left. \begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = F\left( x(t),\int_{0}^{\infty} h(s)n(t,s){\mathrm{d}}s \right) - \gamma(x(t)) x(t) \\ x(0) & = x_0. \\ \end{aligned} \right \} \label{Eq:MaturePopulationDE}$$ We are now able to establish the equivalence between the system of equations describing the populations $x(t)$ and $n(t,a)$ and a distributed DDE. To do this, we partially solve the PDE \[Eq:McKendrickAgePDE\] using the method of characteristics. \[Theorem:GeneralDDE\] Let the immature population $n(t,a)$ satisfy the McKendrick age structured PDE \[Eq:McKendrickAgePDE\] with the distribution dependent transition rate $h(a(t))$ given by \[Eq:HazardRateDefinition\]. Assume that the mature population $x(t)$ is given by \[Eq:MaturePopulationDE\].
Then, the mature population $x(t)$ satisfies the initial value problem (IVP) $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = F\left( x(t), \int_{0}^{\infty} \beta x(t-{\varphi}) \frac{V_a(t)}{V_a(t-{\varphi})}\exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] K_A\left( \int_{t-{\varphi}}^t V_a(s){\mathrm{d}}s \right) {\mathrm{d}}{\varphi}\right) \\ & \qquad {} - \gamma(x(t)) x(t) \end{aligned} \label{Eq:MatureDDE}$$ with initial data $$x(s) = \rho(s) \quad \forall s \in (-\infty,t_0].$$ The characteristics of equation \[Eq:McKendrickAgePDE\] satisfy $$\frac{{\mathrm{d}}}{{\mathrm{d}}{\varphi}} t({\varphi}) = 1, \quad \textrm{and} \quad \frac{{\mathrm{d}}}{{\mathrm{d}}t} a(t) = V_a(t), \label{Eq:CharacteristicLinesEq}$$ and hence are given by $$t = {\varphi}+ T_0 \quad \textrm{and} \quad a(t) = \int_{T_0}^{t} V_a(x){\mathrm{d}}x+a_0.$$ Along the characteristics, the age structured PDE \[Eq:McKendrickAgePDE\] becomes $$\frac{{\mathrm{d}}}{{\mathrm{d}}t} n(t,a(t)) = -\left[ \mu(x(t))+ \frac{K_A(a(t))}{1-\int_{0}^{a(t)} K_A(\sigma){\mathrm{d}}\sigma} V_a(t) \right] n(t,a(t)). \label{Eq:ODEOnCharacteristic}$$ Equation \[Eq:ODEOnCharacteristic\] is a separable differential equation with solution $$n(t,a(t)) = n(T_0,a_0)\exp \left[ - \int_{T_0}^t \mu(x(s)) {\mathrm{d}}s\right] \left(1-\int_{0}^{a(t)} K_A(\sigma){\mathrm{d}}\sigma \right).$$ If $a_0 = 0$, we use the boundary condition of \[Eq:McKendrickAgePDE\] to find $$n(t,a(t)) = \frac{\beta x(T_0)}{V_a(T_0)} \exp \left[ - \int_{T_0}^t \mu(x(s)) {\mathrm{d}}s\right] \left(1-\int_{0}^{a(t)} K_A(\sigma){\mathrm{d}}\sigma \right), \label{Eq:TotalCellsAgeingProcess}$$ while, if $a_0 >0$, the initial condition of \[Eq:McKendrickAgePDE\] gives $$n(t,a(t)) = f(a_0)\exp \left[ - \int_{t_0}^{t} \mu(x(s)) {\mathrm{d}}s\right] \left(1-\int_{0}^{a(t)} K_A(\sigma){\mathrm{d}}\sigma \right).$$ To establish an equivalence between the PDE \[Eq:McKendrickAgePDE\] and the distributed DDE \[Eq:MatureDDE\], it is necessary to define suitable initial data $x(s) = \rho(s)$ for $s<t_0$ for the DDE.
To do this, it is natural to assume that an immature individual with positive age $a >0$ at time $t_0$ was born at some time $s<t_0$. Since the PDE is not defined for $s < t_0$, we are free to prescribe fixed values for $V_a(s) = V_a^*$ and $\mu(x(s)) = \mu^*$ for $ s < t_0$. Then, imposing that individuals born at time $s < t_0$ evolved according to the McKendrick equation, we have $a = V_a^*(t_0-s)$, or $s = t_0-a/V_a^*$. Hence, the initial condition $f(a)$ defines the history function $\rho$ through $$f(a) = \frac{\beta}{V_a^*} \rho (t_0-a/V_a^*)\exp\left[ \int_{t_0-a/V_a^*}^{t_0} - \mu^*{\mathrm{d}}s\right]. \label{Eq:HistoryFunctionDefinition}$$ Therefore, defining $x(s) = \rho(s)$ for $s < t_0$ in this way, the solution \[Eq:TotalCellsAgeingProcess\] applies. Now, we finalize the link between the age structured PDE and the distributed DDE by following the characteristic curves until they intersect with the $a=0$ axis. Along the characteristic curves, at time $t$, individuals born at time $T_0= t-{\varphi}$ have age $$a_{t}({\varphi}) = \int_{T_0}^t V_a(x){\mathrm{d}}x = \int_{t-{\varphi}}^t V_a(x){\mathrm{d}}x$$ for ${\varphi}>0$, so we have $$n(t,a_{t}({\varphi}) ) = \frac{\beta x(t-{\varphi})}{V_a(t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] \left(1-\int_{0}^{a_{t}({\varphi}) } K_A(\sigma){\mathrm{d}}\sigma \right).$$ At time $t$, the rate at which individuals mature is $$\begin{aligned} \label{Eq:MaturationInput} \hspace{-.4cm} \int_{0}^{\infty} h(a_{t}({\varphi}) )n(t,a_{t}({\varphi}) ){\mathrm{d}}{\varphi}& = \int_{0}^{\infty} K_A(a_{t}({\varphi}) ) \beta x(t-{\varphi}) \frac{V_a(t)}{V_a(t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}.\end{aligned}$$ By defining, for any density $K_A(t)$, $$A_K(x(t)) := \int_{0}^{\infty} K_A(a({\varphi})) \frac{ \beta x(t-{\varphi}) }{V_a(t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}, \label{Eq:AKDefinition}$$
we have $$\hspace{-.2cm} \int_{0}^{\infty} h\left( a_{t}({\varphi}) \right) n\left( t,a_{t}({\varphi})\right) {\mathrm{d}}{\varphi}= V_a(t) A_K(x(t)).$$ Consequently, using \[Eq:MaturationInput\] and defining the history $\rho(s)$ according to \[Eq:HistoryFunctionDefinition\], we have established the equivalence of the system \[Eq:McKendrickAgePDE\] and \[Eq:MaturePopulationDE\] with the distributed DDE \[Eq:MatureDDE\].

Accounting for the Random Maturation Threshold {#Sec:CorrectionFactor}
----------------------------------------------

Further inspection of equation \[Eq:MatureDDE\] reveals a ratio of ageing speeds $V_a(t)/V_a(t-{\varphi})$ in the integral term $$\int_{0}^{\infty} h(a_{t}({\varphi}))n(t,a_{t}({\varphi})){\mathrm{d}}{\varphi}= \int_{0}^{\infty} \beta x(t-{\varphi}) \frac{V_a(t)}{V_a(t-{\varphi})}\exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] K_A(a_{t}({\varphi})){\mathrm{d}}{\varphi}.$$ The ratio of ageing velocities at the entrance and exit of the ageing process acts as a “correction factor”. As shown by @Bernard2016 and @Craig2016, models without the correction factor allow for spurious creation of individuals during maturation, and some state-dependent DDE models in the literature have omitted this important factor. Solutions of models without this correction factor do not necessarily preserve nonnegativity of initial data [@Bernard2016]. @Craig2016 derived the correction factor by carefully accounting for the number of cells crossing the maturation threshold in a discrete state-dependent DDE. In discrete DDEs, individuals mature following a deterministic process after accruing a specific threshold age, so the maturation boundary is well-defined. The derivation of the correction factor was based on the smoothness of the solution crossing the fixed maturation boundary. However, the idea of a fixed maturation boundary does not extend to random maturation ages. Consequently, the derivation of the correction factor by @Craig2016 does not generalise to generic distributed DDEs.
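The correction factor can be computed explicitly in the discrete (threshold age) setting, where the state dependent delay $\tau$ is defined implicitly by $\int_{t-\tau}^{t} V_a(s)\,{\mathrm{d}}s = \bar{a}$ for a threshold age $\bar{a}$. A minimal sketch (the ageing rate and threshold below are hypothetical illustrative choices):

```python
import math

def delay_from_threshold(V_a, t, a_bar, dt=1e-4):
    """Smallest tau with integral of V_a over [t - tau, t] equal to a_bar."""
    acc, tau = 0.0, 0.0
    while acc < a_bar:
        tau += dt
        acc += V_a(t - tau) * dt  # accumulate biological age backwards in time
    return tau

# Constant ageing rate: tau = a_bar / V and the correction factor is 1.
tau_const = delay_from_threshold(lambda s: 2.0, 0.0, 4.0)

# Hypothetical time-varying ageing rate: the correction factor
# V_a(t) / V_a(t - tau) is no longer 1.
V_a = lambda s: 1.0 + 0.3 * math.sin(s)
t, a_bar = 5.0, 2.0
tau = delay_from_threshold(V_a, t, a_bar)
correction = V_a(t) / V_a(t - tau)
print(round(tau_const, 3), round(tau, 3), round(correction, 3))
```

Here a correction factor below one means individuals are currently ageing more slowly than when they entered the process, so the instantaneous maturation rate is reduced accordingly.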
Our derivation of the state-dependent distributed DDE produces the same correction factor through the instantaneous maturation probability, $h(a(t))$. The derivation of $h(a(t))$ in equation \[Eq:HazardRateDefinition\] produces the term $V_a(t)$ by accounting for the change of maturation probability due to the variable accumulation of age at time $t$. For the degenerate distribution, as shown in Section \[Sec:StateDependentDDE\], we obtain precisely the same ratio as @Craig2016.

Properties of State Dependent Delay Differential Equations {#Sec:AnalysisofDDE}
==========================================================

Replacing an age structured PDE by a DDE eliminates the need to explicitly model the ageing populations, which can be difficult to measure experimentally. DDEs offer a natural framework that explicitly incorporates delays and identifies the relationship between the current and past states. This can facilitate communication between mathematical modellers, biologists and physiologists. In particular, the explicit presence of the delay term allows for simple calculation of the mean delay time, which is important for translatability. As shown by @deSouza2017, models of delayed processes without DDEs do not always accurately calculate the mean delay time. However, DDEs typically define infinite dimensional semi-dynamical systems, which can introduce mathematical difficulties. As we have seen in Theorem \[Theorem:GeneralDDE\], partially solving an age structured PDE may lead to a DDE. As such, analysing these partially solved systems can be simpler than studying the corresponding PDE. As an example, we analyse the state-dependent distributed DDE in equation \[Eq:MatureDDE\]. Define $$\bar{x}(t) = V_a(t) A_K(x(t)) = V_a(t) \int_{0}^{\infty} K_A(a({\varphi})) \frac{ \beta x(t-{\varphi}) }{V_a(t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi},$$ and consider the IVP $$\left.
\begin{array}{lll} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = & F\left[ x(t),\bar{x}(t)\right] - \gamma(x(t)) x(t) \quad t > t_0 \\ x(s) & = & \rho(s) \quad s \in (-\infty,t_0], \\ \end{array} \right \} \label{Eq:DDEIVP}$$ where $F(x,y) \in\mathcal{C}^1(\mathbb{R}^2,\mathbb{R})$ and $\gamma(x(t)) \in \mathcal{C}^1(\mathbb{R}, \mathbb{R})$ with $$F(x,y) >0 \quad \textrm{if} \; x>0 \; \textrm{or} \; y>0, \quad F(0,0)=0, \quad \textrm{and} \quad \gamma(x(t)) <\gamma_{max}<\infty. \label{Eq:FGammaConditions}$$ We recall that $A$ is the random variable representing the maturation age of immature individuals. The history function, $\rho(s)$, is chosen to belong to the space $L_1(A)$, where $$K_A(t) = \frac{{\mathrm{d}}A}{{\mathrm{d}}\lambda}$$ is the density of the measure induced by $A$ with respect to the Lebesgue measure $\lambda$. In population modelling, it is likely that any realistic history is uniformly continuous and bounded. The space of bounded and uniformly continuous functions is a subspace of $L_1(A)$ and is a suitable phase space. The age structured PDE \[Eq:McKendrickAgePDE\] describes population dynamics in the presence of a maturation time. Consequently, solutions must represent a population and, in particular, remain non-negative. However, the presence of delays in other models may lead to solutions that do not remain non-negative, as noted by @Liu2007. We begin our analysis by showing that the solution $x(t)$ of the IVP \[Eq:DDEIVP\], evolving from non-negative initial conditions, remains non-negative. This property is a natural requirement for models of population dynamics. \[Prop:NonNegativitiy\] Let $F(x,y)$ and $\gamma(x(t))$ satisfy equations \[Eq:FGammaConditions\]. Moreover, assume that the history function satisfies $$\rho(s) {\geqslant}0 \quad \forall s \in (-\infty,t_0].$$ Then, the solution of the IVP \[Eq:DDEIVP\] remains non-negative for all time $t> t_0$.
As $\rho(s) {\geqslant}0$, it is simple to see that $$\bar{x}(t_0) = V_a(t_0)\int_{0}^{\infty} K_A(a_{t_0}({\varphi})) \frac{ \beta \rho(t_0-{\varphi}) }{V_a(t_0-{\varphi})} \exp \left[ - \int_{t_0-{\varphi}}^{t_0} \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}{\geqslant}0.$$ We have a series of cases. 1\) If $\rho(t_0) = x(t_0) >0$ then $F(x(t_0),\bar{x}(t_0)) >0$. Therefore, $${\frac{\textrm{d}}{\textrm{dt}}}x(t) = F(x(t),\bar{x}(t)) - \gamma(x(t)) x(t){\geqslant}-\gamma(x(t)) x(t) > - \gamma_{max} x(t)$$ and using Gronwall’s inequality, we have $$x(t) {\geqslant}\rho(t_0)\exp\left(-\gamma_{max}[t-t_0]\right) > 0.$$ 2\) If $\rho(t_0) = 0$ and $\rho(s) = 0$ $A$-almost everywhere in $(-\infty,t_0)$, then $x(t)= 0$ is the solution of the IVP. 3\) Finally, if $\rho(t_0) = 0$ and $\rho(s)>0$ on a set of $A$-positive measure in $(-\infty,t_0)$, then $\bar{x}(t_0) >0$ and $${\frac{\textrm{d}}{\textrm{dt}}}x(t)|_{t=t_0} = F(x(t_0),\bar{x}(t_0)) - \gamma(x(t_0))x(t_0) = F(0,\bar{x}(t_0)) > 0.$$ Consequently, $x(t)$ becomes positive immediately and case 3 reduces to case 1. Therefore, solutions of the IVP \[Eq:DDEIVP\] remain non-negative for all time $t>t_0$.

Linearisation of the DDE
------------------------

We continue the analysis of equation \[Eq:MatureDDE\] by studying the local stability of equilibrium solutions. To do this, let $x^*(t) = x^* \in L_1(A)$ be an equilibrium of the IVP \[Eq:DDEIVP\], so $$F\left( x^*,\bar{x}^* \right) = \gamma(x^*) x^*. \label{Eq:EquilibriumCondition}$$ The homeostatic delayed term $\bar{x}^*$ in \[Eq:EquilibriumCondition\] satisfies $$\bar{x}^* = \int_0^{\infty} \beta x^* K_A(V_a^*{\varphi})\exp\left[ -\mu^*{\varphi}\right] {\mathrm{d}}{\varphi}= \beta x^* \mathcal{L}[K_A](\mu^*/V_a^*),$$ where $\mathcal{L}[f](s)$ is the Laplace transform of $f(x)$ evaluated at $s$. Hence, $\bar{x}^*$ is a function of the density $K_A(t)$.
However, if desired, it is possible to vary the homeostatic death rate $\mu^*$ to ensure that the equilibria value $x^*$ does not change for different densities $K_A(t)$ as shown by @Cassidy2018. Set $z(t) = x(t)-x^*$ and for $z(t)$ small, similar to the discrete state dependent delay case considered by @Hartung2006, freeze the ageing and clearance rates at their homeostatic rates, so $V_a(t) = V_a^*$ and $\mu(s) = \mu^*$. Then, we define $\bar{z}(t) = \bar{x}(t)- \bar{x}^*$ so that $$\begin{aligned} \label{Eq:ZbarDefinition} \notag \bar{z}(t) & = \int_{0}^{\infty} K_A(V_a^*{\varphi}) \beta x[t-{\varphi}] \exp \left[ - \mu^* {\varphi}{\mathrm{d}}s\right] - \beta x^* K_A(V_a^*{\varphi})\exp\left[ -\mu^*{\varphi}\right] {\mathrm{d}}{\varphi}\\ & = {} \int_{0}^{\infty} K_A(V_a^*{\varphi}) \beta z[t-{\varphi}] \exp \left[ - \mu^* {\varphi}{\mathrm{d}}s\right] {\mathrm{d}}{\varphi},\end{aligned}$$ to translate the equilibrium to the origin. Then, the differential equation for $z(t)$ is $${\frac{\textrm{d}}{\textrm{dt}}}z(t) = F(z(t)+x^*,\bar{z}(t)+\bar{x}^*)-\gamma(x(t))z(t) - \gamma(x(t))x^*.$$ Expanding the exponential integral in following [@Cassidy2018], we find $$I := \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s = \int_{t-{\varphi}}^t \mu^* + \mu'(x^*)\left( x(s) - x^* \right) + \mathcal{O}(x(s) - x^*)^2 {\mathrm{d}}s,$$ so that $$\begin{aligned} e^{-I} & = e^{-\mu^* {\varphi}} \left[ 1 - \int_{t-{\varphi}}^t \mu'(x^*)\left( x(s) - x^* \right) + \mathcal{O}(x(s) - x^*)^2 {\mathrm{d}}s\right] \\ & = e^{-\mu^* {\varphi}} \left[ 1 - \int_{t-{\varphi}}^t \mu'(x^*)z(s) + \mathcal{O}(|z(s)|^2) {\mathrm{d}}s \right].\end{aligned}$$ By making the ansatz $$z(t) = Ce^{\lambda t},$$ we compute the expression for $\bar{z}(t)$ from $$\begin{aligned} \bar{z}(t) & = Cz(t) \int_{0}^{\infty} K_A(V_a^* {\varphi}) \beta e^{-\lambda {\varphi}} \left[ \exp \left[ - \mu^* {\varphi}\right] + \mathcal{O}(z) \right] {\mathrm{d}}{\varphi}\\ & = Cz(t) \beta 
\mathcal{L}[K_A]([\mu^*+\lambda]/V_a^*) + \mathcal{O}(z^2). \end{aligned}$$ Therefore, we write $${\frac{\textrm{d}}{\textrm{dt}}}z(t) = k_1z(t) + k_2 \beta \mathcal{L}[K_A]([\mu^*+\lambda]/V_a^*) z(t)-\gamma^* z(t) + \mathcal{O}(z^2)$$ where $\gamma^* = \partial_x\gamma(x(t))|_{x=x^*}$, $k_1 = \partial_a F(a,b)|_{(x,\bar{x})}$ and $k_2 = \partial_{b} F(a,b)|_{(x,\bar{x})}$. Dropping nonlinear terms, the linearised equation is $${\frac{\textrm{d}}{\textrm{dt}}}z(t) = (k_1-\gamma^*)z(t) + k_2 \beta \mathcal{L}[K_A]([\mu^*+\lambda]/V_a^*)z(t) . \label{Eq:LinearisedEquation}$$ The characteristic equation corresponding to is $$0 = \lambda - (k_1-\gamma^*) - k_2\beta \mathcal{L}[K_A]([\mu^*+\lambda]/V_a^*). \label{Eq:CharacteristicEquationGeneric}$$ Through a standard analysis, we study the local stability of the equilibrium $x^*$ for a density $K_A(t)$. \[Prop:StabilityProposition\] 1) If $$|k_2| \beta \mathcal{L}[K_A](\mu^*/V_a^*)< \gamma^* - k_1,$$ the equilibrium point $x^*$ is locally asymptotically stable. 2\) If $$k_2 \beta \mathcal{L}[K_A](\mu^*/V_a^*) > \gamma^* - k_1,$$ the equilibrium point $x^*$ is unstable. 1\) Let $\lambda^*$ be a root of and assume for contradiction that $\Re(\lambda^*) {\geqslant}0. 
$ We necessarily have $$\lambda^* = (k_1-\gamma^*) + k_2 \beta \mathcal{L}[K_A]([\mu^*+\lambda^*]/V_a^*),$$ and we calculate $$\Re(\lambda^*) =(k_1-\gamma^*) + k_2 \beta \Re \left[ \mathcal{L}[K_A]([\mu^*+\lambda^*]/V_a^*) \right].$$ We note that $$k_2 \beta \Re \left[ \mathcal{L}[K_A]([\mu^*+\lambda^*]/V_a^*) \right] {\leqslant}|k_2 \beta \mathcal{L}[K_A]([\lambda^*+\mu^*]/V_a^*)|.$$ Meanwhile, for arbitrary $\nu = \nu_r+i\nu_i \in \mathbb{C}$, $$\begin{aligned} \left\vert k_2 \beta\mathcal{L}[K_A]([\mu^*+\nu]/V_a^*) \right\vert & = \left\vert k_2\right\vert \beta \left\vert \int_0^{\infty}\exp\left[-(\mu^*+\nu_r+i\nu_i){\varphi}\right] K_A(V_a^*{\varphi}) {\mathrm{d}}{\varphi}\right\vert \\[0.2cm] & {\leqslant}|k_2| \beta\int_0^{\infty} \exp\left[ -(\mu^*+\nu_r){\varphi}\right] K_A(V_a^* {\varphi})\left\vert e^{-i\nu_i{\varphi}} \right\vert {\mathrm{d}}{\varphi}\\[0.2cm] & = |k_2| \beta \mathcal{L}[K_A]([\mu^*+\nu_r]/V_a^*). \end{aligned}$$ Moreover, if $\nu_r {\geqslant}0$, $$|k_2| \beta \mathcal{L}[K_A]([\mu^*+\nu_r]/V_a^*) {\leqslant}|k_2| \beta \mathcal{L}[K_A](\mu^*/V_a^*).$$ Therefore, using the assumption in 1), we find $$\begin{aligned} \Re(\lambda^*) & = (k_1-\gamma^*) + k_2 \beta \Re[ \mathcal{L}[K_A]([\lambda^*+\mu^*]/V_a^*)] {\leqslant}(k_1-\gamma^*) + | k_2| \beta \mathcal{L}[K_A](\mu^*/V_a^*) < 0, \end{aligned}$$ which is a contradiction, so no such $\lambda^*$ can exist. Therefore, all roots of the characteristic equation have negative real part and the equilibrium is stable. 2\) To show instability, we will prove that there must be at least one characteristic root with positive real part. Define $$g(\lambda) := k_1-\gamma^*-\lambda + k_2 \beta \mathcal{L}[K_A]([\lambda + \mu^*]/V_a^*),$$ and note that $g$ is continuous with $$g(0) = k_1-\gamma^* + k_2 \beta \mathcal{L}[K_A]( \mu^*/V_a^*) >0 \quad \textrm{and} \quad \lim \limits_{\lambda \to \infty} g(\lambda) = -\infty.$$ Then, by the intermediate value theorem, there must be a real $\lambda^* > 0$ such that $g(\lambda^*) =0$. 
The equilibrium is therefore unstable. We note that if $k_2>0$, i.e. the production of mature individuals is controlled through positive feedback with the number of maturing individuals at time $t$, then Proposition \[Prop:StabilityProposition\] completely characterizes the local stability of $x^*$. If $k_2<0$, it seems likely that $x^*$ would lose stability through a Hopf bifurcation, similar to the discrete delay case. A similar analysis was done in the constant ageing rate case by @Yuan2011. However, @Yuan2011 did not consider death of immature individuals, nor the linear clearance of mature individuals, which corresponds to setting $\mu = \gamma = 0$. Distributed Delay Differential Equations with Specific Maturation Probabilities =============================================================================== Next, we study the DDE found in Theorem \[Theorem:GeneralDDE\] for various density functions. By first considering the characteristic equation  for specific densities $K_A(t)$, we motivate the reduction of these population models to familiar discrete DDEs and transit compartment ODEs. In the discussion that follows, we once again assume that $x^*\in L_1(A)$ is an equilibrium point so that $\mu(x^*) = \mu^*$ and $V_a(t) = V_a^*$. Denote the homeostatic maturation time as the first moment of the random variable $A$ with constant ageing rate $V_a^*$, $$\tau^* = \int_0^{\infty}tK_A(V_a^*t){\mathrm{d}}t.$$ Consequently, the expected homeostatic maturation age is given by $\mathcal{T} = V_a^* \tau^*$. We first consider the degenerate distribution and recover the familiar state dependent discrete DDE. Next, we use a linear chain-type technique to reduce state dependent uniformly distributed DDEs to a system involving two state dependent delays. Finally, we show how to reduce a gamma distributed DDE to a transit compartment system of ODEs. However, true equivalence between the distributed DDE and the reduced form does not follow directly. 
We must take care when prescribing initial conditions and history functions so that solutions of the different formulations are in fact equivalent. Only then do these reductions allow for the use of the highly efficient numerical methods for discrete DDEs and ODEs available in most programming languages. Deterministic Maturation {#Sec:StateDependentDDE} ------------------------ Assuming that maturation is a deterministic process and occurs after achieving the threshold age $\mathcal{T}$ implies that $K_A(t)$ is the degenerate distribution with $$K_A\left( \int_{t-{\varphi}}^t V_a(s){\mathrm{d}}s \right) = \delta \left( \int_{t-{\varphi}}^t V_a(s){\mathrm{d}}s-\mathcal{T} \right), \label{Eq:DeltaDensityDefinition}$$ where $\delta(x)$ is the Dirac delta function. In the deterministic case, all individuals mature at precisely the same age $\mathcal{T}$. At the equilibrium $x^*$, using , the characteristic equation is $$\label{Eq:DiscreteCharacteristicEquation} 0 = \lambda - (k_1-\gamma^*) - k_2\beta \exp\left[-(\mu^*+\lambda)\mathcal{T}/V_a^* \right] = \lambda - (k_1-\gamma^*) - k_2\beta \exp\left[-(\mu^*+\lambda)\tau^*\right],$$ which is exactly the characteristic equation of a discrete DDE. This is unsurprising, since it is well known that threshold conditions lead to discrete DDEs [@Otto2017; @Smith1993]. Returning to the DDE  with $K_A(t)$ given by , the threshold maturation age $\mathcal{T}$ allows us to calculate when an individual that matures at time $t$ began maturation. The maturation time, $\tau(x(t))$, must satisfy the implicit threshold condition $$\mathcal{T} = \int_{t-\tau(x(t))}^t V_a(s){\mathrm{d}}s. 
\label{Eq:TauImplicitEquation}$$ We use the definition of $\tau(x(t))$ to evaluate the convolution integral given in to find $$\begin{aligned} A_{\delta}(t) & =\int_{0}^{\infty} \delta \left( \int_{t-{\varphi}}^t V_a(s){\mathrm{d}}s-\mathcal{T} \right) \frac{ \beta x(t-{\varphi}) }{V_a(t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}\\ & = \frac{\beta x[t-\tau(x(t))]}{V_a(t-\tau(x(t)))} \exp \left[ - \int_{t-\tau(x(t))}^t \mu(x(s)) {\mathrm{d}}s\right].\end{aligned}$$ Consequently, the corresponding IVP to with state dependent discrete delay is $$\left. \begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = F\left( x(t), \beta x[t-\tau(x(t))] \exp \left[ - \int_{t-\tau(x(t))}^t \mu(x(s)) {\mathrm{d}}s\right]\frac{V_a(t)}{V_a(t-\tau(x(t)))} \right) - \gamma(x(t)) x(t) \\ x(s) & = \rho(s), \quad s \in (-\infty,t_0]. \end{aligned} \right \} \label{Eq:StateDependentMatureDDE}$$ Choosing the history function for requires careful consideration of how $\rho(s)$ controls the ageing velocity $V_a(t)$. For homeostatic histories $\rho(s) = x^*$, we can prescribe $\tau(\rho(s)) = \tau^*$. To implement numerically, it is necessary to solve to find the maturation time $\tau(x(t))$. This can be done by differentiating to find $${\frac{\textrm{d}}{\textrm{dt}}}\tau(x(t)) = 1-\frac{V_a(t)}{V_a(t-\tau(x(t)))}, \label{Eq:DelayTimeDDE}$$ and imposing the correct initial condition so that the solution of also solves . In the case that $\rho(s) = x^*$, it is simple to set $\tau(0)= \tau^*$. However, for more general initial data $\rho(s)$, choosing an appropriate initial condition for can be delicate [@Otto2017]. Then, we can solve the discrete state dependent DDE by solving the system of equations given by and . Hence the age structured PDE framework in Section \[Sec:AgeStructuredPDEReduction\] offers an alternative to the “moving threshold” method to derive state dependent DDEs as described by @Otto2017. 
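As a concrete illustration, the state dependent discrete DDE can be integrated together with the differential equation for $\tau(x(t))$ using a simple fixed-step scheme. The sketch below is purely illustrative: the oscillating ageing velocity, the choice $F(a,b)=b$, the constant death and clearance rates, and all parameter values are assumptions rather than quantities from the text, and the homeostatic history $\rho(s) = x^*$ is used so that $\tau(0) = \tau^*$.

```python
import numpy as np

# Fixed-step Euler sketch of the state dependent discrete DDE coupled with
#   d(tau)/dt = 1 - V_a(t) / V_a(t - tau),  tau(0) = tau*.
# Assumed toy choices: F(a, b) = b, constant death rate mu and clearance
# gamma_c, a hypothetical oscillating ageing velocity, history rho(s) = x*.
beta, mu, gamma_c, tau_star = 1.0, 0.1, 0.5, 2.0

def V_a(t):
    return 1.0 + 0.2 * np.sin(0.5 * t)   # hypothetical ageing velocity

def simulate(t_end=20.0, dt=1e-3, x_star=1.0):
    n_hist = int(round(2 * tau_star / dt))   # buffer holding rho(s) = x*
    n = int(round(t_end / dt))
    x = np.full(n_hist + n + 1, x_star)
    tau = tau_star
    for i in range(n):
        t, k = i * dt, n_hist + i
        k_del = k - int(round(tau / dt))     # index of x(t - tau)
        delayed = beta * x[k_del] * np.exp(-mu * tau) * V_a(t) / V_a(t - tau)
        x[k + 1] = x[k] + dt * (delayed - gamma_c * x[k])
        tau += dt * (1.0 - V_a(t) / V_a(t - tau))
    return x[n_hist:], tau

xs, tau_end = simulate()
```

With these toy rates the state grows, since the delayed production term exceeds the clearance; the point of the sketch is only the mechanics of advancing $\tau$ alongside $x$ rather than any quantitative behaviour.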
Uniformly Distributed Maturation {#Sec:CompactSupportDistribution} -------------------------------- We consider uniformly distributed DDEs centred about the expected homeostatic maturation age $\mathcal{T}$. In the simplest case, the uniform distribution defines lower and upper threshold ages and assigns equal weight to each age falling between the thresholds. The probability density function corresponding to a uniform distribution centred at $\mathcal{T}$ is $$K_U(a) = \left\{ \begin{array}{ll} \frac{1}{2V_a^*\delta} & \textrm{if} \; a \in [ \mathcal{T}-V_a^* \delta,\mathcal{T}+ V_a^*\delta]\\ 0 & \textrm{otherwise}. \\ \end{array} \right. \label{Eq:UniformDensityFunction}$$ At the equilibrium $x^*$, with $K_A(t)$ given by the uniform density , the characteristic equation is $$\begin{aligned} \label{Eq:UniformCharacteristicEquation} \notag 0 & = \lambda - (k_1-\gamma^*) - k_2\beta \frac{1}{2\delta V_a^*[\lambda+\mu^*]/V_a^*}\left[e^{-(\lambda+\mu^*)(\mathcal{T}-V_a^* \delta)/V_a^*}- e^{-(\lambda+\mu^*)(\mathcal{T}+V_a^* \delta)/V_a^*}\right] \\ & {} = \lambda - (k_1-\gamma^*) - k_2\beta \frac{1}{2\delta(\lambda+\mu^*)}\left[e^{-(\lambda+\mu^*)(\tau^*-\delta)}- e^{-(\lambda+\mu^*)(\tau^*+\delta)}\right].\end{aligned}$$ Here, $\mathcal{T}-V_a^*\delta$ and $\mathcal{T}+V_a^*\delta$ represent the minimal and maximal ages at which an individual can mature. 
Due to the variable ageing rate, the minimal and maximal delay times, $\tau_{min}(x(t))$ and $\tau_{max}(x(t))$, are state dependent and implicitly defined by $$\mathcal{T}-V_a^*\delta = \int_{t-\tau_{min}(x(t))}^t V_a(s){\mathrm{d}}s \quad \textrm{and} \quad \mathcal{T}+V_a^*\delta = \int_{t-\tau_{max}(x(t))}^t V_a(s){\mathrm{d}}s.$$ We note that, at homeostasis, $V_a(s) = V_a^*$ so $$\mathcal{T}-V_a^*\delta = \tau_{min}(x^*)V_a^* \quad \textrm{and} \quad \mathcal{T}+V_a^*\delta = \tau_{max}(x^*)V_a^*.$$ Therefore, the terms $\tau^* - \delta$ and $\tau^*+\delta$ in correspond to the minimal and maximal homeostatic delay times. The presence of minimal and maximal delay terms in hints that a uniformly distributed DDE may be reducible to a discrete DDE with two distinct delays. Inserting the uniform density into the convolution integral gives $$\begin{aligned} A_U(t) & = \int_0^{\infty} K_U \left( \int_{t-{\varphi}}^t V_a(s){\mathrm{d}}s\right) \frac{ \beta x(t-{\varphi}) }{V_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}\\ & = \int_{\tau_{min}(t)}^{\tau_{max}(t)} \frac{1}{2\delta} \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}.\end{aligned}$$ Thus the state dependent uniformly distributed DDE is $$\left. \begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) &= F(x(t),A_U(t)V_a(t)) - \gamma(x(t)) x(t) \\ x(s) &= \rho(s), \quad s \in (-\infty,t_0]. \end{aligned} \right \} \label{Eq:UniformDDE}$$ ### Reduction to Discrete DDE Next, we show that can be reduced to an IVP with two state dependent discrete delays. Once again, this is advantageous, as numerical algorithms for systems of state dependent discrete DDEs are available in most programming languages. We begin by formalizing the link between uniformly distributed DDEs and discrete DDEs that was hinted at in . 
To do this, we proceed similarly to the linear chain technique and show how to write the delayed kernel as the solution of a differential equation. However, unlike the linear chain technique, we will not recover a system of ODEs, but rather a system of differential equations with two state dependent discrete delays. The technique here can also be adapted to “tent” like distributions (see [@Teslya2015]). \[Lemma:UniformDerivative\] $A_U(t)$ satisfies the differential equation $$\begin{aligned} \label{Eq:AUDifferentialEquation} {\frac{\textrm{d}}{\textrm{dt}}}A_U(t) & = \frac{1}{2\delta}\left[ \frac{ \beta x[t-\tau_{min}(t)] }{\hat{V}_a (t-\tau_{min}(t))} \exp \left[ - \int_{t-\tau_{min}(t)}^t \mu(x(s)) {\mathrm{d}}s\right] \frac{V_a(t)}{V_a(t-\tau_{min}(t))} \right. \\ \notag & \qquad - \left. \frac{ \beta x[t-\tau_{max}(t)] }{\hat{V}_a (t-\tau_{max}(t))} \exp \left[ - \int_{t-\tau_{max}(t)}^t \mu(x(s)) {\mathrm{d}}s\right]\frac{V_a(t)}{V_a(t-\tau_{max}(t))} \right] - \mu(x(t)) A_U(t).\end{aligned}$$ Similar to the linear chain technique, we differentiate $A_U(t)$ using Leibniz’s rule to find $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}A_U(t) & = \frac{1}{2\delta} \left[ \frac{ \beta x[t-\tau_{max}(t)] }{\hat{V}_a (t-\tau_{max}(t))} \exp \left[ - \int_{t-\tau_{max}(t)}^t \mu(x(s)) {\mathrm{d}}s\right]{\frac{\textrm{d}}{\textrm{dt}}}\tau_{max}(t) \right. \\ & \quad\qquad {} - \left. 
\frac{ \beta x[t-\tau_{min}(t)] }{\hat{V}_a (t-\tau_{min}(t))} \exp \left[ - \int_{t-\tau_{min}(t)}^t \mu(x(s)) {\mathrm{d}}s\right] {\frac{\textrm{d}}{\textrm{dt}}}\tau_{min}(t)\right] \\ & \quad {} + \frac{1}{2\delta} \int_{\tau_{min}(t)}^{\tau_{max}(t)} {\frac{\textrm{d}}{\textrm{dt}}}\left( \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] \right) {\mathrm{d}}{\varphi}.\end{aligned}$$ We note that $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}\left( \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] \right) = & - \frac{{\mathrm{d}}}{{\mathrm{d}}{\varphi}} \left( \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] \right) \\ & {} \quad -\mu(x(t))\frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right],\end{aligned}$$ so that, integrating by parts, $$\begin{aligned} & \frac{1}{2\delta}\int_{\tau_{min}(t)}^{\tau_{max}(t)} {\frac{\textrm{d}}{\textrm{dt}}}\left( \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] \right) {\mathrm{d}}{\varphi}\\ & \qquad\qquad = \left. \left( -\frac{1}{2 \delta} \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right]\right) \right|_{{\varphi}= \tau_{min}(t)}^{\tau_{max}(t)} - \mu(x(t))A_U(t).\end{aligned}$$ Consequently, the derivative of $A_U(t)$ is $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}A_U(t) = & \frac{1}{2\delta}\left[ \frac{ \beta x[t-\tau_{max}(t)] }{\hat{V}_a (t-\tau_{max}(t))} \exp \left[ - \int_{t-\tau_{max}(t)}^t \mu(x(s)) {\mathrm{d}}s\right]\left({\frac{\textrm{d}}{\textrm{dt}}}\tau_{max}(t)-1\right) \right. \\ & \qquad {} - \left. 
\frac{ \beta x[t-\tau_{min}(t)] }{\hat{V}_a (t-\tau_{min}(t))} \exp \left[ - \int_{t-\tau_{min}(t)}^t \mu(x(s)) {\mathrm{d}}s\right] \left( {\frac{\textrm{d}}{\textrm{dt}}}\tau_{min}(t)-1\right) \right]-\mu(x(t))A_U(t). \end{aligned}$$ To finish the proof, we note that, similar to , $\tau_{min}(x(t))$ and $\tau_{max}(x(t))$ solve the following differential equations $${\frac{\textrm{d}}{\textrm{dt}}}\tau_{min}(x(t))-1 = -\frac{V_a(t)}{V_a(t-\tau_{min}(x(t)))} \quad \textrm{and} \quad {\frac{\textrm{d}}{\textrm{dt}}}\tau_{max}(x(t))-1 = -\frac{V_a(t)}{V_a(t-\tau_{max}(x(t)))}. \label{Eq:MinMaxDelayDifferentialEquation}$$ The identities in equation  give . By writing the delay term $A_U(t)$ as a solution of a differential equation, we are able to reduce the distributed DDE to a system with state dependent discrete delays. Once again, this allows for simulation of the distributed DDE using existing techniques. This relationship is formalized in the following theorem. The IVP is equivalent to the IVP with the following system of discrete delay differential equations $$\left. \begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = F(x(t),y(t)V_a(t)) - \gamma(x(t)) x(t) \\ {\frac{\textrm{d}}{\textrm{dt}}}y(t) & = \frac{1}{2\delta}\left[ \frac{ \beta x[t-\tau_{min}(t)] }{\hat{V}_a (t-\tau_{min}(t))} \exp \left[ - \int_{t-\tau_{min}(t)}^t \mu(x(s)) {\mathrm{d}}s\right] \frac{V_a(t)}{V_a(t-\tau_{min}(t))} \right. \\ & \quad {} - \left. \frac{ \beta x[t-\tau_{max}(t)] }{\hat{V}_a (t-\tau_{max}(t))} \exp \left[ - \int_{t-\tau_{max}(t)}^t \hspace{-0.2cm}\mu(x(s)) {\mathrm{d}}s\right]\frac{V_a(t)}{V_a(t-\tau_{max}(t))} \right]\hspace{-0.1cm} - \hspace{-0.1cm}\mu(x(t)) y(t). \end{aligned} \right \} \label{Eq:TwoDelayDiscreteDDE}$$ with suitably chosen initial data. 
Using Lemma \[Lemma:UniformDerivative\], it is simple to see that $$y(t)V_a(t) = A_U(t)V_a(t), \label{Eq:DiscreteEquivalenceIdentity}$$ and the other terms in the differential equations are identical if the initial data are equivalent. It therefore remains to show that we can choose suitable history functions for the distributed and discrete DDEs. For the history function of the distributed DDE , $\rho(s)$, setting the initial data of to be $$x(s) = \rho(s)$$ and $$y(t_0) = \int_{\tau_{min}(t_0)}^{\tau_{max}(t_0)} \frac{1}{2\delta} \frac{ \beta \rho(t_0-{\varphi}) }{\hat{V}_a (t_0-{\varphi})} \exp \left[ - \int_{t_0-{\varphi}}^{t_0} \mu(\rho(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}$$ gives the desired equivalence [@Teslya2015]. To convert from with history function $x(s) = \eta(s)$ to , $y(t_0)$ must satisfy $$y(t_0) = \int_{\tau_{min}(t_0)}^{\tau_{max}(t_0)} \frac{1}{2\delta} \frac{ \beta \eta(t_0-{\varphi}) }{\hat{V}_a (t_0-{\varphi})} \exp \left[ - \int_{t_0-{\varphi}}^{t_0} \mu(\eta(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}. \label{Eq:YICCondition}$$ By taking the initial data for to be $x(s) = \eta(s)$, we see that this condition is sufficient for equivalence of and . Now, if does not hold, then cannot be satisfied at $t =t_0$, so is a necessary and sufficient condition for converting the system of DDEs into the distributed DDE . Gamma Distributed Maturation and a Generalized Linear Chain Technique {#Sec:GammaDistributedDDE} --------------------------------------------------------------------- Finally, we study gamma distributed DDEs and show how to reduce the state dependent gamma distributed DDE to a transit chain of ODEs. The probability density function of the gamma distribution is $$g_b^j(x) = \frac{b^jx^{j-1}e^{-bx}}{\Gamma(j)}, \label{Eq:GammaDensityFunction}$$ where $j,b > 0$. Again, let $\mathcal{T}$ denote the mean maturation age and fix $j>0$. 
Then, we have the following relationships $$\mathcal{T} = j/V_a^*, \quad \sigma^2 = j/(V_a^*)^2, \quad \textrm{and} \quad K_{g}(\sigma) = g_{V_a^*}^j(\sigma),$$ where $\sigma^2$ is the variance of the gamma distribution and we set $b=V_a^*$. Calculating for the gamma density in gives $$0= k_1-\gamma^* - \lambda + k_2 \beta\frac{(V_a^*)^j}{(V_a^*+[\lambda+\mu^*]/V_a^*)^j}. \label{Eq:GammaDistributedCharacteristicEquation}$$ Now, we use the relationships $V_a^* = j/\mathcal{T}$ and $\mathcal{T} = \tau^* V_a^*$ to rewrite the characteristic function as $$\begin{aligned} k_1-\gamma^* - \lambda + k_2 \beta\frac{1}{(1+\frac{\lambda+\mu^*}{(V_a^*)^2})^j} & = k_1-\gamma^* - \lambda + k_2 \beta\frac{1}{(1+\frac{\mathcal{T}(\lambda+\mu^*)}{ V_a^* j})^j} \\ & = k_1-\gamma^* - \lambda + k_2 \beta\frac{1}{(1+\frac{\tau^*(\lambda+\mu^*)}{j})^j}.\end{aligned}$$ Using a common denominator gives $$0 = \left(k_1-\gamma^*-\lambda\right)\left(1+\frac{\tau^*(\lambda+\mu^*)}{j}\right)^j+k_2\beta. \label{Eq:CommonGammaDistributedCharacteristicEquation}$$ Now, we consider multiple cases for the parameter $j$. If $j \in \mathbb{N}$, then is a polynomial of degree $j+1$, with $j+1$ roots. This is markedly different from the generic distributed DDE, as the characteristic equation  is typically a transcendental function of $\lambda$ with infinitely many characteristic values. Now, with $j =n/m \in \mathbb{Q}$, we can rearrange to $$\left(k_1-\gamma^*-\lambda\right)\left(1+\frac{\tau^*(\lambda+\mu^*)}{j}\right)^j= -k_2\beta,$$ and raising both sides of the equality to the power $m$ gives $$0 = (k_1-\gamma^*-\lambda)^m(1+\frac{\tau^*(\lambda+\mu^*)}{j})^n - \left(-k_2\beta \right)^m . \label{Eq:RationalGammaCharacterisicEquation}$$ Not all solutions of will necessarily satisfy . However, every solution of will satisfy . Moreover, is a polynomial with $m+n$ roots, so with $j = n/m \in \mathbb{Q}$ has at most $m+n$ roots. 
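For integer $j$, the finite root count is easy to verify numerically by expanding the characteristic polynomial and counting its roots. In the sketch below, all parameter values are arbitrary illustrative assumptions, and the polynomial is assembled by repeated coefficient convolution.

```python
import numpy as np

# Expand 0 = (k1 - gam - lam)(1 + tau*(lam + mu)/j)^j + k2*beta as a
# polynomial in lam (coefficients ordered highest degree first) and
# count its roots. All parameter values are arbitrary assumptions.
k1, gam, k2, beta, mu, tau_star = 0.2, 0.8, -0.5, 1.0, 0.1, 2.0

def char_poly(j):
    base = np.array([tau_star / j, 1.0 + tau_star * mu / j])  # linear factor
    p = np.array([1.0])
    for _ in range(j):
        p = np.convolve(p, base)            # (1 + tau*(lam + mu)/j)^j
    p = np.convolve([-1.0, k1 - gam], p)    # times (k1 - gam - lam)
    p[-1] += k2 * beta                      # plus the constant k2*beta
    return p

# degree j + 1 polynomial, hence exactly j + 1 characteristic values
roots = {j: np.roots(char_poly(j)) for j in (1, 2, 5)}
```

Each computed set contains exactly $j+1$ roots, in contrast to the transcendental case.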
However, if the parameter $j$ is not rational, then is once again a transcendental equation with possibly infinitely many roots. The relationship between the number of characteristic values and the parameter $j$ leads to interesting questions. If $j \in \mathbb{N}$ increases by unit steps, then the characteristic equation gains precisely one root. However, if $j$ increases smoothly between $j$ and $j+1$, do characteristic values spring in and out of existence depending on the rationality of $j$? This question, while important, is outside the scope of the current work. Having studied the characteristic equation of gamma distributed DDEs, we proceed to write down the gamma distributed DDE. We have parametrized the gamma distribution so that at homeostasis, the mean delay time is $\tau^*$. The variable ageing velocity must then be scaled so that at homeostasis, individuals age chronologically. Therefore, we define the scaled ageing velocity $$\hat{V}_a(t) = \frac{V_a(t)}{V_a^*}, \label{Eq:ScaledAgeingVelocity}$$ and will use $\hat{V}_a(t)$ throughout the remainder of our study. The scaled density function $g_{V_a^*}^j(a_t({\varphi}))$ is given by $$g_{V_a^*}^j\left( \int_{t-{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right) = \frac{(V_a^*)^j}{\Gamma(j)}\left[ \int_{t-{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right]^{j-1} \exp \left[-V_a^* \int_{t-{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s \right].$$ By inserting $g_{V_a^*}^j(a_t({\varphi}))$ into equation , we define $$A_{g}(t) = \int_{0}^{\infty} g_{V_a^*}^j \left( \int_{t-{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right) \frac{ \beta x(t-{\varphi}) }{\hat{V}_a (t-{\varphi})} \exp \left[ - \int_{t-{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}. \label{Eq:AGammaDefinition}$$ Then, the IVP with a state-dependent distributed DDE corresponding to equation  is $$\left. 
\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = F \left[ x(t),V_a(t) A_{g}(t)\right] - \gamma(x(t)) x(t)\\ x(s) & = \rho(s), s \in (-\infty,t_0]. \end{aligned} \right \} \label{Eq:GammaDistributedDDE}$$ As we show in Section \[Sec:Examples\], equivalent models to have been used in pharmacokinetic modelling. However, these models typically take the form of finite dimensional systems of ODEs and the direct link between these ODEs with variable transit rates and has not been established previously. ### A Generalized Linear Chain Technique {#Sec:FiniteDimensionalRepresentation} The finitely many roots of equation  for integer $j\in \mathbb{N}$ suggest that there is a finite dimensional representation of the DDE . The link between gamma distributed DDEs and transit chain ODEs with constant transit rates has been known since at least @Vogel1961. The method entered into the English literature in the works of @MacDonald1978 as the linear chain trick or the linear chain technique. Just as in Section \[Sec:CompactSupportDistribution\], the linear chain technique consists of replacing the convolution integral by the solution of a system of differential equations. To do this, we will exploit the fact that, for $j\in \mathbb{N}$, $$\frac{{\mathrm{d}}}{{\mathrm{d}}x} g_b^1(x) = -b g_b^1(x) \quad \textrm{and} \quad \frac{{\mathrm{d}}}{{\mathrm{d}}x} g_b^j(x) = b[g_b^{j-1}(x)-g_b^j(x)]. \label{Eq:GammaDerivatives}$$ The linear chain technique has been used extensively in pharmacology to model delayed drug absorption and action. However, typical applications of the technique require that transition rates between compartments are constant and identical. @deSouza2017 developed an adapted linear chain technique that allows for variable transition rates by rescaling time in a non-linear way. This non-linear time rescaling leads to difficulties in establishing a link between time rescaled simulations and time series patient data [@deSouza2017]. 
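The derivative identities for the gamma densities stated above can be checked directly by central finite differences; the rate $b$ and the evaluation points below are arbitrary choices made only for this check.

```python
import math

# Finite difference check of d/dx g_b^1 = -b g_b^1 and, for j >= 2,
# d/dx g_b^j = b [g_b^{j-1} - g_b^j]; b and the test points are arbitrary.
def g(b, j, x):
    return b**j * x**(j - 1) * math.exp(-b * x) / math.gamma(j)

def recursion_error(b=1.3, h=1e-6):
    err = 0.0
    for x in (0.5, 1.0, 2.5):
        fd1 = (g(b, 1, x + h) - g(b, 1, x - h)) / (2 * h)
        err = max(err, abs(fd1 + b * g(b, 1, x)))
        for j in (2, 3, 5):
            fd = (g(b, j, x + h) - g(b, j, x - h)) / (2 * h)
            err = max(err, abs(fd - b * (g(b, j - 1, x) - g(b, j, x))))
    return err
```

The maximum discrepancy is at the level of the finite difference truncation error, confirming the recursion on which the chain construction relies.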
Here, we provide an alternative technique that allows for variable transition rates between compartments without rescaling time. We first show how to write as the solution of a system of ordinary differential equations. \[Lemma:GammaDerivative\] For $j\in \mathbb{N}$, $A_g(t) = x_j(t)$ where $\{ x_i(t) \}_{i=1}^j$ satisfies $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x_1(t) & = \frac{ \beta x(t) }{\hat{V}_a (t)} - V_a(t)x_1(t) -\mu(x(t)) x_1(t)\\ \notag {\frac{\textrm{d}}{\textrm{dt}}}x_i(t) & = V_a(t)\left[ x_{i-1}(t) - x_{i}(t)\right] - \mu(x(t)) x_i(t)\quad \textrm{for} \quad i = 2,3,...,j. \notag \end{aligned} \label{Eq:TransitChainODEGeneral}$$ We first note that $$g_{V_a^*}^i\left( \int_{t}^t \hat{V}_a(s){\mathrm{d}}s \right) = \left \{ \begin{array}{lll} V_a^* & \textrm{if} & i =1 \\ 0 & \textrm{if} & i = 2,3,...,j. \\ \end{array} \right.$$ Then using and , the chain and Leibniz rules show that $${\frac{\textrm{d}}{\textrm{dt}}}g_{V_a^*}^1 \left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right) = -V_a^*\hat{V}_a(t) g_{V_a^*}^1 \left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right) = -V_a(t) g_{V_a^*}^1\left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right)$$ while, for $ i = 2,3,4,...$, $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}g_{V_a^*}^i \left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s \right) & = V_a^* \hat{V}_a(t) \left[ g_{V_a^*}^{i-1}\left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s \right) -g_{V_a^*}^i\left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s \right) \right]\\ & = V_a(t) \left[ g_{V_a^*}^{i-1} \left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right)-g_{V_a^*}^i\left( \int_{{\varphi}}^t \hat{V}_a(s){\mathrm{d}}s\right) \right]. 
\end{aligned}$$ Now we define $$a(x) = \int_{t-x}^t \hat{V}_a(s) {\mathrm{d}}s$$ and, for $i=1,2,...,j$, $$x_i(t) = \int_{-\infty}^{t} g_{V_a^*}^i (a(t-{\varphi})) \frac{ \beta x({\varphi}) }{V_a ({\varphi})} \exp \left[ - \int_{{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}, \label{Eq:TransitChainDef}$$ and note that, after making the change of variable $u = t-{\varphi}$ in $A_g(t)$, $$x_j(t) = \int_{-\infty}^{t} g_{V_a^*}^j (a(t-{\varphi})) \frac{ \beta x({\varphi}) }{V_a ({\varphi})} \exp \left[ - \int_{{\varphi}}^t \mu(x(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}= A_g(t).$$ Now, by differentiating using the Leibniz rule, the transit chain $x_i(t)$ satisfies the following system of equations $$\begin{aligned} \notag {\frac{\textrm{d}}{\textrm{dt}}}x_1(t) & = \frac{ \beta x(t) }{\hat{V}_a (t)} - V_a(t)x_1(t) -\mu(x(t)) x_1(t)\\ \tag*{\qed} {\frac{\textrm{d}}{\textrm{dt}}}x_i(t) & = V_a(t)\left[ x_{i-1}(t) - x_{i}(t)\right] - \mu(x(t)) x_i(t)\quad \textrm{for} \quad i = 2,3,...,j.\end{aligned}$$ Importantly, Lemma \[Lemma:GammaDerivative\] ensures that $$V_a(t) A_{g}(t) = V_a(t) x_j(t). \label{Eq:ClosedSystemGammaDistribution}$$ Now, we can use the relationship between equations  and to establish the following theorem: \[Theorem:FiniteDimensionRepresentation\] The distributed state dependent DDE  with $j \in \mathbb{N}$ is equivalent to the finite dimensional transit compartment ODE system given by $$\left. \begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x(t) & = F(x(t),V_a(t)x_j(t) )- \gamma(x(t)) x(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}x_1(t) & = \frac{ \beta x(t) }{\hat{V}_a (t)} - V_a(t)x_1(t)-\mu(x(t)) x_1(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}x_i(t) & = V_a(t)\left[ x_{i-1}(t) - x_{i}(t)\right] -\mu(x(t)) x_i(t) \quad \textrm{for} \quad i = 2,3,...,j. \end{aligned} \right \} \label{Eq:GammaEquivalentODE}$$ Lemma \[Lemma:GammaDerivative\] ensures that the differential equations are equivalent. 
Therefore, we need only construct appropriate initial data for the distributed DDE and ODE formulation. For a history function $\rho(s)$ of , we set, for $i = 1,2,...,j,$ $$x_i(0) = \int_{-\infty}^{0} g_{V_a^*}^i (a(-{\varphi})) \frac{ \beta \rho({\varphi}) }{V_a ({\varphi})} \exp \left[ - \int_{{\varphi}}^0 \mu(\rho(s)) {\mathrm{d}}s\right] {\mathrm{d}}{\varphi}.$$ If $\mu(s) = \mu^*$ is constant and the initial conditions satisfy $$x_i(0) = \left(\frac{V_a^*}{V_a^*+\mu^*}\right)^i x_1(0),$$ it is simple to choose $\rho(s) = x_1(0)$. However, in the more general case with $\mu(t) \neq \mu^*$ and arbitrary ODE initial conditions $x_i(0)= \alpha_i$ of , we can use a method similar to that of @Cassidy2018 to construct one of the infinitely many appropriate history functions. A form of the expression for the variable age transit chain in equation  was derived by @Krzyzanski2011 to study the equivalence between lifespan and transit compartment models in pharmacodynamics. However, the derivation did not include the underlying age structured PDE and was specific to the gamma distribution. @Gurney1986 derived a similar expression for the density of individuals progressing through a specific stage of maturation from a balance equation. However, they did not explicitly formulate the underlying DDE nor did they derive the correct initial conditions for each of the transit compartments. Consequently, they did not show equivalence between the transit compartment formulation and the DDE. \[Remark:ODEtoDDERecipe\] We note that the finite dimensional representation of  with $j\in \mathbb{N}$ includes a transit compartment chain. Due to the equivalence between and , we are able to identify the ingredients needed to transform a transit compartment ODE such as into a DDE such as . 
We first consider $${\frac{\textrm{d}}{\textrm{dt}}}x_1(t) = \frac{ \beta x(t) }{\hat{V}_a (t)} - V_a(t)x_1(t)-\mu(x(t)) x_1(t).$$ From the equation for $x_1(t)$, we can easily identify the ratio $\beta x(t)/\hat{V}_a (t)$ as the rate at which individuals in the first compartment are created. Next, by considering the rate at which individuals enter the second compartment, $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}x_1(t) & = \frac{ \beta x(t) }{\hat{V}_a (t)} - V_a(t)x_1(t)-\mu(x(t)) x_1(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}x_2(t) & = V_a(t)x_1(t) - V_a(t)x_{2}(t) -\mu(x(t)) x_2(t), \end{aligned}$$ we find the (possibly variable) transit rate between compartments. Then, a process of elimination immediately yields the mortality rate $\mu(x(t))$ (if $\mu(x(t))<0$, then population growth rather than decay is occurring through the transit chain). The creation and transit rates also yield the homeostatic ageing rate via . Further inspection of shows that these rates are all that are needed to transform the transit chain ODE to a distributed DDE. We note that the classic linear chain technique (see @Smith2011a) is a special case of Remark \[Remark:ODEtoDDERecipe\] where the ageing velocity, $V_a(t)$, is constant. Examples From Hematopoiesis {#Sec:Examples} =========================== Sometimes, analysis of distributed DDEs is more tractable than that of an equivalent high dimensional ODE system. For example, by rescaling time, @deSouza2017 converted Quartino’s ODE transit compartment model of granulopoiesis into a distributed DDE [@Quartino2014]. The distributed DDE formulation proved to be much more analytically tractable than the ODE case, and was used to show the positivity of solutions and establish the local stability of equilibrium solutions. However, due to the lack of a general numerical algorithm, simulation of distributed DDEs must be handled on a case by case basis. 
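To make the case-by-case nature concrete, one direct (if inefficient) option is to re-approximate the convolution integral by quadrature at every time step. The toy sketch below does this for a gamma distributed DDE with constant ageing velocity and constant death rate; the truncation of the infinite delay, the choice $F(a,b)=b$, and all parameter values are assumptions for illustration only.

```python
import math
import numpy as np

# Toy direct simulation of a gamma distributed DDE with constant ageing
# velocity and constant death rate mu: the distributed delay term is
# recomputed by quadrature over the stored history at every step.
beta, mu, gam = 1.0, 0.1, 0.6          # birth, death, clearance (assumed)
j, b = 3, 2.0                          # gamma shape and rate (assumed)
dt, t_end, phi_max = 0.01, 10.0, 15.0  # phi_max truncates the infinite delay

phis = np.arange(0.0, phi_max + dt / 2, dt)
kernel = b**j * phis**(j - 1) * np.exp(-b * phis) / math.factorial(j - 1)
weight = beta * kernel * np.exp(-mu * phis)    # beta * g(phi) * survival

n_hist = len(phis) - 1
n = int(round(t_end / dt))
x = np.ones(n_hist + n + 1)            # constant history rho(s) = 1
for i in range(n):
    k = n_hist + i
    past = x[k - n_hist : k + 1][::-1]         # x(t - phi_m), m = 0..n_hist
    A = dt * np.dot(weight, past)              # quadrature for the delay term
    x[k + 1] = x[k] + dt * (A - gam * x[k])    # F(a, b) = b assumed
```

Every step scans the whole history buffer, so the cost grows with the kernel support; this is exactly the overhead that the compartment reductions below the surface of the linear chain technique avoid.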
Simulation of transit compartment ODEs is routine in many programming languages and can be used for the calibration of models to existing data. Once calibrated, mathematical models can be simulated and used in a predictive manner. Consequently, by converting models between the equivalent distributed DDE or ODE formulations, researchers can use the form of the model that is most suitable to their needs. The hematopoietic system controls blood cell production and, through tight cytokine control, is able to quickly respond to challenges, including infection and blood loss. Cytokines control hematopoietic output by varying effective proliferation and maturation rates in each hematopoietic lineage. As cells are not produced instantaneously, there is necessarily a delay between cytokine signal and production response. Mathematical models have been used to understand the complex dynamics observed in so-called dynamical diseases since the 1970s [@Glass2015; @Rubinow1975; @Mackey1978]. Existing mathematical models of hematopoiesis have included discrete, distributed and state dependent DDEs [@Mahaffy1998; @Colijn2005a; @Crauste2007; @Craig2016; @Hearn1998a] as well as transit compartment models [@Friberg2002; @VonSchulthess1982; @Krzyzanski2010]. Here, we use the equivalence between state dependent distributed DDEs and ODE transit compartment models derived in Section \[Sec:FiniteDimensionalRepresentation\] to convert two previously published ODE models of hematopoietic cell production to their equivalent state dependent distributed DDEs. The ODE models specify the entrance rate of individuals into the maturation compartment and the maturation speed, $V_a(t)$, which allows for the calculation of the birth rate of immature individuals. As these models involve more than one population, the birth rate $\beta$ is no longer constant but is a function of other populations in the model. 
In the first example, we show how a model of reticulocyte production can be reduced to a renewal equation whose dynamics are completely characterized by a simple system of ordinary differential equations. In the second example, we extend the framework of Section \[Sec:FiniteDimensionalRepresentation\] to include non-identical transitions between ageing populations and a variable transition rate. This example shows how the state dependent distributed DDE framework addresses the inability of the linear chain technique to model dynamic ageing processes. Pérez-Ruixo Model of Reticulocyte Production {#Sec:PerezModel} -------------------------------------------- @Perez-Ruixo2008 studied the effect of recombinant human erythropoietin (EPO) on red blood cell precursors using a mathematical model. EPO is the protein responsible for controlling production of red blood cells and their precursors. The model arises from pharmacokinetic and pharmacodynamic data from patients receiving one dose of exogenous EPO. EPO was modelled through an open two compartment model of exogenous dose absorption and homoeostatic endogenous production rate, $k_{EPO}$, and the blood serum level ($BSL$). The bioavailable exogenous EPO was modelled as a dose dependent hyperbolic function satisfying $$F = F_0+ \frac{E_{max}\textrm{Dose}}{ED_{50}+\textrm{Dose}},$$ where $\textrm{Dose}$ is the amount of EPO administered. Exogenous EPO was absorbed through a dual absorption model into the depot and central compartments. The duration of first order absorption into the depot and central compartments are given by $D_1$ and $D_2$, respectively. A fraction of the bioavailable exogenous EPO, $f_r$, was absorbed into the depot compartment before entering the central compartment at rate $k_a$. 
The depot concentration of EPO follows $${\frac{\textrm{d}}{\textrm{dt}}}A_1(t) = \left \{ \begin{array}{lll} \frac{\textrm{Dose} f_r F}{D_1} - k_a A_1 & \textrm{if} & t {\leqslant}D_1\\ -k_a A_1 & \textrm{if} & t > D_1. \end{array} \right. \label{Eq:PerezA1Equation}$$ The remaining exogenous EPO, $(1-f_r)F$, enters the central compartment following a lag time $t_{\textrm{lag}2}$ and is cleared linearly at the rate $k_{20}$. The volume of the central compartment is $V_1$. The dynamics of exogenous EPO in the central compartment are given by $${\frac{\textrm{d}}{\textrm{dt}}}A_2(t) = \left \{ \begin{array}{lll} \frac{\textrm{Dose}(1- f_r) F}{D_2} + k_a A_1(t) + k_{32}A_3(t)- k_{23}A_2(t) & & \\ \quad - k_{20}A_2(t) + k_{epo} - \frac{V_{max}A_2(t)/V_1}{K_M+A_2(t)/V_1} & \textrm{if} & t_{\textrm{lag}2}{\leqslant}t {\leqslant}D_2 \\ k_{epo} - \frac{V_{max}A_2(t)/V_1}{K_M+A_2(t)/V_1} & \textrm{if} & t > D_2 \textrm{ or } t < t_{\textrm{lag}2}. \end{array} \right. \label{Eq:PerezA2Equation}$$ Finally, EPO enters the peripheral compartment from, and returns to, the central compartment linearly, so $${\frac{\textrm{d}}{\textrm{dt}}}A_3(t) = k_{23}A_2(t)-k_{32}A_3(t). \label{Eq:PerezA3Equation}$$ The total bioavailable EPO is given by $$C(t) = BSL + A_2(t)/V_1.$$ @Perez-Ruixo2008 considered 4 different pharmacodynamic models of erythrocyte response to exogenous EPO (the “A”, “B”, “C” and “D” models). In each of the 4 different pharmacodynamic models, the EPO dynamics are unchanged and described by equations , and . Here, we describe the “B” model from @Perez-Ruixo2008. Model “B” divides the erythrocyte progenitors, $P(t)$, into $N_P$ compartments further subdivided into two distinct populations; EPO only affects the growth rate of the first population. Thus, the first $N_P/2$ compartments constitute the EPO sensitive population. Progression through these $N_P$ compartments represents the ageing process of the progenitor cells.
Once erythrocyte progenitors have reached maturity, they progress into the reticulocyte population. Once again, the maturation process of reticulocytes is modelled through a series of $N_R$ transit compartments that are not sensitive to EPO. In this manner, the @Perez-Ruixo2008 model uses a concatenation of transit compartments to model the separate ageing processes of reticulocytes. The Pérez-Ruixo “B” model of erythrocyte progenitor and reticulocyte production is $$\left. \begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}P_1(t) & = k_{in}- \frac{S_{max}C(t)}{SC_{50}+C(t)}\frac{N_P}{T_P}P_1(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}P_i(t) & = \frac{S_{max}C(t)}{SC_{50}+C(t)} \frac{N_P}{T_P}\left[ P_{i-1}(t)-P_i(t)\right] \quad \textrm{for} \quad i = 2,3,...,N_P/2 \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}P_{N_P/2+1}(t) & = \frac{S_{max}C(t)}{SC_{50}+C(t)}\frac{N_P}{T_P} P_{N_P/2}(t) - \frac{N_P}{T_P} P_{N_P/2+1}(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}P_i(t) & = \frac{N_P}{T_P}\left[ P_{i-1}(t)-P_i(t)\right] \quad \textrm{for} \quad i = N_P/2+2,...,N_P. \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}R_1(t) & = \frac{N_P}{T_P}P_{N_P}(t) - \frac{N_R}{T_R}R_1(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}R_i(t) & = \frac{N_R}{T_R}\left[ R_{i-1}(t)-R_i(t)\right] \quad \textrm{for} \quad i = 2,3,...,N_R. \end{aligned} \right \} \label{Eq:PerezModel}$$ By identifying the ingredients necessary from Remark \[Remark:ODEtoDDERecipe\], we will show how the distributed DDE framework from Section \[Sec:FiniteDimensionalRepresentation\] can account for these separate ageing processes with distinct ageing velocities. Accounting for multiple ageing processes is not possible by rescaling time, so the approach of @deSouza2017 cannot be generalized to this case. The most immature erythrocyte progenitors are modelled by $P_1(t)$ and are created from multipotent progenitors differentiating into the erythrocyte lineage at a constant rate $k_{in}$.
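A minimal simulation sketch of the transit system above, with hypothetical parameter values and the EPO concentration $C(t)$ frozen at a constant level for brevity (so the variable rate $V_e$ is constant); at steady state every flux balances $k_{in}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the Perez-Ruixo "B" transit system (hypothetical parameter
# values; C(t) held constant, so V_e is constant in this illustration).
NP, NR = 6, 4                   # progenitor / reticulocyte compartment counts
TP, TR = 120.0, 70.0            # assumed mean residence times
k_in, Smax, SC50, C = 1.0, 2.0, 50.0, 30.0   # assumed rates and EPO level

def rhs(t, y):
    P, R = y[:NP], y[NP:]
    Ve = Smax * C / (SC50 + C) * NP / TP     # EPO-sensitive transit rate
    Vp = NP / TP                             # constant rate, later compartments
    dP, dR = np.empty(NP), np.empty(NR)
    dP[0] = k_in - Ve * P[0]
    for i in range(1, NP // 2):
        dP[i] = Ve * (P[i - 1] - P[i])
    dP[NP // 2] = Ve * P[NP // 2 - 1] - Vp * P[NP // 2]
    for i in range(NP // 2 + 1, NP):
        dP[i] = Vp * (P[i - 1] - P[i])
    dR[0] = Vp * P[-1] - (NR / TR) * R[0]
    for i in range(1, NR):
        dR[i] = (NR / TR) * (R[i - 1] - R[i])
    return np.concatenate([dP, dR])

sol = solve_ivp(rhs, (0.0, 5000.0), np.zeros(NP + NR), rtol=1e-8, atol=1e-10)
# Steady state: Ve*P1* = k_in, and each reticulocyte compartment holds k_in*TR/NR.
print(sol.y[0, -1], sol.y[-1, -1])
```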
Transit between the first $N_P/2$ compartments occurs at the variable rate $$V_e(t)= \frac{S_{max}C(t)}{SC_{50}+C(t)} \frac{N_P}{T_P} \quad \textrm{with} \quad V_e^* = \frac{S_{max}BSL}{SC_{50}+BSL} \frac{N_P}{T_P}.$$ Using , we define $\hat{V}_e(t) = V_e(t)/V_e^*$, so the birth rate of precursor cells into $P_2(t)$ is $$V_e(t) P_1(t) = \displaystyle \frac{\beta_e(t)}{\hat{V}_{e}(t)}.$$ Further, we see that the only removal of cells from the compartment model is due to transition to later compartments. Therefore, $\mu(t) =0$, and we have identified all the ingredients necessary in Remark \[Remark:ODEtoDDERecipe\]. Thus, for $i = 2,3,...,N_P/2$, $$P_i(t) = \int_{-\infty}^t \frac{V_e({\varphi})}{V_e^*} \ P_1({\varphi})g_{V_e^*}^{i}\left[\int_{{\varphi}}^t \hat{V}_e(s) {\mathrm{d}}s\right]{\mathrm{d}}{\varphi}. \label{Eq:ErythroProgenitorExpression1}$$ The $N_P/2+1$st compartment satisfies $${\frac{\textrm{d}}{\textrm{dt}}}P_{N_P/2+1}(t) = V_e(t) P_{N_P/2}(t) - \frac{N_P}{T_P} P_{N_P/2+1}(t).$$ Erythrocyte progenitors enter the first non-EPO sensitive ageing compartment, $P_{N_P/2+1}(t)$, with appearance rate $$\frac{\tilde{\beta}_e(t)}{V_p(t)} = V_e(t)P_{N_P/2}(t),$$ and then progress through the remaining $N_P/2$ compartments at a constant rate $V_p(t) = V_p^* = N_P/T_P$. Once again, we note that there is no removal of cells in any of the $N_P/2$ compartments, so $\mu(t)= 0$. Further, since the ageing velocity is constant, $\hat{V}_p(t) = 1$.
Therefore, a simple application of Remark \[Remark:ODEtoDDERecipe\] for constant ageing velocity, and using gives $$\begin{aligned} \label{Eq:PerezProgenitors}\notag P_{N_P}(t) & = \int_0^{\infty} \frac{\tilde{\beta}_e(t-\theta)}{N_P/T_P}g_{N_P/T_P}^{N_P/2}(\theta) {\mathrm{d}}\theta = \int_{-\infty}^{t} \frac{\tilde{\beta}_e(\theta)}{N_P/T_P}g_{N_P/T_P}^{N_P/2}(t-\theta) {\mathrm{d}}\theta \\[0.25cm] &= \int_{-\infty}^{t} \left[\frac{V_e(\theta)}{N_P/T_P} \int_{-\infty}^{\theta} V_e({\varphi}) P_1({\varphi})g_{V_e^*}^{N_P/2}\left( \int_{{\varphi}}^{\theta} \hat{V}_e(s) {\mathrm{d}}s\right){\mathrm{d}}{\varphi}\right]g_{N_P/T_P}^{N_P/2}(t-\theta) {\mathrm{d}}\theta.\end{aligned}$$ Mature erythrocyte precursors enter into the most immature reticulocyte compartment, $R_1(t)$. Given , the differential equation for $R_1(t)$ becomes $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}R_1(t) & = \frac{N_P}{T_P}\underbrace{ \int_{-\infty}^{t} \left[\frac{V_e(\theta)}{N_P/T_P} \int_{-\infty}^{\theta} V_e({\varphi}) P_1({\varphi})g_{V_e^*}^{N_P/2}\left( \int_{{\varphi}}^{\theta} \hat{V}_e(s) {\mathrm{d}}s\right) {\mathrm{d}}{\varphi}\right]g_{N_P/T_P}^{N_P/2}(t-\theta) {\mathrm{d}}\theta}_{P_{N_P}(t)} \\ & \qquad {} - \frac{N_R}{T_R}R_1. \end{aligned}$$ Hence, the Pérez-Ruixo “B” model of reticulocyte production is equivalent to $$\begin{aligned} C(t) & = BSL + A_2(t)/V_1 \\ {\frac{\textrm{d}}{\textrm{dt}}}P_1(t) & = k_{in}- \frac{S_{max}C(t)}{SC_{50}+C(t)}\frac{N_P}{T_P}P_1(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}R_1(t) & = \frac{N_P}{T_P}\int_{-\infty}^{t} \left[\frac{V_e(\theta)}{N_P/T_P} \int_{-\infty}^{\theta} V_e({\varphi}) P_1({\varphi})g_{V_e^*}^{N_P/2}\left( \int_{{\varphi}}^{\theta} \hat{V}_e(s) {\mathrm{d}}s\right) {\mathrm{d}}{\varphi}\right]g_{N_P/T_P}^{N_P/2}(t-\theta) {\mathrm{d}}\theta \\ & \qquad {} - \frac{N_R}{T_R}R_1 \\ {\frac{\textrm{d}}{\textrm{dt}}}R_i(t) & = \frac{N_R}{T_R}\left[ R_{i-1}(t)-R_i(t)\right] \quad \textrm{for} \quad i = 2,3,...,N_R. 
\end{aligned}$$ Finally, we can use Remark \[Remark:ODEtoDDERecipe\] with the constant ageing velocity $V_r(t) = V_r^* = N_R/T_R$ to solve the transit compartment system for $R_i(t)$ to find $$R_{i}(t) = \int_{0}^{\infty} \frac{T_R}{N_R} \beta_R(t-\sigma)g_{N_R/T_R}^{i}(\sigma) {\mathrm{d}}\sigma, \label{Eq:ReticulocyteRenewal}$$ where $$\beta_R(\sigma) = \frac{N_P}{T_P}\int_{-\infty}^{\sigma} \left[\frac{V_e(\theta)}{N_P/T_P} \int_{-\infty}^{\theta} V_e({\varphi}) P_1({\varphi})g_{V_e^*}^{N_P/2}\left( \int_{{\varphi}}^{\theta} \hat{V}_e(s) {\mathrm{d}}s\right) {\mathrm{d}}{\varphi}\right]g_{N_P/T_P}^{N_P/2}(\sigma-\theta) {\mathrm{d}}\theta.$$ Using the techniques developed in Section \[Sec:FiniteDimensionalRepresentation\], we have transformed the differential equations for the transit compartments for the erythrocyte progenitors and the reticulocytes into renewal type equations given by and [@Diekmann2017]. Since @Perez-Ruixo2008 did not model reticulocyte mediated clearance of EPO, the cytokine and early progenitor dynamics are independent of the $P_{N_P}(t)$ and $R_{N_R}(t)$ concentrations. Consequently, the dynamics of equation  are completely determined by the dynamics of $$\begin{aligned} C(t) & = BSL + A_2(t)/V_1 \\ {\frac{\textrm{d}}{\textrm{dt}}}P_1(t) & = k_{in}- \frac{S_{max}C(t)}{SC_{50}+C(t)}\frac{N_P}{T_P}P_1(t), \end{aligned}$$ and the EPO concentrations given by equations , , and . We are now able to completely characterise the homeostatic behaviour of erythropoiesis by studying $$\left. 
\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}A_1(t) & = -k_aA_1(t) \\ {\frac{\textrm{d}}{\textrm{dt}}}A_2(t) & = k_{epo} - \frac{V_{max}A_2/V_1}{K_M+A_2/V_1} \\ {\frac{\textrm{d}}{\textrm{dt}}}A_3(t) & = k_{23}A_2(t)-k_{32}A_3(t) \\ {\frac{\textrm{d}}{\textrm{dt}}}P_1(t) & = k_{in}- \frac{S_{max}C(t)}{SC_{50}+C(t)}\frac{N_P}{T_P}P_1(t), \end{aligned} \right \} \label{Eq:ErythropoiesisRenewalEquation}$$ To ensure that the initial value problem is equivalent to the Pérez-Ruixo model [@Perez-Ruixo2008], we re-use the initial conditions for $A_1(0),A_2(0),A_3(0)$. Since $\mu=0$ and the initial conditions $P_1(0) = P_i(0) $ are constant, we can set the history function for the progenitors, $\rho_p(s)$, to be $\rho_p(s) = P_1(0)$. The same can be done for the reticulocytes with $\rho_r(s) = R_1(0)$. We find the homeostatic concentration of EPO in the depot, central and peripheral compartments by solving $${\frac{\textrm{d}}{\textrm{dt}}}A_1(t) = 0, \quad {\frac{\textrm{d}}{\textrm{dt}}}A_2(t) = 0, \quad {\frac{\textrm{d}}{\textrm{dt}}}A_3(t) = 0, \quad \textrm{and} \quad {\frac{\textrm{d}}{\textrm{dt}}}P_1(t) = 0.$$ This yields the following homeostatic EPO concentrations (assuming $V_{max}>k_{epo}$) $$A_1^* = 0, \quad A_2^* = \frac{V_1k_{epo}K_M}{V_{max}-k_{epo}}, \quad A_3^*= \frac{k_{23}}{k_{32}}A_2^*, \quad \textrm{and} \quad C^* = BSL+A_2^*/V_1,$$ while the homeostatic progenitor concentration is $$P_1^* = \frac{k_{in}(SC_{50}+C^*)}{S_{max}C^*}\frac{T_P}{N_P}.$$ The simplified erythropoiesis dynamics and homeostatic concentrations lead to the following proposition: For positive parameter values, the homeostatic equilibrium point of equation  is locally asymptotically stable. 
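The homeostatic concentrations can be verified numerically by substituting them back into the right-hand sides of the reduced system; the sketch below uses hypothetical parameter values (chosen so that $V_{max}>k_{epo}$) and confirms that every derivative vanishes:

```python
# Numerical check of the homeostatic equilibria (hypothetical parameter
# values): plug the closed-form A1*, A2*, A3*, P1* back into the reduced
# system and confirm that every derivative is zero.
ka, kepo, Vmax, KM, V1 = 0.3, 5.0, 20.0, 2.0, 4.0
k23, k32, BSL = 0.1, 0.2, 1.0
kin, Smax, SC50, NP, TP = 1.0, 3.0, 10.0, 6, 120.0

A1 = 0.0
A2 = V1 * kepo * KM / (Vmax - kepo)      # requires Vmax > kepo
A3 = k23 / k32 * A2
C = BSL + A2 / V1                        # total bioavailable EPO
P1 = kin * (SC50 + C) / (Smax * C) * TP / NP

dA1 = -ka * A1
dA2 = kepo - (Vmax * A2 / V1) / (KM + A2 / V1)
dA3 = k23 * A2 - k32 * A3
dP1 = kin - Smax * C / (SC50 + C) * NP / TP * P1
print(dA1, dA2, dA3, dP1)                # all four vanish
```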
The linearisation matrix of equation  about the equilibrium $x^* = (A_1^*,A_2^*,A_3^*,P_1^*)$ is $$\mathbb{J}(x^*) = \left[ \begin{array}{cccc} -k_a & 0 & 0 & 0 \\ 0 & -\frac{V_{max}K_M/V_1}{(K_M+A_2^*/V_1)^2} & 0 & 0 \\ 0 & k_{23} & -k_{32} & 0 \\ 0 & -\frac{1}{V_1}\frac{S_{max}SC_{50}}{(SC_{50}+C^*)^2}\frac{N_P}{T_P}P_1^* & 0 & -\frac{S_{max}C^*}{SC_{50}+C^*}\frac{N_P}{T_P}\\ \end{array} \right].$$ The matrix $\mathbb{J}(x^*)$ is lower triangular with strictly negative diagonal entries, so the eigenvalues are strictly negative and the equilibrium is locally asymptotically stable. This example illustrates how Remark \[Remark:ODEtoDDERecipe\] can be adapted to include a series of concatenated ageing processes. In the age structured PDE interpretation, each ageing process corresponds to a unique random variable modelling the transition between distinct stages. As we do not *a priori* expect the transition ages to be independent, interpreting the resulting ageing processes requires some care. The final renewal equation  includes a joint multivariate distribution representing the concatenation of distinct ageing processes. Further, @Perez-Ruixo2008 did not show that the homeostatic equilibrium is locally asymptotically stable. For the ODE system , the Jacobian would be a $(3+N_P+N_R) \times (3+N_P+N_R) $ matrix with a degree $(3+N_P+N_R)$ characteristic polynomial. In general, analytically finding the roots of a large degree polynomial is difficult. Hence, while the ODE is obviously finite dimensional, it is analytically intractable. Conversely, the equivalent renewal equation  is simple to analyse and a similar argument to Proposition \[Prop:NonNegativitiy\] shows that solutions of the renewal equation  evolving from non-negative initial conditions remain non-negative. The “A”, “C” and “D” models can likewise be expressed as renewal equations through a simple application of the classical linear chain technique and the technique shown here. 
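The triangularity argument is easy to confirm numerically. The sketch below (hypothetical parameter values; the off-diagonal coupling entry is immaterial for the spectrum of a triangular matrix) builds $\mathbb{J}(x^*)$ and checks that its eigenvalues are exactly the diagonal entries and strictly negative:

```python
import numpy as np

# Numerical confirmation that J(x*) is lower triangular, so its eigenvalues
# are its diagonal entries, all negative for positive parameters.
ka, kepo, Vmax, KM, V1 = 0.3, 5.0, 20.0, 2.0, 4.0
k23, k32, BSL = 0.1, 0.2, 1.0
Smax, SC50, NP, TP = 3.0, 10.0, 6, 120.0

A2 = V1 * kepo * KM / (Vmax - kepo)
C = BSL + A2 / V1
coupling = (1 / V1) * Smax * C / (SC50 + C) ** 2   # off-diagonal term; its
                                                    # value does not affect
                                                    # the eigenvalues
J = np.array([
    [-ka, 0.0, 0.0, 0.0],
    [0.0, -(Vmax * KM / V1) / (KM + A2 / V1) ** 2, 0.0, 0.0],
    [0.0, k23, -k32, 0.0],
    [0.0, coupling, 0.0, -Smax * C / (SC50 + C) * NP / TP],
])
eigs = np.linalg.eigvals(J)
print(np.sort(eigs.real))    # strictly negative: locally asymptotically stable
```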
Roskos’s Model of Granulocyte Production ---------------------------------------- @Roskos2006 modelled the impact of exogenous administration of granulocyte colony stimulating factor (G-CSF) on neutrophil proliferation and maturation speed. G-CSF is a proinflammatory cytokine that binds to G-CSF specific receptors on mature neutrophil cells and controls neutrophil kinetics through a negative feedback loop [@Roberts2005; @Shochat2007]. G-CSF governs neutrophil production by increasing the effective proliferation of neutrophil precursors, reducing the maturation time of non-mitotic neutrophil precursors, and increasing release of neutrophil cells from the bone marrow into the blood. The dynamics of neutrophil production have been well-studied from both a mathematical and a pharmacometric point of view [@Craig2016; @deSouza2017; @Quartino2014]. These models have used different techniques to incorporate the delays intrinsic to the system, such as discrete DDEs or transit compartment ODEs. @Roskos2006 model distinct stages of granulocyte production such as the bone marrow concentrations of metamyelocytes, $M(t)$; band cells, $B(t)$; and segmented neutrophil cells, $S(t)$. The ageing and maturation processes for each of these cell types are modelled through a series of three transit chains with $N_M,N_B$ and $N_S$ compartments, respectively. Moreover, band and segmented neutrophil cells can be shunted into circulation following the administration of G-CSF. We denote the metamyelocyte, band and segmented neutrophil cell shunting rates as $\mu_m(t)$, $\mu_b(t)$ and $\mu_s(t)$. Administration of G-CSF is modelled in a similar way to the EPO model of Section \[Sec:PerezModel\] using a first order delayed absorption model. 
However, @Roskos2006 do not give the differential equations for exogenous administration of G-CSF other than to state that the clearance of G-CSF includes neutrophil receptor mediated clearance through the term $$CL_N/F = \frac{k_{cat}/F (B_p(t)+S_p(t))}{K_M+C(t)},$$ where $B_p(t)$ and $S_p(t)$ are the number of circulating band and segmented neutrophil cells, respectively. Due to the feedback between the circulating neutrophil precursors and the cytokine $C(t)$, we are unable to completely reduce the Roskos model to a renewal type equation as was done in Section \[Sec:PerezModel\]. The Roskos model for granulocyte production is $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}M_1(t) & = S_0+\frac{E_{mit}C(t)}{EC_{50}+C(t)} - \frac{N_M}{\tau_{meta}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)}\right) }M_1(t) \\ {\frac{\textrm{d}}{\textrm{dt}}}M_i(t) & = \frac{N_M}{\tau_{meta}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t) } \right) }\left( M_{i-1}(t)-M_{i}(t) \right) \quad \textrm{for} \quad i =2,...,N_M \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}B_1(t) & = \frac{N_M}{\tau_{meta}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right)}M_{N_M}(t) - \frac{N_B}{\tau_{band}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right) } B_1(t) \\[0.2cm] & \qquad {} - \frac{E_{band}C(t)}{EC_{50}+C(t)}B_1(t) \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}B_i(t) & = \frac{N_B}{\tau_{band}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right) }\left[ B_{i-1}(t)-B_i(t)\right] - \frac{E_{band}C(t)}{EC_{50}+C(t)}B_i(t); \quad i = 2,...N_B \\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}B_p(t) & = \displaystyle \sum_{i=1}^{N_B} \frac{E_{band}C(t)}{EC_{50}+C(t)}B_i(t)- (k_{\lambda}+ k_{bpmat})B_p(t) \\ {\frac{\textrm{d}}{\textrm{dt}}}S_1(t) & = \frac{N_B}{\tau_{band}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right)}B_{N_B}(t) - \left( \frac{N_S}{\tau_{seg}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right) } + \frac{E_{seg}C(t)}{EC_{50}+C(t)}\right) S_1(t)\\[0.2cm] {\frac{\textrm{d}}{\textrm{dt}}}S_i(t) & = 
\frac{N_S}{\tau_{seg}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right) } \left[ S_{i-1}(t) - S_i(t) \right] - \frac{E_{seg}C(t)}{EC_{50}+C(t)}S_i(t); \quad i = 2,...,N_S. \\ {\frac{\textrm{d}}{\textrm{dt}}}S_p(t) & = \displaystyle \sum_{i=1}^{N_S} \frac{E_{seg}C(t)}{EC_{50}+C(t)}S_i(t)- (k_{\lambda}+ k_{bpmat})S_p(t), $$ and is an example of a transit compartment model with variable ageing speed and linear clearance. The linear clearance terms are Hill type functions with a maximal clearance rate $E_{j}$ given by $$\mu_j(t) = \frac{E_{j}C(t)}{EC_{50}+C(t)}.$$ Including these linear clearance terms in a transit compartment model is uncommon, but allows for the direct modelling of G-CSF mediated shunting of immature cells into circulation. By converting the model into a distributed DDE, we underline the link between clearance of cells in a transit compartment and the exponential decay present in the distributed DDE. Once again, we will proceed by identifying the ingredients discussed in Remark \[Remark:ODEtoDDERecipe\]. As in Section \[Sec:PerezModel\], the most immature metamyelocytes, $M_1(t)$, are produced from the earlier progenitors at a constant baseline rate $S_0$ with the G-CSF dependent recruitment rate $$\frac{\beta_m(t)}{V_m(t)} = S_0 + \frac{E_{mit}C(t)}{EC_{50}+C(t)}.$$ Metamyelocytes progress through maturation at a G-CSF dependent rate $$V_m(t) = \frac{N_M}{\tau_{meta}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)}\right) }.$$ Metamyelocytes are not shunted into circulation following the administration of G-CSF, so $\mu_m(t) =0$. Therefore, the metamyelocyte transit compartment model can be reduced to a distributed DDE using Remark \[Remark:ODEtoDDERecipe\] in an identical procedure to the Pérez-Ruixo model in Section \[Sec:PerezModel\]. The most mature metamyelocyte population is given by $$M_{N_M}(t) = \int_{-\infty}^t \frac{\beta_m({\varphi})}{V_m({\varphi})}g_{V_m^*}^{N_M}\left[\int_{{\varphi}}^t \hat{V}_m(s) {\mathrm{d}}s\right]{\mathrm{d}}{\varphi}. 
\label{Eq:MetamyelocyteProgenitorExpression}$$ Immature neutrophil band cells, $B_1(t)$, are created at the birth rate $$\displaystyle \frac{\beta_b(t)}{\hat{V}_b(t)} = \frac{N_M}{\tau_{meta}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right)}M_{N_M}(t).$$ These band cells progress through the maturation compartments at the G-CSF dependent ageing rate $$V_b(t) = \frac{N_B}{\tau_{band}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right) } \quad \textrm{with} \quad V_b^* = \frac{N_B}{\tau_{band}\left( 1-\frac{f_{mmt}C^*}{EC_{50}+C^*} \right) },$$ so the scaled ageing rate is $\hat{V}_b(t) = V_b(t)/V_b^*$. Inspecting the remaining terms in the equation for $B_1(t)$ gives $$\mu_b(t) = \frac{E_{band}C(t)}{EC_{50}+C(t)}.$$ Therefore, using Remark \[Remark:ODEtoDDERecipe\], we find that the $i$-th band compartment satisfies $$\begin{aligned} B_{i}(t) & = \hspace{-0.2cm} \int_{-\infty}^t \frac{\beta_b({\varphi})}{V_b({\varphi})} \exp\left[-\int_{{\varphi}}^t \mu_b(s) {\mathrm{d}}s\right]g_{V_B^*}^{i}\left(\int_{{\varphi}}^t \hat{V}_b(s){\mathrm{d}}s\right) {\mathrm{d}}{\varphi}\end{aligned} \label{Eq:BandCellExpression}$$ for $i = 1,2,...N_B$. Mature band cells, given by with $i=N_B$, transition into the first segmented neutrophil cell compartment $S_1(t)$ with creation rate $$\frac{\beta_s(t)}{\hat{V}_s(t)} = \frac{N_B}{\tau_{band}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right)}B_{N_B}(t) = V_b(t)B_{N_B}(t).$$ These cells transit through the segmented neutrophil population with G-CSF dependent ageing ($V_s(t)$) and clearance ($\mu_s(t)$) rates $$V_s(t) = \frac{N_S}{\tau_{seg}\left( 1-\frac{f_{mmt}C(t)}{EC_{50}+C(t)} \right) } \quad \textrm{and} \quad \mu_s(t) = \frac{E_{seg}C(t)}{EC_{50}+C(t)}.$$ Therefore, we have identified all the ingredients in Remark \[Remark:ODEtoDDERecipe\] for the segmented neutrophil precursors, $S(t)$. 
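The kernels $g_a^j$ appearing in these integrals are, under the linear chain interpretation, Erlang (integer-shape gamma) densities $g_a^j(t) = a^j t^{j-1}e^{-at}/(j-1)!$. A quick consistency check against a standard gamma distribution, with illustrative values of $j$ and $a$, also confirms the density integrates to one, so the chain conserves cells when $\mu = 0$:

```python
import numpy as np
from math import factorial
from scipy.stats import gamma
from scipy.integrate import quad

# Erlang kernel used in the band- and segmented-cell integrals:
# g_a^j(t) = a^j t^(j-1) e^(-a t) / (j-1)!  (integer shape j, rate a).
def g(j, a, t):
    return a**j * t**(j - 1) * np.exp(-a * t) / factorial(j - 1)

j, a = 4, 0.8                            # illustrative shape and rate
t = np.linspace(0.01, 20.0, 200)

# Matches scipy's gamma pdf with shape j and scale 1/a (i.e. rate a).
match = np.allclose(g(j, a, t), gamma.pdf(t, j, scale=1.0 / a))

# Unit mass: the kernel is a probability density on [0, infinity).
mass, _ = quad(lambda s: g(j, a, s), 0.0, np.inf)
print(match, mass)
```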
The first segmented neutrophil cell compartment satisfies $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}S_1(t) & = \overbrace{ V_b(t) \int_{-\infty}^t \frac{\beta_b({\varphi})}{V_b({\varphi})} \exp\left[-\int_{{\varphi}}^t \mu_b(s) {\mathrm{d}}s\right]g_{V_B^*}^{N_b}\left(\int_{{\varphi}}^t \hat{V}_b(s){\mathrm{d}}s\right) {\mathrm{d}}{\varphi}}^{\beta_s(t)/\hat{V}_s(t)} \\ & \qquad {} -V_s(t) S_1(t) - \mu_s(t) S_1(t). \end{aligned}$$ Therefore, it is possible to replace the transit compartment system of ODEs for $S_i(t)$ using Remark \[Remark:ODEtoDDERecipe\] to find $$\begin{aligned} \label{Eq:SegmentedCompartmentExpression} \notag S_i(t) & = \int_{-\infty}^t \overbrace{ V_b(\theta)\left[ \int_{-\infty}^{\theta} \frac{\beta_b({\varphi})}{V_b({\varphi})} \exp\left[-\int_{{\varphi}}^{\theta} \mu_b(s) {\mathrm{d}}s\right] g_{V_B^*}^{N_b}\left(\int_{{\varphi}}^{\theta} \hat{V}_b(s){\mathrm{d}}s\right) {\mathrm{d}}{\varphi}\right]}^{\beta_s(\theta)/\hat{V}_s(\theta)} \\ & \qquad {} \times \exp\left[- \int_{\theta}^{t} \mu_s(x){\mathrm{d}}x \right]g_{V_s^*}^{i}\left(\int_{\theta}^{t} \hat{V}_s(s){\mathrm{d}}s\right) {\mathrm{d}}\theta \qquad \textrm{for} \quad i = 1,2,...,N_s.\end{aligned}$$ The initial value problem studied by @Roskos2006 was equipped with initial conditions for the cytokine equations as well as the $N_M+N_S+N_B+2$ compartments. Since $\mu \neq 0$ in general, to create an equivalent renewal type equation, we use the same initial conditions as @Roskos2006 for the cytokine differential equations and follow @Cassidy2018 to construct appropriate history functions for $M(t),B(t)$ and $S(t)$. Therefore, we can reduce the ODE model of granulopoiesis to a renewal-type equation with unchanged cytokine dynamics from @Roskos2006 using the resulting DDEs for $B_p(t)$ and $S_p(t)$. 
The resulting renewal equation is given by the equations describing the cytokine dynamics and the system of distributed DDEs $$\begin{aligned} {\frac{\textrm{d}}{\textrm{dt}}}B_p(t) & = \displaystyle \sum_{i=1}^{N_B} \frac{E_{band}C(t)}{EC_{50}+C(t)}B_i(t)- (k_{\lambda}+ k_{bpmat})B_p(t) \\ {\frac{\textrm{d}}{\textrm{dt}}}S_p(t) & = \displaystyle \sum_{i=1}^{N_S} \frac{E_{seg}C(t)}{EC_{50}+C(t)}S_i(t)- (k_{\lambda}+ k_{bpmat})S_p(t), \end{aligned}$$ where $B_i(t)$ and $S_i(t)$ are given by and , respectively. In this example, we have shown how to concatenate multiple ageing processes with distinct ageing velocities, as well as how to include the loss of cells throughout the ageing process. Once again, we can use a similar argument to Proposition \[Prop:NonNegativitiy\] to ensure that the solutions evolving from non-negative initial data remain non-negative. Discussion {#Sec:Discussion} ========== In this work, we have shown how to reduce age structured PDEs to possibly state-dependent DDEs. Our derivation shows how the correction factor discussed in Section \[Sec:CorrectionFactor\] results naturally from considering the hazard rate at which cells exit maturation, and generalises the derivation of @Craig2016 to the non-deterministic case. In Section \[Sec:AnalysisofDDE\], we analysed the general distributed DDE that arises from the age structured population model. We showed, in Proposition \[Prop:NonNegativitiy\], that populations evolving from non-negative initial conditions remain non-negative, regardless of the density $K_A(t)$. By linearising the distributed DDE, we showed, in Proposition \[Prop:StabilityProposition\], that stability analysis of the general DDE is analytically tractable. We characterized the stability of a generic equilibrium solution as a function of the linearisation of the growth function $F(x^*,\bar{x}^*)$. Next, we considered the state-dependent DDE in the cases of the degenerate, uniform and gamma distributions. 
Choosing the degenerate distribution leads to the familiar state-dependent discrete DDE, while uniformly distributed DDEs are reducible to discrete DDEs with two state dependent delays. Finally, in the case of gamma distributed DDEs, we explicitly related transit compartment models that include variable transit rates to gamma distributed DDEs in Theorem \[Theorem:FiniteDimensionRepresentation\]. As shown by @deSouza2017, it can be simpler to analyse the stability of equilibria and positivity of solutions of a distributed DDE than of the corresponding ODE. However, the ODE models may be simpler to simulate numerically. The equivalence between the differential equations allows for the resulting model to be analysed in the more convenient setting. By means of two examples, we showed how to express transit compartment models as an equivalent DDE or renewal equation. First, we showed how to incorporate a variable transit rate into a distributed DDE using a simple application of Theorem \[Theorem:FiniteDimensionRepresentation\]. Next, we demonstrated that our method is capable of including multiple distinct ageing processes in the form of a multivariate distributed DDE. Lastly, we showed how a linear clearance term in each of the transit compartments can be included in the equivalent DDE model. Analysis of the renewal equation was shown to be simpler than that of the corresponding ODE system, and we were able to easily characterise the stability of the homeostatic equilibria. This work emphasizes the link between transit compartment ODEs and delay differential equations. While this link has been known for over 50 years, we explicitly establish it for compartment models with variable transit rates. We demonstrated that these transit compartment models are equivalent to state dependent distributed DDEs. The equivalence between easy-to-simulate ODE models and the simpler-to-analyse distributed DDEs allows modellers to use the formulation that is most convenient for their purposes. 
Consequently, the framework developed in this article allows for researchers to incorporate both external control of ageing rates and heterogeneous, non-deterministic maturation age into models of physiological maturation processes. Acknowledgments {#acknowledgments .unnumbered} =============== TC would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding through the PGS-D program and the Alberta Government for funding through the Sir James Lougheed award of distinction. MC and ARH are grateful for funding through the NSERC Discovery Grant program. Bernard, S. (2016). . , 78(12):2358–2363. , D., Craig, M., Cassidy, T., Li, J., Nekka, F., B[é]{}lair, J., and Humphries, A. R. (2018). . , 45(1):59–77. Cassidy, T. and Humphries, A. R. (2018). . , pages 1–26. Colijn, C. and Mackey, M. (2005). . , 237(2):117–132. Cox, D. R. (1972). . , 34(2):187–220. Craig, M., Humphries, A. R., and Mackey, M. C. (2016). . , 78(12):2304–2357. Crauste, F. and Adimy, M. (2007). . , 8(1):19–38. Diekmann, O., Gyllenberg, M., and Metz, J. A. J. (2018). . , 30(4):1439–1467. Friberg, L. E., Henningsson, A., Maas, H., Nguyen, L., and Karlsson, M. O. (2002). . , 20(24):4713–4721. Glass, L. (2015). . , 25(9):097603. Gurney, W., Nisbet, R., and Blythe, S. (1986). . In Metz, J. A. J. and Diekmann, O., editors, [*Dyn. Physiol. Struct. Popul.*]{}, chapter 11, pages 474–493. Springer Berlin Heidelberg, Berlin, Heidelberg, 3 edition. Hale, J. K. and [Verduyn Lunel]{}, S. M. (1993). , volume 99 of [*Applied Mathematical Sciences*]{}. Springer New York, New York, NY. Hartung, F., Krisztin, T., Walther, H.-O., and Wu, J. (2006). . In Canada, A., Drabek, P., and Fonda, A., editors, [*Handb. Differ. Equations*]{}, chapter 5, pages 435–545. Elsevier, North Holland 2004, 3rd edition. Hearn, T., Haurie, C., and Mackey, M. C. (1998). . , 192:167–181. Hino, Y., Murakami, S., and Naito, T. (1991). , volume 1473 of [*Lecture Notes in Mathematics*]{}. 
Springer Berlin Heidelberg, Berlin, Heidelberg. Kaplan, E. L. and Meier, P. (1958). . , 53(282):457. Krzyzanski, W. (2011). , 38(2):179–204. Krzyzanski, W., Wiczling, P., Lowe, P., Pigeolet, E., Fink, M., Berghout, A., and Balser, S. (2010). . , 50(S9):101S–112S. Liu, W., Hillen, T., and Freedman, H. I. (2007). , 4(2):239–59. MacDonald, N. (1978). . Springer, Berlin, 1 edition. Mackey, M. (1978). . , 51(5):941–956. Mahaffy, J. M., B[é]{}lair, J., and Mackey, M. C. (1998). . , 190(2):135–146. McKendrick, A. G. (1925). . , 44:98–130. Metz, J. and Diekmann, O., editors (1986). , volume 68 of [*Lecture Notes in Biomathematics*]{}. Springer Berlin Heidelberg, Berlin, Heidelberg, 3 edition. Otto, A. and Radons, G. (2017). . In Insperger, T., Ersal, T., and Orosz, G., editors, [*Time Delay Syst.*]{}, volume 7 of [*Advances in Delays and Dynamics*]{}, pages 169–183. Springer International Publishing, Cham. P[é]{}rez-Ruixo, J. J., Krzyzanski, W., and Hing, J. (2008). , 47(6):399–415. Quartino, A. L., Karlsson, M. O., Lindman, H., and Friberg, L. E. (2014). . , 31(12):3390–3403. Roberts, A. (2005). , 23(1):33–41. Roskos, L. K., Lum, P., Lockbaum, P., Schwab, G., and Yang, B.-B. (2006). . , 46(7):747–757. Rubinow, S. and Lebowitz, J. (1975). . , 225:187–225. Shochat, E., Rom-Kedar, V., and Segel, L. (2007). , 69:299–338. Smith, A., McCullers, J., and Adler, F. (2011). . , 276(1):106–116. Smith, H. L. (1993). . , 113(1):1–23. Teslya, A. (2015). . Doctoral dissertation, McMaster University. Trucco, E. (1965). . , 27(4):449–471. Vogel, T. (1961). . In [*Proc. Int. Symp. Nonlinear Vib.*]{}, pages 123–130, Kiev. Academy of Sciences USSR. von Schulthess, G. and Mazer, N. (1982). . , 59:27–37. Yuan, Y. and B[é]{}lair, J. (2011). . , 10(2):551–581.
--- abstract: 'This is a continuation of Ref.\[1\](arXiv:nlin.PS/2001.07758v1). In the present paper, we consider the solution to the modified Benjamin-Bona-Mahony equation $u_{ t} + C u_{z} + \beta u_{zzt} + a u^{2} u_{z}=0$ using the generalized perturbation reduction method. The equation is transformed to the coupled nonlinear Schrödinger equations for auxiliary functions. Explicit analytical expressions for the shape and parameters of the two-component vector breather oscillating with the sum and difference of frequencies and wavenumbers are obtained.' author: - 'G. T. Adamashvili' title: 'Two-component vector breather solution of the modified BBM equation' --- Introduction ============ Nonlinear solitary waves, such as breathers and their various modifications, are among the main objects of study in nonlinear wave theory. Although these waves arise in completely different physical systems and describe various physical phenomena, their general properties are quite similar. Nonlinear solitary waves can be divided into two main kinds: single-component and two-component solitary waves \[1-16\]. The properties and the methods of investigation of each of them are completely different. In particular, methods for studying single-component waves are inapplicable to two-component solitary waves, because the study of two-component waves requires a larger number of auxiliary functions and parameters. For instance, the perturbative reduction method (PRM), which uses one complex auxiliary function and two constant parameters, is adapted to the study of single-component solitary waves \[17,18\]. Later, a generalized version of the PRM was developed, in which two complex auxiliary functions and eight constant parameters are used, which made it possible to study two-component nonlinear solitary waves. In the beginning, the generalized PRM was used in nonlinear optics under the condition of self-induced transparency (SIT). 
Namely, it was proved that the second derivatives in the Maxwell wave equation lead not only to small corrections to the parameters of the SIT pulses, as was supposed before applying the generalized PRM \[19-27\], but also cause the formation of a bound state of two breathers, i.e., a two-component nonlinear wave: the vector $0\pi$ pulse. One component of such a vector pulse oscillates with the sum, and the second with the difference, of the frequencies and wavenumbers (OSDFW). As a result, it was shown that the main SIT pulse is not a scalar $0\pi$ pulse, as was previously supposed, but a two-component vector $0\pi$ pulse of SIT; the scalar $0\pi$ pulse is only an approximation \[14-16, 28-35\]. Later, a similar two-component vector $0\pi$ pulse was obtained for acoustic nonlinear SIT waves \[36-40\]. Using the generalized PRM to study nonresonant nonlinear waves in a dispersive medium with Kerr-type nonlinear susceptibility led to similar results - the formation of a two-component vector pulse OSDFW \[41\]. Completely different phenomena and corresponding equations, compared with the nonlinear equations studied in nonlinear optics and nonlinear acoustics, were considered for surface waves in dispersive nonlinear media, anharmonic phonons in crystals, acoustic-gravity waves in fluids, in plasma physics, etc. It was proved that such physical systems also have solutions in the form of a two-component vector breather OSDFW \[1\]. Even though the generalized PRM was developed relatively recently, using this method it has been possible to establish a connection between a number of nonlinear differential equations (or systems of equations) and the coupled nonlinear Schrödinger equations (NSEs), and to find solutions in the form of two-component vector breathers OSDFW.
To these equations belong: the sine-Gordon equation, the system of Maxwell-Bloch equations, the system of Maxwell-Liouville equations for the linearly and the circularly polarized pulses in an ensemble of semiconductor quantum dots, the Maxwell wave equation in a dispersive Kerr-type medium, the system of Maxwell equations and the material equations for two-photon resonant transitions, the system of magnetic Bloch equations and the elastic wave equation, the modified Korteweg-de Vries equation or the modified Benjamin-Bona-Mahony (BBM) equation \[1, 14-16, 28-41\]. The purpose of the present work is to consider, using the generalized PRM, the modified BBM equation in the form \[42-46\] $$\label{bbm} u_{ t}+C u_{z}+\beta u_{zzt} +a u^{2} u_{z}=0,$$ where $u(z,t)$ is a real function of space and time, $z$ is the spatial variable, $t$ is the time variable, and $a$, $C$ and $\beta$ are arbitrary constants (see \[47\]). +0.5cm The equation for the slowly varying envelope function ===================================================== We can simplify Eq.(1) using the slowly varying envelope method. For this purpose, we represent $u$ in the form $$\label{ez} u=\sum_{l=\pm1}\hat{u}_{l}Z_{l},\;\;\;\;\;\;\;\;\;\;\;\;Z_{l}= e^{il(kz -\omega t)},$$ where $\hat{u}_{l}$ is the slowly varying complex amplitude and $Z_{l}= e^{il(k z -\omega t)}$ is the fast oscillating factor. The amplitude is complex because the wave is phase modulated. To ensure that $u$ is a real function, we set $\hat{u}_{1}=\hat{u}_{-1}^{*}$.
The envelope $\hat{u}_{l}$ varies sufficiently slowly in space and time compared with the carrier wave, so that the following inequalities are valid $$\label{ap} \left|\frac{\partial \hat{u}_{l}}{\partial t}\right|\ll\omega |\hat{u}_{l}|,\;\;\;\left|\frac{\partial \hat{u}_{l}}{\partial z}\right|\ll k|\hat{u}_{l}|.$$ Substituting Eq.(2) into the nonlinear equation (1), and taking into account Eq.(3), we obtain the dispersion law for the pulse propagating in the medium $$\label{dis} \omega=\frac{Ck}{1 -\beta k^2 }$$ and the nonlinear wave equation for the envelope function $\hat{u}_{l}$ in the form: $$\label{equ} \sum_{l=\pm 1} Z_{l} [ \frac{{\partial}\hat{u}_{l}}{{\partial}t} +( C -3\beta k^2 )\frac{{\partial}\hat{u}_{l}}{{\partial}z} + 3\beta ilk \frac{{\partial}^{2} \hat{u}_{l}}{{\partial}z^2} +\beta \frac{{\partial}^{3} \hat{u}_{l}}{{\partial}z^3}] +a \sum_{L,m,l'} Z_{L+m+l'} \hat{u}_{L} \hat{u}_{m}(il'k \hat{u}_{l'}+\frac{{\partial}\hat{u}_{l'}}{{\partial}z})=0.$$ +0.5cm Two-component vector breather and the generalized PRM ===================================================== For the study of the two-component nonlinear solitary wave solution of Eq.(1) we apply the generalized PRM \[1, 14-16, 28-41\], by means of which we can transform Eq.(1) into the coupled NSEs. In this method the function $\hat{u}_{l}(z,t)$ is represented as: $$\label{cemi} \hat{u}_{l}(z,t)=\sum_{\alpha=1}^{\infty}\sum_{n=-\infty}^{+\infty}\varepsilon^\alpha Y_{l,n} f_{l,n}^ {(\alpha)}(\zeta_{l,n},\tau),$$ where $$Y_{l,n}=e^{in(Q_{l,n}z-\Omega_{l,n} t)},\;\;\;\zeta_{l,n}=\varepsilon Q_{l,n}(z-{v_g}_{l,n} t),\;\;\;\tau=\varepsilon^2 t,\;\;\; {v_g}_{l,n}=\frac{d\Omega_{l,n}}{dQ_{l,n}},$$ and $\varepsilon$ is a small parameter. Such an expansion allows us to separate from $\hat{u}_{l}$ the even more slowly varying auxiliary functions $f_{l,n}^{(\alpha )}$.
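As a consistency check (not part of the original derivation), the dispersion law Eq.(4) can be reproduced symbolically by substituting the carrier wave $e^{i(kz-\omega t)}$ into the linear part of Eq.(1) and solving the resulting algebraic condition for $\omega$. A minimal sketch using sympy:

```python
import sympy as sp

z, t, k, w, C, beta = sp.symbols('z t k omega C beta', real=True)

# Plane-wave carrier Z_{+1} = exp(i(kz - wt)); the linear part of the
# modified BBM equation u_t + C u_z + beta u_zzt = 0 fixes the dispersion law.
u = sp.exp(sp.I * (k * z - w * t))
linear = sp.diff(u, t) + C * sp.diff(u, z) + beta * sp.diff(u, z, 2, t)

# The algebraic condition obtained by dividing out the carrier wave.
cond = sp.simplify(linear / u)
sol = sp.solve(sp.Eq(cond, 0), w)[0]

# Agrees with Eq.(4): omega = C k / (1 - beta k^2).
assert sp.simplify(sol - C * k / (1 - beta * k**2)) == 0
```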
It is assumed that the quantities $\Omega_{l,n}$, $Q_{l,n}$, and $f_{l,n}^{(\alpha)}$ satisfy the conditions: $$\label{ryp}\nonumber\\ \omega\gg \Omega_{l,n},\;\;k\gg Q_{l,n},\;\;\;$$ $$\left|\frac{\partial f_{l,n}^{(\alpha )}}{ \partial t}\right|\ll \Omega_{l,n} \left|f_{l,n}^{(\alpha )}\right|,\;\;\left|\frac{\partial f_{l,n}^{(\alpha )}}{\partial z }\right|\ll Q_{l,n}\left|f_{l,n}^{(\alpha )}\right|$$ for any values of the indexes $l$ and $n$. Although the quantities $Q_{l,n}$, $\Omega_{l,n}$, $\zeta_{l,n}$ and ${v_g}_{l,n}$ depend on $l$ and $n$, for simplicity we omit these indexes in the following expressions when this does not cause confusion. The generalized PRM expansion Eq.(6) is applicable to the phase-modulated complex function $\hat{u}_{l}$. If the wave is not phase modulated, the function $\hat{u}_{l}=\hat{u}_{-l}=\hat{u}$ is real and does not depend on the index $l$. Substituting the expansion (ansatz) Eq.(6) into Eq.(5) we obtain $$\label{wjh} \sum_{l=\pm 1}\sum_{\alpha=1}^{\infty}\sum_{n=-\infty}^{+\infty}\varepsilon^\alpha Z_{l} Y_{l,n} [ \mathcal{W}_{l,n}f_{l,n}^{(\alpha)} +\varepsilon \mathfrak{J}_{l,n} \frac{\partial f_{l,n}^{(\alpha)}}{\partial \zeta_{l,n}} +\mathcal{H}_{l,n} \varepsilon^2 \frac{\partial^{2} f_{l,n}^{(\alpha)}}{\partial \zeta^{2}_{l,n}}+\mathfrak{h}_{l,n}\varepsilon^2 \frac{\partial f_{l,n}^{(\alpha)}}{\partial \tau} + O(\varepsilon^3)] $$$$ +a \sum_{L,m,l'} Z_{L+m+l'} \hat{u}_{L} \hat{u}_{m}(il'k \hat{u}_{l'}+\frac{{\partial}\hat{u}_{l'}}{{\partial}z})=0,$$ where $$\mathcal{W}_{l,n}=-in(\Omega - \beta k^2 \Omega - C Q -2\beta k \omega Q - 2\beta l n k Q \Omega - l n\beta \omega Q^{2} - \beta n^2 Q^{2} \Omega), $$$$ \mathfrak{J}_{l,n}=- Q v_g +\beta k^2 Q v_g + C Q + 2\beta k \omega Q + 2\beta l n k Q (Q v_g +\Omega) + 2 n l\beta \omega Q^{2} + \beta n^2 Q^{2}(Q v_g + 2 \Omega), $$$$ \mathcal{H}_{l,n}= -i \beta Q^{2} [2 ( l k + n Q) v_{g} + (l \omega +n \Omega)], $$$$ \mathfrak{h}_{l,n}= 1-\beta ( l k + n Q)^{2}.$$ Eq.(7) contains
four independent equations for the different values $l=\pm1$ and $n=\pm1$. Equating to zero the terms of Eq.(7) with the same powers of $\varepsilon$, we obtain a set of equations. In the first order of $\varepsilon$ we have the equation $$\label{fr} \sum_{l=\pm 1}\sum_{n=-\infty}^{+\infty} Z_{l} Y_{l,n} \mathcal{W}_{l,n} f_{l,n}^{(1)}=0.$$ From Eq.(9) we obtain the connection between the parameters $\Omega_{l,n}$ and $Q_{l,n}$: $$\label{diss} (\beta k^2-1) \Omega_{l,n} + (C + 2\beta k \omega) Q_{l,n} +l n 2\beta k Q_{l,n} \Omega_{l,n} +l n \beta \omega Q^{2}_{l,n} + \beta Q^{2}_{l,n} \Omega_{l,n}=0,$$ where $l=\pm1$ and $n=\pm1$. From Eq.(10) we obtain the expression $$\label{rrv} {v_g}_{l,n}=\frac{ C + 2\beta ( k \omega +l n k \Omega_{l,n} + l n \omega Q_{l,n} + Q_{l,n} \Omega_{l,n})} {1- \beta (l k +n Q_{l,n})^{2}}.$$ Eq.(10) consists of four equations with different values of the indexes $l$ and $n$, but these reduce to two independent equations. When $l=\pm1,\;n=\pm1$, then $f_{\pm1,\pm1}^{(1)}\neq0$ and we have the equation $$\label{dis1} (\beta k^2-1) \Omega_{\pm1,\pm1} + (C + 2\beta k \omega) Q_{\pm1,\pm1} + 2\beta k Q_{\pm1,\pm1} \Omega_{\pm1,\pm1} + \beta \omega Q^{2}_{\pm1,\pm1} + \beta Q^{2}_{\pm1,\pm1} \Omega_{\pm1,\pm1}=0,$$ and when $l=\pm1,\;n=\mp1$, then $f_{\pm1,\mp1}^{(1)}\neq0$ and we have $$\label{dis2} (\beta k^2-1) \Omega_{\pm1,\mp1} + (C + 2\beta k \omega) Q_{\pm1,\mp1} - 2\beta k Q_{\pm1,\mp1} \Omega_{\pm1,\mp1} - \beta \omega Q^{2}_{\pm1,\mp1} + \beta Q^{2}_{\pm1,\mp1} \Omega_{\pm1,\mp1}=0.$$ Taking into account Eqs.(12) and (13), from Eq.(7) in the second order of $\varepsilon$ we obtain $$\mathfrak{J}_{\pm1,\pm1}=\mathfrak{J}_{\pm1,\mp1}=0,\;\;\;\;\;\;\;\; f_{+1,\pm2}^{(2)}=f_{-1,\pm2}^{(2)}=0.$$ From Eq.(7), in the third order of $\varepsilon$, we obtain the system of equations proportional to $Z_{+1}$ and $Z_{-1}$, respectively $$\label{eq22} -i \mathfrak{h}_{+1,\pm1}
\frac{\partial f_{+1,\pm1}^{(1)}}{\partial \tau}-i \mathcal{H}_{+1,\pm1} \frac{\partial^{2} f_{+1,\pm1}^{(1)}}{\partial \zeta^{2}_{+1,\pm1}} + a (k \pm Q_{+1,\pm1}) [ |f_{+1,\pm1}^{(1)}|^{2} + 2 |f_{+1,\mp1}^ {(1)}|^{2} ]f_{+1,\pm1}^{(1)}=0, $$$$ +i \mathfrak{h}_{-1,\pm1}\frac{\partial f_{-1,\pm1}^{(1)}}{\partial \tau}+ i\mathcal{H}_{-1,\pm1} \frac{\partial^{2} f_{-1,\pm1}^{(1)}}{\partial \zeta^{2}_{-1,\pm1}} + a ( k \mp Q_{-1,\pm1})(| f_{-1,\pm1}^ {(1)}|^{2} + 2 |f_{-1,\mp1}^ {(1)}|^{2} )f_{-1,\pm1}^{(1)}=0.$$ We consider the equations proportional to $Z_{+1}$ in detail; the complex-conjugate equations proportional to $Z_{-1}$ can be treated similarly. After transforming back to the variables $z$ and $t$, from Eq.(14) we obtain the coupled NSEs in the form $$\label{cnse} i (\frac{\partial \lambda_{\pm}}{\partial t}+ v_{\pm}\frac{\partial \lambda_{\pm}} {\partial z})+p_{\pm} \frac{\partial^{2} \lambda_{\pm} }{\partial z^{2}} + \mathfrak{q}_{\pm} ( |\lambda_{\pm}|^{2} + 2 |\lambda_{\mp}|^{2} )\lambda_{\pm}=0,$$ where $$\label{ppe} \lambda_{\pm}=\varepsilon f_{+1,\pm1}^{(1)}, $$$$ p_{\pm}=\beta\frac{2 ( k \pm Q_{\pm1}) v_{\pm} + ( \omega \pm \Omega_{\pm1})}{ 1-\beta ( k\pm Q_{\pm1} )^{2}}, $$$$ \mathfrak{q}_{\pm}= - \frac{ a (k \pm Q_{\pm 1})}{1-\beta ( k\pm Q_{\pm1} )^{2}}, $$$$ v_{\pm}=\frac{ C + 2\beta (\omega \pm \Omega_{\pm1})(k \pm Q_{\pm1})}{1 - \beta( k\pm Q_{\pm1})^{2} }, $$$$ \Omega_{+1,+1}=\Omega_{-1,-1}=\Omega_{+1},\;\;\;\;\;\;\;\;\;\;\;\Omega_{+1,-1}=\Omega_{-1,+1}=\Omega_{-1}, $$$$ Q_{+1,+1}=Q_{-1,-1}=Q_{+1},\;\;\;\;\;\;\;\;\;\;\;Q_{+1,-1}=Q_{-1,+1}=Q_{-1}.$$ The solution of Eq.(15) is given by Eq.(18).
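Returning to Eq.(11), the group-velocity expression can be checked by implicit differentiation of the dispersion relation Eq.(10): writing Eq.(10) as $F(Q,\Omega)=0$, one has ${v_g}_{l,n}=d\Omega_{l,n}/dQ_{l,n}=-(\partial F/\partial Q)/(\partial F/\partial \Omega)$. A sympy sketch (our own check, not part of the original text):

```python
import sympy as sp

k, w, C, beta, Q, W = sp.symbols('k omega C beta Q Omega', real=True)

for l in (1, -1):
    for n in (1, -1):
        # Dispersion relation Eq.(10) between Omega and Q.
        F = ((beta*k**2 - 1)*W + (C + 2*beta*k*w)*Q
             + 2*l*n*beta*k*Q*W + l*n*beta*w*Q**2 + beta*Q**2*W)
        # Implicit differentiation: v_g = dOmega/dQ = -F_Q / F_Omega.
        vg = -sp.diff(F, Q) / sp.diff(F, W)
        # Claimed closed form, Eq.(11); note (l k + n Q)^2 = k^2 + 2 l n k Q + Q^2.
        vg_claim = ((C + 2*beta*(k*w + l*n*k*W + l*n*w*Q + Q*W))
                    / (1 - beta*(l*k + n*Q)**2))
        assert sp.simplify(vg - vg_claim) == 0
```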
Substituting Eq.(18) into Eqs.(6) and (2), we obtain the two-component vector breather OSDFW of the modified BBM equation Eq.(1) in the form $$\label{myi} u(z,t)= \frac{2}{\mathfrak{b} T}sech(\frac{t-\frac{z}{V_{0}}}{T})\{ K_{+} \cos[(k+Q_{+1}+k_{+})z -({\omega}+\Omega_{+1}+\omega_{+}) t]+ $$$$ K_{-}\cos[(k-Q_{-1}+k_{-})z -({\omega}-\Omega_{-1}+\omega_{-})t]\},$$ where $T$ is the width of the two-component nonlinear pulse (see Appendix). +0.5cm Conclusion =========== Using the generalized PRM, the two-component vector breather OSDFW solution of the modified BBM equation (1) has been obtained. The first term of Eq.(17) is the small amplitude breather oscillating with the sum of the frequencies ${\omega}+\Omega_{+1}$ and wavenumbers $k+Q_{+1}$ (taking into account Eqs.(19)), and the second term is the small amplitude breather oscillating with the difference of the frequencies ${\omega}-\Omega_{-1}$ and wavenumbers $k-Q_{-1}$. The nonlinear connection between the breathers is determined by the terms $|\lambda_{-}|^{2} \lambda_{+}$ and $|\lambda_{+}|^{2} \lambda_{-}$ of Eq.(15). The parameters of the nonlinear wave are determined by Eqs.(8), (11), (16), (20) and (21). The dispersion relation and the connections between the oscillation parameters are given by Eqs.(4), (12) and (13). Comparing the PRM and its generalized version, we see the following differences. The PRM is adapted to the study of single-component solitary waves and uses one complex auxiliary function and two constant parameters. As a result, the considered nonlinear differential equation is reduced to the scalar NSE for the single complex auxiliary function, which has a soliton solution. In contrast, the generalized PRM uses two complex auxiliary functions and eight constant parameters, which makes it possible to study two-component nonlinear solitary waves.
By means of this method the considered nonlinear differential equation is transformed to the coupled NSEs for the two complex auxiliary functions, which have the vector soliton solution Eq.(18). Using the generalized PRM, it has become possible to treat a series of different nonlinear differential equations (see above) and obtain their solutions in the form of two-component vector breathers OSDFW. These circumstances give grounds to hope that two-component vector breathers OSDFW can also be found for other nonlinear equations. +0.5cm Appendix ========= The vector soliton (VS) solution of Eq.(15) can be written as (see, for instance, \[14-16, 28, 33\] and references therein) $$\label{ue1} \lambda_{\pm}=\frac{K_{\pm }}{\mathfrak{b} T}sech(\frac{t-\frac{z}{V_{0}}}{T}) e^{i(k_{\pm } z - \omega_{\pm } t )},$$ where $K_{\pm },\; k_{\pm }$ and $\omega_{\pm }$ are real constants and $V_{0}$ is the velocity of the nonlinear wave. We assume that $$\label{kom} k_{\pm }\ll Q_{\pm 1},\;\;\;\;\;\;\omega_{\pm }\ll\Omega_{\pm 1}.$$ The parameters of the VS Eq.(18) are given by $$\label{rrw} T^{-2}=V_{0}^{2}\frac{v_{+}k_{+}+k_{+}^{2}p_{+}-\omega_{+}}{p_{+}}, \;\;\;\;\;\;\;k_{\pm }=\frac{V_{0}-v_{\pm}}{2p_{\pm}}, $$$$ \mathfrak{b}^{2}=\frac{V_{0}^{2}\mathfrak{ q}_{+}}{2p_{+}}(K_{+}^{2}+2 K_{-}^{2}) .$$ The connections between both components of the VS are defined as $$\label{ttw} K_{+}^{2}=\frac{p_{+}\mathfrak{q}_{-}- 2p_{-}\mathfrak{q}_{+}}{p_{-}\mathfrak{q}_{+}-2 p_{+}\mathfrak{q}_{-}}K_{-}^{2}, \;\;\;\;\;\;\;\;\; \omega_{+}=\frac{p_{+}}{p_{-}}\omega_{-}+\frac{V^{2}_{0}(p_{-}^{2}-p_{+}^{2})+v_{-}^{2}p_{+}^{2}-v_{+}^{2}p_{-}^{2} }{4p_{+}p_{-}^{2}}.$$ When $a>0$ and $\beta <0$, from Eq.(16) we obtain $p_{\pm} \mathfrak{q}_{\pm}>0$; consequently, both components of the VS are bright solitons, and this case corresponds to a two-component bright VS.
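The parameter relations Eq.(20) can be verified by substituting the ansatz Eq.(18) into the "+" component of the coupled NSEs Eq.(15) and checking that the residual vanishes. The sketch below uses illustrative (hypothetical) numerical values for $p_{+}$, $\mathfrak{q}_{+}$, $v_{+}$, $V_{0}$, $\omega_{+}$ and $K_{\pm}$; only the combinations fixed by Eq.(20) matter:

```python
import sympy as sp

z, t = sp.symbols('z t', real=True)

# Illustrative parameter values for the "+" component (hypothetical choices,
# made only to test the formulas numerically).
p, q, v, V0, w1 = sp.Rational(1), sp.Rational(2), sp.Rational(1, 2), sp.Rational(3, 2), sp.Integer(0)
Kp, Km = sp.Rational(1), sp.Rational(7, 10)

# Parameter relations Eq.(20): k_+, T and b in terms of the above.
k1 = (V0 - v) / (2 * p)
Tinv2 = V0**2 * (v * k1 + p * k1**2 - w1) / p
T = 1 / sp.sqrt(Tinv2)
b2 = V0**2 * q * (Kp**2 + 2 * Km**2) / (2 * p)

xi = (t - z / V0) / T
lam = (Kp / (sp.sqrt(b2) * T)) * sp.sech(xi) * sp.exp(sp.I * (k1 * z - w1 * t))
# |lam_+|^2 + 2|lam_-|^2 : both components share the sech envelope, Eq.(18).
mod2 = (Kp**2 + 2 * Km**2) / (b2 * T**2) * sp.sech(xi)**2

# Residual of the "+" component of the coupled NSEs, Eq.(15).
res = (sp.I * (sp.diff(lam, t) + v * sp.diff(lam, z))
       + p * sp.diff(lam, z, 2) + q * mod2 * lam)
for zv, tv in [(0, 0), (sp.Rational(3, 10), sp.Rational(7, 10)), (1, -1)]:
    assert abs(complex(res.subs({z: zv, t: tv}).evalf())) < 1e-10
```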
--- abstract: 'We propose a method inspired by discrete light cone quantization (DLCQ) to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with $z=2$ anisotropic scaling symmetry) living in $d+1$ dimensions, coupled to a curved Newton-Cartan background, starting from a heat kernel of a relativistic conformal field theory ($z=1$) living in $d+2$ dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly $\mathcal{A}^{G}_{d+1}$ for Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT $\mathcal{A}^{R}_{d+2}$ via $\mathcal{A}^{G}_{d+1}= 2\pi \delta (m) \mathcal{A}^{R}_{d+2}$, where $m$ is the charge of the scalar field under particle number symmetry. We provide further evidence of vanishing anomaly by evaluating Feynman diagrams in all orders of perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalise our method to show that a similar result holds for one time derivative theories with even $z>2$.' author: - Sridip Pal - and Benjamín Grinstein bibliography: - 'refs.bib' title: On the Heat Kernel and Weyl Anomaly of Schrödinger invariant theory --- Introduction ============ The Weyl anomaly in relativistic Conformal Field Theory (CFT) has a rich history [@Capper:1974ic; @Deser:1976yx; @Brown:1976wc; @Dowker:1976zf; @Hawking:1976ja; @Christensen:1977jc; @Duff:1977ay; @Duff:1993wm]. In $1+1$ dimensions irreversibility of RG flows has been established by Zamolodchikov [@Zamolodchikov:1986gt], who showed monotonicity of a quantity $C$ that equals the Weyl anomaly $c$ at fixed points. Remarkably, the anomaly $c$ equals the central charge of the CFT.
In $3+1$ dimensions, there is a corresponding “$a$-theorem” [@Osborn:1989td; @Jack:1990eb; @Komargodski:2011vj; @Komargodski:2011xv] where $a$ again appears in the Weyl anomaly, and there is strong evidence for a similar $a$-theorem in higher, even dimensions [@Grinstein:2013cka; @Grinstein:2014xba; @Grinstein:2015ina; @Stergiou:2016uqq]. In contrast, much less is known in the case of non-relativistic field theories admitting anisotropic scale invariance under the following transformation $$\begin{aligned} \vec{x}\to\lambda\vec{x},\qquad t\to \lambda^{z}t\,.\end{aligned}$$ Nonetheless, non-relativistic conformal symmetry does emerge in various scenarios. For example, fermions at unitarity, in which the $S$-wave scattering length diverges, $|a|\rightarrow \infty$, exhibit non-relativistic conformal symmetry. In ultracold atom gas experiments, the $S$-wave scattering length can be tuned freely along an RG flow, and this has renewed interest in the study of the RG flow of such theories [@Regal:2004zza; @Zwierlein:2004zz]. In fact, at $a^{-1}=-\infty$ the system behaves as a BCS superfluid while at $a^{-1}=\infty$ it becomes a BEC superfluid. The BCS-BEC crossover, at $a^{-1}=0$, is precisely the unitarity limit, exhibiting non-relativistic conformal symmetry [@Nishida:2007pj; @Nishida:2010tm]. In this regime, we expect universality, with features independent of any microscopic details of the atomic interactions. Other examples of non-relativistic systems exhibiting scaling symmetry are those with an accidentally large scattering cross section, including various atomic systems, like ${}^{85}$Rb [@Roberts:1998zz] and ${}^{138}$Cs [@Chin:2001uan], and few-nucleon systems like the deuteron [@Kaplan:1998tg; @Kaplan:1998we]. Galilean CFT, which enjoys $z=2$ scaling symmetry, is special among Non-Relativistic Conformal Field Theories (NRCFTs).
On group theoretic grounds, there is a special conformal generator for $z=2$ that is not present for $z\neq 2$ theories [@Balasubramanian:2008dm; @Jensen:2014aia]. The coupling of such theories to the Newton Cartan (NC) structure is well understood [@Jensen:2014aia; @Son:2013rqa; @Geracie:2014nka; @Perez-Nadal:2016tzr]. The generic discussion of anomalies in such theories has been initiated by Jensen in [@Jensen:2014hqa]. Moreover, there have been recent works classifying and evaluating Weyl anomalies at fixed points [@Baggio:2011ha; @Arav:2014goa; @Arav:2016xjc; @Arav:2016akx; @Barvinsky:2017mal] and even away from the fixed points; the latter have resulted in proposed $C$-theorem candidates [@Auzzi:2016lrq; @Pal:2016rpz]. It has been proposed in [@Jensen:2014hqa], using the fact that Discrete Light Cone Quantization (DLCQ) of a relativistic CFT living in $d+2$ dimensions yields a non-relativistic Galilean CFT in $d+1$ dimensions with $z=2$, that the Weyl anomaly of the relativistic CFT survives in the non-relativistic theory. The conjecture states that the Weyl anomaly $\mathcal{A}^{G}$ for a Schrödinger field theory (Galilean boost invariant with $z=2$ scale symmetry and special conformal symmetry) is given by $$\begin{aligned} \mathcal{A}^{G}_{d+1}=aE_{d+2}+\sum_{n}c_{n}W_{n}\end{aligned}$$ where $E_{d+2}$ is the $d+2$ dimensional Euler density of the parent space-time and $W_{n}$ are Weyl covariant scalars with weight $(d+2)$. The right hand side is computed on a geometry given in terms of the $d+2$ dimensional metric; this will be explained below, see Eq. . A specific example of particular interest is $$\begin{aligned} \mathcal{A}^{G}_{2+1}=aE_{4}-cW^{2}\end{aligned}$$ where $W^{2}$ stands for the square of the Weyl tensor. The purpose of this work is twofold. 
First, we show that these proposed relations must be corrected to include a factor of $\delta(m)$ when the Schrödinger invariant theory involves a single complex scalar field having charge $m$ under the $U(1)$ symmetry. To be precise, we show that $$\begin{aligned} \label{mainresult} \mathcal{A}^{G}_{d+1}= 2\pi \delta (m) \mathcal{A}^{R}_{d+2}\end{aligned}$$ where $\mathcal{A}^{R}_{d+2}$ is the Weyl anomaly of the corresponding relativistic CFT in $d+2$ dimensions. This is derived explicitly for the case of a bosonic (commuting) scalar field, but the derivation applies equally to the case of a fermionic (anti-commuting) scalar field. The second purpose is to develop a framework inspired by DLCQ to evaluate the heat kernel of a theory with a one time derivative kinetic term in a non-trivial curved background. This framework enables us to calculate not only the heat kernel but also the anomaly coefficients. In fact, this method and its appropriately modified form enable us to generalise Eq.  to one time derivative theories with arbitrary even $z$, where the parent $d+2$ dimensional theory enjoys $SO(1,1) \times SO(d)$ symmetry with scaling symmetry acting as $t\to\lambda^{z/2}t, x^{d+2}\to\lambda^{z/2}x^{d+2}, x^{i}\to\lambda x^{i}$, ($i=1,\ldots,d+1$). The paper is organised as follows. We briefly review the coupling of a Schrödinger field theory to the Newton-Cartan structure in Sec. \[sec:NC\]. In Sec. \[DLCQ\], we sketch how DLCQ can be used to obtain Schrödinger field theories following the procedure of [@Jensen:2014hqa], and propose its modified cousin, which we call Lightcone Reduction (LCR), to obtain a Schrödinger field theory. In Sec. \[heatkernel\] we determine the heat kernel for free Galilean CFT coupled to a flat NC structure in two different ways, on the one hand using LCR and on the other without the use of DLCQ, providing a check on our proposed method for determining the heat kernel for Galilean field theory coupled to a curved NC geometry.
We then proceed to evaluate the heat kernel on curved spacetime according to the proposal and subsequently derive the Weyl anomaly for the Schrödinger field theory of a single complex scalar. In Sec. \[perturbation\] we reconsider the computation using perturbation theory; we find that for a wide class of models on a curved background all vacuum diagrams vanish. In fact, we show that an anomaly is not induced in the more general case that $U(1)$ invariant dimensionless couplings are included, regardless of whether we are at a fixed point or away from it, in all orders of a perturbative expansion in the dimensionless coupling and metric. In Sec. \[generalisation\], we give a formal proof of our prescription and generalise the framework to calculate the heat kernel and anomaly for theories with one time derivative and arbitrary even $z$. We conclude with a brief summary of the results obtained and discuss future directions of investigation. Technical aspects of defining the heat kernel for a one time derivative theory in flat space-time are explored in App. \[ap1\], and on a curved background in App. \[riemann\]. Finally, in App. \[hketa\] we present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. Newton-Cartan Structure $\&$ Weyl Anomaly {#sec:NC} ========================================= The study of the Weyl anomaly necessitates coupling the non-relativistic theory to a background geometry, which can potentially be curved. Generically, the prescription for coupling to a background can depend on the global symmetries of the theory on a flat background. Of interest to us are Galilean and Schrödinger field theories.
The algebra of the Galilean generators is given by [@Balasubramanian:2008dm] $$\begin{gathered} [M_{ij},N]=0\,, \qquad [M_{ij}, P_{k}]=\imath (\delta_{ik}P_{j}-\delta_{jk}P_{i})\,, \qquad [M_{ij},K_{k}]=\imath (\delta_{ik}K_{j}-\delta_{jk}K_{i})\,,\nonumber \\ [M_{ij},M_{kl}]= \imath (\delta_{ik}M_{jl}-\delta_{jk}M_{il}+\delta_{il}M_{kj}-\delta_{jl}M_{ki})\,,\nonumber \\ \label{eq:GalAlg1} [P_{i},P_{j}]=[K_{i},K_{j}]=0\,, \qquad [K_{i},P_{j}]=\imath\delta_{ij}N\,,\\ [H,N]=[H,P_{i}]=[H,M_{ij}]=0\,, \quad [H,K_{i}]=-\imath P_{i}\,, \nonumber \end{gathered}$$ and the commutators of the dilatation generator with the Galilean ones are given by $$\begin{gathered} \qquad [D,P_{i}]=\imath P_{i}\,, \qquad [D,K_{i}]= (1-z)\imath K_{i}\,, \quad [D,H]=z\imath H\,,\nonumber \\ \label{eq:GalAlg2} \quad [D,N]=\imath (2-z)N\,, \quad [M_{ij},D]=0\end{gathered}$$ where $i,j =1,2, \ldots, d$ label the spatial dimensions, $z$ is the anisotropic exponent, $P_{i}$, $H$ and $M_{ij}$ are the generators of spatial translations, time translations and spatial rotations, respectively, $K_{i}$ generates Galilean boosts along the $x^{i}$ direction, $N$ is the particle number (or rest mass) symmetry generator and $D$ is the generator of dilatations. The generators of Schrödinger invariance include, in addition, a generator of special conformal transformations, $C$. The Schrödinger algebra consists of the $z=2$ version of , plus the commutators of $C$, $$\begin{aligned} [M_{ij},C]=0\,, \qquad [K_{i},C]=0\,, \qquad [D,C]=-2\imath C\,, \qquad [H,C]=-\imath D.\end{aligned}$$ In what follows, by a Schrödinger invariant theory we will mean a $z=2$ Galilean, conformally invariant theory. For $z\neq 2$ we only discuss anisotropic scale invariant theories invariant under a group generated by $P_{i}$, $M_{ij}$, $H$, $D$ and $N$ such that the kinetic term involves one time derivative only.
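The commutators quoted above can be exercised in an explicit $d=1$, $z=2$ representation on wavefunctions $\psi(t,x)$, with $H=i\partial_t$, $P=-i\partial_x$, $K=it\partial_x+mx$, $D=-i(2t\partial_t+x\partial_x)$ and $N=m$; this representation and its sign conventions are our own illustrative choice, a sketch rather than anything taken from the text:

```python
import sympy as sp

t, x, m = sp.symbols('t x m')
f = sp.Function('f')(t, x)

# Differential-operator realization (d=1, z=2); assumptions: our conventions.
H = lambda g: sp.I * sp.diff(g, t)                                  # time translations
P = lambda g: -sp.I * sp.diff(g, x)                                 # space translations
K = lambda g: sp.I * t * sp.diff(g, x) + m * x * g                  # Galilean boost
D = lambda g: -sp.I * (2 * t * sp.diff(g, t) + x * sp.diff(g, x))   # dilatation
N = lambda g: m * g                                                 # particle number

comm = lambda A, B: sp.expand(A(B(f)) - B(A(f)))

assert sp.expand(comm(K, P) - sp.I * N(f)) == 0         # [K,P] = i N
assert sp.expand(comm(H, K) + sp.I * P(f)) == 0         # [H,K] = -i P
assert sp.expand(comm(D, P) - sp.I * P(f)) == 0         # [D,P] = i P
assert sp.expand(comm(D, H) - 2 * sp.I * H(f)) == 0     # [D,H] = z i H, z = 2
assert sp.expand(comm(D, K) + sp.I * K(f)) == 0         # [D,K] = (1-z) i K
```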
The most natural way to couple Galilean (boost) invariant field theories to geometry is to use the Newton-Cartan (NC) structure [@Jensen:2014aia; @Son:2013rqa; @Geracie:2014nka]. In what follows we briefly review NC geometry, following Ref. [@Jensen:2014hqa]. The NC structure defined on a $d+1$ dimensional manifold $\mathcal{M}_{d+1}$ consists of a one form $n_{\mu}$, a symmetric positive semi-definite rank $d$ tensor $h_{\mu\nu}$ and a $U(1)$ connection $A_{\mu}$, such that the metric tensor $$\begin{aligned} \label{eq:Themetric} g_{\mu\nu}=n_{\mu}n_{\nu}+h_{\mu\nu} \end{aligned}$$ is positive definite. The upper index data $v^{\mu}$ and $h^{\mu\nu}$ are defined by $$\begin{aligned} \label{eq:Milneconds} v^{\mu}n_{\mu}=1,\qquad v^{\nu}h_{\mu\nu}=0,\qquad h^{\mu\nu}n_{\nu}=0,\qquad h^{\mu\rho}h_{\rho\nu}=\delta^{\mu}_{\nu}-v^{\mu}n_{\nu}.\end{aligned}$$ Physically $v^{\mu}$ defines a local time direction while $h_{\mu\nu}$ defines a metric on the spatial slices $\mathcal{M}_{d}$. As prescribed in [@Jensen:2014aia], while coupling a Galilean invariant field theory to an NC structure, we demand 1. Symmetry under reparametrization of co-ordinates. Technically, this requirement boils down to writing the theory in a diffeomorphism invariant way. 2. $U(1)$ gauge invariance. The fields belonging to some representation of the Galilean algebra carry some charge under particle number symmetry, which is a $U(1)$ group. Promoting this to a local symmetry requires a gauge field $A_{\mu}$ that is sourced by the $U(1)$ current. 3. Invariance under Milne boosts, under which $(n_{\mu},h^{\mu\nu})$ remains invariant, while $$\begin{aligned} \label{boostsymmetry} v^{\mu}\to v^{\mu}+\psi^{\mu}, \quad h_{\mu\nu}\to h_{\mu\nu}- \left(n_{\mu}\psi_{\nu}+n_{\nu}\psi_{\mu}\right)+n_{\mu}n_{\nu}\psi^{2}, \quad A_{\mu}\to A_{\mu}+\psi_{\mu}-\frac{1}{2}n_{\mu}\psi^{2}\end{aligned}$$ where $\psi^{2}=h^{\mu\nu}\psi_{\mu}\psi_{\nu}$ and $v^{\nu}\psi_{\nu}=0$.
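The compatibility conditions between $(n_\mu,h_{\mu\nu})$ and $(v^\mu,h^{\mu\nu})$, and the Milne boost rules above, can be exercised numerically on a concrete NC structure; all sample values below are illustrative choices of our own:

```python
import numpy as np

# A concrete flat NC structure in d=2 (coords t, x, y); illustrative values.
n = np.array([1.0, 0.0, 0.0])            # clock form n_mu
h_lo = np.diag([0.0, 1.0, 1.0])          # spatial metric h_{mu nu}
v = np.array([1.0, 0.0, 0.0])            # velocity field v^mu
h_up = np.diag([0.0, 1.0, 1.0])          # h^{mu nu}
A = np.array([0.2, -0.4, 0.1])           # U(1) connection A_mu (arbitrary)

def nc_conditions(v, h_lo):
    # v.n = 1, h_{mn}v^n = 0, h^{mn}n_n = 0, h^{mr}h_{rn} = delta - v^m n_n.
    return (np.isclose(v @ n, 1.0) and np.allclose(h_lo @ v, 0)
            and np.allclose(h_up @ n, 0)
            and np.allclose(h_up @ h_lo, np.eye(3) - np.outer(v, n)))

assert nc_conditions(v, h_lo)

# Milne boost with a spatial parameter psi_mu (v^mu psi_mu = 0):
psi = np.array([0.0, 0.3, -1.2])
psi_up, psi2 = h_up @ psi, psi @ h_up @ psi
v2 = v + psi_up
h_lo2 = h_lo - np.outer(n, psi) - np.outer(psi, n) + psi2 * np.outer(n, n)
A2 = A + psi - 0.5 * psi2 * n

# n_mu and h^{mu nu} are untouched; the compatibility conditions survive.
assert nc_conditions(v2, h_lo2)
# The combinations v^mu - h^{mu nu}A_nu and -2v.A + h^{mn}A_m A_n
# (which build the boost-invariant inverse metric) are Milne invariant.
assert np.allclose(v2 - h_up @ A2, v - h_up @ A)
assert np.isclose(-2 * v2 @ A2 + A2 @ h_up @ A2, -2 * v @ A + A @ h_up @ A)
```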
The action of a free Galilean scalar $\phi_m$ with charge $m$, coupled to this NC structure satisfying all the symmetry conditions listed above, is given by $$\begin{aligned} \int d^{d+1}x \sqrt{g}\left[\imath m v^{\mu} \left(\phi_{m}^{\dagger}D_{\mu}\phi_m-\phi_{m}D_{\mu}\phi_{m}^{\dagger}\right)-h^{\mu\nu}D_{\mu}\phi_{m}^{\dagger}D_{\nu}\phi_{m}\right]\end{aligned}$$ where $D_{\mu}=\partial_{\mu}-\imath m A_{\mu}$ is the appropriate gauge invariant derivative. From a group theory perspective, a Galilean group can be a subgroup of a larger group that includes dilatations. That is, besides the symmetries mentioned earlier, a Galilean invariant field theory coupled to the flat NC structure can also be scale invariant, *i.e.*, invariant under the following transformations $$\begin{aligned} \vec{x} \to \lambda \vec{x}, \qquad t \to \lambda^{z}t,\end{aligned}$$ where $z$ is the dynamical critical exponent of the theory. As mentioned earlier, for $z=2$, the symmetry algebra may further be enlarged to contain a special conformal generator, resulting in the Schrödinger group. On coupling a Galilean CFT with arbitrary $z$ to a nontrivial curved NC structure, the scale invariance can be thought of as invariance under the following scaling of NC data (also known as anisotropic Weyl scaling; henceforth we omit the word *anisotropic*, and by Weyl transformation it should be understood that we mean the transformation with appropriate $z$): $$\begin{aligned} \label{weylvariation} n_{\mu}\to e^{z\sigma}n_{\mu},\qquad h_{\mu\nu}\to e^{2\sigma}h_{\mu\nu}, \qquad A_{\mu}\to e^{(2-z)\sigma}A_{\mu},\end{aligned}$$ where $\sigma$ is a function of space and time. Even though classically a Galilean CFT may be scale invariant, it is not necessarily true that it remains invariant quantum mechanically. Renormalisation may lead to anomalous breaking of scale symmetry, much like the Weyl anomaly in relativistic CFTs (where $z=1$).
The anomaly $\mathcal{A}$ is defined by the infinitesimal Weyl variation of the connected generating functional $W$: $$\begin{aligned} \delta_{\sigma}W= \int d^{d+1}x\,\sqrt{g}\, \delta\sigma\, \mathcal{A}\,.\end{aligned}$$ We mention in passing that away from the fixed point the coupling is scale dependent, that is, the running of the coupling under the RG must be accounted for; hence the variation $\delta_{\sigma}$ of the couplings needs to be incorporated. The generic scenario has been elucidated in Ref. [@Pal:2016rpz]. In this work, we are interested in anomalies at a fixed point. Even in the absence of running of the coupling, the background metric can act as an external operator insertion on vacuum bubble diagrams, leading to new UV divergences that are absent in flat space-time. Removing these new divergences can potentially lead to anomalies. The anomalous Ward identity for the anisotropic Weyl transformation is given by [@Jensen:2014hqa] $$\label{anomaly} zn_{\mu}\mathcal{E}^{\mu}-h^{\mu\nu}T_{\mu\nu}= \mathcal{A}\,,$$ where $n_{\mu}\mathcal{E}^{\mu}$ and $h^{\mu\nu}T_{\mu\nu}$ are, respectively, the diffeomorphism invariant energy density and the trace of the spatial stress-energy tensor. In what follows, we will be interested in evaluating the quantity appearing on the right hand side of Eq. . A standard method is through the evaluation of the heat kernel in a curved background. Hence, our first task is to figure out a way to obtain the heat kernel for theories with a kinetic term involving only one time derivative. In the next few sections we will introduce methods for computing heat kernels and arrive at the same result from different approaches. Discrete Light Cone Quantization (DLCQ) $\&$ its cousin Lightcone Reduction (LCR) {#DLCQ} ================================================================================= One elegant way to obtain the heat kernel is to use Discrete Light Cone Quantization (DLCQ).
This exploits the well known fact that a $d+1$ dimensional Galilean invariant field theory can be constructed by starting from a relativistic theory in $d+2$ dimensional Minkowski space in light cone coordinates, $$\begin{aligned} ds^{2}=2dx^{+}dx^{-}+dx^{i}dx^{i}\,,\end{aligned}$$ where $i=2,3,\ldots, d+1$ and $x^{\pm}=\frac{x^{1}\pm t}{\sqrt{2}}$ define the light cone coordinates, followed by a compactification of the null coordinate $x^{-}$ on a circle. From here on, by *reduced* theory we will mean the theory in $d+1$ dimensions, while by *parent* theory we will mean the $d+2$ dimensional theory on which this DLCQ trick is applied. We first present a brief review of DLCQ. The generators of $SO(d+1,1)$ which commute with $P_{-}$, the generator of translations in the $x^{-}$ direction, generate the Galilean algebra. $P_{-}$ is interpreted as the generator of particle number of the reduced theory. In light cone coordinates the mass-shell condition for a massive particle becomes[^1] $$\begin{aligned} \label{massshellcondition} p_{+}=\frac{|\vec{p}|^{2}}{2(-p_{-})}+\frac{M^{2}}{2(-p_{-})}\end{aligned}$$ Eq.  can be interpreted as the non-relativistic energy, $p_+$, of a particle with mass $m= -p_{-}$ in a constant potential. The reduced mass-shell condition is Galilean invariant, that is, invariant under boosts ($\vec{v}$) and rotations ($\mathbf{R}$): $$\begin{aligned} \nonumber \vec{p} \to \mathbf{R}\vec{p}-\vec{v} p_-, \qquad p_{+} \to p_+ +\vec{v}\cdot \left(\mathbf{R}\vec{p}\right)-\frac{1}{2}|\vec{v}|^2p_-\,.\end{aligned}$$ Setting $M=0$, the dispersion relation takes the form $$\omega=\frac{k^{2}}{2m}$$ and enjoys the $z=2$ scaling symmetry. To rephrase, setting $M=0$ allows one to append a dilatation generator, which acts as follows: $$\begin{aligned} \nonumber p_{+} \to \lambda^2p_{+}\,,\quad p_{-} \to p_-, \quad \vec{p} \to \lambda \vec{p}\,.\end{aligned}$$ Had we not compactified the $x^-$ direction, $p_{-}$ would be a continuous variable.
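For orientation, the massless dispersion quoted above follows in one line from the light cone metric (a short check of ours): with $G^{+-}=1$, the on-shell condition $G^{MN}p_{M}p_{N}=0$ reads

```latex
0 = 2\,p_{+}p_{-} + |\vec{p}\,|^{2}
\quad\Longrightarrow\quad
p_{+} = \frac{|\vec{p}\,|^{2}}{2(-p_{-})}\,,
```

so identifying $\omega \equiv p_{+}$ and $m \equiv -p_{-}$ reproduces the Schrödinger dispersion $\omega=k^{2}/2m$ with its $z=2$ scaling.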
The parameter $p_{-}$ can be changed using a boost in the ${+-}$ direction, but compactification in the $x^{-}$ direction spoils relativistic boost symmetry and the eigenvalues of $p_{-}$ become discretized, $p_{-}=\frac{n}{R}$, where $R$ is the compactification radius. We note that Lorentz invariance is recovered in the $R\to\infty$ limit. For convenience, by appropriately rescaling the generators of spatial translations and of special conformal transformations, as well as $P_-$, we can set $R=1$. One can technically perform DLCQ even in a curved space-time as long as the metric admits a null isometry. This guarantees that we can adopt a coordinate system with a null coordinate $x^{-}$ such that all the metric components are independent of $x^{-}$. To be specific, we will consider the following metric: $$\begin{aligned} \label{eq:backgroundmetric} ds^{2}=G_{MN}dx^{M}dx^{N}, \qquad G_{\mu -}= n_{\mu}, \qquad G_{\mu\nu}=h_{\mu\nu}+n_{\mu}A_{\nu}+n_{\nu}A_{\mu}, \qquad G_{--}=0\end{aligned}$$ where $M,N=+,-, 1,2,\ldots,d$ run over all the indices in $d+2$ dimensions, the index $\mu=+,1,2,\ldots,d$ runs over $d+1$ dimensions and $h_{\mu\nu}$ is a rank $d$ tensor. Ultimately, $h_{\mu\nu}, n_{\mu}, A_{\mu}$ are to be identified with the NC structure, and just as above we can construct $h^{\mu\nu}$ and $v^{\mu}$ such that Eq.  holds. Moreover, these quantities transform under Milne boost symmetry as per Eq. . Hence, the boost invariant inverse metric is given by $$\label{inversemetric} G^{-\mu}=v^{\mu}-h^{\mu\nu}A_{\nu}, \qquad G^{\mu\nu}=h^{\mu\nu}, \qquad G^{--}= -2 v^{\mu}A_{\mu} + h^{\mu\nu}A_{\mu}A_{\nu}\,.$$ Reduction on $x^{-}$ yields a Galilean invariant theory coupled to an NC structure given by $(n_{\mu},h^{\mu\nu},A_{\mu})$, with metric given by . Moreover, all the symmetry requirements listed above Eq.  are satisfied by construction. 
This prescription allows us to construct a Galilean QFT coupled to a nontrivial NC structure starting from a relativistic QFT placed in a curved background with one extra dimension. For example, we can consider DLCQ of a conformally coupled scalar field in $d+2$ dimensions, $$\begin{aligned} \label{actionr} S_{R}=\int d^{d+2}x \sqrt{-G} \left[-G^{MN}\partial_{M}\Phi^{\dagger}\partial_{N}\Phi - \xi \mathcal{R}\Phi^{\dagger}\Phi\right]\,, \qquad\xi=\frac{d}{4(d+1)}\end{aligned}$$ where $\mathcal{R}$ stands for the Ricci scalar corresponding to the metric $G_{MN}$. We compactify $x^{-}$ with periodicity $2\pi$ and expand $\Phi$ in Fourier modes as $$\begin{aligned} \Phi=\frac{1}{\sqrt{2\pi}}\sum_{m} \phi_{m}(x^{\mu})e^{\imath m x^{-}}, \qquad \phi_{m} = \frac{1}{\sqrt{2\pi}}\int_{0}^{2\pi}dx^{-}\ \Phi e^{-\imath m x^{-}}\,.\end{aligned}$$ In terms of $\phi_{m}$, we recast the action Eq.  in the following form using Eq.  $$\begin{aligned} \label{actiong} S_{R}= \sum_{m} \int d^{d+1}x \sqrt{g}\left[\imath m v^{\mu} \left(\phi_{m}^{\dagger}D_{\mu}\phi_m-\phi_{m}D_{\mu}\phi_{m}^{\dagger}\right) -h^{\mu\nu}D_{\mu}\phi_{m}^{\dagger}D_{\nu}\phi_{m}-\xi\mathcal{R}\phi_{m}^{\dagger}\phi_{m}\right]\end{aligned}$$ where $D_{\mu}=\partial_{\mu}-\imath m A_{\mu}$, and where each of the $\phi_{m}$ carries charge $m$ under the particle number symmetry and sits in a distinct representation of the Schrödinger group. The theory described by Eq.  is not Lorentz invariant because we have a discrete sum over $m$, breaking the boost invariance along the null direction. The point of DLCQ is precisely to break Lorentz invariance down to Galilean invariance. As explained above, one can work in the uncompactified limit and still break the Lorentz invariance by dimensional reduction. In the uncompactified limit, the sum over eigenvalues of $P_-$ becomes an integration over the continuous variable $p_{-}$. Nonetheless, one can focus on any particular Fourier mode.
Technically, we can implement this by performing a Fourier transformation with respect to $x^{-}$ of the quantities of interest. This procedure also yields a Galilean invariant field theory, where the elementary field is the particular Fourier mode under consideration. Henceforth we will refer to this modified version of DLCQ as Lightcone Reduction (LCR). Taking a cue from the relation between the actions given by Eqs.  and , we propose the following prescription to extract the heat kernel in the reduced theory: *The heat kernel operator $K_{G}$ in the $d+1$ dimensional Galilean theory is related to the heat kernel operator $K_{R}$ of the parent $d+2$ dimensional relativistic theory via* $$\label{eq:prescription} \langle (\vec{x}_2,t_2)|K_{G}|(\vec{x}_1,t_1)\rangle = \int_{-\infty}^{\infty} dx^{-}\ \langle \vec{x}_{2}, x^{-}_{2}, x_{2}^{+}|K_{R}|\vec{x}_{1}, x^{-}_{1}, x_{1}^{+}\rangle \ e^{-\imath m x^{-}_{12}}$$ *where $x^{-}_{12}=x^{-}_{2}-x^{-}_{1}$ and the time $t$ in the reduced theory is to be equated with $x^{+}$ in the parent theory.* We will postpone the proof of our prescription to Sec. \[generalisation\]. In the next section, we will lend support to our prescription by verifying our claim using two different methods of calculating the heat kernel. We emphasize that the reduction prescription described above is applicable to the $z=2$ case of Galilean and scale invariant theories. The generic reduction procedure for arbitrary $z$ (though not Galilean boost invariant) is discussed later in Sec. \[sec:gener\]. Heat Kernel for a Galilean CFT with $z=2$ {#heatkernel} ========================================= Preliminaries: Heat Kernel, Zeta Regularisation ----------------------------------------------- We start by briefly reviewing the heat kernel and zeta function regularisation method [@Jack:1983sk; @jack1986renormalizability; @Jack:1990eb; @Grinstein:2015ina]. A pedagogical discussion can be found in [@mukhanov2007introduction; @Vassilevich:2003xt].
Let us consider a theory with partition function $\mathcal{Z}$, formally given by $$\begin{aligned} \mathcal{Z}= \int\, [\mathcal{D}\phi] [\mathcal{D}\phi^\dagger]\, e^{-\int d^{d}x\, \phi^{\dagger}\mathcal{M}\phi}\end{aligned}$$ where the eigenvalues of the operator $\mathcal{M}$ have positive real part.[^2] The path integral over the field variable $\phi$ suffers from ultraviolet (UV) divergences and requires proper regularization and renormalisation to be rendered a meaningful finite quantity. Similarly, the quantum effective action $W=-\ln\mathcal Z$ corresponding to this theory, given by the formal expression $$\begin{aligned} W = \ln (\det(\mathcal{M}))\end{aligned}$$ requires regularization and renormalisation.[^3] The method of zeta-function regularization introduces several quantities: the heat kernel operator $$\begin{aligned} \label{heatkerneloperator} \mathcal{G}= e^{-s\mathcal{M}}\,,\end{aligned}$$ its trace $K$ over the space $L^2$ of square integrable functions $$\begin{aligned} \label{eq:Kdefined} K(s,f, \mathcal{M})= \Tr_{L^{2}}\left(f\mathcal{G}\right)= \Tr_{L^{2}} \left(fe^{-s\mathcal{M}}\right)\,,\end{aligned}$$ where $f\in L^2$, and the zeta-function, defined as $$\begin{aligned} \label{eq:zetadefined} \zeta (\epsilon , f, \mathcal{M} ) = \Tr_{L^{2}} \left(f \mathcal{M}^{-\epsilon}\right)\,.\end{aligned}$$ $K$ and $\zeta$ are related via Mellin transform, $$\label{eq:mellintransf} K(s,f, \mathcal{M})= \frac{1}{2\pi\imath}\int_{c-\imath\infty}^{c+\imath\infty} \!\!\!\!d\epsilon\ s^{-\epsilon} \Gamma(\epsilon) \zeta(\epsilon, f, \mathcal{M})\quad \text{and}\quad \zeta(\epsilon, f, \mathcal{M})= \frac1{\Gamma(\epsilon)}\int_{0}^{\infty}\!\! \!ds\ s^{\epsilon-1}K(s,f, \mathcal{M})\,.$$ As is customary, below we use $f=1$. However, this should be understood as taking the limit $f\to1$ at the end of the computation, to ensure that all expressions in intermediate steps are well defined.
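The Mellin pair relating $K$ and $\zeta$ is easy to check on a toy spectrum. The sketch below (ours; the four eigenvalues are arbitrary illustrative values, not taken from the theory in the text) compares the direct sum $\sum_i \lambda_i^{-\epsilon}$ with the Mellin transform of $K(s)=\sum_i e^{-s\lambda_i}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Toy spectrum standing in for the eigenvalues of M (positive real part).
lams = np.array([1.0, 2.0, 3.0, 5.0])

def K(s):
    """Heat-kernel trace K(s,1,M) = sum_i exp(-s*lambda_i)."""
    return np.exp(-s * lams).sum()

def zeta_direct(eps):
    """zeta(eps,1,M) = sum_i lambda_i^(-eps), computed directly."""
    return (lams ** -eps).sum()

def zeta_mellin(eps):
    """zeta(eps,1,M) via the Mellin transform of K, as in the text."""
    val, _ = quad(lambda s: s ** (eps - 1) * K(s), 0, np.inf)
    return val / gamma(eps)

eps = 1.7
print(zeta_direct(eps), zeta_mellin(eps))  # agree to quadrature accuracy
```

The two evaluations agree to quadrature accuracy, as they must for any spectrum for which both sides converge.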
Formally, $W$ is given by the divergent expression $$\begin{aligned} W= -\int_{0}^{\infty} ds \frac{1}{s} K (s,1,\mathcal{M})\,.\end{aligned}$$ The regularized version, $W_{\epsilon}$, is defined by shifting the power of $s$, $$\begin{aligned} \label{eq:Weps} W_{\epsilon}=-\tilde{\mu}^{2\epsilon}\int_{0}^{\infty} ds \frac{1}{s^{1-\epsilon}} K (s,1,\mathcal{M}) =-\tilde{\mu}^{2\epsilon}\Gamma(\epsilon)\zeta(\epsilon, 1,\mathcal{M})\,,\end{aligned}$$ where the parameter $\tilde{\mu}$, with mass dimension $1$, is introduced so that $W_\epsilon$ remains dimensionless. In this context, the parameter $\epsilon$ behaves like a regulator, the divergences reappearing as $\epsilon\to 0$. In this limit $$\begin{aligned} W_{\epsilon}=-\left(\frac{1}{\epsilon}-\gamma_{E}+\ln(\tilde{\mu}^{2})\right)\zeta(0,1,\mathcal{M}) -\zeta^{\prime}(0,1,\mathcal{M})+ O(\epsilon)\,,\end{aligned}$$ so that subtracting the $\frac{1}{\epsilon}$ term gives the renormalized effective action $$\begin{aligned} \label{eq:Wren} W^{\text{ren}}= -\zeta^{\prime}(0,1,\mathcal{M})-\ln(\mu^{2})\zeta(0,1,\mathcal{M})\,,\end{aligned}$$ where $\mu^{2}=\tilde{\mu}^{2}e^{-\gamma_{E}}$ and $\gamma_E$ is the Euler constant. On a compact manifold $\zeta(\epsilon, 1,\mathcal{M})$ is finite as $\epsilon\to0$, and the renormalized effective action given by  is finite, as it should be. For non-compact manifolds the standard procedure for computing a renormalized effective action is to subtract a reference action that does not modify the physics. One may, for example, define $W=\ln(\det(\mathcal{M})/\det(\mathcal{M}_0))$, where the operator $\mathcal{M}_0$ is defined on a trivial (flat) background. This amounts to replacing $K(s,1,\mathcal{M})\to K(s,1,\mathcal{M})-K(s,1,\mathcal{M}_0)$ in Eq.  and correspondingly $\zeta(\epsilon, 1,\mathcal{M})\to \zeta(\epsilon, 1,\mathcal{M})-\zeta(\epsilon, 1,\mathcal{M}_0)$.
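The small-$\epsilon$ expansion leading to $W^{\text{ren}}$ can also be verified numerically. In the sketch below (ours), an arbitrary analytic stand-in $\zeta(\epsilon)=\zeta(0)+\zeta'(0)\,\epsilon$ is used, with all numerical values purely illustrative; we check that $W_\epsilon+\zeta(0)/\epsilon$ approaches $\left(\gamma_E-\ln\tilde\mu^2\right)\zeta(0)-\zeta'(0)$:

```python
from mpmath import mp, gamma, log, euler, mpf

mp.dps = 30                 # high precision to survive the 1/eps cancellation
eps = mpf('1e-10')
mu2 = mpf('2.25')           # stand-in for \tilde{mu}^2 (arbitrary)
z0, z1 = mpf(2), mpf(3)     # stand-ins for zeta(0,1,M) and zeta'(0,1,M)

# W_eps = -mu^(2 eps) * Gamma(eps) * zeta(eps),  with zeta(eps) ~ z0 + z1*eps
W_eps = -(mu2 ** eps) * gamma(eps) * (z0 + z1 * eps)

# Expected finite part once the 1/eps pole is removed:
finite_part = z0 * (euler - log(mu2)) - z1

print(W_eps + z0 / eps, finite_part)  # agree up to O(eps) corrections
```

The residual difference is of order $\epsilon$, confirming the pole structure used to define $W^{\text{ren}}$.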
The expression for $W^{\text{ren}}$ in Eq.  remains valid if it is understood that this subtraction is made before the $\epsilon\to 0$ limit is taken. Classical symmetry under Weyl variations (both in the relativistic case and the anisotropic one) guarantees that $\mathcal{M}$ transforms homogeneously, *i.e.,* $\delta_\sigma\mathcal{M}= -\Delta\sigma \mathcal{M}$ under $\delta_\sigma g_{\mu\nu}= 2\sigma g_{\mu\nu}$, where $\Delta$ is the scaling dimension of $\mathcal{M}$. Hence, we have $$\begin{aligned} \delta_\sigma \zeta (\epsilon,1,\mathcal{M}) = -\epsilon\Tr_{L^{2}} \left( \delta\mathcal{M}\mathcal{M}^{-\epsilon-1}\right) = \Delta \epsilon \zeta (\epsilon, \sigma, \mathcal{M})\,.\end{aligned}$$ Consequently, the anomalous variation of $W$ is given by $$\begin{aligned} \delta_\sigma W^{ren} = -\Delta\zeta(0,\sigma,\mathcal{M})\,.\end{aligned}$$ In the relativistic case, using the fact that $$\begin{aligned} \delta_{\sigma}W = \frac12\int d^{d+1}x \sqrt{g} T_{\mu\nu}\delta g^{\mu\nu}= - \int d^{d+1}x \sqrt{g} T^{\mu}{}_{\mu} \delta\sigma\,,\end{aligned}$$ one has the trace anomaly equation $$\begin{aligned} \mathcal{A}=-T^{\mu}{}_{\mu}=-\frac{1}{\sqrt{g}}\Delta\left(\frac{\delta\zeta(0,\sigma,\mathcal{M})}{\delta\sigma}\right)_{\sigma=0}\,.\end{aligned}$$ In the non-relativistic case, the anisotropic Weyl scaling is given by $h_{\mu\nu}\to e^{2\sigma}h_{\mu\nu}$ and $n_{\mu}\to e^{z\sigma}n_{\mu}$.
We have $$\begin{aligned} \delta_{\sigma}W = \int d^{d+1}x \sqrt{g} \left(\frac{1}{2} T_{\mu\nu} \delta h^{\mu\nu} - \mathcal{E}_{\mu} \delta n^{\mu}\right)= \int d^{d+1}x \sqrt{g} \left(zn^{\mu}\mathcal{E}_{\mu}-h^{\mu\nu}T_{\mu\nu}\right) \delta\sigma\,,\end{aligned}$$ since $\delta h^{\mu\nu}=-2\sigma h^{\mu\nu}$ and $\delta n^{\mu}=-z\sigma n^{\mu}$, leading to $$\begin{aligned} \mathcal{A}=zn_{\mu}\mathcal{E}^{\mu}-h^{\mu\nu}T_{\mu\nu}=-\frac{1}{\sqrt{g}}{\Delta}\left(\frac{\delta\zeta(0,\sigma,\mathcal{M})}{\delta\sigma}\right)_{\sigma=0}\,.\end{aligned}$$ One can evaluate $\left.\delta\zeta(0,\sigma,\mathcal{M})/\delta\sigma\right|_{\sigma=0}$ using the asymptotic form ($s\to0$) of the heat kernel $K$. The asymptotic expansion depends on the operator $\mathcal{M}$ and its scaling dimension. Schematically, one has $$\begin{aligned} \nonumber K(s,1,\mathcal{M})= \frac{1}{s^{d_{\mathcal{M}}}}\sum_{n=0}^{\infty}{\hspace{1pt}}s^{\kappa(n)}{\hspace{1pt}}\sqrt{g}{\hspace{1pt}}a_{n},\end{aligned}$$ where $\kappa(n)$ is a linear function of $n$. The singular pre-factor, $\frac{1}{s^{d_{\mathcal{M}}}}$, is determined by the heat kernel in the background-free, flat space-time limit, while the expansion accounts for corrections from background fields or geometry. The asymptotic expansion is guaranteed to exist if the heat kernel is well behaved for $s> 0$ in the flat space-time limit, that is, if $\sum_{i}e^{-s\lambda_i}$, with $\lambda_{i}$ the eigenvalues of the operator $\mathcal{M}$, is convergent. Convergence requires that the $\lambda_i$ have, at worst, power law growth and positive real part [@gilkey1980spectral].
We are interested in operators $\mathcal{M}$ of the generic form $$\begin{aligned} \nonumber \mathcal{M}= 2 \imath m \partial_{t^{\prime}}- (-1)^{z/2}(\partial_{i}\partial_{i})^{z/2}\,,\end{aligned}$$ for which the heat kernel has a small $s$ expansion of the following form $$\begin{aligned} \label{eq:Gexp} K(s,1,\mathcal{M})= \frac1{s^{1+d/z}}\sum_{n=0}^{\infty}{\hspace{1pt}}s^{2n/z} \int d^{d+1}x{\hspace{1pt}}\sqrt{g}{\hspace{1pt}}a_{n}\,,\end{aligned}$$ where $d$ is the number of spatial dimensions and $z$ is the dynamical exponent.[^4] Then the zeta function is given by $$\begin{aligned} \zeta(0,f,\mathcal{M})=\int d^{d+1}x \sqrt{g}{\hspace{1pt}}f{\hspace{1pt}}a_{(d+z)/2}\,,\end{aligned}$$ so that we arrive at an expression for the Weyl anomaly $$\begin{aligned} \label{anomalyheat} \mathcal{A}= -{\Delta} \ a_{(d+z)/2}\,.\end{aligned}$$ Hence, in order to determine the Weyl anomaly, one has to calculate the coefficient $a_{(d+z)/2}$ of the heat kernel expansion .[^5] In the following sections, we will work out a way to evaluate the heat kernel in flat space-time, and then in curved space-time, for a Schrödinger invariant field theory. We will do this first without using DLCQ/LCR, and then again with LCR (the modified cousin of DLCQ), using the prescription introduced above.
Heat Kernel in Flat Space-time {#sec:heatKdircalc} ------------------------------ ### Direct calculation (without use of DLCQ) {#sec:heatKdircalcsub} The action for a free Galilean CFT on a flat space-time (which is in fact invariant under the Schrödinger group) is given by $$S= \int dt\ d^{d}x\ \phi^{\dagger}\left[2m\imath\partial_{t}+\nabla^{2}\right]\phi\,.$$ In order to improve convergence of the functional integral defining the partition function, we perform a continuation to imaginary time: $$e^{\imath \int dt d^{d}x \phi^{\dagger}\left[2m\imath\partial_{t}+\nabla^{2}\right]\phi}\underset{t=-\imath\tau}{\mapsto}e^{- \int d\tau d^{d}x \phi^{\dagger}\left[2m\partial_{\tau}-\nabla^{2}\right]\phi}$$ Hence, the Euclidean version of $\mathcal{M}=2m\imath\partial_{t}+\nabla^{2}$ is given by $$\mathcal{M}_{E}=2m\partial_{\tau}-\nabla^{2}\,,$$ and it is this operator for which we will compute the heat kernel. The prescription $t=-\imath\tau$ is equivalent to adding $+\imath\epsilon$ to the propagator in Minkowskian flat space. In fact, the same $+\imath\epsilon$ prescription is obtained by deriving the non-relativistic propagator as the non-relativistic limit of the relativistic propagator. The heat kernel for $\mathcal{M}_E$ is a solution to the equation[^6] $$\label{heatkerneleq} (\partial_{s}+\mathcal{M}_E)\mathcal{G}=0\,,$$ that is, $$\begin{aligned} \label{heatkerneleq1} (\partial_{s}+2m\partial_{\tau_2}-\nabla^{2}_{x_2}) \mathcal{G}(s; (\vec{x}_2,\tau_2),(\vec{x}_1,\tau_1) )=0\,,\end{aligned}$$ with boundary condition $\mathcal{G}(0; (\vec{x}_2,\tau_2), (\vec{x}_1,\tau_1))= \delta(\tau_2-\tau_1)\delta^{d}(\vec{x}_2-\vec{x}_1)$.
Equation  is solved by $$\label{heatkernelg00} \mathcal{G}(s; (\vec{x}_2,\tau_2),(\vec{x}_1,\tau_1) ) =\delta\left(2ms - (\tau_2-\tau_1)\right)\frac{e^{-\frac{|\vec{x}_2-\vec{x}_1|^{2}}{4s}}}{(4\pi s)^{\frac{d}{2}}}\,.$$ Consequently, the Euclidean two-point correlator is given by $$G((\vec{x}_2,\tau_2), (\vec{x}_1,\tau_1)) = \int_{0}^{\infty}\!\!\! ds\, \mathcal{G}(s) = \frac{\theta(\tau)}{2m}\frac{e^{-\frac{m|\vec{x}|^{2}}{2\tau}}}{(2\pi \frac{\tau}{m})^{\frac{d}{2}}}\,,$$ where $\tau=\tau_{2}-\tau_{1}$ and $\vec{x}=\vec{x}_{2}-\vec{x}_{1}$. The same two-point correlator can be obtained by Fourier transform from the Minkowski momentum space propagator $G_{M}$, or its imaginary time version, $$G_{M}(p,\omega) = \frac{\imath}{2m\omega -|\vec{p}|^{2}+i0^{+}} \underset{\underset{\omega=\imath\omega_{E}}{t=-\imath\tau}}{\mapsto} G=\frac{1}{2m\omega_{E} +\imath |\vec{p}|^{2}}\,.$$ In the coincidence limit, the heat kernel of  contains a Dirac-delta factor, $\delta(2ms)$. Since this non-analytic behavior is unfamiliar, it is useful to re-derive this result by directly computing the trace $K$, Eq. . One can conveniently choose the test function $f=e^{-|\eta\omega|}$. Hence $$\begin{aligned} K(s,f,\mathcal{M}_{E,g})=\text{Tr}\ \left(fe^{-s\mathcal{M}_{E,g}}\right)=\int \left(\frac{d^{d}k}{(2\pi)^{d}} e^{-sk^{2}}\right)\left(\int \frac{d\omega}{2\pi} e^{-2m\imath s \omega-|\eta\omega|}\right)\end{aligned}$$ The integral over $k$ gives the factor of $1/s^{d/2}$, while the integral over $\omega$ gives $$\frac{1}{\pi}\frac{\eta}{4m^{2}s^{2}+\eta^{2}}\,,$$ which tends to $\delta(2ms)$ as $\eta\to0$. Before taking the limit, this factor is a well behaved function for which the Mellin transform that defines $\zeta$, Eq. , is well defined for $d/2<\text{Re}(\epsilon)<d/2+2$ and can be analytically continued to $\epsilon=0$. One may be concerned that the derivation above is only formal, as it does not involve an elliptic operator.
This is easily remedied by considering the elliptic operator[^7] $\mathcal{M}^{\prime}=\imath \eta \sqrt{-\partial_{t}^{2}} + \imath (2m) \partial_{t} +\nabla^{2}$. Its spectrum, $\left(2m\omega-k^2+\imath \eta|\omega|\right)$, tends to that of the Minkowskian Schrödinger operator $\mathcal{M}$ as $\eta\to0$. Consequently, the spectrum of the Euclidean avatar[^8] ($\mathcal{M}^{\prime}_{E,g}$) of $\mathcal{M}^{\prime}$ becomes $\left(k^{2}+2m\imath\omega+|\eta\omega|\right)$, and the heat kernel for that operator is given by $$\begin{aligned} K(s,1,\mathcal{M}^{\prime}_{E,g})=\text{Tr}\ \left(e^{-s\mathcal{M}^\prime_{E,g}}\right)=\int \left(\frac{d^{d}k}{(2\pi)^{d}} e^{-sk^{2}}\right)\left(\int \frac{d\omega}{2\pi} e^{-2m\imath s \omega-s|\eta\omega|}\right)\end{aligned}$$ The integral over $k$ gives the factor of $1/s^{d/2}$ as before, while the integral over $\omega$ gives $$\frac{1}{\pi s}\left(\frac{\eta}{4m^{2}+\eta^{2}}\right)\,,$$ which tends to $\frac{1}{s}\delta(2m)$ as $\eta\to0$. As we will see later, the Light Cone Reduction technique indeed reproduces this factor of $\delta(2m)$. ### Derivation using LCR In Euclidean, flat $d+2$ dimensional space-time, the heat kernel $\mathcal{G}_{R,E}$ of a relativistic scalar field at the free fixed point is given by [@Mukhanov:2007zz] $$\begin{aligned} \label{heatkernelrel} \mathcal{G}_{R, E}(s;x_{2}^{M},x_{1}^{M}) =\frac{1}{(4\pi s)^{d/2+1}}e^{-\frac{(x_{1}-x_{2})^2}{4s}}\end{aligned}$$ where the subscript reminds us that this is the relativistic case and $(x_{1}-x_{2})^2=(x_{1}^{M}-x^{M}_{2})(x^{N}_{1}-x^{N}_{2})\delta_{MN}$.
In preparation for using LCR, we rewrite the expression by first reverting to Minkowski space, $t=-\imath x^0$, and then switching to light-cone coordinates.[^9] Using $x^{\pm}=x^{\pm}_{2}-x^{\pm}_{1}$ we have: $$\mathcal{G}_{R, M} (s; (x^{+}_{2}, x^{-}_{2},\vec{x}_{2}), (x^{+}_{1}, x^{-}_{1}, \vec{x}_{1})) = \frac{1}{(4\pi s)^{d/2+1}}e^{-\frac{x^{+}x^{-}}{2s}-\frac{|\vec{x}|^2}{4s}}$$ where $\mathcal{G}_{R, M}$ is the heat kernel in Minkowski space. Now, in the reduced theory, the coordinate $x^{+}$ becomes the time coordinate $t$. Going to imaginary time, $t\to \tau = \imath t$, and Fourier transforming, we obtain the heat kernel $\mathcal{G}_{g,E}$ for the Galilean invariant theory in Euclidean space: $$\begin{aligned} \mathcal{G}_{g,E}(s;(\vec{x}_2,\tau_2), (\vec{x}_1,\tau_1))&= \int_{-\infty}^{\infty } \frac{1}{(4\pi s)^{d/2+1}}e^{\frac{\imath\tau x^{-}}{2s}-\frac{|\vec{x}|^2}{4s}} e^{-\imath m x^{-}} dx^{-}\nonumber\\ \label{heatkernelg10} &= 2\pi \delta\left(\frac{\tau}{2s}-m\right)\frac{1}{(4\pi s)^{d/2+1}}e^{-\frac{|\vec{x}|^2}{4s}}\end{aligned}$$ where $\tau=\tau_{2}-\tau_{1}$, in detailed agreement with Eq. . For later use we note that in the coincidence limit we have $$\begin{aligned} \label{heatkernelg11} \mathcal{G}_{g,E}((\vec{x},\tau), (\vec{x},\tau))=\frac{2\pi\delta(m)}{(4\pi s)^{d/2+1}}.\end{aligned}$$ It is interesting to note that LCR directly gives $\sim \delta(m)/s^{d/2+1}$, while the direct computation gives $\sim\delta (ms)/s^{d/2}$. Our main result, below, follows from the coincidence limit of the heat kernel expansion in Eq. , which is useful only for $s\ne0$, since it is used to extract the coefficients of powers of $s$ in the expansion. The limiting behavior as $s\to0$ of the function $\mathcal{G}_{g,E}$ is a delta function enforcing coincidence of the points, by construction (and this is why $a_0=1$ at coincidence), and therefore the behavior as $s\to0$ is correct but of no significance.
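The "detailed agreement" with the flat-space solution derived earlier can be made explicit with the delta-function scaling identity (a one-line check of ours):

```latex
2\pi\,\delta\!\Big(\frac{\tau}{2s}-m\Big)\,\frac{1}{(4\pi s)^{d/2+1}}
= 2\pi\,(2s)\,\delta\big(\tau-2ms\big)\,\frac{1}{(4\pi s)^{d/2+1}}
= \delta\big(2ms-\tau\big)\,\frac{1}{(4\pi s)^{d/2}}\,,
```

which, multiplied by the common Gaussian factor $e^{-|\vec{x}|^{2}/4s}$, is exactly the kernel obtained by the direct calculation.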
The spectral dimension of the operator $\mathcal{M}_{E}$ is given by $$d_{\mathcal{M}}= -\frac{d\ln(K)}{d\ln(s)} =\frac{d}{2}+1\,,$$ which explains why there cannot be any trace anomaly when the spatial dimension $d$ is odd. This has to be contrasted with the relativistic case, where the spectral dimension of the Laplacian operator is given by $\frac{d+1}{2}$, so that in the relativistic case the anomaly is only present when the spatial dimension $d$ is odd. Heat Kernel in Curved spacetime {#sec:heatKcurv} ------------------------------- Now that we know that LCR works in flat space-time, we can go ahead and implement it in curved space-time, exploiting the known fact that, for relativistic field theories coupled to a curved geometry, the heat kernel can be obtained as an asymptotic series. The method is explained in, *e.g.,* Refs. [@Jack:1983sk; @Mukhanov:2007zz; @Grinstein:2015ina]. The method, first worked out by DeWitt [@DeWitt:1965jb], starts with an Ansatz for the heat kernel, taking its cue from the flat space-time solution of the heat equation. For small enough $s$ the Ansatz for the heat kernel, corresponding to a relativistic theory in $d+2$ dimensions, reads: $$\mathcal{G}_{R,E}(x_2,x_1; s) = \frac{\Delta_{\text{VM}}^{1/2}(x_2,x_1)} {(4\pi s)^{d/2+1}}e^{-\sigma (x_2,x_1)/2s} \sum_{n=0}^\infty a_{n} (x_2,x_1) {\hspace{1pt}}s^n\,,\qquad a_{0}(x_1,x_2)=1\,,$$ with $a_{n}(x_2,x_1)$ the so-called Seeley–DeWitt coefficients, and where $\sigma(x_2,x_1)$ is the biscalar distance-squared measure (also known as the geodetic interval, as named by DeWitt), defined by $$\label{eq:geodeticinterval} \sigma(x_2,x_1) = \tfrac{1}{2} \left( \int_0^1 d\lambda \, \sqrt{G_{MN} \frac{dy^{M}}{d\lambda}{\hspace{1pt}}\frac{dy^{N}}{d\lambda}} \, \right)^2 \,,\qquad y(0)=x_1\,,\,\,y(1)=x_2\,,$$ with $y(\lambda)$ a geodesic.
The bi-function $\Delta_{\text{VM}}(x_2,x_1)$ is called the van Vleck–Morette determinant; this biscalar describes the spreading of geodesics from a point and is defined by $$\begin{aligned} \label{eq:vandet} \Delta_{\text{VM}} (x_2,x_1) =G(x_2)^{-1/2}{\hspace{1pt}}G(x_1)^{-1/2} \:\det\!\left(-\frac{\partial^2}{\partial x_{2}^{M} \partial x_{1}^{N^{\prime}}} \sigma (x_2,x_1) \right), \end{aligned}$$ where $G$ is the negative of the determinant of the metric $G_{MN}$. Now, to implement LCR, recall that a Schrödinger invariant theory coupled to a generic curved NC structure is obtained by reducing from the $d+2$ dimensional metric $G_{MN}$ in Eq. . In taking the coincident limit we must keep $x^{-}_{1}$ and $x^{-}_{2}$ arbitrary, in order to Fourier transform with respect to $x^{-}$ per the prescription . Therefore, we work in the partial coincident limit where $x^{\mu}_{1}=x^{\mu}_{2}$, with $\mu=+,1,2,\cdots, d$. Now, since $x^{-}$ is a null direction, in this limit we have $\sigma((x^{-}_{1},x^\mu),(x^{-}_2,x^\mu))=0$, or $\left[\sigma\right]=0$ for brevity. Furthermore, the null isometry guarantees that the metric components are independent of $x^{-}$, and so are $[a_{n}]$ and $[\Delta_{VM}]$. Thus this limit is equivalent to the coincident limit of the parent theory, hence $\big[\Delta_{VM}\big]=1$.
We refer to appendix \[riemann\] for details. Thus, in the coincidence limit, we have the following expression for the heat kernel corresponding to the reduced theory: $$\label{heatkernelgcurved} \mathcal{G}_{g,E}(s; (\tau,\vec{x}), (\tau,\vec{x}) ) =\frac{2\pi\delta(m)}{(4\pi s)^{d/2+1}}\sum_{n=0}^\infty a_{n} ((\tau,\vec{x}),(\tau,\vec{x})) {\hspace{1pt}}s^n\,,\qquad a_{0}((\tau_1,\vec{x}_1),(\tau_2,\vec{x}_2))=1$$ where, to define $\tau$, we have proceeded just as in flat space: first revert to a Minkowski metric, then switch to light cone coordinates, and finally go over to imaginary $x^+$ time, $\tau$. Subsequently, using Eq.  the anomaly is given by $$\begin{aligned} \label{an} \mathcal{A}^{G}_{d+1}=-4\pi \delta(m) \frac{a_{d/2+1} }{(4\pi)^{d/2+1}}\,.\end{aligned}$$ From Eq.  it is clear that only the zero mode of $P_-$ can contribute to the anomaly; the anomaly vanishes for fields with non-zero $U(1)$ charge. We already know that the anomaly for the relativistic complex scalar case is given by $$\begin{aligned} \mathcal{A}^{R}_{d+2}=-\frac{2a_{d/2+1} }{(4\pi)^{d/2+1}}\,.\end{aligned}$$ Thereby we establish the result advertised in the introduction, giving the Weyl anomaly of a $d+1$ dimensional Schrödinger invariant field theory of a single complex scalar field (carrying charge $m$ under the $U(1)$ symmetry), $\mathcal{A}^{G}_{d+1}$, in terms of the anomaly of the relativistic theory in $d+2$ dimensions, $\mathcal{A}^{R}_{d+2}$: $$\begin{aligned} \mathcal{A}^{G}_{d+1}=2\pi \delta(m) \mathcal{A}^{R}_{d+2}\,,\end{aligned}$$ computed on the class of metrics given in Eq. . At this point, we pause to remark on the interpretation of the $\delta(m)$ factor. While it trivially shows that the anomaly is absent for $m\neq 0$, the interpretation becomes subtle when $m=0$. The apparent divergence in the anomaly is just an artifact of the usual zero mode problem associated with null reduction.
A similar issue has been pointed out in [@Jensen:2014aia], in reference to [@Maldacena:2008wh; @Adams:2008wt]. The reduced theory in the $m\to0$ limit becomes infrared divergent; the fields become non-dynamical in that limit. The infrared divergence is also evident from Eq. . One may further understand the presence of $\delta(m)$ by letting $m$ be a continuous parameter and considering a continuous set of fields $\phi_m$ of charge $m$. The anomaly arising from the continuous set of fields is given by summing over their contributions: $$\begin{aligned} \nonumber \frac{1}{2\pi}{\hspace{1pt}}\int{\hspace{1pt}}dm{\hspace{1pt}}\mathcal{A}^{G}_{d+1}= \mathcal{A}^{R}_{d+2}{\hspace{1pt}}\int{\hspace{1pt}}dm{\hspace{1pt}}\delta(m)=\mathcal{A}^{R}_{d+2}\end{aligned}$$ The right hand side is exactly what we expect, since allowing the parameter $m$ to vary continuously restores the Lorentz invariance: consulting Eq.  we see that this continuous sum corresponds to restoring the relativistic theory of Eq. . That the constant of proportionality relating $\mathcal{A}^{R}_{d+2}$ to $\mathcal{A}^{G}_{d+1}$ vanishes for $m\neq 0$ can be verified by an all-orders computation of $\mathcal{A}^{G}_{d+1}$, to which we now turn our attention. Perturbative proof of Vanishing anomaly {#perturbation} ======================================= The fact that the anomaly vanishes for non-vanishing $m$ can be shown perturbatively, taking the background to be slightly curved. In flat space-time, wavefunction renormalization and coupling constant renormalization are sufficient to render a quantum field theory finite. Defining composite operators requires further renormalization. Therefore, when the model is placed on a curved background, additional short distance divergences appear, since the background metric can act as a source of operator insertions.
To cure these divergences, new counter-terms are required that may break scaling symmetry even at a fixed point of the renormalization group flow. In this section, we will treat the background metric as a small perturbation of a flat metric, so that we compute in a field theory in flat space-time with the effect of curvature appearing as operator insertions of the perturbation $h_{\mu\nu}=g_{\mu\nu}-\eta_{\mu\nu}$. To be specific, we will look at the vacuum bubble diagrams with external metric insertions. It turns out that all of these Feynman diagrams vanish at all orders of perturbation theory, leading to a vanishing anomaly. In fact, we will show that these anomalies vanish even away from the fixed point, as long as the theory satisfies some nice properties. Suppose we have a rotationally invariant field theory such that: 1. The theory includes only rotationally invariant (“scalar”) fields. 2. At the free fixed point, the theory admits a $U(1)$ symmetry under which the scalar fields are charged. 3. \[thm:prop\] The free propagator is of the form $\frac{\imath}{2m\omega - f(|\vec{k}|)+\imath\epsilon}$, where, generically, $f(|\vec{k}|)=|\vec{k}|^{z}$. 4. \[thm:int\] The interactions are perturbations about the free fixed point by operators of the form $g(\phi,\phi^*)|\phi|^{2}$, where $g$ is a polynomial in the scalar field $\phi$. An elementary argument, presented below, shows that, under these conditions, all the vacuum bubble diagrams vanish to all orders in perturbation theory. Before showing this, a few comments are in order. First, the argument is valid in any number of spatial dimensions. Second, assumption \[thm:int\] precludes terms like $\phi^{4}+(\phi^{*})^{4}$ or $K\phi^{2}$ in the Lagrangian. To be precise, a term $F(\phi)+\text{h.c.}$, for any holomorphic function $F$ of $\phi$, can evade this theorem.
This is because assumption \[thm:int\] implies that each vertex of the Feynman diagrams of the theory has at least one incoming scalar field line into it and one outgoing scalar field line from it; having both incoming and outgoing lines at each vertex is at the heart of this result. Thirdly, it should be understood that all interactions that can be generated via renormalization, that is, those not symmetry protected, are to be included. For example, were we to consider a single scalar field with only the interaction $\phi^{3}\phi^{*}+\text{h.c.}$, the interactions $\phi^{4}+(\phi^{*})^{4}$ and $(\phi\phi^{*})^{2}$ will be generated along the RG flow. Nonetheless, $U(1)$ symmetry will always prohibit a holomorphic interaction $F(\phi)+\text{h.c.}$ Lastly, assumption \[thm:prop\] can be relaxed to include a large class of functions $f(|\vec{k}|^{2})$; this means one can recast this result in terms of perturbation theory along the RG flow rather than about fixed points. To prove this claim, notice first that a vacuum diagram is a connected graph without external legs (hanging edges). Moreover, since we are considering a complex scalar field, the vertices are connected by directed line segments. These directed segments form directed closed paths. To see this, recall that by assumption each vertex has at least one ingoing and one outgoing path. Starting from any vertex, we have at least one outgoing path. Any one of these paths must have a second vertex at its opposite end, since by assumption there are no hanging edges. Take any one outgoing path and follow it to the next vertex. Now, at this second vertex repeat this argument: follow the outward path to a third vertex. And so on. Since a finite graph has a finite number of vertices, at some point in the process we must come back to a vertex we have already visited. For example, assume that we first revisit the $i$-th vertex. 
This means that starting from the vertex $i$ we have a directed path which loops back to the $i$-th vertex itself. The simplest example is that of a path starting and ending on the first vertex, corresponding to a self-contraction of the elementary field in the operator insertion. Let us call this directed loop $\Gamma$. We use the freedom in the choice of loop energy and momentum in the evaluation of the Feynman diagram to assign a loop energy $\omega$ in such a way that $\omega$ loops around $\Gamma$. In performing the integral over $\omega$ it suffices to consider the $\Gamma$ subdiagram only. The resulting integration is of the form: $$\int d\omega\, P(\omega,\vec k, \{\omega_n,\vec{ k}_n\})\prod_{n\in\Gamma}\frac{1}{(\omega+\omega_n-f(|\vec k+ \vec{k}_n|)/2m+i\epsilon)}$$ where the product is over all vertices in $\Gamma$ and correspondingly over all line segments in $\Gamma$ out of these vertices. Energy $\omega_n$ and momentum $\vec k_n$ enter $\Gamma$ at the vertex $n$. The factor $P(\omega,\vec k, \{\omega_n,\vec{ k}_n\})$ is a polynomial in momentum and energy and may arise if there are derivative interactions. Note that every propagator factor has the same $i\epsilon$ prescription, that is, all poles in complex $\omega$ lie in the lower half plane (have negative imaginary part). The integral over the real $\omega$ axis can be turned into an integral over a closed contour in the complex plane by closing the contour on an infinite-radius semicircle in the upper half plane, using the fact that for two or more propagators the integral over the semicircle at infinity vanishes. Then Cauchy’s theorem gives that the integral over the closed contour vanishes, as there are no poles inside the contour. This proves the claim, except for the singular case of a self-contraction, that is, a propagator from one vertex to itself. Self-contractions can be removed by normal ordering, again giving a vanishing result. 
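The contour argument can be checked numerically; the following sketch (with arbitrarily chosen pole positions and a finite $\epsilon$, both assumptions for illustration) verifies that the loop integral vanishes when all propagators share the same $i\epsilon$ prescription, and is nonzero once poles appear on both sides of the real axis, as for a relativistic propagator:

```python
import numpy as np
from scipy.integrate import quad

def loop_integral(poles, eps=0.5):
    """Integrate prod_n 1/(w - a_n + i*s_n*eps) over the real w axis.

    `poles` is a list of (a_n, s_n): a real pole position a_n and a sign
    s_n = +1 (pole below the real axis) or s_n = -1 (pole above it).
    """
    def integrand(w):
        val = 1.0 + 0j
        for a, s in poles:
            val *= 1.0 / (w - a + 1j * s * eps)
        return val
    re, _ = quad(lambda w: integrand(w).real, -np.inf, np.inf)
    im, _ = quad(lambda w: integrand(w).imag, -np.inf, np.inf)
    return re + 1j * im

# Same-sign prescription (the non-relativistic case): all poles lie in the
# lower half plane, and the integral vanishes identically for any eps > 0.
same_sign = loop_integral([(1.3, +1), (-0.7, +1)])
assert abs(same_sign) < 1e-6

# Mixed prescription (poles on both sides, as for a relativistic propagator
# with antiparticle poles): the integral equals the residue at the upper pole.
mixed = loop_integral([(1.3, +1), (-0.7, -1)])
expected = 2j * np.pi / ((-0.7 + 0.5j) - (1.3 - 0.5j))
assert abs(mixed - expected) < 1e-6
```

The contrast between the two cases mirrors the physical statement made below: without antiparticle poles in the upper half plane there is nothing to pick up when the contour is closed.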
For an alternative way of seeing this, note that this integral is independent of external momentum and energy, and is formally divergent in the ultraviolet (as $|\omega|\to\infty$). The integral results in a constant (independent of external momentum and energy) that must be subtracted to render it finite, and can be chosen to be subtracted completely, to give a vanishing result. The computation in the case of anti-commuting fields differs only in that a factor of $-1$ is introduced for each closed fermionic loop. Hence the claim applies equally to the case of anti-commuting scalar fields. We now return to the derivation of our main result, Eq. . The conditions above are satisfied for the theories considered in Sec. \[sec:heatKcurv\], namely, free theories of complex scalars, with the free propagator given by $\frac{\imath}{2m\omega - |\vec{k}|^{2}+\imath 0^{+}}$. Recall that we are to put the theory on a curved background which is assumed to be a small perturbation of the flat background. The perturbations act as insertions on vacuum bubble diagrams, but since they preserve the $U(1)$ symmetry the model still satisfies the assumptions above. Hence all the bubble diagrams vanish, and we conclude there are no divergences coming from metric insertions on bubble diagrams. Consequently, there is no scale anomaly. We emphasize that the absence of the Weyl anomaly is valid at all orders of perturbation theory in both the coupling and the metric. The result holds true even if we make the couplings space-time dependent, so that every coupling insertion injects additional momentum and energy into the bubble diagram. Physically, the anomaly vanishes because the absence of antiparticles in non-relativistic field theories and the conservation of $U(1)$ charge forbid pair creation, which is necessary for the vacuum fluctuations that may give rise to the anomaly. 
This perturbative proof holds for theories which need not be Galilean invariant, and the question arises as to whether one may use LCR to make statements about anomalies for theories with kinetic term involving one time derivative and $z\neq 2$. We will take up this task in the following section, starting by giving the promised proof of our prescription in Eq. . We remark that the perturbative proof works for $m\neq 0$. For $m=0$, the integrand becomes independent of $\omega$, and one cannot perform the contour integral to argue that the diagrams vanish. In fact, the integral over $\omega$ is divergent, as expected from our earlier observation that at $m=0$ one encounters IR divergences. One way to see the presence of $\delta(m)$, as explained earlier, is to take a continuous set of fields $\phi_m$, labelled by the continuous parameter $m$. If we exchange the sum over (1-loop) bubble diagrams and the integral over $m$, then each of the propagators can be thought of as a relativistic propagator with $m$ playing the role of $p_-$. Thus the whole calculation formally becomes that of the relativistic anomaly. One can verify our results by explicit calculation in specific cases. In a slightly curved space-time, one can treat the deviation from flatness as background field sources. This also serves the purpose of checking that the $\eta$-regularization is appropriate, obtaining the anomaly as a function of $\eta$. Since, as $\eta\to0$, for $m\neq 0$, the flat space heat kernel vanishes, one expects the anomaly to vanish. In fact, one can check that a $\delta(m)$ is recovered as $\eta\to0$. We refer to the App. 
\[hketa\] for an explicit calculation; it verifies our results in detail, and shows the vanishing of the anomaly regardless of the order of the limits $\eta\to0$ and $m\to0$. Modified LCR and Generalisation {#generalisation} =============================== Proving the heat kernel prescription ------------------------------------ In this subsection we will explain why our proposed method to determine the heat kernel for Schrödinger field theory ($z=2$) worked so well, as evidenced by the agreement between Eqs.  and . We will see that one can use LCR to relate the heat kernel of a theory living in $d+1$ dimensions with that of a parent theory living in $d+2$ dimensions, as long as the parent theory has $SO(1,1)$ invariance.[^10] Furthermore, if the parent theory has a dynamical scaling exponent given by $z$, then the theory living in $d+1$ dimensions has $2z$ as its dynamical exponent. We will make these statements precise in what follows. Suppose the operator $D$ defined in $d+2$ dimensional space-time is diagonal in the eigenbasis of $P_-$, the momentum conjugate to $x^{-}$: $$\begin{aligned} \langle x^{+}_{2}, x^{i}_{2}, m_{2}| D| x^{+}_{1},x^{i}_{1},m_{1}\rangle = \langle x^{+}_{2}, x^{i}_{2}| D_{m_2}| x^{+}_{1},x^{i}_{1}\rangle \delta(m_{2}-m_{1})\,,\end{aligned}$$ where $m_{1,2}$ label the eigenvalues of $P_-$. The example worked out in Sec. \[sec:heatKdircalc\] had $D=\mathcal{M}$, and it does satisfy this requirement. 
It follows that $$\begin{aligned} \nonumber \langle x^{+}_{2}, x^{i}_{2}, x^{-}_{2}| e^{-sD}| x^{+}_{1},x^{i}_{1},x^{-}_{1}\rangle &=&&\frac{1}{2\pi} \int dm_{1}\ dm_{2} e^{-\imath m_{1}x^{-}_{1}+\imath m_{2}x^{-}_{2}}\langle x^{+}_{2}, x^{i}_{2}, m_{2}| e^{-sD}| x^{+}_{1},x^{i}_{1},m_{1}\rangle\\ &=&& \frac{1}{2\pi} \int dm_{1} e^{\imath m_1 x^{-}_{12}}\langle x^{+}_{2}, x^{i}_{2}| e^{-sD_{m_1}}| x^{+}_{1},x^{i}_{1}\rangle\,,\end{aligned}$$ from which we obtain $$\begin{aligned} \label{eq:prescription2} \langle x^{+}_{2}, x^{i}_{2}| e^{-sD_{m}}| x^{+}_{1},x^{i}_{1}\rangle = \int dx^{-} e^{-\imath m x^{-}_{12}} \langle x^{+}_{2}, x^{i}_{2}, x^{-}_{2}| e^{-sD}| x^{+}_{1},x^{i}_{1},x^{-}_{1}\rangle\,.\end{aligned}$$ This is precisely the prescription we gave in Eq. . Generalisation {#sec:gener} -------------- Since the LCR (or DLCQ) trick requires null cone reduction, it may seem necessary that the parent theory have $SO(d+1,1)$ symmetry, and that this will result necessarily in a Galilean invariant reduced theory, that is, with $z=2$. This is not quite right: one may relax the condition of $SO(d+1,1)$ symmetry and obtain reduced theories with $z\ne 2$. The key observation is that for null cone reduction only two null coordinates are needed, with the rest of the coordinates playing no role. Hence, we consider null cone reduction of a $d+2$ dimensional theory which enjoys $SO(1,1) \times SO(d)$ symmetry. The reduced theory will be a $d+1$ dimensional theory with $SO(d)$ rotational symmetry and a residual $U(1)$ symmetry that arises from the null reduction. The point is that the theory can enjoy anisotropic scaling symmetry. 
Consider, for example, the following class of operators $$\begin{aligned} \label{eq:Mrd+2} \mathcal{M}_{rc; d+2} = \left(-\partial_{t}^{2}+\partial_{x}^{2}\right) - (-1)^{z/2} (\partial_{i}\partial^{i})^{z/2}\,,\end{aligned}$$ where $t=x^0$ and $x=x^{d+1}$, and for the remainder of this section there is an implicit sum over repeated latin indices, over the range $i=1,\ldots,d$. These operators transform homogeneously under $$\begin{aligned} x^{i}\to \lambda x^{i},\qquad t\to \lambda^{z/2}t\qquad\text{and}\qquad x\to \lambda^{z/2}x\,. \end{aligned}$$ Introducing null coordinates as before, $x^{\pm}=\frac{1}{\sqrt{2}}(x\pm t)$, null reduction of this operator yields $$\begin{aligned} \mathcal{M}_{gc; d+1}= 2 \imath m \partial_{t^{\prime}}- (-1)^{z/2}(\partial_{i}\partial_{i})^{z/2}\,,\end{aligned}$$ where $t^{\prime}=x^{+}$ is the time coordinate of the reduced theory. From the dispersion relation of the reduced theory, $2m\omega = |\vec k|^{z}$, we read off that the dynamical exponent is $z$. Here we are interested in even $z$ to ensure that the operator $\mathcal{M}_{gc; d+1}$ is local. For $z=2$, we recover the case discussed in earlier sections, with the parent theory being Lorentz invariant and the reduced theory being Schrödinger invariant. 
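Both the dispersion relation of the reduced operator and the anisotropic homogeneity of the parent can be verified symbolically; the following sketch uses the sample values $z=4$ and one spatial dimension for brevity:

```python
import sympy as sp

t, x, w, k, m, lam = sp.symbols('t x omega k m lambda', positive=True)
z = 4  # a sample even dynamical exponent, so that the operator is local

# Reduced operator 2*i*m*d_t - (-1)^(z/2) (d_x^2)^(z/2) acting on a plane
# wave exp(-i*omega*t + i*k*x) in d = 1 spatial dimension.
phi = sp.exp(-sp.I * w * t + sp.I * k * x)
Mgc_phi = 2 * sp.I * m * sp.diff(phi, t) - (-1)**(z // 2) * sp.diff(phi, x, z)
eigenvalue = sp.simplify(Mgc_phi / phi)
assert sp.simplify(eigenvalue - (2 * m * w - k**z)) == 0  # 2 m omega = |k|^z

# Parent symbol omega^2 - kx^2 - |k_perp|^z: homogeneous of degree -z under
# omega -> omega/lam^(z/2), kx -> kx/lam^(z/2), k_perp -> k_perp/lam
# (momenta scale inversely to the coordinates).
kx, kp = sp.symbols('k_x k_perp', positive=True)
symbol = w**2 - kx**2 - kp**z
scaled = symbol.subs({w: w / lam**(z // 2), kx: kx / lam**(z // 2),
                      kp: kp / lam}, simultaneous=True)
assert sp.simplify(scaled - symbol / lam**z) == 0
```
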
Following the prescription , we can relate the matrix element of the heat kernel operator for $\mathcal{M}_{r;d+2}$ to that of $\mathcal{M}_{g;d+1}$, via[^11] $$\begin{aligned} \label{mastereq} \mathcal{G}_{\mathcal{M}_{gc;d+1}}= \int_{-\infty}^{\infty} dx^{-}\ e^{-\imath m x^{-}} \langle x^{-}_{0}+x^{-}|\mathcal{G}_{\mathcal{M}_{rc;d+2}}|x^{-}_{0}\rangle\,.\end{aligned}$$ This should be viewed as an operator relation: thinking of the basis on which the operator $\mathcal{G}_{\mathcal{M}_{r;d+2}}$ acts as given by the tensor product of $|x^{+}\rangle$, $|x^{-}\rangle$ and $|x^{i}\rangle$ for $i=1,2,\ldots,d$, then $ \langle x^{-}_{0}+x^{-}|\mathcal{G}_{\mathcal{M}_{r;d+2}}|x^{-}_{0}\rangle $ is an operator acting on the complement of the space spanned by $|x^{-}\rangle$. Taking the trace on both sides of Eq. , we obtain the heat kernel of the reduced theory: $$\begin{aligned} \label{mastereq1} K_{\mathcal{M}_{gc;d+1}}= \int_{-\infty}^{\infty} dx^{-}\ e^{-\imath m x^{-}}\ \Tr_{x^{+},x^{i}} \langle x^{-}_{0}+x^{-}|\mathcal{G}_{\mathcal{M}_{rc;d+2}}|x^{-}_{0}\rangle\end{aligned}$$ Equations  or  are useful in practice only when we know either the left or the right hand side by some other means. Hence, the next meaningful question to be asked is whether we can calculate $\mathcal{G}_{\mathcal{M}_{r}}$ explicitly for a curved space-time for any $z$. The case $z=2$, in which the parent theory is relativistic and the reduced theory is Schrödinger invariant, is well known and was presented in Sec. \[sec:heatKdircalc\]. For generic $z$, the answer is yes, to some extent. We will find a closed form expression when the slice of constant $(t,x)$ in space-time is described by a metric that does not depend on $t$ or $x$: $$\label{eq:metricslice} ds^{2} = -dt^{2}+(dx)^{2}+h_{ij}(x^{i})dx^{i}dx^{j}$$ With this choice, the heat kernel equation for the curved background version of the operator $\mathcal{M}_{rc;d+2}$ of Eq.  
admits a solution by separation of variables, into the product of the relativistic heat kernel in $1+1$ dimensions and the heat kernel for an operator acting only on the $d$-dimensional slice [@Baggio:2011ha]. Specifically, we consider operators $$\begin{aligned} \label{curvedoperator} \mathcal{M}_{rc;d+2}= \nabla^{2}_{t,x}-D^{z/2}\end{aligned}$$ where $\nabla^{2}_{t,x}=(-\partial_{t}^{2}+\partial_{x}^{2})$ and $D$ is a second order scalar differential operator on the slice of constant $(t,x)$, [*e.g.*]{}, $D=-\nabla^{2}=-1/\sqrt{h}\,\partial_i\sqrt{h}h^{ij}\partial_j$. With these choices, $$\mathcal{G}_{\mathcal{M}_{rc;d+2}}=\mathcal{G}_{\nabla^{2}_{t,x}}\;\mathcal{G}_{D^{z/2}}\,.$$ Gilkey has shown that the heat kernel expansion for $D^k$ can be computed from that for $D$ [@gilkey1980spectral] for $k>0$. The argument is based on the observation that the $\zeta$-functions for the two operators are related: $$\zeta (\epsilon , f,D^k ) = \Tr_{L^{2}} \left(f (D^k)^{-\epsilon}\right) =\Tr_{L^{2}} \left(f D^{-k \epsilon}\right) = \zeta (k \epsilon , f,D ) \,.$$ Gilkey’s result is as follows: If $D$ has heat kernel expansion $$\begin{aligned} K_D=\left(\frac{1}{\sqrt{4\pi}}\right)^{d}\sum_{n\geq 0} s^{n-\frac{d}{2}}a^{(d)}_{n}\end{aligned}$$ then the heat kernel expansion of $D^k$ is $$\begin{aligned} \nonumber K_{D^k}= \left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d}\sum_{n\geq 0} s^{\frac{2n-d}{2k}}\frac{\Gamma(\frac{d-2n}{2k})}{k\Gamma(\frac{d}{2}-n)}a^{(d)}_{n} &=\left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d}\sum_{\underset{2n\neq d (\text{mod\ } 2k)}{n\geq 0}} s^{\frac{2n-d}{2k}}\frac{\Gamma(\frac{d-2n}{2k})}{k\Gamma(\frac{d}{2}-n)}a^{(d)}_{n} \\ &+\left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d}\sum_{\underset{2n=d (\text{mod\ } 2k)}{n\geq 0}} s^{\frac{2n-d}{2k}}(-1)^{\frac{(2n-d)(1-k)}{2k}}a^{(d)}_{n}\end{aligned}$$ Hence, $\mathcal{M}_{rc;d+2}=(-\partial_{t}^{2}+\partial_{x}^{2}) - (-\nabla^{2})^{z/2}$ has heat kernel expansion $$\begin{aligned} \label{eq:KMrcexp} 
\langle x^{+}_{2},x^{-}_2, x^{i}| \mathcal{G}_{\mathcal{M}_{rc;d+2}}| x^{+}_{1},x^{-}_{1},x^{i}\rangle = \frac{e^{\frac{-x^{+}_{12}x^{-}_{12}}{2s}}}{4\pi s}\left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d} \sum_{n\geq 0} s^{\frac{2n-d}{z}}\frac{\Gamma(\frac{d-2n}{z})}{\frac{z}2\Gamma(\frac{d}{2}-n)}a^{(d)}_{n}\end{aligned}$$ where $x^{\pm}_{12}=x^{\pm}_{2}-x^{\pm}_{1}$ and $a^{(d)}_{n}$ are the well known coefficients of the heat kernel expansion of $ -\nabla^{2}$. Now, the reduced theory lives on a $d+1$ dimensional space-time with a curved spatial slice, [*i.e.*]{}, the background metric is given by $$\begin{aligned} ds^{2}=-dt^{2}+h_{ij}dx^{i}dx^{j}\,,\end{aligned}$$ where $i$ runs from $1$ to $d$. In order to extract the heat kernel of $\mathcal{M}_{gc;d+1}=2\imath m\partial_{t}-(-\nabla^2)^{z/2}$, we need the partial trace of the heat kernel of $\mathcal{M}_{rc;d+2}$, $$\begin{aligned} \langle x^{-}_{0}+x^{-}| \Tr_{x^{+},x^{i}} \mathcal{G}_{\mathcal{M}_{rc;d+2}}|x^{-}_{0}\rangle = \left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d} \frac{1}{4\pi s}\sum_{n\geq 0} s^{\frac{2n-d}{z}}\frac{\Gamma(\frac{d-2n}{z})}{\frac{z}2\Gamma(\frac{d}{2}-n)}a^{(d)}_{n}\,,\end{aligned}$$ leading to $$\begin{aligned} \label{eq:KMgcexp} K_{\mathcal{M}_{gc;d+1}}= 2\pi \delta(m)\frac{1}{4\pi s} \left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d}\sum_{n\geq 0} s^{\frac{2n-d}{z}}\frac{\Gamma(\frac{d-2n}{z})}{\frac{z}2\Gamma(\frac{d}{2}-n)}a^{(d)}_{n}\,.\end{aligned}$$ Adding conformal coupling modifies $a^{(d)}_{n}$ but the pre-factor stays $2\pi \delta(m)\frac{1}{4\pi s} \left(\frac{1}{\sqrt{4\pi}}\right)^{\!\!d}$. 
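As a cross-check of Gilkey's relation used above, the $n=0$ term can be compared with the direct flat-space momentum integral; a minimal sketch with the sample values $d=3$, $k=2$ (so $z=2k=4$), per unit volume:

```python
import sympy as sp

s, q = sp.symbols('s q', positive=True)
d, kpow = 3, 2  # sample values: spatial dimension d = 3, operator power k = 2

# Direct flat-space heat kernel per unit volume for D^k with D = -Laplacian:
#   K = int d^d k / (2*pi)^d  exp(-s |k|^(2k))
area = 2 * sp.pi**sp.Rational(d, 2) / sp.gamma(sp.Rational(d, 2))  # unit-sphere area
K_direct = area / (2 * sp.pi)**d * sp.integrate(
    q**(d - 1) * sp.exp(-s * q**(2 * kpow)), (q, 0, sp.oo))

# Gilkey's n = 0 term: (4*pi)^(-d/2) s^(-d/(2k)) Gamma(d/(2k)) / (k Gamma(d/2)),
# with a_0 = 1 per unit volume.
K_gilkey = ((4 * sp.pi)**(-sp.Rational(d, 2)) * s**(-sp.Rational(d, 2 * kpow))
            * sp.gamma(sp.Rational(d, 2 * kpow)) / (kpow * sp.gamma(sp.Rational(d, 2))))

assert sp.simplify(K_direct / K_gilkey) == 1
```
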
Hence, we have the generalised result $$\begin{aligned} \label{eq:mainresultagain} \mathcal{A}^{g}_{d+1}= 2\pi \delta(m) \mathcal{A}^{r}_{d+2}\end{aligned}$$ where $\mathcal{A}^{g}_{d+1}$ is the Weyl anomaly of a theory of a single complex scalar field of charge $m$ under a $U(1)$ symmetry living in $d+1$ dimensions with dynamical exponent $z$, and $\mathcal{A}^{r}_{d+2}$ is the Weyl anomaly of a field theory living in $d+2$ dimensions such that it admits a symmetry under $t\to \lambda^{z/2}t$, $x^{d+2}\to \lambda^{z/2}x^{d+2}$ and $x^{i}\to\lambda x^{i}$ for $i=1,\ldots,d+1$. Thus we have shown that theories with one time derivative on a time-independent curved background do not have any Weyl anomalies. This is consistent with the perturbative result obtained previously. It deserves mention that the operator $\mathcal{M}_{rc;d+2}$ of Eq.  does not transform homogeneously under Weyl transformations. In order to construct a Weyl covariant operator, consider generalizing the metric to the following form $$ds^2= N dx^+dx^-+h_{ij}dx^idx^j\,.$$ If $N$ is independent of $x^-$, the metric for the reduced theory will include a general lapse function $N$. 
Then we replace $(\nabla^2)^{\frac{z}{2}}$ by $\mathcal{O}^{(d+2z-4)}\mathcal{O}^{(d+2z-8)}\cdots\mathcal{O}^{(d+4)}\mathcal{O}^{(d)}$ with $\mathcal{O}^{(p)}$ defined as $$\begin{aligned} \mathcal{O}^{(p)}\equiv \nabla^{2}-\frac{p}{4(d-1)}R+ \frac{2+p-d}{z}\frac{\partial_{i}N}{N}h^{ij}\partial_{j}+\frac{d}{4z^{2}}(2+p-d)\frac{\partial_{i}N}{N}h^{ij}\frac{\partial_{j}N}{N}\end{aligned}$$ Under $h_{ij}\to e^{2\sigma}h_{ij}$, $N\to e^{z\sigma}N$ and $\psi \to e^{-\frac{p}{2}\sigma}\psi$, this operator transforms covariantly, in the sense that $$\begin{aligned} \mathcal{O}^{(p)}\psi \to e^{-(\frac{p}{2}+2)\sigma}\mathcal{O}^{(p)}\psi\,.\end{aligned}$$ Therefore, under the Weyl rescaling $h_{ij}\to e^{2\sigma}h_{ij}$, $N\to e^{z\sigma}N$ and $\phi \to e^{-\frac{d}{2}\sigma}\phi$, we have that $$\begin{aligned} \label{eq:confCoupling} N\sqrt{h}\,\phi^{*}\mathcal{O}^{(d+2z-4)}\mathcal{O}^{(d+2z-8)}\cdots\mathcal{O}^{(d+4)}\mathcal{O}^{(d)}\phi\end{aligned}$$ is invariant under Weyl transformations. Adding the conformal coupling will modify the expressions for $a^{(d)}_{n}$, but the scaling with respect to $s$ will remain unmodified. Hence we can enquire about the existence or absence of potential Weyl anomalies. To have a non-vanishing Weyl anomaly, we need an $s$-independent term in the heat kernel expansion. This is possible only when $\frac{2n-d}{z}=1$, [*i.e.*]{}, when $d+z$ is even; see Eqs.  and . Since for a local Lagrangian $z$ must be even, this condition corresponds to even $d$ [^12]. This is expected for the following reason: the scalars we can construct out of geometrical data (that can potentially appear as a trace anomaly) have even dimensions and the volume element scales like $\lambda^{d+z}$, so in order to form a scale invariant quantity $d+z$ has to be even. 
Now when $d$ is even, we have $s$ independence for $n=(d+z)/2$, and the coefficient of $s^{0}$ is given by $\left(\frac{1}{\sqrt{4\pi}}\right)^{d} (-1)^{1-\frac{z}2}a^{d}_{\frac{d+z}{2}}$. Hence, the result relating anomalies in the parent and reduced theory, Eq. , still holds. Summary, Discussion and Future directions {#conclusion} ========================================= We have shown that for a $d+1$ dimensional Schrödinger invariant field theory of a single complex scalar field carrying charge $m$ under a $U(1)$ symmetry, the Weyl anomaly, $\mathcal{A}^{G}_{d+1}$, is given in terms of that of a relativistic free scalar field living in $d+2$ dimensions, $\mathcal{A}^{R}_{d+2}$, via $$\begin{aligned} \mathcal{A}^{G}_{d+1}= 2\pi \delta (m) \mathcal{A}^{R}_{d+2}\,.\end{aligned}$$ Here the parent $d+2$ dimensional theory lives in a space-time with a null isometry generated by the Killing vector $\partial_{-}$, so that the metric can be given in terms of a $d+1$ dimensional Newton-Cartan structure. This result generalises to $$\begin{aligned} \mathcal{A}^{g}_{d+1}= 2\pi \delta(m) \mathcal{A}^{r}_{d+2}\,,\end{aligned}$$ where $\mathcal{A}^{g}_{d+1}$ is the Weyl anomaly of a theory of a single complex scalar field of charge $m$ under a $U(1)$ symmetry living in $d+1$ dimensions with dynamical exponent $z$, while $\mathcal{A}^r_{d+2}$ is the Weyl anomaly of an $SO(1,1)\times SO(d)$ invariant theory living in $d+2$ dimensions such that it admits a symmetry under $t\to \lambda^{z/2}t$, $x^{d+2}\to \lambda^{z/2}x^{d+2}$ and $x^{i}\to\lambda x^{i}$ for $i=1,\ldots,d+1$. To obtain information regarding the anomaly, we introduced a method to systematically handle the heat kernel for a theory with a kinetic term involving one time derivative only. We provided crosschecks and consistency checks on our heat kernel prescription. One may worry that to properly define a heat kernel the square of the derivative operator must be considered. 
This would also be the case for, say, the Dirac operator. In fact, one can properly define it this way; see, for example, Ref. [@Witten:2015aba]. The result obtained regarding the anomaly of Schrödinger field theory is consistent with the one by Jensen [@Jensen:2014aia]. Auzzi [*et al.*]{} [@Auzzi:2016lxb] have studied the anomaly for a Euclidean operator given by $$\begin{aligned} \mathcal{M}^{\prime}_{E,g}= 2m\sqrt{-\partial_{t}^{2}} - \nabla^{2}\,,\end{aligned}$$ with eigenspectrum given by $|\vec{k}|^{2} + 2m |\omega| \geq 0$. One can define the heat kernel for this operator as well, but the eigenspectrum of this operator is not analytically related to that of $\mathcal{M}_{M,g}=2\imath m\partial_{t}+\nabla^{2}$, which is $-k^{2} + 2m \omega$. As a result the propagator in $\omega$-$\vec k$ space has a cut in the complex $\omega$ plane with a branch point at the origin, making the analytic continuation to Minkowski space problematic. It is known that the two-point correlator of Schrödinger field theory is constrained and has a particular form, as elucidated in Refs. [@Nishida:2007pj; @Goldberger:2014hca]. While our prescription and the resulting Euclidean correlator conform to that form, it is not clear how the Euclidean Schrödinger operator defined in Ref. [@Auzzi:2016lxb] does, if at all. Finally, we note that the operator $\sqrt{-\partial_t^2}$ is non-local (in the sense that the kernel, defined by $\sqrt{-\partial_t^2}f(t)=\int\! dt'\,K(t-t')f(t')$, has non-local support, $K(t)=2\partial_t\text{P}\frac{1}{t}$). There are several avenues of investigation suggested by this work: 1. What happens in the case of several scalar fields with different charges interacting with each other while preserving Schrödinger invariance in flat space-time? How is the pre-factor $\delta(m)$ modified? 2. 
It is not obvious how null reduction of a theory of a Dirac spinor in $d+2$ dimensions can result in a Lagrangian in $d+1$ dimensions of the form $\mathcal{L}=2\imath m \psi^{\dagger}\partial_{t}\psi +\psi^{\dagger}\nabla^{2}\psi$, let alone one with $\mathcal{L}=2\imath m \psi^{\dagger}\partial_{t}\psi -\psi^{\dagger}(-\nabla^{2})^{z/2}\psi$ for $z\ne2$. On the other hand, as we have seen, the functional integral over non-relativistic anti-commuting fields yields the same determinant as that of commuting fields (only raised to a positive rather than a negative power). Hence, the anomaly of the anti-commuting field is the negative of that of the commuting field. 3. Calculations using the same Euclidean operator as in Ref. [@Auzzi:2016lxb] give a non-vanishing entanglement entropy in the ground state [@Solodukhin:2009sk]. By contrast, for the operator $\mathcal{M}_{M,g}=2\imath m\partial_{t}+\nabla^{2}$, the entanglement entropy in the ground state vanishes, since for this local non-relativistic field theory $\phi(x)|0\rangle=0$ and hence the ground state is a product state. It would be of interest to verify this result by direct computation using a method based on our prescription. 4. The method described in Sec. \[sec:gener\] to compute Weyl anomalies in theories with $z\ne2$ is not sufficiently general in that, by assuming the metric is time independent and has constant lapse, it neglects anomalies involving extrinsic curvature or gradients of the lapse function. A future challenge is to develop a more general computational method. We hope to come back to these questions in the future. SP would like to thank Mainak Pal, Shauna Kravec and Diptarka Das for useful discussions. The authors would also like to acknowledge constructive and useful comments by the referee. This work was supported in part by the US Department of Energy under contract DE-SC0009919. 
Technical Aspects of Heat Kernel for one time derivative theory {#ap1} =============================================================== Here is another perspective on why $\delta(m)$ appears in the heat kernel for a one time derivative theory, using the eigenspectrum of the operator $\mathcal{M}_{g}$ with one time derivative. The Minkowski operator $\mathcal{M}_{M,g}$ is given by $$\begin{aligned} \mathcal{M}_{M,g}= 2\imath m \partial_{t} - (-\nabla^2)^{z/2}\end{aligned}$$ and its eigenspectrum is given by $2m\omega -k^{z}$. Now, we cannot directly define the heat kernel, since the eigenvalues range from $-\infty$ to $\infty$ and the trace blows up. A similar situation arises in the relativistic theory, where the eigenspectrum is given by $-\omega^{2}+k^{2}$. There we define the heat kernel by Euclideanizing the time co-ordinate, so that the eigenvalues become $\omega^{2}+k^{2}\geq 0$, and this positive definiteness allows for convergence. Technically, we can always define the heat kernel for an operator $M$ as long as the eigenvalues of $M$ have positive real part. Building on our experience with the relativistic case, we use analytic continuation here as well. We define the Euclidean operator as $$\begin{aligned} \mathcal{M}_{E,g}=2 m \partial_{\tau} + (-\nabla^2)^{z/2}\end{aligned}$$ with eigenspectrum given by $\lambda_{k,\omega}=-2\imath m\omega+k^{z}$. Evidently, $\text{Re}\left(\lambda_{k,\omega}\right)\geq 0$, hence we have a well defined heat kernel, given by $$\begin{aligned} K_{\mathcal{M}_{E,g}}=\Tr e^{-s\mathcal{M}_{E,g}} = \int \frac{d^{d}k}{(2\pi)^{d}} e^{-sk^{z}} \int \frac{d\omega}{2\pi} e^{-2m\imath s \omega} = \frac{\delta (m)}{2 s} \frac{2}{\Gamma(\frac{d}{2})} \frac{\Gamma \left(\frac{d}{z}+1\right)}{d \left(\sqrt{4\pi s^{\frac{2}{z}}}\right)^{d}}\end{aligned}$$ Similarly, the Euclidean heat kernel is well defined for the operator $\mathcal{M}_{rc;d+2}= \nabla^{2}_{t,x}-(-\nabla^{2}_{x^{i}})^{z/2}$, where $i=1,2, \ldots, d$ and $x\equiv x^{d+2}$. 
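The closed form for $K_{\mathcal{M}_{E,g}}$ above can be spot-checked numerically; the sketch below uses sample values $d=2$, $z=4$, $s=0.7$ (assumptions for illustration), and regulates the frequency integral with a Gaussian regulator $\eta$ as in the main text:

```python
import numpy as np
from scipy.integrate import quad
from math import gamma, pi, sqrt

d, z, s = 2, 4, 0.7  # sample dimension, dynamical exponent, heat kernel time

# Spatial factor: int d^d k/(2*pi)^d exp(-s k^z) by direct radial integration,
# compared against the closed form quoted above.
radial, _ = quad(lambda q: q**(d - 1) * np.exp(-s * q**z), 0, np.inf)
area = 2 * pi**(d / 2) / gamma(d / 2)  # area of the unit (d-1)-sphere
spatial_direct = area / (2 * pi)**d * radial
spatial_closed = (2 / gamma(d / 2)) * gamma(d / z + 1) / (d * sqrt(4 * pi * s**(2 / z))**d)
assert abs(spatial_direct - spatial_closed) < 1e-8

# Frequency factor with the eta-regulator: int dw/(2*pi) exp(-2i*m*s*w - eta*w^2)
# = exp(-m^2 s^2 / eta) / (2 sqrt(pi*eta)), a nascent delta function in m:
# it integrates to 1/(2s) and concentrates at m = 0 as eta -> 0.
eta = 1e-4
freq = lambda mm: np.exp(-mm**2 * s**2 / eta) / (2 * np.sqrt(pi * eta))
total, _ = quad(freq, -1.0, 1.0)  # peak width ~ sqrt(eta)/s << 1, so [-1,1] suffices
assert abs(total - 1 / (2 * s)) < 1e-6
assert freq(0.1) < 1e-12  # negligible away from m = 0
```
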
If we Wick rotate to Euclidean time $\tau$, the eigenvalues of the operator $\mathcal{M}_{rc;d+2}$ are given by $\omega^{2}+(k^{d+2})^{2}+(|\vec{k}|^2)^{z/2}\geq 0$. The presence of $\delta(m)$ can be treated more formally with an extra regulator $\eta$, as discussed in the last few paragraphs of \[sec:heatKdircalcsub\] for $z=2$; a similar argument, using the regulator $\eta$, applies to any $z$. Riemann normal co-ordinate and coincident limit {#riemann} =============================================== In this appendix we show the $x^-$ independence of quantities relevant to the computation of the coincident limit of the heat kernel when the light cone reduction technique is used. We assume that the daughter theory is coupled to a Newton-Cartan structure satisfying the Frobenius condition, [*i.e.*]{}, $\vec{n} \wedge d\vec{n}=0$. This condition allows a global foliation of the manifold. Thus, without loss of generality, the metric is given by $$\begin{aligned} \bdr g_{\mu\nu}&=n_{\mu}n_{\nu}+h_{\mu\nu}\\ n_{\mu}&=(n,0,0,\cdots, 0)\ , \quad h_{\tau \nu}=0. \edr\end{aligned}$$ Using  and the fact that $h_{ij}$ is a positive definite matrix, we thus have $$\begin{aligned} h^{\tau\nu}=0\ , \quad v^{\mu}=\left(\tfrac{1}{n},0,0,\cdots, 0\right).\end{aligned}$$ The form of the metric to which the reduced theory is coupled corresponds to a parent space-time metric $G_{MN}$, with non-vanishing components given by $$\begin{aligned} \label{schoice} G_{-+}=n\,, \quad G_{ij}=h_{ij}\,.\end{aligned}$$ In addition, we assume that the parent space-time admits a null isometry, so that $h_{ij}$ and $n$ are independent of $x^{-}$. In what follows, we will work with this particular choice of metric $G_{MN}$. Without loss of generality, we choose $x_1=(0,0,\cdots,0)$ (we call it point $P$) and construct the Riemann normal co-ordinate system with the origin as the base point. 
The Riemann normal co-ordinate $y^{M}$ is given in terms of the original co-ordinate $x^{M}$ as follows [@Brewin:2009se]: $$\begin{aligned} \label{expansion} y^{M}= x^{M}+ f^{M}_{AB}x^{A}x^{B}+f^{M}_{ABC}x^{A}x^{B}x^{C}+\cdots\,,\end{aligned}$$ where the index $M$ runs over $+,-,1,2,3,\cdots,d$. In the coincident limit of the reduced theory, [*i.e.*]{}, $x_{2}^{\mu} \to 0 $ for $\mu =+,1,2,\cdots, d$ (with $x_{2}^{-}$ possibly different from $0$), we claim that $$\begin{aligned} \label{co} \left[y_2^{\mu}\right]=0, \qquad \left[y_2^{-}\right]=x_2^{-},\end{aligned}$$ where henceforth the square bracket is used to denote the coincident limit in the reduced theory. We note that $[f^{M}_{ABC...}x^{A}x^{B}x^{C}\cdots]=0$ whenever any of the indices is not $-$. Recall that the $f^{M}_{ABC\cdots}$ are constructed out of derivatives acting on the metric. Thus, $f^{M}_{\underbrace{--\cdots-}_{N\ \text{indices}}}$ can be non-zero only if it contains $N$ factors of the metric tensor $G_{-K_{i}}$, where $K_{i}$ is a running index with $i=1,2,\cdots, N$. This is because $G_{--}=0$ and derivatives cannot carry the “$-$" index, as the metric components are $x^{-}$-independent. Moreover, by dimensional analysis, $f^{M}_{\underbrace{--\cdots-}_{N}}$ contains $N-1$ derivatives. Schematically, it assumes one of the following forms $$\begin{aligned} \partial_{A_1}\cdots \partial_{A_{N-1}} G_{-K_1}\cdots G_{-K_N} G^{MA_i} G^{A_{i_1} K_{j_1}}G^{A_{i_2}A_{j_2}}\cdots G^{K_{i_3}K_{j_3}}\cdots\,,\\ \partial_{A_1}\cdots \partial_{A_{N-1}} G_{-K_1}\cdots G_{-K_N} G^{MK_i} G^{A_{i_1} K_{j_1}}G^{A_{i_2}A_{j_2}}\cdots G^{K_{i_3}K_{j_3}}\cdots\,.\end{aligned}$$ Here the derivatives are assumed to act on all possible combinations, resulting in different possible terms. 
For example, for $N=2$, one can have the following terms: $$\label{caseN2} \begin{aligned} G^{MA_1}G^{K_1 K_2}G_{-K_2}\partial_{A_1}G_{-K_1}\,,\\ G^{MK_2}G^{A_1 K_1}G_{-K_1}\partial_{A_1}G_{-K_2}\,,\\ G^{MK_2}G^{A_1 K_1}G_{-K_2}\partial_{A_1}G_{-K_1}\,. \end{aligned}$$ There cannot be any $x^-$ derivative if a term is to be non-vanishing. This implies that the indices $A_i$ are contracted among themselves, except possibly for one contracted with $G^{MA_i}$, and the indices $K_i$ are contracted among themselves. But since $G_{-K}=0$ except for $G_{-+}$, and $G^{++}=0$, any term in which two factors of the metric tensor, $G_{-K_{i_1}}$ and $G_{-K_{i_2}}$, are contracted via $G^{K_{i_1}K_{i_2}}$ vanishes. Next, we show that $\left[\Delta_{VM}\right]=1$. The expression for $\Delta_{VM}$, Eq. , involves bi-derivatives of the geodetic interval, Eq. , and the determinant of the metric. To begin with, we turn our attention to the determinant of the metric and note that $$\begin{aligned} \left[G^{\prime}(y_2)\right]=J^2(0,x_2^{-},0,\cdots,0)G(0,x_2^{-},0,\cdots,0)\,,\end{aligned}$$ where a prime indicates quantities in Riemann normal co-ordinates and $J$ is the Jacobian associated with the co-ordinate transformation . The $x^{-}$ independence in the original co-ordinates guarantees that $G(0,x_2^{-},0,\cdots,0)=G(0,0,0,\cdots,0)$, hence we have $$\begin{aligned} \label{a1} \left[G^{\prime}(y_2)\right]=\left(\frac{J(0,x_2^{-},0,\cdots,0)}{J(0,0,0,\cdots,0)}\right)^{2}G^{\prime}(0).\end{aligned}$$ Next consider the geodetic interval from point $P$ to point $Q$. In Riemann normal co-ordinates [@Brewin:2009se] $$\begin{aligned} y^{M}_{2}=y^M(Q)=y^{M}_{1}+s_{Q}\frac{dx^{M}}{ds}\bigg|_{s=0}\,,\end{aligned}$$ where $s_{Q}$ is the value of the affine parameter at $Q$ and $s=0$ at $P$, with $y^{M}_{1}=y^M(P)$. Using Eq. 
, we have $$\begin{aligned} 2\sigma(y_2,y_1) = G_{MN}(0)(y_2^{M}-y_{1}^{M})(y_2^{N}-y_{1}^{N})= G^{\prime}_{MN}(0)(y_2^{M}-y_{1}^{M})(y_2^{N}-y_{1}^{N})\,,\end{aligned}$$ where we have used $G^{\prime}_{MN}(0)=G_{MN}(0)$. It follows that $$\begin{aligned} \label{a2} \Delta_{VM}= \left(\frac{G^{\prime}(y_2)}{G^{\prime}(0)}\right)^{-1/2}\,.\end{aligned}$$ We have continued back to Minkowskian signature (the definition in Eq.  is for a metric with Euclidean signature). Since $\Delta_{VM}$ is a bi-scalar, use of Eqs.  gives $$\begin{aligned} \label{coin} \left[\Delta_{VM}\right]=\left(\frac{J(0,x_2^{-},0,\cdots,0)}{J(0,0,0,\cdots,0)}\right)^{-1}=J^{-1}(0,x_2^{-},0,\cdots,0)\end{aligned}$$ in the original co-ordinate system, $x^{M}$. Equation  is consistent with the result that $\Delta_{VM}=1$ when all the co-ordinates, including $x^{-}$, coincide, [*i.e.*]{}, when $x_{2}^{-}=0$. We aim to show that $$\begin{aligned} \left[\det\bigg(\frac{\partial y^{M}}{\partial x^{N}}\bigg)\right]= \det\bigg(\left[\frac{\partial y^{M}}{\partial x^{N}}\right]\bigg) = 1\,.\end{aligned}$$ From Eq.  we have $$\begin{aligned} \left[\frac{\partial y^{M}}{\partial x^{N}}\right] = \delta^{M}_{N}+(f^{M}_{N-}+f^{M}_{-N})x^{-}+(f^{M}_{N--}+f^{M}_{-N-}+f^{M}_{--N})x^{-}x^{-}+\cdots\,.\end{aligned}$$ Consider first the lowest two terms in the expansion. Explicitly, we have [@Brewin:2009se] $$2f^{M}_{N-}=2f^{M}_{-N}=\Gamma^{M}_{N-}=-\frac{1}{2}G^{Mi}\partial_{i}G_{N-}-\frac{1}{2}G^{M+}\partial_{+}G_{N-}+\frac{1}{2}G^{M+}\partial_{N}G_{+-}\,.$$ It follows that $f^{M}_{N-} \neq 0$ only for $M=-$ or $N=+$. Similarly, $f^{M}_{(N--)}\ne0$ only if $M=-$ or $N=+$, since [@Brewin:2009se] $$\begin{aligned} 6f^{M}_{NIJ}=\Gamma^{M}_{NE}\Gamma^{E}_{IJ}+\partial_{N}\Gamma^{M}_{IJ}\,.\end{aligned}$$ By an argument analogous to that below Eqs. 
one can show that $[f^{M}_{N--\cdots-}]=0$ (at least three $-$ subscripts). Schematically, $$\left[\bigg(\frac{\partial y^{M}}{\partial x^{N}}\bigg)\right]= \begin{pmatrix} 1 & * & * & \dots & \dots &*\\ 0 & 1 & 0 & \dots & \dots & 0\\ 0 & * & 1 & 0 & \dots & 0\\ \vdots & \vdots & &\ddots & \ddots &\\ 0 & * & 0 & 0& 1 & 0\\ 0 & * & 0 & 0& 0 &1 \end{pmatrix}$$ where a “$*$" denotes a possibly non-zero entry. Thus, the matrix has unit determinant and we have, using Eq. , $$\begin{aligned} \label{eq:deltais1} \left[\Delta_{VM}\right]=1\,.\end{aligned}$$ Lastly, we turn to the heat kernel expansion coefficients, $a_n$. They are determined by the recursive relation [@Mukhanov:2007zz] $$\begin{aligned} n a_n+\partial_{M}\sigma\partial^{M}a_n=-\Delta^{-1/2}_{VM}\mathcal{M}\left(\Delta^{1/2}_{VM} a_{n-1}\right)\,,\end{aligned}$$ with $a_0=1$, where $\mathcal{M}$ is the relativistic operator in the parent theory. The condition of $x^-$ independence of $[a_n]$, $[\partial_i a_n]$, and $[\partial_i\partial_j a_n]$ can be imposed on the recursion self-consistently. To show this, one uses the $x^-$ independence of $\left[\Delta_{VM}\right]$, $\left[\partial_i\Delta_{VM}\right]$, and $\left[\partial_i\partial_j\Delta_{VM}\right]$, which follows from an argument similar to the one used to establish Eq. 

Explicit Perturbative Calculation of the $\eta$-regularized Heat Kernel {#hketa}
================================================================================

In this appendix we give an explicit perturbative computation that shows the vanishing of the anomaly for a class of curved backgrounds.
This serves to verify the general arguments presented in the body of the manuscript in a specific, simple example, and allows us to study the $\eta$-regulated heat kernel explicitly, asking in particular whether the $\eta\to0$ limit is well defined when $m\neq 0$. To be specific, we compute the heat kernel on a curved background characterized by $$\begin{aligned} n_{\mu}=\left(\frac{1}{1-n(x)},0,0\right), &\qquad v^{\mu}=\left(1-n(x),0,0\right)\,,\\ \quad h_{ij}=\delta_{ij}\,, &\quad \sqrt{g}=\sqrt{\text{det}(n_{\mu}n_{\nu}+h_{\mu\nu})}=\frac{1}{1-n(x)}\,,\end{aligned}$$ where $n(x)$ is a function of space only and $h_{i0}=0$. This special choice is inspired by [@Auzzi:2016lxb] and additionally serves the purpose of affording a direct comparison with that work. We will perform a perturbative calculation as an expansion in $n(x)$, specializing to a $2+1$ dimensional Schrödinger field theory coupled to this background. The action is given by $$\begin{aligned} \label{Action} S=\int dtd^2x\ N\left(2m\phi^{\dagger}\imath\tfrac{1}{N}\partial_t\phi-h^{ij}\partial_i\phi^{\dagger}\partial_j\phi-\xi R\phi^{\dagger}\phi\right)\,,\end{aligned}$$ where $N(x)=\tfrac{1}{1-n(x)}$ and $R$ is the Ricci scalar of the $3+1$ dimensional geometry on which the parent theory lives. As we will see, the result of this calculation is that the Weyl anomaly corresponding to the theory described by Eq.  is given by $$\begin{aligned} \label{eq:app:A_G} \mathcal{A}_G=2\pi\delta(m) \left(-aE_4+cW^2+bR^2+dD_MD^MR\right)\,,\end{aligned}$$ where the coefficients $a,b,c,d$ are given by: $$\label{ANOMALY} \begin{aligned} a&=\frac{1}{8\pi^2}\frac{1}{360}\ , \quad b=\frac{1}{8\pi^2}\frac{1}{2}\left(\xi-\frac{1}{6}\right)^2\ ,\\ c&=\frac{1}{8\pi^2}\frac{1}{120}\ , \quad d=\frac{1}{8\pi^2}\left(\frac{1-5\xi}{30}\right)\ . 
\end{aligned}$$ These are exactly the same as in the expression for the Weyl anomaly of a relativistic complex scalar field theory[^13] living in one higher dimension [@Deser:1976yx; @Brown:1976wc; @Dowker:1976zf; @Hawking:1976ja; @Christensen:1977jc; @Duff:1977ay; @Duff:1993wm]: $$\begin{aligned} \mathcal{A}_R= \left(-aE_4+cW^2+bR^2+dD_MD^MR\right)\,.\end{aligned}$$ To arrive at this result, we proceed by considering the heat kernel of the following Euclidean operator, corresponding to the action in Eq. : $$\begin{aligned} \mathcal{M}_{E,c}=2m\tfrac{1}{N}\partial_\tau - \mathcal{D}^2+\xi R\,,\end{aligned}$$ where $$\begin{aligned} \mathcal{D}^2 &= \frac{1}{\sqrt{g}}\partial_{i}\left(\sqrt{g}h^{ij}\partial_j\right)= \partial^2 + \left(1+n\right)\left(\partial_i n\right)\partial_i\,,\\ R &= -2\partial^{2}n-2n\partial^2n-\frac{7}{2}\partial_i n \partial_i n+\cdots\,,\\ -g^{1/4}\mathcal{D}^2\left(g^{-1/4}\delta(x)\right)&= -\partial^2 \delta(x) +\delta(x) \left(\frac{1}{2}\partial^2n+\frac{1}{2}n\partial^2n+\frac{3}{4}\partial_in\partial_in\right).\end{aligned}$$ The Euclidean operator can be expressed as the flat space-time operator perturbed by the background field $n(x)$: $$\begin{aligned} \label{pert} \nonumber \langle \vec{x},\tau|\mathcal{M}_{E,c}|\vec{x}^{\prime},\tau^{\prime}\rangle&=\langle \vec{x},\tau|\mathcal{M}_{E,f}|\vec{x}^{\prime},\tau^{\prime}\rangle+mP_{1}(x)\partial_\tau\delta(\vec{x}-\vec{x}^{\prime})\delta(\tau-\tau^{\prime})\\ & \quad+P_2(x)\delta(\vec{x}-\vec{x}^{\prime})\delta(\tau-\tau^{\prime})\,,\end{aligned}$$ where the subscripts $c$ and $f$ denote the curved and flat space-time, respectively, while $E$ denotes the Euclidean nature of the operator.
Here we have introduced $$\begin{aligned} \label{P} P_1(x)=2n(x), \quad P_2(x)= \left(\frac{1}{2}\partial^2n+\frac{1}{2}n\partial^2n+\frac{3}{4}\partial_i n\partial_i n\right)-\xi \left(2\partial^2 n+2n\partial^2 n+\frac{7}{2}\partial_i n\partial_i n\right).\end{aligned}$$ The heat kernel can be obtained as a perturbative expansion in the background fields as follows: $$\begin{aligned} K(s)=\exp\left[-s\left(\mathcal{M}_{E,f}+P\right)\right]=\sum_{N=0}^{\infty}(-1)^{N}K_{N}(s)\,,\end{aligned}$$ where $K_{N}(s)$ is defined by $$\begin{aligned} \label{Kn} K_{N}(s)=\int_0^s\!\!ds_{N} \int_0^{s_{N}}\!\!\! ds_{N-1}\cdots \int_0^{s_{2}}\!\!\! ds_{1}\ G(s-s_N)PG(s_N-s_{N-1})P\cdots G(s_2-s_1)P G(s_1)\,,\end{aligned}$$ with $G(s)=e^{-s\mathcal{M}_{E,f}}$ and $P$ the perturbation , explicitly given by $$\begin{aligned} \langle \vec{x},\tau|P|\vec{x}^{\prime},\tau^{\prime}\rangle&=mP_{1}(x)\partial_\tau\delta(\vec{x}-\vec{x}^{\prime})\delta(\tau-\tau^{\prime})+P_2(x)\delta(\vec{x}-\vec{x}^{\prime})\delta(\tau-\tau^{\prime}).\end{aligned}$$ One can now complete the calculation by using the matrix element of $G(s)$, given by $$\begin{aligned} \label{fhk} \nonumber \mathcal{G}_{g,E}\left(s; (\vec{x}_2, \tau_2),(\vec{x}_1, \tau_1)\right)&\equiv \langle \vec{x}_2, \tau_2|G(s)|\vec{x}_1,\tau_1\rangle \\ & = \frac{1}{\pi}\left(\frac{1}{4\pi s}\right)^{d/2}\left[\frac{s\eta}{\left(2ms-\tau_2+\tau_1\right)^2+s^2\eta^2}\right]e^{-\frac{(\vec{x}_2-\vec{x}_1)^2}{4s}},\end{aligned}$$ which is the heat kernel of the $\eta$-regulated Euclidean operator $\mathcal{M}^{\prime}_{E,g}=2m\partial_{\tau}-\nabla^2+\eta\sqrt{-\partial^{2}_{\tau}}$, as discussed in the last few paragraphs of \[sec:heatKdircalcsub\].[^14] This reproduces Eq.  as $\eta\to0$. The evaluation of Eq.  follows the procedure sketched out in the appendix of [@Auzzi:2016lxb].
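As a consistency check on the matrix element above: the bracketed factor, together with the $1/\pi$ prefactor, is a normalized Lorentzian in $\tau_2$ of width $s\eta$ centred at $\tau_2-\tau_1=2ms$, so it collapses to $\delta(2ms-\tau_2+\tau_1)$ as $\eta\to0$. A minimal numeric sketch (the parameter values are illustrative, not taken from the text):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative values for the mass, heat-kernel time and regulator
m, s, eta = 1.0, 0.3, 0.1

def tau_factor(tau2, tau1=0.0):
    # The bracketed factor of the matrix element together with its 1/pi prefactor
    return (1.0 / np.pi) * s * eta / ((2 * m * s - tau2 + tau1) ** 2 + (s * eta) ** 2)

# Normalized Lorentzian: unit weight, peaked at tau2 = tau1 + 2*m*s, width s*eta
norm, _ = quad(tau_factor, -50.0, 50.0, points=[2 * m * s])
print(norm)  # ~ 1.0, up to O(eta) tail corrections
```

Decreasing $\eta$ leaves the integral at unity while the peak sharpens, which is the delta-function statement underlying the $\eta\to0$ limit.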
We separate the contributions from $P_1$ and $P_2$ to $K_1$ as follows: $$\begin{aligned} K_{1P_1}(s)&=\left(\frac{\tfrac{\eta}{2}}{m^2+\tfrac{\eta^2}{4}}\right)\left(\frac{-1}{4m^2+\eta^2}\right)\frac{8m^2}{\left(4\pi s\right)^{2}}\left(P_1+\tfrac{s}{6}\partial^2P_1+\tfrac{s^2}{60}\partial^2\partial^2P_1+\cdots\right),\\ K_{1P_2}(s)&=\left(\frac{\tfrac{\eta}{2}}{m^2+\tfrac{\eta^2}{4}}\right) \frac{2}{\left(4\pi s\right)^{2}}\left(sP_2+\tfrac{s^2}{6}\partial^2P_2+\cdots\right), \end{aligned}$$ and for $K_2$, which gets contributions quadratic in $P_1$ and $P_2$, as follows: $$\begin{aligned} \nonumber K_{2P_1P_1}(s)&=\frac{\left(24 m^2-2 \eta ^2\right)}{\left(\eta ^2+4 m^2\right)^2}\left(\frac{2m^2\eta}{4m^2+\eta^2}\right)\frac{1}{(4\pi s)^2}\bigg(P_1^2+\tfrac{s}{3}P_1\partial^2P_1+\tfrac{s}{6}\partial_iP_1\partial_iP_1\\ &\quad+\frac{s^2}{180}\left(6P_1\partial^2\partial^2P_1+5\partial^2P_1\partial^2P_1+12\partial_i\partial^2P_1\partial_iP_1+4\left(\partial_i\partial_jP_1\right)\left(\partial_i\partial_jP_1\right)\right)\bigg)\\ K_{2P_1P_2}(s)&=\left(\frac{\tfrac{\eta}{2}}{m^2+\tfrac{\eta^2}{4}}\right)\left(\frac{-1}{4m^2+\eta^2}\right)\frac{8m^2}{\left(4\pi s\right)^{2}}\Big(\tfrac{s}{2}P_1P_2\nonumber\\ &\qquad\qquad+\tfrac{s^2}{12}(P_2\partial^2P_1+P_1\partial^2P_2+\partial_iP_1\partial_iP_2)+\cdots\Big)\\ K_{2P_2P_1}(s)&=K_{2P_1P_2}(s)\\ K_{2P_2P_2}(s)&=\left(\frac{\tfrac{\eta}{2}}{m^2+\tfrac{\eta^2}{4}}\right) \frac{2}{\left(4\pi s\right)^{2}}\left(\tfrac{s^2}{2}P_2^2+\cdots\right)\end{aligned}$$ The anomaly is determined by the $s$-independent terms in $K_N$. 
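The $\eta$-dependent prefactors appearing in the $K_1$ and $K_2$ expressions above reduce to multiples of $\delta(m)$ as $\eta\to0$; this can be checked numerically by integrating them over $m$ at small but finite $\eta$ (a sketch using scipy; the value of $\eta$ is illustrative):

```python
import numpy as np
from scipy.integrate import quad

eta = 1e-2  # small but finite regulator

lor  = lambda m: (eta / 2) / (m**2 + eta**2 / 4)
pre1 = lambda m: lor(m) * 8 * m**2 / (4 * m**2 + eta**2)
pre2 = lambda m: ((24 * m**2 - 2 * eta**2) / (eta**2 + 4 * m**2)**2
                 * 2 * eta * m**2 / (m**2 + eta**2 / 4))

# Each prefactor carries total weight pi, pi and 2*pi respectively, while its
# support shrinks to m = 0: it acts as pi*delta(m) or 2*pi*delta(m).
weights = [quad(f, -1.0, 1.0, points=[0.0])[0] / w
           for f, w in [(lor, np.pi), (pre1, np.pi), (pre2, 2 * np.pi)]]
print(weights)  # each ~ 1.0, up to O(eta) tail corrections
```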
In the $\eta\to 0$ limit, factors of $\delta(m)$ arise, after use of the following easily verifiable limits: $$\begin{aligned} \lim_{\eta\to 0} &\ \left(\frac{\tfrac{\eta}{2}}{m^2+\tfrac{\eta^2}{4}}\right)\left(\frac{8m^2}{4m^2+\eta^2}\right)= \pi \delta(m)\ ,\\ \lim_{\eta\to 0}&\ \left(\frac{\tfrac{\eta}{2}}{m^2+\tfrac{\eta^2}{4}}\right)=\pi\delta(m)\ ,\\ \lim_{\eta\to 0}&\ \frac{24 m^2-2 \eta ^2}{\left(\eta ^2+4 m^2\right)^2}\left(\frac{2\eta m^2}{m^2+\tfrac{\eta^2}{4}}\right)=2\pi \delta(m).\end{aligned}$$ In the $\eta\to 0$ limit, the $s$-independent terms are given by $$\begin{aligned} K_{1P_1}(s)&\ni\frac{\delta(m)}{16\pi}\ \left[-\frac{1}{30}\partial^2\partial^2n\right] \ ,\\ \nonumber K_{1P_2}(s)&\ni\frac{\delta(m)}{16\pi}\ \left[\frac{1}{3}\partial^2P_2\right]\\ \nonumber &=\frac{\delta(m)}{16\pi}\ \bigg[\frac{1}{3}\bigg(\left(\frac{1}{2}-2\xi\right)\partial^2\partial^2n+\left(\frac{1}{2}-2\xi\right)\partial^2n\partial^2n+\left(\frac{1}{2}-2\xi\right)n\partial^2\partial^2n\\ &\quad +\left(\frac{5}{2}-11\xi\right)\partial_in\partial_i\partial^2n+\left(\frac{3}{2}-7\xi\right)\left(\partial_i\partial_jn\right)\left(\partial_i\partial_jn\right)\bigg)\bigg] \ ,\\ K_{2P_1P_1}&\ni\frac{\delta(m)}{16\pi}\ \left[\frac{1}{90}\left(6n\partial^2\partial^2n+5\partial^2n\partial^2n+12\partial_i\partial^2n\partial_in+4\left(\partial_i\partial_jn\right)\left(\partial_i\partial_jn\right)\right)\right] \ ,\\ \nonumber K_{2P_1P_2}+K_{2P_2P_1}&\ni\frac{\delta(m)}{16\pi}\ \left[\frac{-1}{3}(P_2\partial^2n+n\partial^2P_2+\partial_in\partial_iP_2)\right]\\ &=\frac{\delta(m)}{16\pi}\ \left[\frac{-1}{3}\left(\frac{1}{2}-2\xi\right)(\partial^2n\partial^2n+n\partial^2\partial^2n+\partial_in\partial_i\partial^2n)\right]\ ,\\ K_{2P_2P_2}&\ni\frac{\delta(m)}{16\pi}\ \left[P_2^2+\cdots\right] =\frac{\delta(m)}{16\pi}\ \left[\left(\frac{1}{2}-2\xi\right)^2\partial^2n\partial^2n+\cdots\right]\ .\end{aligned}$$ Using $$\begin{aligned} R &= -2\partial^{2}n-2n\partial^2n-\frac{7}{2}\partial_i n
\partial_i n+\cdots\ ,\\ R^2 &= 4(\partial^{2}n)^2+\cdots\ ,\quad W^2 =\frac{1}{3}(\partial^{2}n)^2+\cdots\ ,\\ E_4 &=2(\partial^{2}n)^2-2(\partial_{i}\partial_{j}n)(\partial_i\partial_jn)+\cdots\ ,\\ D_MD^MR&=-2\partial^4n-2(\partial^{2}n)^2-2n\partial^4n-13(\partial_j n)(\partial_j \partial^2n)-7(\partial_{i}\partial_{j}n)(\partial_i\partial_jn)+\cdots\ ,\end{aligned}$$ one verifies the anomaly expression in Eqs.  and . Since our calculation only fixes the value of $12b+c$, in order to break the degeneracy we use the fact that for $\xi=\frac{1}{6}$ the Wess-Zumino consistency condition precludes an $R^2$ anomaly [@Auzzi:2016lxb], and we assume that $c$ is $\xi$-independent. We emphasize that the calculation carried out here does not rely on any null cone reduction technique; hence, it lends further credence to the LCR prescription, which has correctly produced the $\delta(m)$ factor, as elucidated before.

[^1]: The unusual sign convention in our definition of $x^-$ results in the peculiar sign in Eq. .

[^2]: Positivity is required for convergence of the Gaussian integral.

[^3]: For anti-commuting fields $W =- \ln (\det(\mathcal{M}))$; the minus sign is the only difference between the commuting and anti-commuting cases, so in what follows we restrict our attention to the case of commuting fields.

[^4]: In the next few sections, we explicitly find this asymptotic form for $z=2$, while the arbitrary $z$ case is handled separately in \[sec:gener\].

[^5]: Incidentally, this shows that the anomaly is absent when $d+z$ is odd.

[^6]: Even though $\mathcal{M}_{E}$ is not a Hermitian operator, the heat kernel is well defined for any operator as long as $\text{Re}(\lambda_{k})>0$, where $\lambda_{k}$ are its eigenvalues. We explore this technical aspect in the appendix.
[^7]: The choice of regulator is suggested naturally, as it can ultimately be linked to the Minkowski form of the propagator $G= \frac{\imath}{2m\omega-k^{2}+\imath|\eta \omega|} \to \frac{\imath}{2m\omega-k^{2}+\imath 0^{+}}$.

[^8]: Alternatively, one can think of introducing the regulator only after going over to the Euclidean version. The unregulated Euclidean operator, $\mathcal{M}_{E,g}=2m\partial_{\tau}-\nabla^{2}$, is regulated to $\mathcal{M}^{\prime}_{E,g}=2m\partial_{\tau}-\nabla^2+\eta\sqrt{-\partial^{2}_{\tau}}$.

[^9]: Recall that in the parent theory $x^{\pm}=\frac{1}{\sqrt{2}}(x^1 \pm t)$. Note that we are using a non-standard sign convention in the definition of $x^-$.

[^10]: One may as well assume that both the parent and reduced theories have, in addition, $SO(d)$ rotational symmetry.

[^11]: Provided these heat kernels are well defined. We postpone this technical aspect to the appendix.

[^12]: Giving up on the requirement of locality allows $z$ to be any positive real number. In this case, the anomaly is expected to be present whenever $d+z$ is even. It might be of potential interest to look at these cases carefully and make sure that non-locality does not provide any obstruction in the anomaly calculation and that the renormalization process can be done in a consistent manner.

[^13]: The Weyl anomaly of a complex scalar field is twice that of a real scalar field.

[^14]: In curved space-time, $\mathcal{M}^\prime_{E,g}$ includes a perturbation $n(x)\eta\sqrt{-\partial^{2}_{\tau}}$ which, however, does not contribute to the anomaly in the $\eta\to0$ limit. This term’s contribution to $K_{1}$ is proportional to $\frac{\eta\left(\eta^2-4m^2\right)}{\left(\eta ^2+4 m^2\right)^2}$, which vanishes as $\eta\to 0$ without giving a $\delta(m)$ (or any derivative of $\delta(m)$). This term’s contributions to $K_2$ also vanish as $\eta\to 0$. We omit these terms for simplicity for the rest of the appendix.
--- abstract: 'We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of the Mach cone visualization. The measurement results are incompatible with the theory of ion acoustic waves. The estimate for the pressure in a strongly coupled Coulomb system and a scaling law for the complex plasma make it possible to derive an evaluation for the speed of sound, which is in a reasonable agreement with the experiments in complex plasmas.' author: - 'D. I. Zhukhovitskii' - 'V. E. Fortov' - 'V. I. Molotkov' - 'A. M. Lipaev' - 'V. N. Naumkin' - 'H. M. Thomas' - 'A. V. Ivlev' - 'M. Schwabe' - 'G. E. Morfill' title: Measurement of the speed of sound by observation of the Mach cones in a complex plasma under microgravity conditions --- \[s1\] Introduction =================== Dusty or complex plasma is a low-temperature plasma, which includes dust particles with sizes ranging from $1$ to $10^3 \;\mu {\mbox{m}}$. Due to the higher electron mobility, particles acquire a considerable electric charge. Thus, a strongly coupled Coulomb system is formed.[@1; @2; @3; @4; @5; @6; @8; @9] In such plasma, various collective phenomena at the level of individual particles take place. Complex plasmas are usually studied in gas discharges at low pressures, e.g., in the radio frequency (RF) discharges. Under microgravity conditions, a large homogeneous bulk of the complex plasma can be observed. The particles form a nearly homogeneous cloud around the center of the chamber, typically with a central void caused by the ions streaming outwards. 
The microgravity conditions are realized either in parabolic flights[@10; @11; @12; @13; @14] or onboard the International Space Station (ISS).[@10; @15; @16; @17; @18; @019; @19] Some experiments are carried out on an inhomogeneous system consisting of the particles with different diameters. The simplest example of such a system is a large particle surrounded by a dense cloud of smaller particles. Usually, this particle called the projectile moves through the cloud with a supersonic or subsonic velocity. Such projectiles are generated using controlled mechanisms of acceleration,[@11; @20] or they can appear sporadically.[@19; @21] In the latter case, agglomerates or larger particles left over from previous experiments (not removed during the cleaning procedure and accumulated at the periphery of the particle cloud) are cracked upon illumination by the laser sheet or upon shaking the chamber. Detached individual particles acquire a negative charge and then they are accelerated due to the Coulomb repulsion.[@22] In the work by Havnes [*et al.*]{},[@25] propagation of a long-wave nondispersive disturbance, which is usually called the sound, and formation of a cone corresponding to the Mach cone in a continuous medium were predicted for the strongly coupled system of the dust particles. In the first experiments, the Mach cones in the 2D plasma crystal were excited by a sphere moving faster than the lattice sound speed beneath the 2D lattice plane (Samsonov [*et al.*]{}[@26; @21]) and by applying a force from the radiation pressure of a moving laser beam (Melzer [*et al.*]{}[@30]). Later, Nosenko [*et al.*]{}[@27; @28] used the latter method of the disturbance excitation and observed several Mach cones, which were attributed by the authors to propagation of the compressional and shear wakes and to their interference. 
The shape of the Mach cones formed by nondispersive linear sound waves was calculated analytically, and the curved wings of the Mach cones were experimentally observed by Zhdanov [*et al*]{}.[@31; @32] In the recent works by Caliebe, Arp, and Piel,[@11] Jiang [*et al.*]{}[@019], and Schwabe [*et al.*]{},[@19] the excitation of the 3D Mach cones by the supersonic projectiles moving in a strongly coupled cloud of charged particles was observed. In these studies, argon was used as a buffer gas. The determined speed of sound is not much different in the performed experiments. In this work, we report the first measurement of the speed of sound in the dust cloud by observation of the Mach cones in the case that neon is used as a buffer gas. The measured speed of sound proved to be more than twice as low as for argon, which means that this quantity depends rather sensitively on the sort of a gas. We can account for observed dependences on the basis of a scaling law for the dust cloud obtained in Ref. . The paper is organized as follows. In Sec. \[s2\], we describe the experimental setup and the method of the Mach cone visualization. The details of experimental data processing and the results of speed of sound measurement are presented in Sec. \[s3\]. A qualitative interpretation for our experiment is discussed in Sec. \[s4\], and the results of this study are summarized in Sec. \[s5\]. \[s2\] Experiment ================= The experiment was performed during the 13th mission of PK-3 Plus on the ISS. The setup is described in detail in Ref. . The heart of this laboratory consists of a capacitively coupled plasma chamber with circular electrodes of 6 cm diameter and 3 cm apart. A radio frequency (RF) voltage applied to these electrodes generates a bulk of plasma. A dust cloud was formed by the microparticles injected into the main plasma with dispensers. 
Neon was used as a buffer gas at the pressures of 15 and 20 Pa, and the main microparticle cloud was composed of the monodisperse silica particles with the diameter of $1.55\;\mu {\mbox{m}}$. The diameter of the observed projectiles, estimated as that of the larger particles also present in the chamber, is most likely the same as in the experiment of Ref. [@19], i.e., $15\;\mu {\mbox{m}}$. The trajectories of the dust particles and the projectile were monitored using the optical particle detection system, which consisted of a laser illumination system and a recording system containing three progressive scan CCD-cameras. The illumination system is based on two laser diodes with $\lambda = 686\;{\mbox{nm}}$ and a continuous wave optical power of 40 mW, the light of which is focused to a thin sheet. This laser light sheet has a full width at half maximum of about $80\;\mu {\mbox{m}}$ at the focal axis. The cameras with different magnifications and fields of view recorded the light scattered by the microparticles at $90^\circ $. To analyze the microparticle motion, we used three cameras with different fields of view and resolutions, which showed the entire microparticle cloud between the electrodes. The plasma glow was filtered out. The cameras follow the PAL standard with a resolution of $768 \times 576$ pixels. Each camera provides two composite time interlaced video channels with 25 Hz frame rate. Both video channels from one camera were selected for recording, so they were combined into a 50 Hz progressive scan video. ![\[f1\](a) Snapshot of the projectile moving through the cloud of dust particles with a supersonic speed; (b) enlarged fragment of the snapshot; and (c) the result of the Mach cone visualization. The neon gas pressure is 20 Pa.](1){width="9.44cm"} We observed two events of the projectile motion through the dust cloud. Figure \[f1\] shows the first one.
The images were recorded by the quadrant view camera with the resolution of $49.6\;\mu {\mbox{m}}$ and $45.05\;\mu {\mbox{m}}$ per pixel in horizontal and vertical directions, respectively, at 50 frames/s. The projectile moved with a supersonic velocity from the upper left to the lower right side of the dust cloud \[Fig. \[f1\](a)\]. The track of the moving projectile is surrounded by a dust-free region (cavity), which emerges as a result of a strong Coulomb repulsion between the negatively charged dust particles and the projectile \[Fig. \[f1\](b)\]. The cavity is elongated, the position of a projectile being eccentric. A comparison with \[Fig. \[f1\](c)\] shows that in the center of a perturbation propagating through the dust cloud, the number density of dust particles is continuous both in the vicinity of the cavity and far apart from it. The perturbation proper has a typical form of the Mach cone. On this basis, we can conclude that observed perturbation is a *contact discontinuity*.[@23] Visualization of the Mach cone included the comparison of corresponding pixels for each pair of two successive video frames converted to 8-bit grayscale mode negative images. If a gray value for the latter image was within 10% of that for the former image, a corresponding pixel of the resulting image was left blank \[i.e., it is white in Fig. \[f1\](c)\]. Otherwise, the pixel assumed the value of the former image. Although the dust particles form a strongly coupled system, they move around their equilibrium positions in the dust crystal. Typically, the positions of a dust particle in the successive frames differ by at least one pixel. Due to a low velocity, a point rather than a track in the image represents a particle. Consequently, many more of the unperturbed dust particles appear in the resulting image. However, if a particle finds itself in the contact discontinuity region, it is represented by a track due to a considerable velocity inside the perturbation. 
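The pixel-comparison rule just described can be sketched as follows. The function name is ours, and taking “within 10%” relative to the earlier frame's gray value is our assumption; the text does not specify the reference value:

```python
import numpy as np

def mach_cone_mask(prev_frame, next_frame, tol=0.10):
    """Frame-differencing visualization of the Mach cone.

    Both arguments are 8-bit grayscale negative images (uint8 arrays) from
    two successive video frames.  A pixel whose gray value in the later
    frame is within `tol` (10%) of its value in the earlier frame is left
    blank (white, 255); otherwise it keeps the earlier frame's value.
    """
    prev = prev_frame.astype(float)
    nxt = next_frame.astype(float)
    # assumption: "within 10%" measured relative to the earlier frame's value
    unchanged = np.abs(nxt - prev) <= tol * np.maximum(prev, 1.0)
    return np.where(unchanged, 255, prev_frame).astype(np.uint8)
```

Applied to each pair of successive frames, the white regions then mark both the dust-free areas and the overlapping perturbation images.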
The tracks of neighboring particles overlap. Thus, a small region of the pixels with almost equal gray values is formed. Since the time interval between successive frames is 0.02 s, this region is shifted by 0.02 cm at the propagation velocity of $1\;{\mbox{cm/s}}$. If the perturbation thickness is larger than 0.02 cm (in our experiment, it is about 0.04 cm), its images in the successive frames partially overlap. This area of overlap is represented by the white pixels in the resulting image, which makes it possible to visualize the Mach cone in Fig. \[f1\] (obviously, the dust-free regions are also represented by the white pixels). Note that the efficiency of other methods used in Refs.  would be insufficient for the visualization if neon was used as a carrier gas. \[s3\] Determination of the speed of sound ========================================== It is well-known that the Mach angle $\theta$ is related to the speed of sound $c_s$ by the Mach cone relation $\sin \theta = c_s /u$, where $u$ is the velocity of the perturbation source (in our case, this is the projectile velocity), i.e., $$c_s = u\sin \theta . \label{e1}$$ The projectile velocity was determined by manual measurement of the positions of the projectile track centers in different frames. The velocity proved to increase from $4.6$ to $5.8\;{\mbox{cm/s}}$ as the projectile crossed the dust cloud. The estimates show that such acceleration along with a slight curvature of the projectile trajectory, which could lead to a bend of the rulings of a cone, would have a negligibly small effect on the result of determination of $c_s$ as compared to the measurement errors. The Mach angle can be determined using the vectors ${\bf{r}}_d$ and ${\bf{r}}_u$ that coincide with the lower and upper rulings of the Mach cone, respectively: $$\sin \theta = {\displaystyle{1 \over {\sqrt 2 }}}\left( {1 - {\displaystyle{{{\bf{r}}_u \cdot {\bf{r}}_d } \over {r_u r_d }}}} \right)^{1/2} . 
\label{e2}$$ Alternatively, one can measure the Mach angle $\theta _d$ ($\theta _u $) between the vector ${\bf{r}}_d$ (${\bf{r}}_u $) and the projectile displacement vector ${\bf{s}}$, which coincides with the projectile track. Then $$\begin{array}{*{20}c} {\sin \theta = {\displaystyle{{\sin \theta _u + \sin \theta _d } \over 2}},} \\ {\sin \theta _{u,d} = \left[ {1 - \left( {{\displaystyle{{{\bf{r}}_{u,d} \cdot {\bf{s}}} \over {r_{u,d} s}}}} \right)^2 } \right]^{1/2} .} \\ \end{array} \label{e3}$$ ![\[f2\]Speed of sound as a function of the length of the projectile path in the dust cloud $x$ (the first event). The methods of measurement are discussed in the text.](2){width="9.2cm"} We determined the coordinates of the vectors ${\bf{r}}_d $, ${\bf{r}}_u $, and ${\bf{s}}$ manually in each frame, which allowed one to measure $c_s $. Figure \[f2\] illustrates the results. *Method 1* denotes the calculation by the formula (\[e2\]); *method 2* implies averaging of $\sin \theta$ given by Eqs. (\[e2\]) and (\[e3\]). It is seen that both methods yield close results and no apparent dependence on the coordinate can be revealed within experimental errors. The averaging over the entire path of the projectile leads to the estimate $c_s = 0.96 \pm 0.14\;{\mbox{cm/s}}$. The second event of the projectile motion through the dust cloud was detected at the neon pressure of 15 Pa. The other parameters were the same as for the first event. Here, the Mach cone can be resolved only in three processed images, which increases the error. The average projectile velocity for this event $u = 2.4\;{\mbox{cm/s}}$ is more than twice as low as for the above-discussed event, and the speed of sound still amounts to $c_s = 0.97 \pm 0.51\;{\mbox{cm/s}}$, which is very close to the previous estimate. Thus, $c_s$ is more than twice as low as for argon[@11; @019; @19] revealing the effect of the gas sort. 
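For concreteness, Eqs. (\[e1\])–(\[e3\]) translate into a short routine; the function and argument names are illustrative:

```python
import numpy as np

def speed_of_sound(r_u, r_d, s_vec, u):
    """Estimate c_s = u*sin(theta) from the Mach cone geometry.

    r_u, r_d -- vectors along the upper and lower rulings of the cone
    s_vec    -- projectile displacement vector (along the track)
    u        -- projectile speed
    Returns the method-1 and method-2 estimates discussed in the text.
    """
    unit = lambda v: np.asarray(v, float) / np.linalg.norm(v)
    # Method 1: Eq. (e2), half-angle between the two rulings
    sin1 = np.sqrt(0.5 * (1.0 - unit(r_u) @ unit(r_d)))
    # Eq. (e3): angles between each ruling and the projectile track
    sin3 = 0.5 * sum(np.sqrt(1.0 - (unit(r) @ unit(s_vec)) ** 2)
                     for r in (r_u, r_d))
    # Method 2: average of the Eq. (e2) and Eq. (e3) values
    sin2 = 0.5 * (sin1 + sin3)
    return u * sin1, u * sin2
```

For a symmetric cone with half-angle $30^\circ$ and $u = 5\;{\mbox{cm/s}}$, both methods give $c_s = 2.5\;{\mbox{cm/s}}$.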
![\[f3\]Damping rate as a function of the length of the projectile path in the dust cloud $x$ (the first event).](3){width="9.4cm"}

A clear resolution of the Mach cone rulings makes it possible to estimate the damping rate of the propagating perturbation as $\nu = (u/r)\cos \theta $, where $r$ is the base radius of the Mach cone. Like $c_s $, $\nu$ reveals no apparent dependence on the coordinate along the projectile path (Fig. \[f3\]). Its average amounts to $\nu = 46 \pm 3\;{\mbox{s}}^{ - 1}$ for the first event and to $\nu = 32 \pm 13\;{\mbox{s}}^{ - 1}$ for the second one. For the first event, the damping length $l = c_s /\nu \approx 0.021\;{\mbox{cm}}$, which is of the same order of magnitude as the visible perturbation wavelength $\lambda \approx 0.026\;{\mbox{cm}}$. Both lengths are on the scale of ca. three to four interparticle distances.

\[s4\] Discussion
=================

Under the conditions of our experiment, the volume charge of the electrons is negligibly small compared with that of the ions and particles,[@22] and the complex plasma can be treated as a system of negatively charged particles on a uniform positive background of the ions. Such a situation is characteristic of strongly coupled Coulomb systems under high energy density.
Similarly to the theory of an ideal collisionless plasma, the perturbation treated in this work is commonly associated with the ion acoustic wave with the speed of sound[@24] $$c_s = \left( {{\displaystyle{{\left| {Z_d } \right|T_d } \over {M_d }}}} \right)^{1/2} , \label{e4}$$ where $Z_d = a_d T_e \Phi _d /e^2$ is the dust particle charge in units of the elementary charge $e$, $a_d$ is the particle radius, $T_e$ is the electron temperature (the Boltzmann constant is set to unity), $\Phi _d = e\varphi /T_e $, $\varphi$ is the electrostatic potential of a particle, $T_d$ is the particle temperature that is related to the average kinetic energy of a particle, $M_d = (4\pi /3)\rho a_d^3$ is the particle mass, and $\rho$ is the density of the particle material. For our experiment, $T_e \simeq 7\;{\mbox{eV}}$. If we use the orbital motion limited approximation[@9] for the determination of $\Phi _d$ and set $T_d = T_n $, where $T_n = 300\;{\mbox{K}}$ is the temperature of the buffer gas (room temperature), we arrive at the estimate $c_s = 10.2\;{\mbox{cm/s}}$. Obviously, this is still only a *lower bound*. Thus, we can ascertain at least one order of magnitude disagreement between experiment and theory, which cannot be removed by existing alternative approaches to the calculation of the particle charge. Therefore, the formula (\[e4\]) is *fully inapplicable* for strongly coupled Coulomb systems. For such systems, we will derive an alternative estimate for the speed of sound $c_s $. Obviously, $c_s^2$ is proportional to the ratio of the pressure, which is of the order of magnitude of $Z_d^2 e^2 n_d^{4/3} $, to the mass density of the dust cloud $(4\pi /3)a_d^3 \rho n_d $. Hence, $$c_s^2 \sim {\displaystyle{3 \over {4\pi }}}{\displaystyle{{e^2 n_i^2 } \over {\rho a_d^3 n_d^{5/3} }}}, \label{e5}$$ where $n_i$ is the ion number density and we used the quasineutrality condition $\left| {Z_d } \right|n_d \simeq n_i $. It was shown in Ref. 
that the overlap of potentials of the dust particles, which scatter streaming ions, leads to a scaling law for the dust cloud that relates the particle number density to the particle radius: $n_d^{ - 2/3} = (4\pi /3)^{2/3} \kappa T_e a_d $, where $\kappa$ is some constant (the “dust invariant"). Substituting this into (\[e5\]), we derive $$c_s \simeq \left( {{\displaystyle{{4\pi } \over 3}}} \right)^{1/3} (\kappa T_e )^{5/4} {\displaystyle{{en_i } \over {\rho ^{1/2} a_d^{1/4} }}}. \label{e6}$$ Unfortunately, neither $n_d$ nor $n_i$ is available for the experiment with neon. For this reason, we test the relation (\[e6\]) on the experiments with argon (Table \[t1\]). It is seen that for $n_i = 5.5 \times 10^8 \;{\mbox{cm}}^{ - 3} $, which is close to typical values for argon,[@22] this relation yields reasonable agreement with the experiment and reproduces the observed dependences of the speed of sound on the particle radius and electron temperature. Equation (\[e6\]) reproduces the determined speed of sound for neon ($0.96\;{\mbox{cm/s}}$) at $n_i = 1.0 \times 10^8 \;{\mbox{cm}}^{ - 3} $, which also seems to be a reasonable value because the ion number density in neon is typically one order of magnitude lower than that in argon[@18] (however, $\kappa$ may be different for neon). Note that (\[e4\]) disagrees with the experiments with argon as well, although the disagreement is not as great as for neon. For example, it leads to $c_s = 4.4\;{\mbox{cm/s}}$ for the experiment[@19]. $p$, Pa $2a_d ,\;\mu {\mbox{m}}$ $c_s$ (exp.), ${\mbox{cm/s}}$ $c_s$ (\[e6\]), ${\mbox{cm/s}}$ --------- -------------------------- ------------------------------- --------------------------------- 9.6 1.55 2.4 (Ref. ) 2.26 10 2.55 2.2 (Ref. ) 2.18 30 9.55 2.0 (Ref. ) 2.14 : Speed of sound $c_s$ in experiments with argon as a buffer gas at different argon pressures $p$ and dust particle diameters vs. the theoretical estimate, Eq. 
(\[e6\]) at $\kappa = 0.209\;{\mbox{cm/eV}}$ [@22] and $n_i = 5.5 \times 10^8 \;{\mbox{cm}}^{ - 3} $.[]{data-label="t1"} Consider the damping of a propagating perturbation. The damping due to friction between the dust particles and the buffer gas (the neutral, or Epstein, drag[@33]), which always takes place in the complex plasma, is characterized by the friction coefficient[@6] $\nu _n = (8\sqrt {2\pi } /3)\delta m_n n_n v_{T_n } a_d^2 /M_d $, where $\delta \simeq 1.4$ is the accommodation coefficient, $m_n$ is the mass of a buffer gas molecule, $n_n$ and $v_{T_n } = (T_n /m_n )^{1/2}$ are the number density and thermal velocity of the buffer gas molecules, respectively. For our experiments, $\nu _n = 86\;{\mbox{s}}^{ - 1} \;$ at a neon pressure of $20\;{\mbox{Pa}}$ and $65\;{\mbox{s}}^{ - 1} \;$ at $15\;{\mbox{Pa}}$. The comparison with the above-discussed measurements of the damping coefficient $\nu$ shows that damping of the particle motion by the neutral drag must dominate; in this case, however, the wave extinction requires a more accurate treatment. It is of interest to compare our results with those obtained from Mach cone observations in 2D systems.[@26; @21; @30; @27; @28] The first difference is the single 3D Mach cone observed in this work (single cones were also observed in an argon discharge[@11; @019; @19]) vs. a double cone for a 2D system[@26; @21] (in Refs.  and , three to four cones were observed). A double cone structure is sometimes associated with the propagation of compressional and shear waves.[@28] While it is in principle possible to excite both compressional and shear waves in a 2D system, shear waves are unlikely to be observed in the 3D complex plasma. Indeed, in the vicinity of a projectile, local melting of the dust crystal occurs,[@34] and the medium for the wave propagation becomes liquid. This rules out the shear waves and accounts for the single cone structure in our case. 
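As a numerical sanity check, Eq. (\[e6\]) can be evaluated directly in CGS units. The particle radius and material density used below are assumed illustrative values (a particle of radius 1 $\mu$m with $\rho = 1.51$ g/cm$^3$, typical of melamine formaldehyde), not parameters quoted in the text; with $\kappa = 0.209$ cm/eV, $T_e = 7$ eV and $n_i = 10^8$ cm$^{-3}$ the estimate lands near the $\approx$1 cm/s measured for neon, and the weak $a_d^{-1/4}$ dependence is explicit.

```python
import math

E_STATC = 4.803e-10   # elementary charge in CGS units (statcoulomb)

def c_s_eq6(kappa, T_e, n_i, rho, a_d):
    """Eq. (e6): c_s = (4*pi/3)^(1/3) * (kappa*T_e)^(5/4) * e*n_i
    / (rho^(1/2) * a_d^(1/4)).  Units: kappa in cm/eV, T_e in eV,
    n_i in cm^-3, rho in g/cm^3, a_d (particle radius) in cm;
    the result is in cm/s."""
    return ((4.0 * math.pi / 3.0) ** (1.0 / 3.0)
            * (kappa * T_e) ** 1.25
            * E_STATC * n_i
            / (math.sqrt(rho) * a_d ** 0.25))

# Assumed illustrative particle: a_d = 1 um radius, rho = 1.51 g/cm^3.
cs_neon = c_s_eq6(kappa=0.209, T_e=7.0, n_i=1.0e8, rho=1.51, a_d=1.0e-4)
```

The same function makes the scaling transparent: a sixteenfold increase of the particle radius only halves the predicted speed of sound.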
Note that both the number of Mach cones in the 2D case and their nature remain controversial. The second difference between the systems is the dissipation. In the experiments with 2D systems, the buffer gas pressure was an order of magnitude lower, and the particle radius several times larger, than in our system (the particle diameter varied from about $5$ to $9\;\mu {\mbox{m}}$ in Ref. ). Thus, the damping rate in our case exceeds that for the conditions of Ref.  by almost two orders of magnitude. Apparently, the high damping is responsible for the absence of the interference patterns in our 3D case, while these patterns are observed for a 2D system.[@21; @28] As for the effect of dissipation, the Epstein drag qualitatively accounts for the observed dependence $l(\nu )$ in our case: the damping length decreases with increasing $\nu$ (see the discussion above). In the 2D case, by contrast, the opposite trend was registered.[@21] A significant similarity between the two systems is that only the longest-wavelength perturbation can be excited, because $l \sim \lambda $ and both quantities are on the scale of several interparticle distances. This allows one to conclude that for both systems the observed Mach cones correspond to nondispersive waves and define the speed of sound. It is noteworthy that the speed of sound determined in Ref.  amounts to ca. $2\;{\mbox{cm/s}}$ and is independent of both the sort of gas (argon, xenon, krypton) and the particle diameter. Almost the same speed of sound is listed in Table \[t1\]; likewise, $c_s$ depends only weakly on the particle diameter. It is then surprising that for neon, which was not investigated in the previous studies, we obtained a speed of sound that is half that for the other gases. 
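The Epstein drag rate quoted above can likewise be reproduced numerically. The particle radius and material density are again assumed illustrative values ($a_d = 1\;\mu$m, $\rho = 1.51$ g/cm$^3$, not quoted in the text); with them the formula gives $\nu_n \approx 84$ s$^{-1}$ at 20 Pa of neon, close to the quoted 86 s$^{-1}$, and scales linearly with pressure.

```python
import math

K_B = 1.3807e-16   # Boltzmann constant, erg/K
AMU = 1.6605e-24   # atomic mass unit, g

def epstein_rate(p_pa, T_n, m_n, a_d, rho, delta=1.4):
    """Epstein friction rate nu_n = (8*sqrt(2*pi)/3) * delta * m_n * n_n
    * v_Tn * a_d^2 / M_d, in 1/s.  p_pa: gas pressure in Pa; T_n: gas
    temperature in K; m_n: molecule mass in g; a_d: particle radius in cm;
    rho: particle material density in g/cm^3."""
    p_cgs = 10.0 * p_pa                           # Pa -> dyn/cm^2
    n_n = p_cgs / (K_B * T_n)                     # gas number density, cm^-3
    v_tn = math.sqrt(K_B * T_n / m_n)             # gas thermal velocity, cm/s
    m_d = (4.0 * math.pi / 3.0) * rho * a_d ** 3  # dust particle mass, g
    return (8.0 * math.sqrt(2.0 * math.pi) / 3.0) * delta \
        * m_n * n_n * v_tn * a_d ** 2 / m_d

# Assumed illustrative particle in neon gas (20.18 amu molecules):
nu_20 = epstein_rate(20.0, 300.0, 20.18 * AMU, 1.0e-4, 1.51)
nu_15 = epstein_rate(15.0, 300.0, 20.18 * AMU, 1.0e-4, 1.51)
```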
\[s5\] Conclusion ================= To summarize, we used the excitation of Mach cones by large particles moving at supersonic velocity to measure the speed of dust sound in a complex plasma with neon as a buffer gas. For this purpose, a high-definition method of Mach cone visualization was developed. The determined speed of sound proved to be more than one order of magnitude lower than that predicted by the theory of the ion acoustic waves. We propose an interpretation of these results based on the similarity between a strongly coupled Coulomb system and a solid. Using a scaling law that relates the dust particle number density to the particle radius, we obtained a theoretical estimate for the speed of sound that describes the main regularities of sound propagation in complex plasmas. The authors gratefully acknowledge the support from the Russian Science Foundation (Project No. 14-12-01235) for the theoretical interpretation of the experimental results and the support from DLR/BMWi (Grants Nos. 50WM0203 and 50WM1203) for realizing our joint space experiments on the ISS. 
doi:10.1103/PhysRevLett.72.4009
doi:10.1103/PhysRevLett.73.652
doi:10.1143/JJAP.33.L804
doi:10.1016/j.physrep.2005.08.007
doi:10.1103/RevModPhys.81.25
doi:10.1088/0034-4885/73/6/066501
doi:10.1088/1367-2630/8/1/007
doi:10.1063/1.3606468
doi:10.1103/PhysRevE.77.026407
doi:10.1103/PhysRevE.83.016402
doi:10.1103/PhysRevE.83.066404
doi:10.1088/1367-2630/10/3/033037
doi:10.1103/PhysRevLett.83.1598
doi:10.1103/PhysRevLett.106.205001
doi:10.1088/1367-2630/10/3/033036
doi:10.1209/0295-5075/85/45002
doi:10.1209/0295-5075/96/55001
doi:10.1063/1.3568839
doi:10.1103/PhysRevE.61.5557
doi:10.1063/1.4881473
doi:10.1116/1.580119
doi:10.1103/PhysRevLett.83.3649
doi:10.1103/PhysRevE.62.4162
doi:10.1103/PhysRevLett.88.135001
doi:10.1103/PhysRevE.68.056409
doi:10.1103/PhysRevE.66.026411
doi:10.1103/PhysRevE.69.026407
doi:10.1103/PhysRev.23.710
doi:10.1103/PhysRevE.86.016401
--- abstract: 'We present a photoluminescence study of freestanding and Si/SiO$_2$ supported single- and few-layer MoS$_2$. The single-layer exciton peak (*A*) is only observed in freestanding MoS$_2$. The photoluminescence of supported single-layer MoS$_2$ originates instead from the *A*$^-$ (trion) peak, as the MoS$_2$ is *n*-type doped by the substrate. In bilayer MoS$_2$, the van der Waals interaction with the substrate decreases the indirect band gap energy by up to $\approx$ 80 meV. Furthermore, the photoluminescence spectra of suspended MoS$_2$ can be influenced by interference effects.' author: - Nils Scheuschner - Oliver Ochedowski - 'Anne-Marie Kaulitz' - Roland Gillen - Marika Schleberger - Janina Maultzsch title: 'Photoluminescence of freestanding single- and few-layer MoS$_2$' --- Introduction ============ Two-dimensional crystals of molybdenum disulfide (MoS$_2$) show great potential for novel nanoelectronic devices.[@Radisavljevic2011; @Lembke2012] Bulk MoS$_2$ is an indirect semiconductor with a band gap of 1.29 eV.[@Gmelin] In absorption it shows two excitonic transitions, *A* ($\approx$ 1.8 eV) and *B* ($\approx$ 2.0 eV), at the *K* point of the Brillouin zone [@Evans1965]. Owing to its indirect band gap, bulk MoS$_2$ shows virtually no photoluminescence. Reducing the thickness of the MoS$_2$ to a few layers leads to a strong increase in photoluminescence yield[@Mak2010; @Splendiani2010]. The emerging photoluminescence is attributed to the increase of the indirect band gap due to confinement effects, which leads to a transition to a direct semiconductor for single-layer MoS$_2$[@Mak2010; @Splendiani2010; @Cheiwchanchamnangij2012]. This change in the electronic band structure was also directly measured in angle-resolved photoemission spectroscopy.[@Jin2013] Figure \[Sample\](a) shows photoluminescence spectra of single- and bilayer MoS$_2$ on Si/SiO$_2$ substrate. 
For more than one layer, an additional low-energy feature *I* is observed, which is attributed to the indirect gap.[@Mak2010] As predicted by band structure calculations, the peak position of *I* strongly depends on the layer number, see Fig. \[Sample\](b). Mak *et al.* [@Mak2010] reported strong photoluminescence of freestanding single-layer MoS$_2$ with a peak position of 1.90 eV. For the freestanding single layer they found an increase in photoluminescence intensity of two orders of magnitude compared to the bilayer. For MoS$_2$ on Si/SiO$_2$ substrates, on the other hand, an increase of only approximately 40% was reported; the photoluminescence peak position was determined to be 1.83 eV.[@Splendiani2010] These large differences in quantum yield and transition energy raise the question of how the substrate influences the optical properties of single- and few-layer MoS$_2$. In this work we present a direct comparison of the photoluminescence of freestanding and supported (Si/SiO$_2$ substrates) single- and few-layer MoS$_2$. For few-layer MoS$_2$, we show that, besides the confinement effects leading to the indirect-direct transition [@Mak2010; @Splendiani2010], the van der Waals interaction with the substrate has a strong influence on the electronic structure at the $\Gamma$ point. This results in a blue shift of the indirect transition of up to 80 meV in bilayer MoS$_2$. For single-layer MoS$_2$ on Si/SiO$_2$ substrates, the photoluminescence peak is at $\approx$ 1.82 eV; for freestanding samples the peak blueshifts by $\approx$ 65 meV and its intensity increases by up to one order of magnitude. We attribute this to the simultaneous observation of the *A* exciton and the *A*$^-$ peak (attributed to negatively charged trions[@Mak2012c]) in freestanding single-layer MoS$_2$. 
As the exciton emission is suppressed depending on the charge carrier concentration, we conclude that single-layer MoS$_2$ on Si/SiO$_2$ substrates is effectively *n*-type doped by the substrate. This finding implies that in most cases where photoluminescence of single-layer MoS$_2$ in the energy range of $\approx$ 1.82 eV has been reported, the MoS$_2$ was *n*-type doped and the observed emission originated primarily from the *A*$^-$ peak (trion) and not from the exciton (*A*). Samples and Methods =================== ![\[Sample\] (a) Photoluminescence spectra of single-layer and bilayer MoS$_2$ on Si/SiO$_2$ substrate excited with 2.33 eV. The transitions *I*, *A* and *B* are indicated; spectra are vertically offset. (b) Indirect transition energy of few-layer MoS$_2$ (E$_I$) on Si/SiO$_2$ substrates. (c) Optical image of a single- (SL) and bilayer (BL) sample on a Si/SiO$_2$ substrate with holes; the dotted line indicates the linescan shown in Fig. \[BL\_Line\].](Skizze.eps){width="100.00000%"} We prepared freestanding MoS$_2$ layers via mechanical exfoliation of natural MoS$_2$ (SPI Supplies) onto Si/SiO$_2$ substrates. The thickness of the SiO$_2$ is 100 nm, which leads to an increased optical contrast of the atomically thin MoS$_2$ flakes [@Castellanos-Gomez2010]. A regular pattern of holes with diameters $>$3 $\mu$m was etched into the wafers. Via optical microscopy we identified single- and few-layer MoS$_2$ flakes covering the Si/SiO$_2$ substrate as well as holes, see Fig. \[Sample\](c). The layer number was verified by Raman and photoluminescence spectroscopy[@Lee2010c]. Micro-Raman and photoluminescence measurements were performed with a Horiba Labram HR spectrometer with 2.33 eV excitation energy. The photoluminescence spectra were corrected for the relative spectral sensitivity of the system with a National Institute of Standards and Technology (NIST) traceable reference white light source. The laser power was kept below 0.5 mW. 
The laser spot size was $\approx$ 0.5 $\mu$m. All measurements were performed at room temperature. Results and Discussion ====================== ![\[BL\_Line\] Photoluminescence linescan of a bilayer MoS$_2$ sample across supported and freestanding areas, following the dotted line in Fig. \[Sample\](c). (a) Unprocessed spectra. The two dashed lines indicate the spectra shown in Fig. \[BL\_I\](a). (b) Demodulated spectra.](BL_zusammen.eps){width="100.00000%"} Figure \[BL\_Line\](a) shows the evolution of the photoluminescence spectra of bilayer MoS$_2$ in a linescan from the substrate across a freestanding area, as indicated by the dotted line in Fig. \[Sample\](c). The most apparent change is that the spectra from the freestanding area show a modulation of the signal with a periodicity $\Delta E$ of $\approx$ 80 meV. Considering the geometry of the system, we attribute the modulation to interference of the directly emitted light with the light reflected from the bottom of the hole. In this case, $\Delta E$ can be calculated by $\Delta E = \frac{hc}{2L}$, where $L$ is the depth of the hole, $h$ is Planck’s constant, and $c$ is the speed of light. By optical microscopy of the cross section of one of our substrates, we determine $L$ $\approx$ 8 $\mu$m, which leads to a predicted value for $\Delta E$ of 77 meV. This is in excellent agreement with the observed value. Therefore, to correct our data for further analysis, we introduce an empirical modulation function $f_{\text{mod}}$: $$f_{\text{mod}}(E)=1+A_{\text{mod}} \cdot\cos\left(E\cdot\frac{2\pi}{\Delta E}+\varphi_{\text{mod}}\right)$$ $A_{\text{mod}}$ describes the amplitude and $\varphi_{\text{mod}}$ the phase of the modulation. On the substrate, the photoluminescence spectrum of bilayer MoS$_2$ can be fitted with two Lorentzians for the $B$ and $I$ transitions, one Gaussian for the $A$ transition, and a constant background term, see dashed line in Fig. \[BL\_I\](a). 
We multiply these three peaks by the modulation function $f_{\text{mod}}$ to fit the freestanding bilayer spectra. With the results of $\Delta E$, $A_{\text{mod}}$ and $\varphi_{\text{mod}}$ from the fits, we then demodulate the photoluminescence spectra as shown in Fig. \[BL\_Line\](b). The amplitude of the modulation varies strongly between the different holes on the substrate. We attribute this effect to varying roughness and tilting of the bottoms of the holes. Freestanding few-layer MoS$_2$ ------------------------------ Figure \[BL\_extrahiert\](a) shows the relative photoluminescence intensities of the *A*, *B* and *I* transitions normalized to the substrate values for the linescan over the bilayer sample. We define the intensity of the transitions as the area under the Lorentzian (*B* and *I*) or Gaussian (*A*) curve. The *A* exciton intensity for the freestanding parts is reduced to 60-70% of the intensity in the supported region; we also observe an analogous reduction of the Raman intensity. Such a decrease is expected compared to the supported part, where multiple reflections of the exciting and the emitted light at the Si/SiO$_2$ interface lead to an enhancement.[@Li2012d] The intensity of the *B* transition is reduced to 20% or less, as can already be seen directly in Fig. \[BL\_Line\](a). In contrast, the intensity of the *I* transition in the freestanding part is increased to 200-320%. The reason for this intensity change remains unclear from our measurements. Figure \[BL\_extrahiert\](b) shows the energies of the *A* and *I* peaks. In freestanding bilayer MoS$_2$, we observe an increase in the energy of the *I* transition by $\approx$ 80 meV, and a decrease of the *A*-transition energy by $\approx$ 15 meV, compared to the surrounding supported areas. Those shifts are already evident in the spectra of the freestanding area close to the supported area, which show no modulation effects, see Figs. \[BL\_extrahiert\](a) and \[BL\_I\](a). 
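The interference correction described above reduces to two small pieces of arithmetic: the modulation period $\Delta E = hc/2L$, and a pointwise division of the spectrum by $f_{\text{mod}}$. A minimal sketch (the spectrum below is synthetic, and the values of $A_{\text{mod}}$ and $\varphi_{\text{mod}}$ are arbitrary illustrative choices, not fit results):

```python
import numpy as np

HC_EV_NM = 1239.84  # h*c in eV*nm

def modulation_period(L_nm):
    """Interference period Delta_E = h*c/(2*L), in eV, for hole depth L (nm)."""
    return HC_EV_NM / (2.0 * L_nm)

def f_mod(E, dE, A_mod, phi_mod):
    """Empirical modulation function from the text."""
    return 1.0 + A_mod * np.cos(E * 2.0 * np.pi / dE + phi_mod)

dE = modulation_period(8000.0)   # L ~ 8 um  ->  dE ~ 0.077 eV (77 meV)

# Synthetic demodulation: a toy A-exciton line multiplied by f_mod and
# then divided by f_mod with the fitted parameters recovers the
# unmodulated line shape.
E = np.linspace(1.4, 2.2, 400)               # photon energy axis, eV
line = np.exp(-((E - 1.88) / 0.05) ** 2)     # toy Gaussian line shape
measured = line * f_mod(E, dE, 0.3, 0.5)
demodulated = measured / f_mod(E, dE, 0.3, 0.5)
```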
We will discuss in the following three scenarios as potential sources of these shifts: $(i)$ the dielectric environment[@Mao2013; @Plechinger2011], $(ii)$ strain[@Conley2013a; @He2013; @Shi2013; @Zhu2013] and $(iii)$ direct changes in the electronic band structure due to interaction with the substrate. $(i)$ It is well known that the dielectric environment can change the exciton binding energy by screening the Coulomb interaction and modifying electron-electron interactions. However, we conclude from our measurements that the *A* and *I* peaks are similarly affected by changes in the dielectric environment: On the substrate the individual peak positions show standard deviations of $\approx$ 7 meV from the center position, while their difference shows only a standard deviation of $\approx$ 2 meV, *i.e.*, both peaks shift in the same direction. Assuming that these shifts are due to local variations in the dielectric environment as well (inhomogeneities in the substrate, adsorbates on the MoS$_2$ flakes), the change of the dielectric environment cannot explain the opposite shifts of the *A* and *I* peaks when going from the substrate to the freestanding part. $(ii)$ Raman spectroscopy shows a decrease of the E$_{2g}$ phonon frequency by 2 cm$^{-1}$ in the freestanding areas relative to the supported area, see Fig. \[RamanBL\]. This shift could be due to biaxial strain, as the MoS$_2$ may sag over the hole area. To assess the effect of such slight sagging on the phonon frequencies, we simulated their evolution under applied biaxial tensile strain $\epsilon$ using density functional theory (DFT)[^1]. For the purpose of this work, we considered a representative range of $\epsilon$ = 0$-$1%, where we defined $\epsilon$ as the relative length change of the lattice vectors, *i.e.*, $\epsilon$=$\frac{\Delta a}{a}$, due to in-plane hydrostatic stress $\sigma\equiv\sigma_{xx}=\sigma_{yy}$. 
As expected, all modes experience a red shift under tensile strain, with shift rates of $\approx-$2.2 cm$^{-1}/\%$ for the A$_{1g}$ mode and $\approx-$5.2 cm$^{-1}/\%$ for the E$_{2g}$ mode. Assuming that the Raman E$_{2g}$ mode shows almost no dependence on doping[@Chakraborty2012; @Frey1999], the observed downshift would correspond to a biaxial tensile strain of $\epsilon\approx$ 0.4%. Tensile uniaxial strain causes redshifts of the *A* and *I* transitions. [@Conley2013a; @He2013] Quasiparticle band structure calculations of monolayer MoS$_2$ under biaxial strain predict similar results.[@Shi2013] For few-layer MoS$_2$, the influence of the van der Waals interaction between the layers, which leads to the direct-indirect gap transition, has to be taken into account. It is conceivable that this additional term could lead to opposite shifts of the *A* and *I* transitions of bilayer MoS$_2$ under biaxial strain. We thus calculated the corresponding electronic band structures of the biaxially strained MoS$_2$ bilayer sheets, with and without van der Waals interaction, see Fig. \[bands\]. The van der Waals interaction primarily influences the electronic states at the $\Gamma$ point, leading to a redshift of the *I* transition by 0.43 eV for unstrained bilayer MoS$_2$, compared to calculations without van der Waals interaction. This redshift increases slightly, to 0.46 eV, for 1% biaxial tensile strain. As the exciton binding energy is only marginally affected by small strain[@Feng2012], we can assume that shifts of the photoluminescence peaks arise almost purely from strain-induced changes of the fundamental band gaps. With van der Waals interactions included, both the indirect and the direct fundamental band gaps decrease linearly in the range of 0$-$1% strain with a rate of $-0.12$ eV/% for the direct transition and a larger rate of $-0.29$ eV/% for the indirect transition. 
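The strain bookkeeping in this paragraph can be made explicit. The shift rates below are the DFT values quoted in the text; the arithmetic simply combines them and shows that biaxial strain predicts redshifts for *both* gaps, i.e. the wrong sign for the observed *I*-peak shift.

```python
# Shift rates quoted in the text (DFT, bilayer MoS2, biaxial tensile strain)
E2G_RATE = -5.2            # cm^-1 per % strain, E_2g phonon
GAP_RATE_DIRECT = -0.12    # eV per % strain, direct gap (with vdW)
GAP_RATE_INDIRECT = -0.29  # eV per % strain, indirect gap (with vdW)

observed_phonon_shift = -2.0   # cm^-1, freestanding vs. supported E_2g mode

# Strain inferred from the phonon softening: ~0.4 %
strain_percent = observed_phonon_shift / E2G_RATE

# Predicted transition-energy shifts at that strain: both negative,
# i.e. both the direct (A) and indirect (I) transitions would redshift.
dE_direct = GAP_RATE_DIRECT * strain_percent      # eV
dE_indirect = GAP_RATE_INDIRECT * strain_percent  # eV
```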
Again, as both transitions shift to lower energies, tensile biaxial strain cannot explain the observed peak shifts in opposite directions. ![\[BL\_extrahiert\] (a) Indirect and A exciton transition energy extracted from the linescan for bilayer MoS$_2$. (b) *A*, *B* and *I* photoluminescence intensities normalized to the substrate values. Data points with filled (hollow) symbols were determined from the fits of the unprocessed (demodulated) spectra.](BL_extrahiert.eps){width="100.00000%"} ![\[BL\_I\] (a) Photoluminescence spectra of freestanding and supported bilayer MoS$_2$. The spectrum of the freestanding bilayer is from the region near the edge of the hole, where only very small modulation effects are observed; The dashed line shows the fitting function for the supported bilayer MoS$_2$ spectrum. (b) Difference of the indirect transition energy ($\Delta$E$_I$) of few-layer MoS$_2$ between supported and freestanding areas as a function of the layer number.](BL_I.eps){width="100.00000%"} ![\[RamanBL\] (a) Raman spectra of freestanding and supported bilayer MoS$_2$. (b) and (c) Raman mappings showing the E$_{2g}$ (a) and A$_{1g}$ (b) Raman mode positions. The dashed lines indicate the position of the hole.](RamanBL.eps){width="100.00000%"} ![\[bands\] Evolution of calculated electronic band structures for bilayer MoS$_2$ subject to biaxial tensile strain of (a) 0%, (b) 0.5% and (c) 1% as calculated using van-der-Waals corrected DFT-PBE calculations. (d) Comparison of the trends of the corresponding indirect and direct band gaps for calculations with (full symbols) and without (open symbols) van-der-Waals (vdW) corrections. The arrows denote the indirect and direct fundamental band gaps of the material. 
](MoS2_bands.eps){width="100.00000%"} $(iii)$ As strain and the dielectric environment cannot explain the shifts of the *A* and *I* peaks, we attribute the observed blue shift of the *I* peak to the sensitivity of the electronic $\Gamma$ point states to interactions with the environment. This sensitivity results from the spatial localization of the electronic states toward the outer regions of the MoS$_2$ layer.[@Splendiani2010] In multilayer MoS$_2$, it leads to the direct-indirect band gap transition because of the layer-layer interaction[@Cheiwchanchamnangij2012; @Han2011], as supported by the van der Waals calculations in Fig. \[bands\](d). We conclude from our measurements that the interaction of the supported bilayer MoS$_2$ with the adjacent amorphous SiO$_2$, similar to the interaction with another MoS$_2$ layer, leads to a redshift of the *I* transition. The influence of the substrate decreases strongly with the layer number: for more than two layers the shift of the *I* peak is less than 10 meV, see Fig. \[BL\_I\](b). In analogous measurements with two- to five-layer MoS$_2$, we did not find a systematic layer number dependence for the *A* peak position or for the shift of the *A* peak position between freestanding and supported areas. A possible explanation for this observation might be the influence of different concentrations of impurities on the photoluminescence. This seems plausible as we use MoS$_2$ from natural sources. For different few-layer MoS$_2$ samples on Si/SiO$_2$ substrates, we found variations in the energy of the *I* peak of up to 30 meV between different flakes, indicating that the strength of the interaction with the substrate also varies. Our findings have implications for the physics of multilayer heterostructures of two-dimensional crystals[@Geim2013]: these results show that in such structures the interaction between the layers can alter the electronic states of each layer such that the layers cannot be treated as isolated. 
Freestanding single-layer MoS$_2$ --------------------------------- ![\[SL\_map\] Photoluminescence intensity map of the *A* transition of a sample with freestanding single- and bilayer MoS$_2$. The dashed lines indicate the position of the hole and the border between single- (SL) and bilayer (BL) as observed in optical contrast.](SL_map){width="100.00000%"} Figure \[SL\_map\] shows a photoluminescence intensity map in the energy region of the *A* transition of a sample with freestanding bilayer and single-layer MoS$_2$. When going from the supported to the freestanding region, the intensity of the *A* transition decreases for the bilayer, whereas the intensity for the single layer increases by up to one order of magnitude. This increase cannot be caused simply by an increase of the optical absorption cross section, as we observe a small decrease of the Raman intensity, which also scales with the optical absorption cross section. In this particular sample, we observed only very weak modulation of the photoluminescence signal. ![\[SL\_Spektren\] Photoluminescence of single-layer MoS$_2$ (solid lines). Dotted lines show fit functions of the *A*, *B* and *A*$^-$ peaks. (a) Freestanding, (b) on Si/SiO$_2$ substrate, and (c) from the boundary region between supported and freestanding areas, showing an overlap of both types of spectra.](SL_Spektren.eps){width="100.00000%"} Figure \[SL\_Spektren\] shows photoluminescence spectra from the freestanding (a) and supported (b) areas as well as from the boundary between the supported and freestanding areas (c). The maximum of the photoluminescence emission shows a blue shift of $\approx$ 65 meV in the freestanding single layer compared to the supported one; the emission peak becomes asymmetric. At the boundary between supported and freestanding areas, the emission shows double-peak structures, see Fig. \[SL\_Spektren\](c). We attribute this to the simultaneous observation of the *A* peak and the *A*$^-$ peak. 
The *A*$^-$ peak was assigned by Mak *et al.*[@Mak2012c] to negatively charged trions. While the emission intensity of the trions (*A*$^-$) showed no dependence on the charge carrier concentration, the exciton (*A*) emission was strongly reduced for *n*-type doped MoS$_2$.[@Mak2012c] For graphene it is well known that the charge carrier concentration can be influenced by the substrate.[@Stampfer2012; @Bukowska2011] The observed changes in the photoluminescence of freestanding single-layer MoS$_2$ can thus be understood by assuming *n*-type doping of the MoS$_2$ through charge transfer from the substrate. Therefore, the emission of the exciton (*A*) is suppressed on the substrate; instead, the observed photoluminescence of single-layer MoS$_2$ on Si/SiO$_2$ substrate is primarily from the trion (*A*$^-$). In the freestanding areas, the MoS$_2$ layer is less doped, and the emission of the exciton becomes dominant. This is seen in the increase in intensity and the observed blue shift in Fig. \[SL\_Spektren\](a). From the photoluminescence map we determine the energy of the excitonic *A* transition of freestanding single-layer MoS$_2$ to be $1.886\pm 0.008$ eV. As the exciton is completely quenched on the substrate, the shift of the Fermi level must be at least $\approx$ 40 meV.[@Mak2012c] The full width at half maximum (FWHM) of the *A*$^-$ peak is unaffected by the substrate; we find for all spectra a value of $\approx$ 100 meV. For the *A* peak in the freestanding area we find a value of $\approx$ 47 meV. The vast majority of reported photoluminescence data from supported single-layer MoS$_2$ is in the energy range of $\approx 1.82$ eV.[@Splendiani2010; @Plechinger2011; @Bussmann2012; @Conley2013a; @Scheuschner2012a] Our results indicate that in these cases the MoS$_2$ was *n*-type doped and the observed photoluminescence originated from the *A*$^-$ peak; the pure exciton *A* peak is only observed in freestanding or otherwise undoped single-layer MoS$_2$. 
Conclusion ========== We have presented a comparison of the photoluminescence spectra of suspended and supported (on Si/SiO$_2$) single- and few-layer MoS$_2$. Freestanding single-layer MoS$_2$ shows strong photoluminescence from the *A* peak (exciton) and the $A^-$ peak (trion). For single-layer MoS$_2$ on Si/SiO$_2$, the emission from the exciton (*A* peak) is suppressed due to *n*-type doping by the substrate; instead, the photoluminescence spectrum shows only the $A^-$ peak (trion). We therefore conclude that in most cases where the reported photoluminescence of single-layer MoS$_2$ is close to the $A^-$ energy of $\approx$1.82 eV, the MoS$_2$ is *n*-type doped and shows the *A*$^-$ peak instead of the *A* exciton emission. In few-layer MoS$_2$, the van der Waals interaction with the substrate decreases the *I* peak energy (indirect band gap) compared to freestanding MoS$_2$. The influence of the substrate decreases strongly with increasing layer number: for bilayer MoS$_2$ we found a redshift of the *I* peak position by up to $\approx$80 meV, while for three to five layers it is less than 10 meV. Note: During revision of the manuscript, two related studies became available.[@Buscema2013a; @Sercombe2013] Acknowledgements ================ We thank the Fraunhofer IZM (HDI & WLP) for the preparation of the substrates. This work was supported by the European Research Council ERC under grant no. 259286 and by the SPP 1459 Graphene of the DFG.
doi:10.1038/nnano.2010.279
doi:10.1021/nn303772b
doi:10.1098/rspa.1965.0071
doi:10.1103/PhysRevLett.105.136805
doi:10.1021/nl903868w
doi:10.1103/PhysRevB.85.205302
doi:10.1103/PhysRevLett.111.106801
doi:10.1038/nmat3505
doi:10.1063/1.3442495
doi:10.1021/nn1003937
doi:10.1021/nn3025173
doi:10.1002/smll.201202982
doi:10.1002/pssr.201105589
doi:10.1021/nl4014748
doi:10.1021/nl4013166
doi:10.1103/PhysRevB.87.155304
doi:10.1103/PhysRevB.88.121301
doi:10.1103/PhysRevB.85.161403
doi:10.1103/PhysRevB.60.2883
doi:10.1038/nphoton.2012.285
doi:10.1103/PhysRevB.84.045409
doi:10.1038/nature12385
doi:10.1063/1.2816262
doi:10.1088/1367-2630/13/6/063018
doi:10.1557/opl.2012.1463
doi:10.1002/pssb.201200389
arXiv:1311.3869
doi:10.1038/srep03489
[^1]: The phonon frequencies at the $\Gamma$ point and the electronic band structures were calculated in the framework of density functional (perturbation) theory on the level of the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional as implemented in the Quantum Espresso suite. We treated the Mo (4s, 4p, 4d, 5s) and the S (3s, 3p) states as valence electrons using projector augmented waves (PAW) with a cutoff of 70 Ry. All reciprocal space integrations were performed by a discrete k-point sampling of 12x12x1 k-points in the Brillouin zone. 
For each hydrostatic strain $\epsilon$, we scaled the calculated in-plane lattice constant $a = 3.19$ Å of the unstrained MoS$_2$ bilayer sheet to $a' = (1+\epsilon)a$ and fully optimized the atomic positions until the residual forces between atoms were smaller than 0.01 eV/Å. Interactions of the sheet with residual periodic images due to the 3D boundary conditions were minimized by maintaining a vacuum layer of at least 25 Å between bilayer planes.
[**On the geometry of similarity search: dimensionality curse and concentration of measure**]{}\ [Vladimir Pestov]{}\ [School of Mathematical and Computing Sciences, Victoria University of Wellington,\ P.O. Box 600, Wellington, New Zealand.]{}\

------------------------------------------------------------------------

[**Abstract**]{}

> [We suggest that the curse of dimensionality affecting similarity-based search in large datasets is a manifestation of the phenomenon of concentration of measure on high-dimensional structures. We prove that, under certain geometric assumptions on the query domain $\Omega$ and the dataset $X$, if $\Omega$ satisfies the so-called concentration property, then for most query points $x^\ast$ the ball of radius $(1+\e)d_X(x^\ast)$ centred at $x^\ast$ contains either all points of $X$ or else at least $C_1\exp(C_2\e^2n)$ of them. Here $d_X(x^\ast)$ is the distance from $x^\ast$ to the nearest neighbour in $X$ and $n$ is the dimension of $\Omega$.]{}

[**Keywords**]{}

> [Data structures, databases, information retrieval, computational geometry, performance evaluation]{}

------------------------------------------------------------------------

Introduction
============

As the size of datasets in existence grows at an amazing rate (see e.g. Section 4.1 in [@SSU]) and workloads become ever more sophisticated, algorithms for similarity-based data retrieval often slow down exponentially with dimension, sometimes reducing to an exhaustive search (‘the curse of dimensionality’) [@BGRS; @BWY; @pyr2; @WSB]. It is important to try and understand the common geometric nature of the dimensionality curse for a great variety of different, often non-Euclidean, metric spaces representing data structures [@Brin; @CPRZ; @CPZ; @U]. In this Letter we suggest that the curse of dimensionality is a manifestation of the phenomenon of concentration of measure on high-dimensional structures.
This phenomenon is an important discovery of modern analysis, observed in a wide range of situations [@GrM; @M; @MS; @Ta]. Roughly speaking, a set $\Omega$ equipped with a distance and a probability measure has the concentration property if already for small values of $\e>0$ the ‘$\e$-fattening’ of every subset containing at least 1/2 of all elements of $\Omega$ contains all points of $\Omega$ apart from a set of almost vanishing measure $\alpha(\e)$. Here $\alpha$ is the so-called concentration function of $\Omega$. Many ‘naturally occurring’ high-dimensional structures possess the concentration property: the $n$-dimensional sphere $\s^n$, Euclidean unit ball ${\mathbb B}^n$, Hamming cube $\{0,1\}^n$, groups of permutations $S_n$ all have concentration functions of the form $\alpha(\e)=O(1)\exp(-O(1)\e^2n)$. Here we will address just one aspect of the ‘dimensionality curse,’ informally described in [@BWY] as follows:

> ‘It seems ... that this [\[exponential\]]{} complexity might be inherent in any algorithm for solving closest point problems because a point in a high-dimensional space can have many “close” neighbours.’

To formalise this account, we borrow a concept from [@BGRS]. A similarity query is called $\e$-unstable for an $\e>0$ if most points of the dataset $X$ are at a distance $<(1+\e)d_X(x^\ast)$ from the query point $x^\ast$, where $d_X$ denotes the distance to the nearest neighbour in the dataset $X$. Query instability was shown in [@BGRS] to occur under some probability assumptions on the query distribution, and it was argued that asking unstable queries is partly responsible for the dimensionality curse. It seems to us that even more important is a ‘local’ version of query instability, where the number of data points located at a distance $<(1+\e)d_X(x^\ast)$ from a query point $x^\ast$ grows exponentially in the dimension of the query domain.
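The exponential decay of the concentration function can be checked directly on the Hamming cube, where the measure of the set of strings at normalized distance more than $\e$ from the half-cube is an exact binomial tail. The following sketch (our illustration, not part of the original analysis) computes this tail and compares it with the Hoeffding bound $\exp(-2\e^2 n)$:

```python
from math import comb, exp

def hamming_alpha(n, eps):
    """Measure of the set of strings in {0,1}^n at normalized Hamming
    distance > eps from the half-cube A = {x : #ones(x) <= n/2}; this is
    an upper bound witness for the concentration function alpha(eps)."""
    # a string lies within eps of A iff its number of ones is <= n/2 + eps*n
    cutoff = n // 2 + int(eps * n)
    return sum(comb(n, k) for k in range(cutoff + 1, n + 1)) / 2 ** n

# the tail decays exponentially in n, in line with a normal Levy family
for n in (25, 100, 400):
    tail = hamming_alpha(n, 0.2)
    print(n, tail, exp(-2 * 0.2 ** 2 * n))  # exact tail vs Hoeffding bound
```

For each `n` the exact tail stays below the Hoeffding bound, and both shrink exponentially as the dimension grows.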
If such an effect prevails in a given workload, then answering the range query of radius $(1+\e)d_X(x^\ast)$ obviously takes expected exponential time, even though the query may be globally stable. In our model, a dataset $X$ is a finite metric subspace of a metric space $\Omega$ of query points, the latter being equipped with a probability measure reflecting the query distribution. We assume that $\Omega$ has the concentration property in the sense that $\alpha(\e)=O(1)\exp(-O(1)\e^2n)$, where $n$ is the ‘dimension’ of the query domain $\Omega$. Our assumption on the way $X$ sits in $\Omega$ is of a homogeneity type: the radii of open balls centred at $x$ and having measure $1/2$ are (almost) the same for all datapoints $x$. Under such assumptions we prove that if $\e>0$, then for all query points $x^\ast\in\Omega$, apart from a set of measure $O(1)\exp(-O(1)\e^2n)$, the open ball of radius $(1+\e)d_X(x^\ast)$ centred at $x^\ast$ contains either all points of the dataset $X$ or else at least $C_1\exp(C_2\e^2n)$ of them for some $C_1,C_2>0$. Thus, a typical range query of radius $(1+\e)d_X(x^\ast)$ is either unstable or takes an exponential time to answer. In particular, most queries are unstable if the size of $X$ grows subexponentially in $n$. In the Conclusion we explain a possible constructive significance of our results.

Similarity workloads
====================

Our model builds on the approaches of [@CPRZ; @CPZ] and [@HKP]. A [*similarity workload*]{} is a quadruple $(\Omega, d,\mu, X)$, where

1. $\Omega$ is a (possibly infinite) set called the [*domain*]{}, whose elements are [*query points*]{};

2. $d$ is a metric on $\Omega$, the [*dissimilarity measure*]{};

3. $\mu$ is a Borel probability measure on the metric space $(\Omega, d)$, reflecting the query point distribution;

4. $X$ is a finite subset of $\Omega$, called the [*instance*]{}, or the [*dataset*]{} proper, whose elements are [*data points*]{}.
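As a toy rendering of the quadruple (our illustration, not from the original paper; the names `Workload` and `eps_radius_nn` are ours), the domain is taken to be the unit hypercube with Euclidean dissimilarity and the uniform query distribution:

```python
import math
import random

class Workload:
    """A similarity workload (Omega, d, mu, X): the domain Omega and the
    measure mu are implicit in the sampler `mu` below, `d` is the
    dissimilarity measure, and X is the finite dataset of data points."""

    def __init__(self, X, d):
        self.X, self.d = X, d

    def nn_dist(self, q):
        """d_X(q): distance from the query point q to its nearest neighbour in X."""
        return min(self.d(q, x) for x in self.X)

    def range_query(self, q, r):
        """Range query: all data points at distance < r from q."""
        return [x for x in self.X if self.d(q, x) < r]

    def eps_radius_nn(self, q, eps):
        """The eps-radius nearest neighbours query of [BGRS]:
        a range query of radius (1 + eps) * d_X(q)."""
        return self.range_query(q, (1 + eps) * self.nn_dist(q))

random.seed(1)
n, N = 20, 500
mu = lambda: tuple(random.random() for _ in range(n))  # uniform query distribution
w = Workload([mu() for _ in range(N)], math.dist)
q = mu()
hits = w.eps_radius_nn(q, 0.3)  # in high dimension this captures many points
```

The nearest neighbour of `q` is always among `hits`, and in high dimension the same ball tends to capture a large fraction of `X`.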
Recall that a triple $(\Omega, d,\mu)$ formed by a metric space $(\Omega, d)$ and a probability Borel measure $\mu$ on it is called a [*probability metric space.*]{} Thus, a similarity workload is a probability metric space $\Omega$ together with a distinguished finite metric subspace $X$. [*Similarity queries*]{} are of two major types: a [*range query*]{} centred at $x^\ast\in\Omega$ of radius $\e>0$ (the set of all $x\in X$ with $ d(x,x^\ast)<\e$), and a [*$k$-nearest neighbours $k$-NN query*]{} centred at $x^\ast\in\Omega$, where $k\in\N$. Following [@BGRS], we say that a similarity query centred at $x^\ast$ is $\e$[*-unstable*]{} for an $\e>0$ if $$\left\vert\{x\in X\colon d(x^\ast,x)\leq (1+\e)d_X(x^\ast) \}\right\vert > \frac {\vert X\vert} 2,$$ where $d_X(x^\ast)=\min\{d(x^\ast,x)\colon x\in X\}$ is the distance from $x^\ast$ to the nearest neighbour in $X$. In [@BGRS] the following new type of queries is proposed. - An [*$\e$-radius nearest neighbours query*]{} centred at a point $x^\ast\in\Omega$, where $\e>0$, is a range query centred at $x^\ast$ of the radius $(1+\e)d_X(x^\ast)$. The concentration phenomenon ============================ The [*concentration function,*]{} $\alpha=\alpha_\Omega$, of a probability metric space $\Omega$ is defined for each $\e>0$ by $$\alpha_\Omega(\e)=1-\inf\left\{\mu\left({\mathcal O}_\e(A)\right) \colon A\subseteq \Omega \mbox{ is Borel and } \mu(A)\geq \frac 1 2\right\}$$ and $\alpha_\Omega(0)=1/2$. It is a decreasing function in $\e$. A family $(\Omega_n)_{n=1}^\infty$ of probability metric spaces is called a [*Lévy family*]{} if for each $\e>0$, $\alpha_{\Omega_n}(\e)\to 0$ as $n\to\infty$, and a [*normal Lévy family*]{} (with constants $C_1,C_2>0$) if for all $n$ and $\e>0$ $$\alpha_{\Omega_n}(\e)\leq C_1e^{-C_2\e^2n}.$$ All the families listed below are normal Lévy families, see [@GrM; @M; @MS] for exact values of constants and further examples. 
\[ex\] (1) The $n$-dimensional unit spheres ${\mathbb S}^n$ equipped with the (unique) rotation-invariant probability measure and the geodesic distance. (2) The same, with the Euclidean distance. (3) The Hamming cubes $\{0,1\}^n$ of all binary strings of length $n$, equipped with the normalised Hamming distance $d(s,t)= \frac 1 n \vert \{i\colon s_i\neq t_i\}\vert$ and the normalised counting measure $\mu_\sharp(A)=\vert A\vert/2^{n}$. (4) The groups $SO(n)$ of $n\times n$ orthogonal matrices with determinant $1$, equipped with the geodesic distance and the Haar measure. (5) The Euclidean balls ${\mathbb B}^n$ with the $n$-volume and Euclidean distance. (6) The tori ${\mathbb T}^n$ with the normalised geodesic distance and product measure. (7) The hypercubes $[0,1]^n$ with the normalised Euclidean (or $l_1$) distance. More Lévy families can be obtained using operations described in [@GrM], Sect. 2. Let $f\colon \Omega\to \R$ be a Lipschitz-1 function: $$\forall x,y\in\Omega,~~\vert f(x)-f(y)\vert\leq d(x,y).$$ Denote by $M$ a median (or Lévy mean) of $f$, that is, a real number with $$\mu(\{x\in \Omega\colon f(x)\leq M\}) =\mu(\{x\in \Omega\colon f(x)\geq M\}).$$ For every $\e>0$, $\mu\left(f^{-1}(M-\e,M+\e) \right)\geq 1-2\alpha(\e)$. \[lip\] The [*phenomenon of concentration of measure on high-dimensional structures*]{} refers to the above situation, in which the function $f$ ‘concentrates near one value.’ See [@GrM; @M; @MS; @Ta].

Concentration and similarity workloads
======================================

Let $\Omega$ be a probability metric space with the concentration function $\alpha=\alpha_{\Omega}$. The following is quite immediate. Let $A\subseteq\Omega$, $\delta>0$, and $\mu(A)>\alpha(\delta)$. Then $\mu({\mathcal O}_\delta(A))> 1/2$. \[half\] Let $\delta>0$, and let $\gamma$ be a collection of subsets $A\subseteq\Omega$ of measure $\mu(A)\leq\alpha(\delta)$ each, satisfying $\mu(\cup\gamma)\geq 1/2$.
Then the $2\delta$-neighbourhood of every point $x\in \Omega$, apart from a set of measure at most $\frac 12\alpha(\delta)^{\frac 12}$, meets at least $\lceil \frac 1 2 \alpha(\delta)^{-\frac 12}\rceil$ elements of $\gamma$. \[tech\] Partition $\gamma$ into a collection of pairwise disjoint subfamilies $\gamma_i$, $i\in I$, in such a way that for every $i$, $\alpha(\delta)\leq\mu(A_i)< 2\alpha(\delta)$, where $A_i= \cup\gamma_i$. Clearly, $(1/4)\alpha(\delta)^{-1}\leq \vert I\vert \leq (1/2)\alpha(\delta)^{-1}$. Select a subset $J\subseteq I$ with $\vert J\vert=\lceil \frac 12\alpha(\delta)^{-\frac 12} \rceil$. Lemma \[half\] implies that $$\mu\left({\mathcal O}_{2\delta}(A_i)\right)\geq \mu\left({\mathcal O}_{\delta}\left({\mathcal O}_{\delta}A_i\right)\right) \geq 1-\alpha(\delta),$$ and therefore $\cap_{i\in J}{\mathcal O}_{2\delta}(A_i)$ has measure at least $1-\vert J\vert \alpha(\delta)$. Let $(\Omega,d,\mu,X)$ be a similarity workload, with $\alpha$ as above. Denote by $M$ a median value of the function $d_X$ (distance to $X$) on $\Omega$. \[sim\] Let $\delta>0$. Then for all points $x^\ast\in\Omega$, except for a set of total mass at most $2\alpha(\delta)$, the distance to the nearest neighbour in $X$ is in the interval $(M-\delta,M+\delta)$. The function $x^\ast\mapsto d_X(x^\ast)$ is Lipschitz-1 on $\Omega$, and Prop. \[lip\] applies. Let $(\Omega, d,\mu,X)$ be a similarity workload. For an $x\in X$, denote by $R_x$ the maximal radius of an open ball in $\Omega$ centred at $x$ of measure $\leq 1/2$. Let $\e>0$. We say that $X$ is [*weakly $\e$-homogeneous*]{} in $\Omega$ if all radii $R_x$, $x\in X$ belong to an interval of length $<\e$. \(1) $X$ is weakly $\e$-homogeneous for every $\e>0$ if the group of motions preserving the measure acts transitively on $\Omega$. Such are spaces 1-4, 6 in Example \[ex\]. (2) A subspace $X$ of the ball ${\mathbb B}^n$ is weakly $\e$-homogeneous if $X$ is contained in a spherical shell of thickness $\e$.
(3) If we independently throw in $\Omega$ $N$ points $x_1,x_2,\dots,x_N$, distributed with respect to the measure $\mu$, then one can show that, with probability $\geq 1-2N\alpha(\e/2)$, the dataset $X=\{x_1,\dots,x_N\}$ is weakly $\e$-homogeneous. Query instability: local and global =================================== \[main\] Let $(\Omega, d,\mu,X)$ be a similarity workload. Denote by $M$ a median value of the distance from a query point in $\Omega$ to its nearest neighbour in $X$. Let $0<\e<1$, and assume that the instance $X$ is weakly $(M\e/6)$-homogeneous in $\Omega$. Then for all points $x^\ast\in\Omega$, apart from a set of total mass at most $3\alpha(M\e/6)$, the open ball of radius $(1+\e)d_X(x^\ast)$ centred at $x^\ast$ contains at least $$\min\left\{\vert X\vert,~ \lceil\frac 1 {2\alpha(M\e/6)^{\frac 12}}\rceil\right\} \label{N}$$ elements of $X$. Denote by $R$ the minimum of the radii $R_x, x\in X$. Let $\Delta=R-M$. \(1) If $\Delta>M\e/6$, then by Lemma \[half\] the measure of the ball ${\mathcal O}_M(x)$ cannot exceed $\alpha(\Delta)\leq\alpha(M\e/6)$, for otherwise the measure of ${\mathcal O}_R(x)$ would be $>1/2$. In particular, $\vert X\vert\geq \frac 12\alpha(M\e/6)^{-1}$. According to Lemma \[tech\] applied to the balls ${\mathcal O}_M(x)$, $x\in X$ with $\delta=M\e/6$, for all $x^\ast\in\Omega$ apart from a set of measure $\leq \frac 1 2\alpha(M\e/6)^{\frac 12}$, the $(M\e/3)$-neighbourhood of $x^\ast$ meets at least $\lceil \frac 1 2\alpha(M\e/6)^{-\frac 12}\rceil$ of such balls. \(2) If $\Delta\leq M\e/6$ (in particular, if $\vert X\vert< \frac 12\alpha(M\e/6)^{-\frac 12}\leq \frac 12\alpha(M\e/6)^{-1}$, cf. the previous paragraph), then $R_x+M\e/6\leq M(1+\e/2)$. Denote by $X'$ a subset of $X$ of cardinality $\min\{\vert X\vert, \frac 12\alpha(M\e/6)^{-\frac 12} \}$. 
Since the measure of every ball ${\mathcal O}_{M(1+\e/2)}(x)$ is at least $1-\alpha(M\e/6)$, for all $x^\ast\in\Omega$ apart from a set of measure $\leq \frac 12\alpha(M\e/6)^{\frac 12}$, the $(M\e/2)$-neighbourhood of $x^\ast$ meets every ball ${\mathcal O}_M(x)$, $x\in X'$. As a consequence of Lemma \[sim\] with $\delta=M\e/4$, for all $x^\ast\in\Omega$ apart from a set of measure at most $2\alpha(M\e/4)$, one has $\vert d_X(x^\ast)-M\vert<M\e/4$ and therefore $M(1+\e/2)\leq d_X(x^\ast)(1+\e)$. It remains to notice that $\frac 12\alpha(M\e/6)^{\frac 12}+2\alpha(M\e/4)\leq 3\alpha(M\e/6)^{\frac 12}$. Asymptotic results {#asymptotic-results .unnumbered} ------------------ Let $(\Omega_n, d_n,\mu_n,X_n)$ be an infinite collection of workloads. Denote by $M_n$ the median distances from points of $\Omega_n$ to their nearest neighbours in $X_n$. We make the following standing assumptions. 1. The query domains $(\Omega_n, d_n,\mu_n)$ form a [*normal Lévy family.*]{} 2. The values $M_n$ are bounded away from zero: $M_n\geq M>0$ for all $n\in\N$. The latter condition is only violated in very densely populated domains. For example, if $\Omega_n=\s^n$, then (2) is satisfied whenever the size of $X_n$ is not superexponential in $n$. For $\Omega_n$ finite (2) is satisfied if $\vert X_n\vert\leq \alpha_{\Omega_n}(M)\cdot\vert\Omega_n\vert$. Now let $0<\e<1$. 1. All the instances $X_n$ are weakly $(M_n\e/6)$-homogeneous in $\Omega_n$. Under the assumptions [(1)-(3),]{} for all query points $x^\ast\in\Omega_n$, apart from a set of measure $O(1)\exp(-O(1)M^2\e^2n)$, the open ball of radius $(1+\e)d_X(x^\ast)$ centred at $x^\ast$ contains either all elements of $X$ or else at least $C_1\exp(C_2M^2\e^2n)$ of them for some constants $C_1,C_2>0$ depending only on the family $(\Omega_n)_{n=1}^\infty$. 
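A small Monte-Carlo sketch (our illustration, with arbitrary parameter choices) makes the theorem's message concrete: in the uniform hypercube, the average fraction of the dataset captured by the ball of radius $(1+\e)d_X(x^\ast)$ around a random query climbs towards $1$ as the dimension grows:

```python
import math
import random

def instability_fraction(n, N=200, eps=0.3, trials=50):
    """Average fraction of a uniform dataset X in [0,1]^n that falls inside
    the ball of radius (1 + eps) * d_X(q) centred at a random query q."""
    random.seed(0)  # deterministic for reproducibility
    total = 0.0
    for _ in range(trials):
        X = [[random.random() for _ in range(n)] for _ in range(N)]
        q = [random.random() for _ in range(n)]
        dists = sorted(math.dist(q, x) for x in X)
        radius = (1 + eps) * dists[0]          # (1 + eps) * nearest-neighbour distance
        total += sum(d < radius for d in dists) / N
    return total / trials

# the captured fraction grows with the dimension of the query domain
print([round(instability_fraction(n), 2) for n in (2, 8, 32, 128)])
```

In low dimension the ball holds only a handful of points; by $n=128$ the distances have concentrated so strongly that almost the whole dataset falls inside it, i.e. the query is unstable.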
\[local\] Under the assumptions [(1)-(3),]{} for all query points $x^\ast$, apart from a set of measure $O(1) \exp(-O(1)M^2\e^2n)$, the $\e$-radius nearest neighbours query centred at $x^\ast$ either is unstable or takes an exponential time (in $n$) to answer. In addition to [(1)-(3),]{} let the size of $X_n$ grow subexponentially in $n$. Then for all query points $x^\ast\in\Omega_n$, apart from a set of measure $O(1)\exp(-O(1)M^2\e^2n)$, the similarity query centred at $x^\ast$ is $\e$-unstable: all points of $X_n$ are at a distance $< (1+\e)d_X(x^\ast)$ from $x^\ast$. It is easy to construct sequences of workloads in which most similarity queries are $1$-stable and yet for every $\e>0$ most of the $\e$-radius NN queries take time $C_1\exp(C_2\e^2n)$ to answer. Let $\delta>0$ be arbitrary. In a probability metric space $\Omega$ choose a maximal subset $X$ with the property that every two different elements of $X$ are at a distance $>\delta$ from each other. It is easy to see that centres of all $1$-unstable similarity queries in the workload $(\Omega,X)$ are contained in some ball of radius $4\delta$. Applying this procedure to every member of a normal Lévy family of homogeneous spaces of constant diameter $D$ (a typical situation) and choosing $\delta<D/8$, we obtain the desired sequence of workloads, because one can then prove that $\liminf M_n\geq \delta/2$.

Conclusion
==========

Our model links the ‘curse of dimensionality’ in multidimensional datasets to the phenomenon of concentration of measure on high-dimensional structures. All our assumptions on the query domain $\Omega$ and the dataset $X$ are purely geometric. Our estimates are by no means optimal, as we just aimed at deriving exponential lower bounds in a wide variety of situations. We believe that the most general case (absence of homogeneity in any form) can be included in the picture as well and will address the issue in future work.
Other important directions for research are to apply the concentration phenomenon to indexability theory [@HKP] and to the performance analysis of concrete hierarchical tree index structures [@BWY; @Brin; @CPRZ; @CPZ; @U]. A possible constructive significance of our results is as follows. In practice, geometrically optimal dissimilarity measures are routinely replaced with less precise distances that are computationally cheaper, with a view to subsequently discarding false hits. Such distances would in general lead to sharper concentration effects on the same measure space. It is therefore conceivable that using computationally more expensive distances will result in an overall speed-up.

Acknowledgements {#acknowledgements .unnumbered}
================

I am grateful to Paolo Ciaccia for introducing me to the problems of similarity-based information storage and retrieval, as well as for his hospitality and stimulating discussions during my visit to the University of Bologna in June 1998.

[100]{} S. Berchtold, C. Böhm, D.A. Keim, and H.-P. Kriegel, [*A cost model for nearest neighbour search in high-dimensional data space,*]{} PODS’97 (Tucson, AZ), 78–86. K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft, [*When is “nearest neighbor” meaningful?,*]{} Technical paper no. 226, CS dept., Univ. Wisconsin-Madison, to appear in: ICDT-99. J.L. Bentley, B.W. Weide, and A.C. Yao, [*Optimal expected-time algorithms for closest point problems,*]{} ACM Trans. Math. Software [**6**]{} (1980), 563–580. S. Brin, [*Near neighbor search in large metric spaces,*]{} in: Proc. of the 21st VLDB International Conf., Zurich, Switzerland, Sept. 1995, pp. 574–584. P. Ciaccia, M. Patella, F. Rabitti, and P. Zezula, [*Performance of $M$-tree, an access method for similarity search in metric spaces,*]{} EC ESPRIT report, 24 February 1997, 25 pp., downloadable from [http://www.ced.tuc.gr/hermes]{} P. Ciaccia, M. Patella, and P.
Zezula, [*A cost model for similarity queries in metric spaces,*]{} in: Proc. 17-th Annual ACM Symposium on Principles of Database Systems (PODS’98), Seattle, WA, June 1998, pp. 59–68. M. Gromov and V.D. Milman, [*A topological application of the isoperimetric inequality,*]{} Amer. J. Math. [**105**]{} (1983), 843–854. J.M. Hellerstein, E. Koutsoupias, and C.H. Papadimitriou, [*On the analysis of indexing schemes,*]{} in: PODS’97, Tucson, AZ, pp. 249–256. V.D. Milman, [*The heritage of P. Lévy in geometric functional analysis,*]{} Astérisque [**157-158**]{} (1988), 273–301. V.D. Milman and G. Schechtman, [*Asymptotic Theory of Finite Dimensional Normed Spaces,*]{} Lecture Notes in Math. [**1200**]{}, Springer-Verlag, 1986. A. Silberschatz, M. Stonebraker, and J. Ullman (eds.), [*Database research: achievements and opportunities into the 21st century,*]{} Report of an NSF Workshop on the Future of Database Systems Research, May 26–27, 1995. M. Talagrand, [*Concentration of measure and isoperimetric inequalities in product spaces,*]{} Publ. Math. IHES [**81**]{} (1995), 73–205. J.K. Uhlmann, [*Satisfying general proximity/similarity queries with metric trees,*]{} Information Processing Lett. [**40**]{} (1991), 175–179. R. Weber, H.-J. Schek, and S. Blott, [*A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces,*]{} in: Proceedings of the 24-th VLDB Conference, New York, 1998, pp. 194–205.
--- author: - 'Emanuele R. Nocera' - 'Maria Ubiali' - 'Cameron Voisey' bibliography: - 'paper.bib' title: Single Top Production in PDF fits ---

Acknowledgements {#acknowledgements .unnumbered}
================

The Authors would like to thank Jun Gao for providing them with the NNLO computation of Refs. [@Berger:2016oht; @Berger:2017zof], Marco Zaro for support with MadGraph5\_aMC@NLO, Zahari Kassabov for his help with the ReportEngine software, Reinhard Schwienhorst and Alberto Orso Maria Iorio for clarifications on the ATLAS and CMS experimental data, respectively, and the members of the NNPDF Collaboration, in particular Zahari Kassabov, Juan Rojo and Luca Rottoli, for useful comments on the manuscript. E.R.N. is supported by the European Commission through the Marie Skłodowska-Curie Action ParDHonS\_FFs.TMDs (grant number 752748). M.U. is funded by the Royal Society grant DH/150088 and supported by the STFC consolidated grant ST/L000385/1 and the Royal Society grant RGF/EA/180148. C.V. is supported by the STFC grant ST/R504671/1.
--- abstract: 'A general methodology to study public opinion, inspired by information and complexity theories, is outlined. It is based on probabilistic data extracted from opinion polls. It gives a quantitative information-theoretic explanation of the high job approval of Greek Prime Minister Mr. Constantinos Karamanlis (2004-2007), while the same time series of polls conducted by the company Metron Analysis showed that his party, New Democracy (abbr. ND), polled only slightly higher than the opposition party PASOK, led by Mr. George Papandreou. It is seen that the same mathematical model applies to the case of the popularity of President Clinton between January 1998 and February 1999, according to a previous study, although the present work extends the investigation to concepts such as complexity and Fisher information, quantifying the organization of public opinion data.' author: - | C. P. Panos[^1], K. Ch. Chatzisavvas[^2],\ [*Department of Theoretical Physics,*]{}\ [*Aristotle University of Thessaloniki,*]{}\ [*54124 Thessaloniki, Greece*]{} date: 6 November 2007 title: 'Application of information and complexity theories to public opinion polls. The case of Greece (2004-2007).' ---

Introduction
============

Public opinion, as expressed in the results of elections and opinion polls, has been studied widely using traditional statistics. An alternative approach is information theory, which can be applied to probabilistic data. Electoral data can easily be transformed from percentages to probabilities. Thus the use of information theory to investigate public opinion about political parties and the conduct of governments and their opposition is natural, but has not been carried out so far. Such an application, the only one to our knowledge, is an analytical approach to interpreting the public’s high job approval rating for President Clinton [@Kulkarni99].
This rating was high and nearly constant between January 1998 and February 1999, despite the well-known unfavorable conditions for the US president in that period. Such a high rating could be explained partially, but is still considered unusual for several reasons [@Kulkarni99]. The political situation in Greece in the last three years is completely different from that of the United States in 1998-1999. However, an interesting (parallel) question arises: Greek Prime Minister Karamanlis enjoyed high job approval (2004-2007), although public approval of his party New Democracy was just $1\%$ higher than that of the opposition party PASOK, headed by George Papandreou. We note that we used statistical data from a specific Greek opinion polls company (Metron Analysis) in the period 2004-2007, stopping just three months before the latest parliament elections in Greece (September 2007). It is of interest to try to clarify the above striking fact by extending the usual statistical treatment to Shannon’s information theory [@Shannon48]. Information theory was first used in telecommunications in the late 1940s. Our aim is to investigate the possibility of extracting some general, qualitative conclusions from typical opinion polls in Greece, employing the tools of information and complexity theories. As we mentioned above, our inspiration comes from a similar study in the United States. Although the political systems and the conditions in the USA and Greece are very different, our work leads to the same mathematical model. It is seen that information-theoretic methods can be used to extend the results of usual statistics, which illuminate certain statistical data of public opinion. Information theory can proceed further towards an interpretation, in some sense, of statistical processes.
The use of the logarithm in the definition of information entropy smooths small differences in statistical data from various companies and yields the same qualitative conclusions. This illustrates the strength of information theory in giving quantitative (numerical) answers to qualitative questions.

Elements of Information Theory
==============================

Specifically, the information entropy $S$ corresponding to a probability distribution $\{p_{i}\}, i=1,2,\ldots,N$ of $N$ events occurring with probabilities $p_{i}$, respectively, can be defined as $$\label{eq:eq1} S=-\sum_{i=1}^{N} p_{i}\, \log{p_{i}}, \quad \mbox{where}\,\, \sum_{i=1}^{N} p_{i}=1 \,\,\mbox{(normalization)}$$ $S$ is an information-theoretic quantity which takes into account all the moments of a probability distribution and can be considered, in a sense, superior to traditional statistics employing the well-known quantities of average value and variance. $S$ in relation (\[eq:eq1\]) is measured in bits (if the base of the logarithm is 2), nats, i.e. natural units of information (if the base is e), or Hartleys (if the base is 10). In the present paper the base is 10, for the sake of comparison with [@Kulkarni99]. However, one unit can be converted into another by multiplying by a constant. Definition (\[eq:eq1\]) represents the average information content of an event, which occurs with a specific probability distribution $\{p_{i}\}$. The use of the logarithm is justified because it makes $S$ obey certain mathematical and intuitive properties expected from a quantity related to the information content of a probability function. Specifically, $S$ is positive, and the joint information content of two simultaneous independent events translates into the addition of the corresponding information measures of each event, etc. For more properties and a pedagogical description see [@Shannon48].
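As a sketch (ours, not from the original papers), the definition translates directly into a few lines of Python; zero probabilities are skipped under the convention $0\,\log 0=0$:

```python
from math import log10

def shannon_entropy(p):
    """Shannon information entropy S in Hartleys (base-10 logarithms) of a
    normalized discrete probability distribution p; 0*log(0) is taken as 0."""
    assert abs(sum(p) - 1.0) < 1e-9, "distribution must be normalized"
    return -sum(pi * log10(pi) for pi in p if pi > 0)

# uniform ("complete ignorance") distribution over N = 3 answers
print(shannon_entropy([1/3, 1/3, 1/3]))  # S_max = log10(3), about 0.4771 Hartleys
```

Changing `log10` to `log2` or `log` would give the same entropy in bits or nats, differing only by a constant factor.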
$S$ is maximum for an equiprobable or uniform probability distribution $p_{1}=p_{2}=\cdots=p_{N}=\displaystyle{\frac{1}{N}}$, i.e. $S_{\rm max}=\log{N}$. $S$ is minimum when one of the $p_{i}$’s is 1 ($p_{i}=1$) and all the other $p_{i}$’s are 0, i.e. $S_{\rm min}=0$, under the convention that $0\,\log{0}=0$. In this case, one of the outcomes is certain, while all the others are impossible. $S$ represents a measure of the information content of a probabilistic event, i.e. the average number of “Yes” or “No” questions needed to specify the event (in the case of bits). $S$ reflects the degree of surprise of an event: the least probable event carries the most information and vice versa. We give a simple example in order to understand the meaning of relation (\[eq:eq1\]). Let us ask a certain number of people the following question: *Is C. Karamanlis suitable for the position of Prime Minister of Greece?* We receive answers with percentages and corresponding probabilities $p_{1}$ (Yes), $p_{2}$ (No), $p_3$ (Something Else). A direct application of (\[eq:eq1\]) for the normalized probability distribution $\{p_{1},p_{2},p_{3}\}$, where $p_{1}+p_{2}+p_{3}=1$ ($N=3$), gives the information content in Hartleys of that set of probabilities. In the case of a uniform (equiprobable) distribution, i.e. $p_{1}=p_{2}=p_{3}=1/3\simeq 33.3\%$, relation (\[eq:eq1\]) gives $S=S_{\rm max}=0.4771$. This is the maximum information entropy with a uniform probability distribution ($N=3$). This can be interpreted as a distribution of complete ignorance (unbiased) in the sense that a specific answer does not contain more information than any other one. A case of maximum entropy $S$ corresponds to a minimum amount of information $I$ about our question. Thus information $I$ is reciprocal to $S$ $$\label{eq:eq2} I\sim \frac{1}{S}$$ The above convention agrees with our intuition, i.e.
the information content of an event corresponding to a probability distribution can be quantified by the magnitude of our surprise after the event has occurred, or by how unpredictable the outcome is. The case of an equiprobable distribution for $N=2$, i.e. $p_{1}=p_{2}=1/2=50\%$, occurred in the recent general parliament elections in Italy (April 2006). There were two large coalitions of parties, and one of them won by a slight margin of about 40,000 votes, while the total number of votes was about 40,000,000. Thus, with real results of $49.8\%$ versus $49.69\%$, we can consider with a very satisfactory approximation that $p_{1}\simeq p_{2}\simeq 0.5=50\%$. The application of information theory in this case gives $S=S_{\rm max}=\log_{2}{2}=1$ bit (base of the logarithm equal to 2). This is completely equivalent to throwing a fair coin (equal probabilities for the two outcomes, heads and tails) or to an equiprobable Yes-No question about which coalition will win. That means that $S=S_{\rm max}$ gives $I=I_{\rm min}$, and the minimum information can be interpreted as a complete homogenization of the public opinion about the two coalitions. In other words, the results of the elections in Italy correspond to the random throw of a fair coin, i.e. a complete lack of knowledge of the voters. Our observation is not meant to depreciate the process of elections, the culmination of democracy, but it is an extreme case with the maximum possible information entropy $S_{\rm max}$. There are other measures of information, such as Onicescu’s information energy $E$ [@Onicescu66] and Fisher’s information $F$ [@Fisher25]. Shannon’s information is a global measure, while Fisher’s is a local one, i.e. $S$ does not depend on the ordering of the probabilities $\{p_{i}\}$, while $F$ does, due to the presence of the derivative of the distribution in its definition. Their definitions are given below together with appropriate comments.
It is stressed that all are based on the same probability distributions $\{p_{i}\}$ as $S$. Landsberg’s definition of disorder $\Delta$ [@Landsberg98] is $$\label{eq:3a} \Delta=\frac{S}{S_{\rm max}}$$ and order $$\label{eq:3b} \Omega=1-\Delta=1-\frac{S}{S_{\rm max}}, \quad (\Delta+\Omega=1)$$ Disorder $\Delta$ is a normalized disorder ($0<\Delta<1$). $\Delta=0$ (zero disorder, $S=0$) corresponds to complete order $\Omega=1$ and $\Delta=1$ (complete disorder, $S=S_{\rm max}$) corresponds to zero order $\Omega=0$. $\Delta, \Omega$ enable us to study the organization of data, described probabilistically. The next important step is the *statistical complexity* $$\label{eq:eq4} \Gamma_{\alpha,\beta}=\Delta^{\alpha} \Omega^{\beta}$$ defined by Shiner-Davison-Landsberg (SDL) [@Shiner99], where $\alpha$ is the strength of disorder and $\beta$ is the strength of order. In the present work we consider the simple case $\alpha=1$ and $\beta=1$. Another measure of *complexity* is $$\label{eq:eq5} C=S D$$ according to Lopez Ruiz-Mancini-Calbet (LMC) [@Lopez95]. Here $D$ is the so-called *disequilibrium* (or distance from equilibrium) defined as $$\label{eq:eq6} D=\sum_{i=1}^{N} \left(p_{i}-\frac{1}{N}\right)^2$$ SDL complexity $\Gamma_{\alpha,\beta}$ describes correctly the two extreme cases of complete order and complete disorder, where we expect intuitively zero complexity or organization of the data. An example taken from the physical world is illuminating. A perfect crystal (complete order) has $\Gamma=0$ and the same holds for a gas (complete disorder) where $\Gamma=0$ as well. Thus (perfect) crystals and gases are not interesting, lacking complexity or organization. This is given by $\Gamma_{\alpha,\beta}$ and agrees with intuition. Instead, for the information entropy we have $S=0$ for crystals and $S=S_{\rm max}$ for gases, which is not satisfactory. 
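The chain of definitions above can be collected into one short routine (our sketch; base-10 logarithms as in the rest of the paper), which also reproduces the crystal and gas limits where the SDL complexity vanishes:

```python
from math import log10

def complexity_measures(p):
    """Landsberg disorder/order and the SDL and LMC complexities for a
    normalized discrete distribution p (logarithms in base 10)."""
    N = len(p)
    S = -sum(pi * log10(pi) for pi in p if pi > 0)  # Shannon entropy, Hartleys
    S_max = log10(N)
    Delta = S / S_max                       # Landsberg disorder
    Omega = 1 - Delta                       # order
    Gamma = Delta * Omega                   # SDL complexity with alpha = beta = 1
    D = sum((pi - 1 / N) ** 2 for pi in p)  # disequilibrium
    C = S * D                               # LMC complexity
    return Delta, Omega, Gamma, C

# the two extremes both carry zero SDL complexity, as for a perfect crystal
# (complete order) and an ideal gas (complete disorder)
print(complexity_measures([1.0, 0.0, 0.0]))
print(complexity_measures([1/3, 1/3, 1/3]))
```

An intermediate distribution such as `[0.5, 0.3, 0.2]` gives a strictly positive $\Gamma_{1,1}$ and $C$, which is what makes these measures useful for quantifying the organization of poll data.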
Thus, extending from physics, $\Delta$, $\Omega$, $\Gamma_{\alpha,\beta}$ and $C$ enable us to study quantitatively the (organized) complexity of the probabilistic data of opinion polls and elections. Another very important information measure is Fisher information $F$ [@Fisher25]; recently there has been a revival of interest in it, culminating in two books [@Frieden04] and [@Frieden06]. It is defined as $$\label{eq:eq8} F=\int \frac{\left(\frac{d\rho(x)}{dx}\right)^2}{\rho(x)}\,dx$$ for a continuous probability distribution $\rho(x)$, and is modified accordingly in the present work for discrete probability distributions. Specifically, for a discrete probability distribution $\{p_{i}\}$ employed in the present work, relation (\[eq:eq8\]) becomes $$\label{eq:eq8qa} F=\sum_{i=1}^{N-1} \frac{(p_{i+1}-p_{i})^2}{p_{i}}$$ Thus the treatment of the high job approval of President Clinton in [@Kulkarni99] will be repeated for the case of the Greek Prime Minister Constantinos Karamanlis and the Greek political scene in the recent three years (2004-2007), and extended in the present paper using new quantities, e.g. $\Delta$, $\Omega$, $\Gamma_{\alpha,\beta}$, $C$ and $F$, as functions of time. Results and Discussion ====================== We used statistical data for the public opinion coming from the Greek opinion polls company *Metron Analysis*[^3]. Specifically, we focused our interest on the following three questions, presented in Table \[tab:tab1\]. \[tab:tab1\] Question A *Choose the political leader who is, in your opinion, most suitable for the position of Prime Minister of Greece.* ------------ ----------------------------------------------------------------------------------------------------------------------------------- Answers Karamanlis ($p_{1}$), Papandreou ($p_{2}$), and Other ($p_{3}$). Question B *Are you satisfied with Mr.
Papandreou as political leaders?* Answers Satisfied with Karamanlis ($p_{1}$), with Papandreou ($p_{2}$), and with no one ($p_{3}$). Question C *Which party do you wish to vote for?* Answers New Democracy (abbr. ND, party leader Karamanlis) ($p_{1}$), PASOK (party leader Papandreou) ($p_{2}$), and Other party ($p_{3}$). : Questions A, B and C.[]{data-label="tab:tab1"} The corresponding answers (as probabilities), obtained from a carefully chosen sample of voters, are shown on the vertical axes of Fig. \[fig:fig1\] (for Questions A, B and C, respectively). In all the figures the horizontal axis represents the time in months, starting from $t=0$ (March 2004) and running to $t=40$ (June 2007). Time $t=0$ is just before the parliament elections of April 2004 (won by ND) and time $t=40$ is 3 months before the latest elections, which took place in September 2007 with another victory for ND. Thus we have three sets of probabilities $\{p_{1},p_{2},p_{3}\}$ corresponding to Questions A, B, and C. In Fig. \[fig:fig2\] we present $S^{(A)}(t)$ and $S_{\rm max}(t)$, calculated using (\[eq:eq1\]) from the probabilities of Fig. \[fig:fig1\] (Question A). We also present the disorder $\Delta^{(A)}(t)$ and the order $\Omega^{(A)}(t)$, calculated from relations (\[eq:3a\]) and (\[eq:3b\]). The statistical complexity $\Gamma^{(A)}_{1,1}(t)$, the complexity $C^{(A)}(t)$ and the Fisher information $F^{(A)}(t)$ are calculated employing relations (\[eq:eq4\]), (\[eq:eq5\]) and (\[eq:eq8qa\]) and are displayed in Fig. \[fig:fig2\] as well. In Fig. \[fig:fig3\] we present $S^{(B)}(t)$, $\Delta^{(B)}(t)$, $\Omega^{(B)}(t)$, $\Gamma^{(B)}_{1,1}(t)$, $C^{(B)}(t)$ and $F^{(B)}(t)$ for Question B, while in Fig. \[fig:fig4\] we present $S^{(C)}(t)$, $\Delta^{(C)}(t)$, $\Omega^{(C)}(t)$, $\Gamma^{(C)}_{1,1}(t)$, $C^{(C)}(t)$ and $F^{(C)}(t)$ for Question C. In Question A, the entropy $S^{(A)}(t)$ shows an overall increase as a function of time and tends to the limit value $S_{\rm max}=\log_{10}3=0.4771$ Hartleys.
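The discrete Fisher measure (\[eq:eq8qa\]) used in these calculations can be sketched as follows (the helper `fisher_discrete` is ours). Note its dependence on the ordering of the $p_{i}$, in contrast to the Shannon entropy:

```python
def fisher_discrete(probs):
    """Discrete analogue of Fisher information:
    F = sum_{i=1}^{N-1} (p_{i+1} - p_i)^2 / p_i   (order-dependent)."""
    return sum((probs[i + 1] - probs[i]) ** 2 / probs[i]
               for i in range(len(probs) - 1) if probs[i] > 0)

# The same three probabilities in two different orderings give
# different values of F (Shannon's S would be identical for both):
print(fisher_discrete([0.5, 0.3, 0.2]))   # ≈ 0.1133
print(fisher_discrete([0.2, 0.5, 0.3]))   # ≈ 0.53
```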
Taking into account that the information $I$ is reciprocal to the entropy $S$ (\[eq:eq2\]), it is seen that the information possessed by the body of voters on how suitable Mr. Karamanlis is decreases as a function of time. In a sense, Mr. Karamanlis' achievement was to *compartmentalize* and make *strongly independent* the opinion of the voters about him as a person (high percentages in Question A) and their opinion about his party (slightly higher percentage of ND compared with PASOK). The results of the parliament elections in September 2007 (ND $42\%$, PASOK $38\%$) confirmed Mr. Karamanlis' favorable and dominant profile, despite the major event of the forest fires and the collateral losses during August 2007. The same trend can be extracted from the other information-theoretic measures applied to the probabilities of Question A, e.g. the disorder $\Delta$ increases while the order $\Omega$ decreases. This is a sign that the organization of the statistics of the public opinion decreases. In Question B the entropy decreases as a function of time, with a global minimum at $t=34$ months (December 2006). After that global minimum it increases slightly, but stays far from the maximum value. Correspondingly, the order $\Omega$ increases and shows a global maximum at the same time ($t=34$ months), while the behavior of the other information-theoretic measures is analogous. It is seen from the figures that $I(t)=1/S(t)$, $\Omega(t)$, $\Gamma_{1,1}(t)$ and $C(t)$, although their mathematical definitions are different, show the same trends as functions of time: for Question A all of them decrease with time, while for Question B all of them increase. The same trend holds for the Fisher information $F^{(B)}(t)$, which is reciprocal to $S^{(B)}(t)$ and analogous to $I^{(B)}(t)$. It is noted that this is also the case for a simple Gaussian probability distribution, as seen in the literature [@Frieden06].
On the contrary, there is a striking similarity in the behavior of $F^{(A)}(t)$ and $F^{(C)}(t)$ compared with $S^{(A)}(t)$ and $S^{(C)}(t)$, respectively. This can be attributed to the special character of Fisher's information, which is a local measure, as contrasted to Shannon's global one. Specifically, changes in $S^{(A)}(t)$ or $S^{(C)}(t)$ are amplified, as seen in the corresponding plots. This is very interesting; however, it should be considered a preliminary result, since the derivative in the definition of $F$ cannot be reflected faithfully in a small discrete set of probabilities ($N=3$). Here we can make the following comment: if we switch from Question A to Question B, the qualitative behavior of the results changes drastically; it is almost inverted. So, in Question B the information available to the public increases (while the corresponding information in Question A decreases), and there is a global maximum (minimum, respectively) in December 2006. Thus we have, from a mathematical (quantitative) point of view, a strong indication that opinion polls are seriously affected by the formulation of the relevant questions. The fact that Mr. Karamanlis has a favorable profile based on Question A is accompanied by a clearly less favorable profile according to the results of Question B. The picture for Mr. Papandreou is completely analogous: the satisfaction rate is clearly lower than the suitability-for-Prime-Minister rate. The survey concerning President Clinton [@Kulkarni99] is based on a classical question about job approval, which is definitely more objective (*“Do you approve or disapprove of the way \[name of president\] is handling his job as president?”*) and has been used in US polls since the 1930s. On the other hand, the question about the suitability of a party leader for the position of Prime Minister (used in Greek opinion polls since the mid 1990s) is less objective.
The person who answers the question takes into account extra data not related to a realistic assessment of the job and accomplishments of a party leader. The answers can be affected by factors such as the public image of the leader (the one that he projects and the one that the media advertise). There is more room for the expectation or hope that in the future some issues will improve or work better, due to special characteristics of the leader, his personality, his abilities, etc. Question B, about satisfaction, is more realistic but is still general and vague. Qualitatively, the results for Question C are similar to the results of Question A. An interesting remark concerning the available data is that in the time interval between $t=12$ months (March 2005) and $t=28$ months (June 2006), the entropy reaches its maximum value and the information is practically minimized. This time interval can be considered as the interim period between two elections. A final (extreme) remark seems appropriate in order to demonstrate the difference between the information theory point of view and the classical statistics approach. Suppose that the majority party achieves a crushing victory over the minority, e.g. $70\%$ over $30\%$. The entropy for this scenario is $S=0.611$ nats, while the corresponding maximum value is $S_{\rm max}=\ln 2=0.693$ nats, so the entropy decreases (or equivalently the information increases) by only $12\%$. Thus, in terms of information theory, an outcome that is completely clear-cut from the point of view of classical statistics carries only modestly more information; the classical percentages alone do not give a complete and fair assessment of the public opinion. This has been outlined in [@Kulkarni99] as well. It is obvious that information theory can serve as a useful tool even for political surveys, in more detail than in the present work; something that so far has not been done systematically. The authors would like to thank Mr. Pantelis Savvidis (director of the newspaper *Macedonia* of Thessaloniki) and Mr.
Dimitris Katsantonis (executive of *Metron Analysis* for Thessaloniki) for providing the data.

![Results of the opinion polls for Questions A, B, and C, respectively. The horizontal axes represent time in months ($t=0$ is March 2004), while the vertical axes show the corresponding percentages (Metron Analysis, Greece). Panels: graph1.eps, graph2.eps, graph3.eps.[]{data-label="fig:fig1"}](graph1.eps "fig:"){height="6.0cm" width="7.0cm"}

![Entropies $S^{(A)}(t)$ and $S^{(A)}_{\rm max}(t)$, Disorder $\Delta^{(A)}(t)$, Order $\Omega^{(A)}(t)$, Statistical Complexity $\Gamma^{(A)}_{1,1}(t)$, Complexity $C^{(A)}(t)$, and Fisher Information $F^{(A)}(t)$, for Question A. Panels: shannon1.eps, delta1.eps, omega1.eps, Gamma1a.eps, C1.eps, Fb1.eps.[]{data-label="fig:fig2"}](shannon1.eps "fig:"){height="6.0cm" width="7.0cm"}

![Entropies $S^{(B)}(t)$ and $S^{(B)}_{\rm max}(t)$, Disorder $\Delta^{(B)}(t)$, Order $\Omega^{(B)}(t)$, Statistical Complexity $\Gamma^{(B)}_{1,1}(t)$, Complexity $C^{(B)}(t)$, and Fisher Information $F^{(B)}(t)$, for Question B. Panels: shannon2.eps, delta2.eps, omega2.eps, Gamma1b.eps, C2.eps, Fb2.eps.[]{data-label="fig:fig3"}](shannon2.eps "fig:"){height="6.0cm" width="7.0cm"}

![Entropies $S^{(C)}(t)$ and $S^{(C)}_{\rm max}(t)$, Disorder $\Delta^{(C)}(t)$, Order $\Omega^{(C)}(t)$, Statistical Complexity $\Gamma^{(C)}_{1,1}(t)$, Complexity $C^{(C)}(t)$, and Fisher Information $F^{(C)}(t)$, for Question C. Panels: shannon3.eps, delta3.eps, omega3.eps, Gamma1c.eps, C3.eps, Fb3.eps.[]{data-label="fig:fig4"}](shannon3.eps "fig:"){height="6.0cm" width="7.0cm"}

[99]{} R.G. Kulkarni, R.R. Stough, and K.E. Haynes, Entropy **1**, 37 (1999). C.E. Shannon, Bell Syst. Tech. J. **27**, 379 (1948); ibid.
**27**, 623 (1948). O. Onicescu, C. R. Acad. Sci. Paris **A263**, 25 (1966); O. Onicescu and V. Stefanescu, *Elements of Informational Statistics with Applications* (Bucharest, 1979). R.A. Fisher, Proc. Cambridge Phil. Soc. **22**, 700 (1925). P.T. Landsberg and J. Shiner, Phys. Lett. A **245**, 228 (1998). J.S. Shiner, M. Davison, and P.T. Landsberg, Phys. Rev. E **59**, 1459 (1999). R. Lopez-Ruiz, H.L. Mancini, and X. Calbet, Phys. Lett. A **209**, 321 (1995). B.R. Frieden, *Science from Fisher Information* (Cambridge Univ. Press, 2004). B.R. Frieden and R.A. Gatenby, eds., *Exploratory Data Analysis using Fisher Information* (Springer-Verlag, Berlin, 2007). [^1]: `e-mail: chpanos@auth.gr` [^2]: `e-mail: kchatz@auth.gr` [^3]: http://www.metronanalysis.gr
--- abstract: 'Badzioch and Bergner proved a rigidification theorem saying that each homotopy simplicial algebra is weakly equivalent to a simplicial algebra. The question is whether this result can be extended from algebraic theories to finite limit theories and from simplicial sets to more general monoidal model categories. We will present some answers to this question.' address: 'Department of Mathematics and Statistics, Masaryk University, Faculty of Sciences, Kotlářská 2, 611 37 Brno, Czech Republic, rosicky@math.muni.cz' author: - 'J. Rosický$^*$' date: 'May 9, 2013' title: Rigidification of algebras over essentially algebraic theories --- [^1] Introduction ============ Badzioch [@Ba] proved a rigidification theorem for simplicial algebras of one-sorted algebraic theories ${\mathcal {T}}$, saying that any homotopy ${\mathcal {T}}$-algebra is weakly equivalent to a (strict) ${\mathcal {T}}$-algebra. Bergner [@Be] extended this rigidification theorem to (many-sorted) algebraic theories. Our aim is to determine whether their rigidification theorems can be generalized to an arbitrary finitely combinatorial monoidal model category ${\mathcal {V}}$ in place of simplicial sets and to a finite weighted limit theory ${\mathcal {T}}$ in place of an algebraic theory. Such theories are usually called essentially algebraic (see [@AR]). In the homotopy context, we have to work with weighted limits whose weight is cofibrant (see [@LR], or [@V]). Since, in contrast to finite products, finite weights are rarely cofibrant, we have to replace finite weights by their saturation, consisting of finitely presentable weights. Then we can use finitely presentable cofibrant weights to define finite weighted homotopy limit theories. The rigidification theorem of [@Ba] and [@Be] has a strong form saying that the model categories of strict algebras and of homotopy algebras are Quillen equivalent.
We will show that this strong form always follows from the weak one and is valid for ${\mathcal {T}}$ having all limits weighted by a suitable class $\Phi$ of finitely presentable cofibrant weights. The condition is that any cofibrant weight can be obtained from $\Phi$-weights by means of homotopy invariant $\Phi$-flat colimits. In particular, we can take all finitely presentable cofibrant weights because, in a finitely combinatorial model category, any cofibrant object is a filtered colimit of finitely presentable cofibrant objects. Or, we can take all finite products, i.e., an algebraic theory, provided that any cofibrant weight is a homotopy sifted colimit of finite coproducts of representables. On the other hand, we will show that the rigidification theorem is not always true and that it amounts to a kind of coherence statement. We will need some assumptions on ${\mathcal {V}}$; above all, it should be a monoidal model category in the sense of [@L], i.e., with cofibrant unit $I$ ([@Ho] has this axiom in a weaker form). This makes it possible to define model ${\mathcal {V}}$-categories (see [@L]). Also, ${\mathcal {V}}$ should be locally finitely presentable as a closed category (see [@K]) and finitely combinatorial. The latter means that both cofibrations and trivial cofibrations are cofibrantly generated by morphisms between finitely presentable objects. Since we need the projective ${\mathcal {V}}$-model structure on $[{\mathcal {T}},{\mathcal {V}}]$, the ${\mathcal {V}}$-category ${\mathcal {T}}$ should be locally cofibrant (i.e., it should have all hom-objects cofibrant), or ${\mathcal {V}}$ should satisfy the monoid axiom. Since this projective model structure should be left proper as well, we will prefer the first assumption (see [@DRO]).
In order to make the machinery of enriched left Bousfield localizations available, we have to assume that ${\mathcal {V}}$ is not only finitely combinatorial but finitely tractable, which means that the generating cofibrations and trivial cofibrations are morphisms between cofibrant objects (see [@B]). Even more restrictively, in order to prove our main results we have to assume that all objects of ${\mathcal {V}}$ are cofibrant. Finally, we will need a fibrant approximation ${\mathcal {V}}$-functor on ${\mathcal {V}}$ preserving limits weighted by finite weights. Concerning enriched category theory, we refer to [@K1]. In particular, given a ${\mathcal {V}}$-category ${\mathcal {K}}$, a diagram $D:{\mathcal {D}}\to{\mathcal {K}}$ and a weight $G:{\mathcal {D}}\to{\mathcal {V}}$, a limit $\{G,D\}$ of $D$ weighted by $G$ is an object equipped with a natural isomorphism $$\beta:{\mathcal {K}}(-,\{G,D\})\to[{\mathcal {D}},{\mathcal {V}}](G,{\mathcal {K}}(-,D)).$$ This natural isomorphism corresponds to a weighted limit cone $$\delta:G\to{\mathcal {K}}(\{G,D\},D).$$ The author is indebted to John Bourke, Richard Garner, A. E. Stanculescu and Lukáš Vokřínek for stimulating discussions about the subject of this paper. But, in particular, the author is grateful to the unknown referee for finding a gap in the proof of \[th3.3\] and for pointing out the need of taking the saturation in \[re3.5\]. Homotopy limit sketches ======================= We recall the concept of a weighted limit sketch (see [@K1]).
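As a standard sanity check on the notion of weighted limit recalled above (a well-known consequence of the strong enriched Yoneda lemma, not a result of this paper), limits weighted by representable weights are given by evaluation:

```latex
% For a representable weight D(d,-): by the strong Yoneda lemma,
% the defining isomorphism collapses to evaluation at d:
\mathcal{K}\bigl(-,\{\mathcal{D}(d,-),D\}\bigr)
  \;\cong\; [\mathcal{D},\mathcal{V}]\bigl(\mathcal{D}(d,-),\mathcal{K}(-,D-)\bigr)
  \;\cong\; \mathcal{K}(-,Dd),
\qquad\text{hence}\qquad \{\mathcal{D}(d,-),D\}\cong Dd .
```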
\[def2.1\] *A *weighted limit sketch* is a pair ${\mathcal {H}}=({\mathcal {T}},L)$ consisting of a small ${\mathcal {V}}$-category ${\mathcal {T}}$ and a set $L$ of weights $G_l:{\mathcal {D}}_l\to{\mathcal {V}}$, diagrams $D_l:{\mathcal {D}}_l\to{\mathcal {T}}$, objects $X_l$ and morphisms $$\delta_l:G_l\to{\mathcal {T}}(X_l,D_l)$$ in $[{\mathcal {D}}_l,{\mathcal {V}}]$ for each $l\in L$.* A *model* of ${\mathcal {H}}$ is a ${\mathcal {V}}$-functor $A:{\mathcal {T}}\to{\mathcal {V}}$ such that $$G_l \xrightarrow{\quad \delta_l\quad} {\mathcal {T}}(X_l,D_l) \xrightarrow{\quad \quad} {\mathcal {V}}(AX_l,AD_l)$$ is a weighted limit cone for each $l\in L$. The latter means that the induced morphisms $$t_l^A:AX_l\to\{G_l,AD_l\}$$ are isomorphisms. We will denote by ${\operatorname{Mod}}({\mathcal {H}})$ the full subcategory of $[{\mathcal {T}},{\mathcal {V}}]$ consisting of all models of ${\mathcal {H}}$. A weight $G:{\mathcal {D}}\to{\mathcal {V}}$ is called *finite* if 1. ${\mathcal {D}}$ has finitely many objects, 2. all objects ${\mathcal {D}}(d,e)$ are finitely presentable, and 3. all objects $Gd$ are finitely presentable. This concept was introduced in [@K]. Since any finitely presentable weight belongs to the closure of the representable functors under colimits weighted by finite weights (see [@K], 7.2), the finitely presentable weights form the saturation of the finite weights (see [@KS], 3.8 and 3.13). \[def2.2\] [*A weighted limit sketch is called *finite* if all the weights $G_l$, $l\in L$, are finitely presentable.* ]{} [@K] calls a weighted limit sketch finite if all its weights are finite. Our definition is more general and we will need it later, but its strength is the same as that of [@K]. A *fibrant approximation functor* $R:{\mathcal {V}}\to{\mathcal {V}}$ is a functor $R$ together with a natural transformation $\rho:{\operatorname{Id}}\to R$ such that $\rho_V$ is a weak equivalence and $RV$ is fibrant for each $V\in{\mathcal {V}}$ (cf. [@H]).
If all $\rho_V$ are trivial cofibrations, we will call $R$ a *fibrant replacement functor* (cf. [@Ho]). \[th2.3\] Let ${\mathcal {V}}$ be a combinatorial monoidal model category equipped with a fibrant approximation ${\mathcal {V}}$-functor $R:{\mathcal {V}}\to{\mathcal {V}}$ preserving finite weighted limits. Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a finite weighted limit sketch with ${\mathcal {T}}$ locally cofibrant. Then ${\operatorname{Mod}}({\mathcal {H}})$ is a combinatorial model ${\mathcal {V}}$-category with respect to the projective model structure. Following [@K], ${\operatorname{Mod}}({\mathcal {H}})$ is a reflective subcategory of $[{\mathcal {T}},{\mathcal {V}}]$. We will denote the inclusion ${\mathcal {V}}$-functor by $U:{\operatorname{Mod}}({\mathcal {H}})\to [{\mathcal {T}},{\mathcal {V}}]$ and its left ${\mathcal {V}}$-adjoint by $F$. Since any finitely presentable weight belongs to the closure of the representable functors under colimits weighted by finite weights, the fibrant approximation functor $R$ preserves limits weighted by finitely presentable weights. Thus it lifts to a “fibrant approximation" functor on ${\operatorname{Mod}}({\mathcal {H}})$ by sending $A$ to $RA$. Hence the result follows from [@S] B.2. \[re2.4\] *(1) The assumption that ${\mathcal {T}}$ is *locally cofibrant* (i.e. that it has all hom-objects cofibrant) is only needed for the existence of the projective model structure on $[{\mathcal {T}},{\mathcal {V}}]$. Thus it can be replaced by the assumption that ${\mathcal {V}}$ satisfies the monoid axiom.* \(2) Each ${\mathcal {V}}$ having all objects fibrant admits $R={\operatorname{Id}}$. In ${\operatorname{\bf SSet}}$, one can take $R={\operatorname{Ex}}^{\infty}$ because it is a colimit of a countable chain of right adjoint functors (see [@GJ]) and filtered colimits commute with finite weighted limits in ${\mathcal {V}}$ (see [@K]). Following [@J] B2.1.4 and B2.1.6, $R$ is a simplicial functor.
\(3) We could replace a finite weighted limit sketch by an ($\alpha$-small) weighted limit sketch, but then we should assume that $R$ preserves ($\alpha$-small) weighted limits (see [@K], 7.4). \[def2.5\] *A *weighted homotopy limit sketch* is a weighted limit sketch ${\mathcal {H}}=({\mathcal {T}},L)$ where all the weights $G_l$ are cofibrant in $[{\mathcal {D}}_l,{\mathcal {V}}]$.* A *homotopy model* of ${\mathcal {H}}$ is a ${\mathcal {V}}$-functor $A:{\mathcal {T}}\to{\mathcal {V}}$ such that the induced morphisms $$t_l^A:AX_l\to\{G_l,AD_l\}$$ are weak equivalences for each $l\in L$. Let us add that $\{G_l,AD_l\}$ is the weighted homotopy limit in the sense of [@V] provided that the diagrams $AD_l$ are pointwise cofibrant for each $l\in L$, i.e., that all $AD_ld$, $d\in{\mathcal {D}}_l$, $l\in L$ are cofibrant. We will denote by ${\operatorname{HMod}}({\mathcal {H}})$ the full subcategory of $[{\mathcal {T}},{\mathcal {V}}]$ consisting of all homotopy models of ${\mathcal {H}}$. Of course, any model of ${\mathcal {H}}$ is a homotopy model of ${\mathcal {H}}$. We say that a homotopy model is fibrant if it is fibrant in the projective model structure on $[{\mathcal {T}},{\mathcal {V}}]$. \[ex2.6\] [ *Let ${\mathcal {T}}$ be a small ${\mathcal {V}}$-category and $f:X\to Y$ a morphism in ${\mathcal {T}}$. Let ${\mathcal {D}}$ be the free ${\mathcal {V}}$-category over $1$. Thus ${\mathcal {D}}$ has a unique object $d$ with ${\mathcal {D}}(d,d)$ equal to the tensor unit $I$ of ${\mathcal {V}}$. Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a weighted limit sketch where $L$ consists of a single weight $G:{\mathcal {D}}\to{\mathcal {V}}$ with $Gd=I$, a single diagram $D:{\mathcal {D}}\to{\mathcal {T}}$ with $Dd=Y$, and a morphism $\delta:I\to{\mathcal {T}}(X,Y)$ corresponding to $f$. Models of ${\mathcal {H}}$ are ${\mathcal {V}}$-functors $A:{\mathcal {T}}\to{\mathcal {V}}$ such that $A(f)$ is an isomorphism.
The weight $G$ is cofibrant and $A$ is a homotopy model of ${\mathcal {H}}$ if and only if it is fibrant and $A(f)$ is a weak equivalence.* ]{} \[th2.7\] Let ${\mathcal {V}}$ be a left proper tractable monoidal model category and ${\mathcal {H}}=({\mathcal {T}},L)$ a weighted homotopy limit sketch with ${\mathcal {T}}$ locally cofibrant. Then there is a localized model category structure ${\mathcal {M}}_{\mathcal {H}}$ on $[{\mathcal {T}},{\mathcal {V}}]$ whose fibrant objects are precisely the fibrant homotopy models of ${\mathcal {H}}$. In $[{\mathcal {T}},{\mathcal {V}}]$, we have morphisms $$\varphi_l:G_l\ast{\mathcal {T}}(D_l,-)\to{\mathcal {T}}(X_l,-)$$ for each $l\in L$. Since hom-functors are always cofibrant in $[{\mathcal {T}},{\mathcal {V}}]$, $G_l\ast{\mathcal {T}}(D_l,-)$ is cofibrant as well (see [@LR] 4.1). Thus $\varphi_l$ is a morphism between cofibrant objects for each $l\in L$. Following [@B] 3.18, there exists a left Bousfield ${\mathcal {V}}$-localization ${\mathcal {M}}_{\mathcal {H}}$ of $[{\mathcal {T}},{\mathcal {V}}]$ with respect to the set $\{\varphi_l\mid l\in L\}$. Fibrant objects in ${\mathcal {M}}_{\mathcal {H}}$ are fibrant objects in $[{\mathcal {T}},{\mathcal {V}}]$ for which $$[{\mathcal {T}},{\mathcal {V}}](\varphi_l,A):[{\mathcal {T}},{\mathcal {V}}]({\mathcal {T}}(X_l,-),A)\to[{\mathcal {T}},{\mathcal {V}}](G_l\ast{\mathcal {T}}(D_l,-),A)$$ is a weak equivalence for each $l\in L$. Since $$[{\mathcal {T}},{\mathcal {V}}]({\mathcal {T}}(X_l,-),A)\cong A(X_l)$$ and $$[{\mathcal {T}},{\mathcal {V}}](G_l\ast{\mathcal {T}}(D_l,-),A)\cong\{G_l,AD_l\},$$ $[{\mathcal {T}},{\mathcal {V}}](\varphi_l,A)$ corresponds to the morphism $t_l^A$. Thus a fibrant object $A$ in $[{\mathcal {T}},{\mathcal {V}}]$ is fibrant in ${\mathcal {M}}_{\mathcal {H}}$ if and only if $A$ is a homotopy model of ${\mathcal {H}}$. 
\[le2.8\] Let ${\mathcal {V}}$ be a combinatorial monoidal model category having all objects cofibrant and ${\mathcal {H}}=({\mathcal {T}},L)$ a weighted homotopy limit sketch. Let $G:{\mathcal {D}}^{{\operatorname{op}}}\to{\mathcal {V}}$ be a cofibrant weight, $D_1,D_2:{\mathcal {D}}\to[{\mathcal {T}},{\mathcal {V}}]$ diagrams and $\alpha:D_1\to D_2$ such that $\alpha_d$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ for each object $d$ in ${\mathcal {D}}$. Then $G\ast\alpha:G\ast D_1\to G\ast D_2$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Since $[{\mathcal {T}},{\mathcal {V}}]$ is a model ${\mathcal {V}}$-category, there is a cofibrant replacement ${\mathcal {V}}$-functor $Q:[{\mathcal {T}},{\mathcal {V}}]\to[{\mathcal {T}},{\mathcal {V}}]$ (see [@Sh] 24.2). Let $\gamma_A:QA\to A$ denote the corresponding trivial fibration for $A\in[{\mathcal {T}},{\mathcal {V}}]$. Consider the diagram $$\xymatrix@C=4pc@R=3pc{ G\ast QD_1 \ar[r]^{G\ast Q\alpha} \ar[d]_{G\ast\gamma D_1} & G\ast QD_2\ar [d]^{G\ast\gamma D_2}\\ G\ast D_1 \ar[r]_{G\ast\alpha}& G\ast D_2 }$$ For $D:{\mathcal {D}}\to [{\mathcal {T}},{\mathcal {V}}]$ and $X\in{\mathcal {T}}$, let $D_X:{\mathcal {D}}\to{\mathcal {V}}$ be defined by $D_Xd=Dd(X)$. Since $(\gamma D)_X:(QD)_X\to D_X$ is an objectwise weak equivalence between objectwise cofibrant diagrams, $G\ast(\gamma D)_X$ is a weak equivalence in ${\mathcal {V}}$. This is [@H] 18.4.4, which is clearly valid for model ${\mathcal {V}}$-categories. Thus $G\ast\gamma D$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Therefore $G\ast\alpha$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ if and only if $G\ast Q\alpha$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. 
Since a left Bousfield localization of a model ${\mathcal {V}}$-category $[{\mathcal {T}},{\mathcal {V}}]$ is a model ${\mathcal {V}}$-category (see [@B], 4.46) and $Q\alpha$ is an objectwise weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ between objectwise cofibrant diagrams, $G\ast Q\alpha$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ (cf. [@H], 18.4.4). Homotopy limit theories ======================= \[def3.1\] [*A weighted limit sketch ${\mathcal {H}}=({\mathcal {T}},L)$ will be called *normal* if $X_l=\{G_l,D_l\}$ and $$\delta_l:G_l\to{\mathcal {T}}(X_l,D_l)$$ is the weighted limit cone.* ]{} \[re3.2\] *(1) Let $\Phi$ be a class of finitely presentable cofibrant weights and ${\mathcal {T}}$ be a small ${\mathcal {V}}$-category having all limits weighted by weights belonging to $\Phi$. We get a normal finite weighted homotopy limit sketch ${\mathcal {H}}({\mathcal {T}})=({\mathcal {T}},L({\mathcal {T}}))$ where $L({\mathcal {T}})$ consists of all pairs $(G,D)$ where $G:{\mathcal {D}}\to{\mathcal {V}}$ belongs to $\Phi$ and $D:{\mathcal {D}}\to{\mathcal {T}}$ is a diagram. These sketches will be called $\Phi$-*weighted homotopy limit theories*. If $\Phi$ consists of all finitely presentable cofibrant weights, we say that ${\mathcal {H}}({\mathcal {T}})$ is a *finite weighted homotopy limit theory*. Very often we will denote these theories just by ${\mathcal {T}}$.* For a pair of weights $H:{\mathcal {C}}^{{\operatorname{op}}}\to{\mathcal {V}}$ and $G:{\mathcal {D}}^{{\operatorname{op}}}\to{\mathcal {V}}$, we say that $H$-*colimits commute with* $G$-*limits* if the functor $$H\ast -:[{\mathcal {C}},{\mathcal {V}}]\to{\mathcal {V}}$$ preserves limits weighted by $G$. The class of colimits commuting with all $\Phi$-weighted limits is denoted $\Phi^+$. Weights belonging to $\Phi^+$ are called $\Phi$-*flat*. 
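To unpack the commutation condition (the two-variable notation here is ours, not the cited sources'): for every diagram $W:{\mathcal {D}}\to[{\mathcal {C}},{\mathcal {V}}]$ admitting the $G$-weighted limit, preservation of that limit by $H\ast-$ says that the canonical comparison morphism is invertible,

```latex
% The colimit H * - is applied pointwise to W on the right-hand side;
% flatness of H means this comparison is invertible for every G in Phi.
\[
H\ast\{G,W\}\;\xrightarrow{\;\cong\;}\;\{G,(H\ast-)\circ W\}.
\]
```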
\(2) A weight $G:{\mathcal {D}}^{{\operatorname{op}}}\to{\mathcal {V}}$ will be called *homotopy invariant* if, for its cofibrant replacement $\gamma_G:G_c\to G$ in the projective model structure and any objectwise cofibrant diagram $D:{\mathcal {D}}\to{\mathcal {V}}$ (i.e., $Dd$ is cofibrant for each object $d$ in ${\mathcal {D}}$), the morphism $\gamma_G\ast D: G_c\ast D\to G\ast D$ is a weak equivalence. Following [@H] 18.4.5 (1), this definition does not depend on the choice of a cofibrant replacement. In particular, any cofibrant weight is homotopy invariant. \(3) Given a class $\Phi$ of cofibrant weights, $\Phi^\diamond$ will denote the closure in cofibrant weights of $\Phi$ under colimits weighted by $\Phi$-flat homotopy invariant weights. This means that $\Phi^\diamond$ arises from $\Phi$ by iteratively taking weighted colimits of presheaves $G\ast D$ such that $G$ is $\Phi$-flat and homotopy invariant, $D$ is objectwise cofibrant and $G\ast D$ is cofibrant. Whenever $G$ is cofibrant and $D$ is objectwise cofibrant, $G\ast D$ is cofibrant. \(4) Often, we will have to assume that all objects of ${\mathcal {V}}$ are cofibrant. Then all diagrams $D:{\mathcal {D}}\to{\mathcal {V}}$ are objectwise cofibrant, which simplifies the definition of a homotopy invariant weight. Also, every ${\mathcal {V}}$-category is locally cofibrant. \[th3.3\] Let ${\mathcal {V}}$ be a finitely combinatorial monoidal model category having all objects cofibrant and $\Phi$ a class of finitely presentable cofibrant weights such that every cofibrant weight is weakly equivalent to a weight belonging to $\Phi^\diamond$. Assume that ${\mathcal {V}}$ is equipped with a fibrant approximation ${\mathcal {V}}$-functor $R:{\mathcal {V}}\to{\mathcal {V}}$ preserving $\Phi$-weighted limits. Let ${\mathcal {T}}$ be a $\Phi$-weighted homotopy limit theory. 
Then the model categories ${\operatorname{Mod}}({\mathcal {H}}({\mathcal {T}}))$ and ${\mathcal {M}}_{{\mathcal {H}}({\mathcal {T}})}$ are Quillen equivalent. Let ${\mathcal {H}}={\mathcal {H}}({\mathcal {T}})$. The ${\mathcal {V}}$-functor $F:[{\mathcal {T}},{\mathcal {V}}]\to{\operatorname{Mod}}({\mathcal {H}})$ is left Quillen (see the proof of \[th2.3\]). Since all $\varphi_l$ from the proof of \[th2.7\] are morphisms between cofibrant objects and $F\varphi_l$ are isomorphisms, $F:{\mathcal {M}}_{\mathcal {H}}\to{\operatorname{Mod}}({\mathcal {H}})$ is a left Quillen functor. Thus $(U,F)$ is a Quillen pair between ${\operatorname{Mod}}({\mathcal {H}})$ and ${\mathcal {M}}_{\mathcal {H}}$. We have to show that it is a Quillen equivalence. Let $\eta:{\operatorname{Id}}\to UF$ be the unit of the adjunction. If $A$ is a cofibrant object in $[{\mathcal {T}},{\mathcal {V}}]$ belonging to $\Phi$ then $A\cong A\ast Y$ where $Y:{\mathcal {T}}^{{\operatorname{op}}}\to[{\mathcal {T}},{\mathcal {V}}]$ is the Yoneda embedding. Since $A\in\Phi$, the pair $l=(A,{\operatorname{Id}}_{\mathcal {T}})$ belongs to $L({\mathcal {T}})$. For $B$ in ${\operatorname{Mod}}({\mathcal {H}})$ we have $$\begin{aligned} {\operatorname{Mod}}({\mathcal {H}})(Y(\{A,{\operatorname{Id}}_{\mathcal {T}}\}),B)&\cong B(\{A,{\operatorname{Id}}_{\mathcal {T}}\})\cong\{A,B\}\cong{\mathcal {V}}(I,\{A,B\})\\ &\cong[{\mathcal {T}},{\mathcal {V}}](A,{\mathcal {V}}(I,B))\cong[{\mathcal {T}},{\mathcal {V}}](A,B)\\ &\cong[{\mathcal {T}},{\mathcal {V}}](A,{\operatorname{Mod}}({\mathcal {H}})(Y,B))\end{aligned}$$ (the last isomorphism follows from the enriched Yoneda lemma, see [@B] 6.3.5). Thus $Y(\{A,{\operatorname{Id}}_{\mathcal {T}}\})$ is the weighted colimit $A\ast Y$ in ${\operatorname{Mod}}({\mathcal {H}})$. Consequently $\eta_A=\varphi_l$ and thus $\eta_A$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. 
Now, let $A$ be an arbitrary cofibrant object in $[{\mathcal {T}},{\mathcal {V}}]$. Following our assumption, $A$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to $A'$ belonging to $\Phi^\diamond$. Thus there is a zig-zag of weak equivalences in $[{\mathcal {T}},{\mathcal {V}}]$ between $A$ and $A'$. Since both $A$ and $A'$ are cofibrant, this zig-zag can be changed into a zig-zag of weak equivalences in $[{\mathcal {T}},{\mathcal {V}}]$ between cofibrant objects. Since $F$ is left Quillen and $U$ preserves weak equivalences, the composition $UF$ preserves weak equivalences between cofibrant objects. Thus $\eta_A$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ if and only if $\eta_{A'}$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Hence, without any loss of generality, we can assume that $A\in\Phi^\diamond$. Consider a weighted colimit $G\ast D$ in $[{\mathcal {T}},{\mathcal {V}}]$ where $G$ is $\Phi$-flat and homotopy invariant, $D:{\mathcal {D}}\to\Phi^\diamond$, $G\ast D$ is cofibrant and $\eta_{Dd}$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ for each $d\in{\mathcal {D}}$. We have to prove that $\eta_{G\ast D}$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Since $\Phi^+$-weighted colimits commute with $\Phi$-weighted limits, the functor $U$ preserves $\Phi^+$-weighted colimits. Thus the unit $\eta_{G\ast D}:G\ast D\to UF(G\ast D)$ is the $\Phi^+$-weighted colimit $G\ast\eta_D$ of the units $\eta_{Dd}$, where $Dd\in\Phi^\diamond$. Consider the commutative diagram $$\xymatrix@C=4pc@R=3pc{ G_c\ast D \ar[r]^{\gamma_G\ast D} \ar[d]_{\eta_{G_c\ast D}} & G\ast D\ar [d]^{\eta_{G\ast D}}\\ UF(G_c\ast D) \ar[r]_{UF(\gamma_G\ast D)}& UF(G\ast D) }$$ For $X\in{\mathcal {T}}$, let $D_X:{\mathcal {D}}\to{\mathcal {V}}$ be defined by $D_Xd=Dd(X)$. Since $G$ is homotopy invariant and $D_X$ is objectwise cofibrant, $(\gamma_G\ast D)_X = \gamma_G\ast D_X$ is a weak equivalence in ${\mathcal {V}}$ for each $X\in{\mathcal {T}}$. 
Thus $\gamma_G\ast D$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Since $UF$ preserves weak equivalences between cofibrant objects and both $G_c\ast D$ and $G\ast D$ are cofibrant, $UF(\gamma_G\ast D)$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Thus it suffices to prove that $\eta_{G_c\ast D}$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Following \[le2.8\], $G_c\ast\eta D:G_c\ast D \to G_c\ast UFD$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Since $\eta_{G_c\ast D}$ is the composition $k(G_c\ast\eta D)$ where $k: G_c\ast UFD \to UF(G_c\ast D)$ is the induced morphism, it remains to prove that $k$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. We have the commutative square $$\xymatrix@C=4pc@R=3pc{ G_c\ast UFD \ar[r]^{\gamma_G\ast UFD} \ar[d]_{k} & G\ast UFD\ar [d]^{k'}\\ UF(G_c\ast D) \ar[r]_{UF(\gamma_G\ast D)}& UF(G\ast D) }$$ where $k'$ is the induced isomorphism. Since the diagram $(UFD)_X$ is objectwise cofibrant for each $X\in{\mathcal {T}}$, $(\gamma_G\ast UFD)_X=\gamma_G\ast (UFD)_X$ is a weak equivalence in ${\mathcal {V}}$. Thus $\gamma_G\ast UFD$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Since we have shown that $UF(\gamma_G\ast D)$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$, $k$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$ as well. Let $f:A\to B$ be a morphism between fibrant objects in ${\operatorname{Mod}}({\mathcal {H}})$ such that $Uf$ is a weak equivalence. Then $Uf$ is a weak equivalence between fibrant objects in ${\mathcal {M}}_{\mathcal {H}}$ and thus it is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$ (see [@H] 3.2.13). Hence $f$ is a weak equivalence, which means that $U$ reflects weak equivalences between fibrant objects. 
Finally, since $[{\mathcal {T}},{\mathcal {V}}]$ and ${\mathcal {M}}_{\mathcal {H}}$ have the same cofibrant objects and $U$ preserves weak equivalences, $$\overline{\eta}_A: A \xrightarrow{\eta_A} UFA \xrightarrow{U\rho_{FA}} URFA$$ is a weak equivalence for each cofibrant object $A$ in ${\mathcal {M}}_{\mathcal {H}}$. Following [@Ho] 1.3.16, $(U,F)$ is a Quillen equivalence. \[th3.4\] Let ${\mathcal {V}}$ be a left proper finitely tractable monoidal model category satisfying the monoid axiom and equipped with a fibrant approximation ${\mathcal {V}}$-functor $R:{\mathcal {V}}\to{\mathcal {V}}$ preserving finite weighted limits. Let ${\mathcal {T}}$ be a locally cofibrant finite weighted homotopy limit theory. Then the model categories ${\operatorname{Mod}}({\mathcal {H}}({\mathcal {T}}))$ and ${\mathcal {M}}_{{\mathcal {H}}({\mathcal {T}})}$ are Quillen equivalent. Let $\Phi$ be the class of all finitely presentable cofibrant weights. Following [@MRV] 5.1, any cofibrant object $A$ in $[{\mathcal {T}},{\mathcal {V}}]$ is a colimit of a directed diagram $D:{\mathcal {D}}\to [{\mathcal {T}},{\mathcal {V}}]$ such that $Dd$ is cofibrant and finitely presentable in $[{\mathcal {T}},{\mathcal {V}}]$ for each $d\in{\mathcal {D}}$. This directed colimit is a weighted colimit $\overline{\Delta I}\ast\overline{D}$ where $\overline{{\mathcal {D}}}$ is a free ${\mathcal {V}}$-category on ${\mathcal {D}}$, $\overline{D}:\overline{{\mathcal {D}}}\to\ [{\mathcal {T}},{\mathcal {V}}]$ is the extension of $D$ and $\overline{\Delta I}$ is the extension of the constant diagram $\Delta I:{\mathcal {D}}\to{\mathcal {V}}$ on $I$. The weight $\overline{\Delta I}$ is $\Phi$-flat (see [@K], 4.9) and homotopy invariant (see [@LR] 4.5). Following the proof of \[th3.3\], $\eta_{Dd}$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ between cofibrant objects for each $d\in{\mathcal {D}}$. 
This proof also yields that $\eta_A$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$, which proves the result. We do not need \[le2.8\] because $\eta D$ is an objectwise weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ between objectwise cofibrant diagrams and thus $G_c\ast\eta D$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$ (cf. [@H] 18.4.4). Since each cofibration in $[{\mathcal {T}},{\mathcal {V}}]$ is an objectwise cofibration (see [@Sh] 24.4), the diagrams $D_X$ and $(UFD)_X$ are objectwise cofibrant for each $X\in{\mathcal {T}}$. Thus, in the whole proof, we do not need to assume that all objects of ${\mathcal {V}}$ are cofibrant. \[re3.5\] \(1) Recall that an *algebraic* ${\mathcal {V}}$-*theory* is a small ${\mathcal {V}}$-category ${\mathcal {T}}$ with finite products and a ${\mathcal {T}}$-*algebra* is a ${\mathcal {V}}$-functor $A:{\mathcal {T}}\to{\mathcal {V}}$ preserving finite products. A *homotopy* ${\mathcal {T}}$-*algebra* preserves finite products up to a weak equivalence, i.e., the canonical morphisms $$A(X_1\times\dots\times X_n)\to A(X_1)\times\dots\times A(X_n)$$ are weak equivalences. Let $\Phi$ consist of constant weights on finite discrete categories with the value $I$. Then $\Phi$-weighted homotopy limit theories are precisely algebraic theories. Following [@KS] 3.8, the saturation $\Phi^\ast$ consists of finite coproducts of representables. It is easy to see that the corresponding sets of morphisms $\varphi_l$ for $\Phi$ and $\Phi^\ast$ are equal for each algebraic theory ${\mathcal {T}}$. Thus both algebras and homotopy algebras are unchanged by the passage from $\Phi$ to $\Phi^\ast$. Over simplicial sets, every homotopy colimit is weakly equivalent to a homotopy invariant $\Phi^{\ast +}$-colimit of finite coproducts of representables (see [@V]). Thus the result of Badzioch and Bergner is a consequence of \[th3.3\] applied to $\Phi^\ast$. 
We cannot use $\Phi$ for this purpose because $[{\mathcal {T}},{\mathcal {V}}]$ contains no elements of $\Phi$, and so $\Phi^\diamond$ is also empty. Observe that the saturation does not change the flatness, i.e., $\Phi^+=\Phi^{\ast +}$. J. Bourke [@Bo] proved that, over ${\operatorname{\bf Cat}}$, any cofibrant weight belongs to the iterative closure of finite coproducts of representables under colimits weighted by homotopy invariant $\Phi$-flat weights. Thus there is a rigidification theorem for homotopy algebras in this case as well. \(2) Let $\Phi$ consist of constant weights on finite discrete categories with the value $I$ and of weights on the single-object discrete category with a finitely presentable cofibrant value. Then $\Phi$-weighted homotopy limit theories contain finite products and cotensors with finitely presentable cofibrant objects and are related to enriched Lawvere theories in the sense of [@P]. Over ${\operatorname{\bf SSet}}$, [@V] and \[th3.3\] yield the rigidification theorem for them. Again, we have to pass to the saturation $\Phi^\ast$ of $\Phi$. Conservative free completion ============================ \[def4.1\] [*Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a finite weighted homotopy limit sketch. We say that $E:{\mathcal {T}}\to{\mathcal {T}}^\ast$ is a *conservative free completion* of ${\mathcal {H}}$ if ${\mathcal {T}}^\ast$ has limits weighted by finitely presentable cofibrant weights of diagrams $D:{\mathcal {D}}\to{\mathcal {T}}$ and, for each model $A:{\mathcal {T}}\to{\mathcal {V}}$, there is an *essentially unique* (i.e., unique up to an isomorphism) ${\mathcal {V}}$-functor $A^\ast:{\mathcal {T}}^\ast\to{\mathcal {V}}$ preserving limits weighted by finitely presentable cofibrant weights of diagrams $D:{\mathcal {D}}\to{\mathcal {T}}$ such that $A^\ast E\cong A$.* ]{} \[le4.2\] Let ${\mathcal {V}}$ be locally finitely presentable as a closed category. 
Then each finite weighted homotopy limit sketch has a conservative free completion. Moreover, the functor $E:{\mathcal {T}}\to{\mathcal {T}}^\ast$ is a full embedding provided that ${\mathcal {H}}$ is normal. Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a finite weighted homotopy limit sketch. Then $$Y:{\mathcal {T}}^{{\operatorname{op}}}\to[{\mathcal {T}},{\mathcal {V}}]$$ is a free completion of ${\mathcal {T}}^{{\operatorname{op}}}$ under weighted colimits and the full subcategory $[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{fp}}}$ of $[{\mathcal {T}},{\mathcal {V}}]$ consisting of finitely presentable objects is a free completion of ${\mathcal {T}}^{{\operatorname{op}}}$ under finite weighted colimits (see [@KS], 3.13). This implies that for each ${\mathcal {V}}$-functor $H:{\mathcal {T}}^{{\operatorname{op}}}\to{\mathcal {V}}^{{\operatorname{op}}}$, there is an essentially unique ${\mathcal {V}}$-functor $\overline{H}:[{\mathcal {T}},{\mathcal {V}}]\to{\mathcal {V}}^{{\operatorname{op}}}$ preserving weighted colimits such that $\overline{H}Y\cong H$. Moreover, its restriction $\overline{H}_0:[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{fp}}}\to{\mathcal {V}}^{{\operatorname{op}}}$ is an essentially unique ${\mathcal {V}}$-functor preserving finite weighted colimits such that $\overline{H}_0Y\cong H$. Let $G:{\mathcal {D}}^{{\operatorname{op}}}\to{\mathcal {V}}$ be a finitely presentable weight and $D:{\mathcal {D}}\to[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{fp}}}$ a diagram. Following [@KS], 3.13 again, $G$ belongs to the closure of representable functors under finite colimits in $[{\mathcal {D}}^{{\operatorname{op}}},{\mathcal {V}}]$. 
The ${\mathcal {V}}$-functor $$-\ast D:[{\mathcal {D}}^{{\operatorname{op}}},{\mathcal {V}}]\to[{\mathcal {T}},{\mathcal {V}}]$$ preserves weighted colimits because it has a right ${\mathcal {V}}$-adjoint $$[{\mathcal {T}},{\mathcal {V}}](D-,-):[{\mathcal {T}},{\mathcal {V}}]\to[{\mathcal {D}}^{{\operatorname{op}}},{\mathcal {V}}].$$ Since ${\mathcal {D}}(-,d)\ast D=Dd$, $[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{fp}}}$ has colimits weighted by finitely presentable weights and $\overline{H}_0:[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{fp}}}\to{\mathcal {V}}^{{\operatorname{op}}}$ preserves them. Let $[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{cfp}}}$ be the full subcategory of $[{\mathcal {T}},{\mathcal {V}}]$ consisting of finitely presentable cofibrant objects. Since the class of cofibrant weights is saturated (see [@LR]), $[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{cfp}}}$ is closed under colimits weighted by finitely presentable cofibrant weights. Moreover, the restriction $\overline{H}_1:[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{cfp}}}\to{\mathcal {V}}^{{\operatorname{op}}}$ of $\overline{H}_0$ preserves these colimits. Following [@R] 4.6, $[{\mathcal {T}},{\mathcal {V}}]$ is a finitely tractable model category whose cofibrations are all morphisms and whose weak equivalences are the isomorphisms. The same is true for ${\mathcal {V}}$ and $[{\mathcal {T}},{\mathcal {V}}]$ is a model ${\mathcal {V}}$-category where both $[{\mathcal {T}},{\mathcal {V}}]$ and ${\mathcal {V}}$ are taken with this trivial model structure. Following [@B] 3.18, there exists a left Bousfield ${\mathcal {V}}$-localization $\overline{{\mathcal {M}}}$ of $[{\mathcal {T}},{\mathcal {V}}]$ with respect to the set $\{\varphi_l\mid l\in L\}$ from the proof of \[th2.7\]. 
Then $\overline{H}:[{\mathcal {T}},{\mathcal {V}}]\to{\mathcal {V}}^{{\operatorname{op}}}$ is a left Quillen functor where ${\mathcal {V}}^{{\operatorname{op}}}$ is again taken with the trivial model structure. Since $\overline{H}$ sends each $\varphi_l$ to an isomorphism and ${\operatorname{Id}}$ is a cofibrant replacement functor on $[{\mathcal {T}},{\mathcal {V}}]$, $\overline{H}:\overline{{\mathcal {M}}}\to{\mathcal {V}}^{{\operatorname{op}}}$ is a left Quillen functor (see [@H] 3.3.18 and [@B]). Thus it induces a left adjoint functor $H':{\operatorname{Ho}}\overline{{\mathcal {M}}}\to{\operatorname{Ho}}{\mathcal {V}}^{{\operatorname{op}}}$ (see [@Ho] 1.3.10). Following [@R] 4.6, ${\operatorname{Ho}}\overline{{\mathcal {M}}}$ is the reflective full subcategory of $[{\mathcal {T}},{\mathcal {V}}]$ consisting of objects orthogonal to each $\varphi_l$ and the canonical functor $P:\overline{{\mathcal {M}}}\to{\operatorname{Ho}}\overline{{\mathcal {M}}}$ is the corresponding reflector. Since ${\mathcal {V}}^{{\operatorname{op}}}={\operatorname{Ho}}{\mathcal {V}}^{{\operatorname{op}}}$, both ${\operatorname{Ho}}\overline{{\mathcal {M}}}$ and ${\operatorname{Ho}}{\mathcal {V}}^{{\operatorname{op}}}$ are ${\mathcal {V}}$-categories. Let $J:{\operatorname{Ho}}\overline{{\mathcal {M}}}\to\overline{{\mathcal {M}}}$ denote the inclusion ${\mathcal {V}}$-functor. Since $H'\cong H'PJ\cong \overline{H}J$, $H'$ is a ${\mathcal {V}}$-functor. We have $$H'(V\cdot PA)\cong H'P(V\cdot A)\cong\overline{H}(V\cdot A)\cong V\cdot\overline{H}A\cong V\cdot H'PA.$$ Since $P$ is surjective on objects, $H'$ preserves tensors. Thus $H'$ is a left ${\mathcal {V}}$-adjoint (see [@B] 6.7.6) and, consequently, it preserves weighted colimits. 
Thus the full subcategory $P([{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{cfp}}})^{{\operatorname{op}}}$ of $({\operatorname{Ho}}\overline{{\mathcal {M}}})^{{\operatorname{op}}}$ consisting of $P$-images of objects from $[{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{cfp}}}$ is a conservative free completion of ${\mathcal {H}}$. It follows from the construction of $E:{\mathcal {T}}\to{\mathcal {T}}^\ast$ as $(PY)^{{\operatorname{op}}}$ that morphisms $E_{AB}:{\mathcal {T}}(A,B)\to{\mathcal {T}}^\ast(EA,EB)$ are split epimorphisms (as the composition of isomorphisms $Y^{{\operatorname{op}}}_{AB}$ with morphisms $P^{{\operatorname{op}}}_{YA,YB}$ split by $J^{{\operatorname{op}}}_{YA,YB}$). The Yoneda embedding $\overline{Y}:{\mathcal {T}}\to[{\mathcal {T}}^{{\operatorname{op}}},{\mathcal {V}}]$ preserves existing weighted limits provided that ${\mathcal {H}}$ is normal. Since $\overline{Y}\cong (\overline{Y})^\ast E$, the $E_{AB}$ are monomorphisms. Thus they are isomorphisms, which proves that $E$ is a full embedding. \[re4.3\] *(1) Since I could not find any reference for the existence of a conservative ${\mathcal {V}}$-completion, I gave the proof above written in the language of model categories.* \(2) Let ${\mathcal {V}}$ be a finitely combinatorial monoidal model category having all objects cofibrant and equipped with a fibrant approximation ${\mathcal {V}}$-functor $R:{\mathcal {V}}\to{\mathcal {V}}$ preserving finite weighted limits. We need this assumption because ${\mathcal {T}}$ being locally cofibrant does not imply that ${\mathcal {T}}^\ast$ is locally cofibrant. Consider the commutative square $$\xymatrix@C=4pc@R=3pc{ {\operatorname{Mod}}({\mathcal {H}}^\ast) \ar[r]^{U_{{\mathcal {T}}^\ast}} \ar[d]_{U_1} & {\mathcal {M}}_{{\mathcal {H}}^\ast}\ar [d]^{U_2}\\ {\operatorname{Mod}}({\mathcal {H}}) \ar[r]_{U_{\mathcal {T}}}& {\mathcal {M}}_{\mathcal {H}}}$$ where both $U_1$ and $U_2$ are given by the precomposition with $E$. 
The functor $U_1$ is an equivalence of categories. If $B:{\mathcal {T}}^\ast\to{\mathcal {V}}$ is a homotopy model of ${\mathcal {H}}^\ast$ then $U_2B=BE$ is a homotopy model of ${\mathcal {H}}$. But we can prove more. \[le4.4\] $U_2$ is a right Quillen functor. We will prove that the left adjoint $F_2$ of $U_2$ is left Quillen. Consider a morphism $\varphi_l:G_l\ast{\mathcal {T}}(D_l,-)\to{\mathcal {T}}(X_l,-)$ from the proof of \[th2.7\]. Since $F_2\varphi_l:G_l\ast{\mathcal {T}}^\ast(ED_l,-)\to{\mathcal {T}}^\ast(EX_l,-)$ is the corresponding morphism in ${\mathcal {T}}^\ast$, it is a weak equivalence in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$. Let $Q$ be a cofibrant replacement functor on ${\mathcal {M}}_{{\mathcal {H}}}$. We have a commutative square $$\xymatrix@C=4pc@R=3pc{ F_2QA \ar[r]^{F_2Q\varphi} \ar[d]_{F_2\gamma_A} & F_2QB\ar [d]^{F_2\gamma_B}\\ F_2A \ar[r]_{F_2\varphi}& F_2B }$$ where $\varphi:A\to B$ denotes $\varphi_l$. The morphisms $\gamma_A$ and $\gamma_B$ are trivial fibrations in $[{\mathcal {T}},{\mathcal {V}}]$ and $F_2:[{\mathcal {T}},{\mathcal {V}}]\to [{\mathcal {T}}^\ast,{\mathcal {V}}]$ is left Quillen. Since $\varphi$ is a morphism between cofibrant objects, $F_2\gamma_A$ and $F_2\gamma_B$ are weak equivalences in $[{\mathcal {T}}^\ast,{\mathcal {V}}]$ and thus in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$. Hence, by two-out-of-three, $F_2Q\varphi$ is a weak equivalence. Following [@H] 3.3.18, $F_2:{\mathcal {M}}_{\mathcal {H}}\to{\mathcal {M}}_{{\mathcal {H}}^\ast}$ is left Quillen. \[re4.5\] *The functor $U_2$ has a right ${\mathcal {V}}$-adjoint $S$ as well. 
If ${\mathcal {T}}$ is normal, this right adjoint can be easily calculated.* Following the proof of \[le4.2\], ${\mathcal {T}}^\ast$ is a coreflective full subcategory of $\tilde{{\mathcal {T}}}=([{\mathcal {T}},{\mathcal {V}}]_{{\operatorname{cfp}}})^{{\operatorname{op}}}$ with the inclusion $\overline{J}=J^{{\operatorname{op}}}$ and its right ${\mathcal {V}}$-adjoint $\overline{P}=P^{{\operatorname{op}}}$. We use the notation of the proof of \[le4.2\], except that $J$ and $P$ are the domain restrictions of those from \[le4.2\]. The counit of this adjunction will be denoted $\varepsilon:\overline{J}\overline{P}\to{\operatorname{Id}}$. Then $E:{\mathcal {T}}\to{\mathcal {T}}^\ast$ is the composition $\overline{P}\overline{Y}$ where $\overline{Y}:{\mathcal {T}}\to\tilde{{\mathcal {T}}}$ is the dual of the codomain restriction of the Yoneda embedding ${\mathcal {T}}^{{\operatorname{op}}}\to[{\mathcal {T}},{\mathcal {V}}]$. Since the values of $\overline{Y}$ belong to ${\mathcal {T}}^\ast$, we have $\overline{J}\overline{P}\overline{Y}\cong\overline{Y}$. The ${\mathcal {V}}$-category $[{\mathcal {T}},{\mathcal {V}}]$ is isomorphic to a full reflective subcategory of $[\tilde{{\mathcal {T}}},{\mathcal {V}}]$ consisting of ${\mathcal {V}}$-functors preserving weighted limits. Thus the restriction ${\mathcal {V}}$-functor $[\overline{Y},{\mathcal {V}}]$ has a right ${\mathcal {V}}$-adjoint sending $A:{\mathcal {T}}\to{\mathcal {V}}$ to its weighted limit preserving extension $\tilde{A}:\tilde{{\mathcal {T}}}\to{\mathcal {V}}$. The ${\mathcal {V}}$-functor $[\overline{P},{\mathcal {V}}]:[{\mathcal {T}}^\ast,{\mathcal {V}}]\to[\tilde{{\mathcal {T}}},{\mathcal {V}}]$ has a right ${\mathcal {V}}$-adjoint $[\overline{J},{\mathcal {V}}]$. Thus $U_2=[E,{\mathcal {V}}]=[\overline{P}\overline{Y},{\mathcal {V}}]$ has a right ${\mathcal {V}}$-adjoint $\tilde{-}\overline{J}$ sending $A$ to the composition $\tilde{A}\overline{J}$. \[le4.6\] $U_2$ preserves cofibrations. 
The claim is equivalent to the preservation of trivial fibrations by the right ${\mathcal {V}}$-adjoint $S$. Consider a trivial fibration $\alpha:A\to B$ and an object $\{G,D\}$ in $\tilde{{\mathcal {T}}}$ where $G$ is a finitely presentable cofibrant weight. Since $\tilde{\alpha}_{\{G,D\}}=\{G,\alpha D\}$, we get (analogously as in [@H] 18.4.2) that $\tilde{\alpha}$ is a trivial fibration. Thus $S\alpha=\tilde{\alpha}\overline{J}$ is a trivial fibration. Rigidification ============== \[th5.1\] Let ${\mathcal {V}}$ be a finitely combinatorial monoidal model category having all objects cofibrant and equipped with a fibrant approximation ${\mathcal {V}}$-functor $R:{\mathcal {V}}\to{\mathcal {V}}$ preserving finite weighted limits. Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a normal finite weighted homotopy limit sketch. Then the following conditions are equivalent: (i) $(U_{\mathcal {T}},F_{\mathcal {T}})$ is a Quillen equivalence, (ii) each homotopy model of ${\mathcal {H}}$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to a model of ${\mathcal {H}}$, (iii) $(U_2,F_2)$ is a Quillen equivalence, and (iv) each fibrant homotopy model $A$ of ${\mathcal {H}}$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to $U_2B$ for a fibrant homotopy model $B$ of ${\mathcal {H}}^\ast$. Since the square from \[re4.3\] (2) consists of right Quillen functors, $U_1$ is an equivalence and $U_{{\mathcal {T}}^\ast}$ is a Quillen equivalence (see \[th3.4\]), [@Ho] 1.3.15 implies that $(i)\Leftrightarrow (iii)$. $(i)\Rightarrow (ii)$: Let $A$ be a homotopy model of ${\mathcal {H}}$. Since $R$ preserves finite weighted limits, $RA$ is a fibrant homotopy model of ${\mathcal {H}}$. Moreover, the natural transformation $\rho_A:A\to RA$ is a pointwise trivial cofibration and thus a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Now, we take a cofibrant replacement $\gamma_{RA}:QRA\to RA$ of $RA$ in $[{\mathcal {T}},{\mathcal {V}}]$. 
Consider a commutative square $$\xymatrix@C=4pc@R=3pc{ QRA\{G_l,D_l\} \ar[r]^{t^{QRA}_l} \ar[d]_{(\gamma_{RA})_{\{G_l,D_l\}}} & \{G_l,QRAD_l\}\ar [d]^{\{G_l,\gamma_{RA}D_l\}}\\ RA\{G_l,D_l\} \ar[r]_{t^{RA}_l}& \{G_l,RAD_l\} }$$ Since $\gamma_{RA}D_l$ is a weak equivalence between fibrant objects in $[{\mathcal {T}},{\mathcal {V}}]$, the right vertical morphism $\{G_l,\gamma_{RA}D_l\}$ is a weak equivalence (analogously as in [@H] 18.4.4). Hence $t^{QRA}_l$ is a weak equivalence and thus $QRA$ is a homotopy model of ${\mathcal {H}}$. Since $\gamma_{RA}$ is a trivial fibration between fibrant objects in ${\mathcal {M}}_{\mathcal {H}}$, it is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Thus $A$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to the homotopy model $QRA$ of ${\mathcal {H}}$ which is fibrant and cofibrant in $[{\mathcal {T}},{\mathcal {V}}]$. Therefore, it suffices to prove $(ii)$ for any homotopy model $A$ of ${\mathcal {H}}$ which is fibrant and cofibrant in $[{\mathcal {T}},{\mathcal {V}}]$. Following [@Ho] 1.3.13, we have a weak equivalence $$\overline{\eta}_A: A \xrightarrow{\eta_A} U_{{\mathcal {T}}}F_{{\mathcal {T}}}A \xrightarrow{U_{{\mathcal {T}}}\rho_{F_{{\mathcal {T}}}A}} U_{{\mathcal {T}}}RF_{{\mathcal {T}}}A.$$ Since $A$ is fibrant in ${\mathcal {M}}_{\mathcal {H}}$ (see \[th2.7\]), $\overline{\eta}_A$ is a weak equivalence between fibrant objects in ${\mathcal {M}}_{\mathcal {H}}$. Thus $\overline{\eta}_A$ is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$ (see [@H] 3.2.13). Since $RF_{{\mathcal {T}}}A$ is a model of ${\mathcal {H}}$, $(ii)$ is proved. $(iv)\Rightarrow(iii)$: At first, we will show that $U_2$ reflects weak equivalences between fibrant objects. Let $f:A\to B$ be a morphism between fibrant objects in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$ such that $U_2f$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. 
Since $U_2f$ is a weak equivalence between fibrant objects, it is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$. Consider an object $\{G,ED\}\in{\mathcal {T}}^\ast$ which does not belong to the image of $E$. Then $fED$ is a pointwise weak equivalence between fibrant objects. Since $G$ is cofibrant, $\{G,fED\}$ is a weak equivalence (cf. [@H] 18.4.4). Consequently, $f$ is a weak equivalence. Since $E$ is a full embedding (see \[le4.2\]), the adjunction units $\eta_X:X\to U_2F_2X$ are isomorphisms. Following [@Ho] 1.3.16, $(U_2,F_2)$ is a Quillen equivalence if and only if $U_2\rho_{F_2X}$ is a weak equivalence for each cofibrant object $X$. We first prove that $U_2\varphi_l$ is a weak equivalence for each $\varphi_l$ from ${\mathcal {H}}^\ast$. We know that $[{\mathcal {T}}^\ast,{\mathcal {V}}](\varphi_l,Z)$ is a weak equivalence in ${\mathcal {V}}$ for each fibrant object $Z$ of ${\mathcal {M}}_{{\mathcal {H}}^\ast}$. Let $Z_1$ be a fibrant object in ${\mathcal {M}}_{\mathcal {H}}$. Assuming $(iv)$, there is a fibrant object $Z$ in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$ such that $QZ_1$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to $U_2Z$. Consider the cofibrant replacement $\gamma^\ast_Z:Q^\ast Z\to Z$ of $Z$. Since $\gamma^\ast_Z$ is a trivial fibration in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$, it is a weak equivalence in $[{\mathcal {T}},{\mathcal {V}}]$, and $Q^\ast Z$ is a fibrant homotopy model of ${\mathcal {H}}^\ast$ because it is a fibrant object in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$. Thus $QZ_1$ is weakly equivalent to $U_2Q^\ast Z$ in $[{\mathcal {T}},{\mathcal {V}}]$. Since $U_2$ preserves cofibrations (see \[le4.6\]), both $QZ_1$ and $U_2Q^\ast Z$ are fibrant and cofibrant in $[{\mathcal {T}},{\mathcal {V}}]$. Thus they are homotopy equivalent in $[{\mathcal {T}},{\mathcal {V}}]$, so there exists a homotopy equivalence $h:U_2Q^\ast Z\to QZ_1$.
The composition $g=\gamma_{Z_1}h:U_2Q^\ast Z\to Z_1$ is a weak equivalence between fibrant objects in $[{\mathcal {T}},{\mathcal {V}}]$. Consider the commutative square $$\xymatrix@C=7pc@R=4pc{ [{\mathcal {T}},{\mathcal {V}}]({\operatorname{cod}}U_2\varphi_l,U_2Q^\ast Z) \ar[r]^{[{\mathcal {T}},{\mathcal {V}}](U_2\varphi_l,U_2Q^\ast Z)} \ar[d]_{[{\mathcal {T}},{\mathcal {V}}]({\operatorname{cod}}U_2\varphi_l,g)} & [{\mathcal {T}},{\mathcal {V}}]({\operatorname{dom}}U_2\varphi_l,U_2Q^\ast Z) \ar [d]^{[{\mathcal {T}},{\mathcal {V}}]({\operatorname{dom}}U_2\varphi_l,g)}\\ [{\mathcal {T}},{\mathcal {V}}]({\operatorname{cod}}U_2\varphi_l,Z_1) \ar[r]_{[{\mathcal {T}},{\mathcal {V}}](U_2\varphi_l,Z_1)}& [{\mathcal {T}},{\mathcal {V}}]({\operatorname{dom}}U_2\varphi_l,Z_1) }$$ Since $\varphi_l$ has cofibrant domain and codomain and $U_2$ preserves cofibrations, the vertical morphisms are weak equivalences because $g$ is a weak equivalence between fibrant objects. For the same reason, the upper horizontal morphism is a weak equivalence. Thus $[{\mathcal {T}},{\mathcal {V}}](U_2\varphi_l,Z_1)$ is a weak equivalence. Since $U_2\varphi_l$ has cofibrant domain and codomain, it is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Take the (cofibration, trivial fibration) factorization $$\varphi_l:A_l \xrightarrow{\quad \varphi^1_l\quad} C_l \xrightarrow{\quad \varphi^2_l \quad} B_l.$$ Since $U_2$ preserves projective weak equivalences and cofibrations and $U_2\varphi_l$ is a weak equivalence, $U_2\varphi^1_l$ is a trivial cofibration. Following the properties of enriched left Bousfield localizations (see [@H] and [@B]), a fibrant object $Z$ in $[{\mathcal {T}}^\ast,{\mathcal {V}}]$ is fibrant in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$ if and only if it is injective to horns $\varphi^1_l\boxdot i$ where $i:V\to V'$ is a generating cofibration in ${\mathcal {V}}$.
Recall that $\varphi^1_l\boxdot i:K\to V'\cdot C_l$ is the pushout corner morphism for the pushout $$\xymatrix@C=4pc@R=3pc{ V\cdot A_l \ar[r]^{V\cdot\varphi^1_l} \ar[d]_{i\cdot A_l} & V\cdot C_l\ar [d]^{}\\ V'\cdot A_l \ar[r]_{}& K }$$ Thus $\rho_{F_2X}$ is cofibrantly generated by these horns and trivial cofibrations in $[{\mathcal {T}}^\ast,{\mathcal {V}}]$. Since horns are trivial cofibrations in ${\mathcal {M}}_{{\mathcal {H}}^\ast}$ and $U_2$ preserves colimits and cofibrations, $U_2(\varphi^1_l\boxdot i)$ is a trivial cofibration. Thus $U_2\rho_{F_2X}$ is a weak equivalence. $(ii)\Rightarrow(iv)$: Let $A$ be a fibrant homotopy model of ${\mathcal {H}}$. Following $(ii)$, $A$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to $U_{\mathcal {T}}B_1=U_{\mathcal {T}}U_1B_2$. As at the beginning of the proof of $(i)\Rightarrow (ii)$, $B_2$ is weakly equivalent in $[{\mathcal {T}}^\ast,{\mathcal {V}}]$ to the fibrant ${\mathcal {H}}^\ast$-model $R^\ast B_2$. Thus $A$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to $U_2U_{{\mathcal {T}}^\ast}R^\ast B_2$ and $U_{{\mathcal {T}}^\ast}R^\ast B_2$ is a fibrant homotopy model of ${\mathcal {H}}^\ast$. \[re5.2\] *(1) In the proof above (the implication $(i)\Rightarrow (ii)$), we showed that any fibrant ${\mathcal {V}}$-functor ${\mathcal {T}}\to{\mathcal {V}}$ weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to a fibrant homotopy model of ${\mathcal {H}}$ is a homotopy model of ${\mathcal {H}}$. For homotopy models of ${\mathcal {H}}$ to be closed under weak equivalences in $[{\mathcal {T}},{\mathcal {V}}]$, we need $\{G_l,-\}$ to preserve weak equivalences for each $l\in L$.* \(2) We also showed that, assuming (iv), $U_2\varphi_l$ is a weak equivalence in ${\mathcal {M}}_{\mathcal {H}}$. Thus $[{\mathcal {T}},{\mathcal {V}}](U_2\varphi_l,A)$ is a weak equivalence for each fibrant homotopy model $A$ of ${\mathcal {H}}$.
Hence $[{\mathcal {T}}^\ast,{\mathcal {V}}](\varphi_l,S(A))$ is a weak equivalence. Therefore $S(A)$ is a fibrant homotopy model of ${\mathcal {H}}^\ast$. Since $U_2S(A)=A$, (iv) is equivalent to \(v) $S$ preserves fibrant objects. \(3) Without any change of the proof, \[th5.1\] is valid for any class $\Phi$ of finitely presentable cofibrant weights from \[th3.3\]. In particular, we can take $\Phi^\ast$ from \[re3.5\]. In this case, we get a characterization of when any homotopy algebra of a normal finite product sketch is equivalent to a strict algebra. The following example shows that this is not always true and indicates that this fact is a kind of coherence theorem. \[ex5.3\] *Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a normal finite product sketch for monoids. This means that ${\mathcal {T}}$ contains objects $X_0,X_1,X_2,X_3$ where $X_0$ is terminal, $X_2=X_1\times X_1$ and $X_3=X_1\times X_1\times X_1$. Moreover, ${\mathcal {T}}$ contains morphisms $e:X_0\to X_1$ and $m:X_2\to X_1$ playing the role of unit and multiplication, subject to the unit axioms and the associativity axiom. Models of ${\mathcal {H}}$ in ${\operatorname{\bf Cat}}$ are precisely strict monoidal categories. Assume that each homotopy model of ${\mathcal {H}}$ is weakly equivalent in $[{\mathcal {T}},{\operatorname{\bf Cat}}]$ to a model of ${\mathcal {H}}$. Since equivalences of categories are closed under finite products, \[re5.2\] (1) implies that homotopy models are precisely functors ${\mathcal {T}}\to{\operatorname{\bf Cat}}$ equivalent to models of ${\mathcal {H}}$. This means that any homotopy model is equivalent by a strong monoidal functor to a strict monoidal category. Hence homotopy models of ${\mathcal {H}}$ are precisely monoidal categories (see [@ML] XI.1-3.).
But this is not possible because the pentagon axiom does not follow from the associativity up to a natural isomorphism.* On the other hand, by adding an object $X_4= X_1^4$ to ${\mathcal {T}}$, each homotopy model of the enlarged sketch is weakly equivalent to a model of ${\mathcal {H}}$. Of course, this example leans on [@Bo] as mentioned in \[re3.5\]. \[ex5.4\] *(1) Recall that a *Segal category* is a bisimplicial set $A:\boldsymbol{\Delta}^{{\operatorname{op}}}\to{\operatorname{\bf SSet}}$ such that $A_0$ is a discrete simplicial set and the Segal maps $$s_k:A_k\to A_1\times_{A_0}\dots\times_{A_0} A_1$$ (on the right side we have a limit of $k$ copies of $A_1$ over $A_0$) are weak equivalences for each $k\geq 2$ (see [@HS]). Thus Segal categories are precisely homotopy models $A$ of the finite limit theory of categories in ${\operatorname{\bf SSet}}$ such that $A_0$ is discrete. When we fix the set $A_0$, this finite limit theory turns into an algebraic theory and thus each Segal category is equivalent to a simplicial category, i.e., to a category enriched over ${\operatorname{\bf SSet}}$ (see [@Be1]).* We can also sketch discreteness by forcing $A_0$ to be the cotensor $\Delta_1\pitchfork A_0$. Models of the resulting finite normal weighted limit sketch ${\mathcal {H}}$ are precisely simplicial categories. This sketch is not a weighted homotopy limit sketch because we use limits (multiple pullbacks) and not homotopy limits. Let ${\mathcal {H}}'$ be a weighted homotopy limit sketch associated to ${\mathcal {H}}$. This means that each weight $G_l$ is substituted by its cofibrant replacement $G'_l$. Following [@H] 18.1.8, given a diagram $D:{\mathcal {D}}\to\boldsymbol{\Delta}^{{\operatorname{op}}}$ this replacement is $B({\mathcal {D}}\downarrow -)$. For a multiple pullback diagram, this weight is finitely presentable. 
Since the weight for the cotensor $\Delta_1\pitchfork A_0$ is $\Delta_1$ (as the constant functor from a single morphism category to ${\operatorname{\bf SSet}}$), it is cofibrant. Homotopy models $A$ of ${\mathcal {H}}'$ can be called weak Segal categories because discreteness of $A_0$ is replaced by $A_0\to\Delta_1\pitchfork A_0$ being a weak equivalence. This means that $A_0$ is homotopy discrete, i.e., a coproduct of contractible simplicial sets. Now, let $A$ be a fibrant Segal category. Since $A_1$ is fibrant and $A_0$ is discrete, the morphisms $A_1\to A_0$ are fibrations. Since homotopy pullbacks are isomorphic to pullbacks in this case, $A$ is a homotopy model of ${\mathcal {H}}'$. Since ${\operatorname{Ex}}^\infty$ preserves discrete simplicial sets (see [@GJ] 4.2), any Segal category $A$ is weakly equivalent to a fibrant Segal category ${\operatorname{Ex}}^\infty A$. Hence $A$ is weakly equivalent to a simplicial category. But I do not know whether each homotopy model of ${\mathcal {H}}'$ is weakly equivalent to a Segal category. For sketching simplicial categories, we need only the Segal maps $s_k$ for $k\leq 3$. Since such a truncation does not seem to be possible for Segal categories, we get another example of a normal finite product sketch without a rigidification. \(2) Similarly, we can treat Tamsamani 2-categories, which correspond to Segal categories when we replace ${\operatorname{\bf SSet}}$ by ${\operatorname{\bf Cat}}$. In the same way as in (1), [@Bo] implies that any Tamsamani 2-category is equivalent to a 2-category. \[def5.5\] [*Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a weighted homotopy limit sketch. We say that a ${\mathcal {V}}$-functor $A:{\mathcal {T}}\to{\mathcal {V}}$ is an *easy homotopy model* of ${\mathcal {H}}$ if the morphisms $$t_l^A:AX_l\to\{G_l,AD_l\}$$ are trivial fibrations.* ]{} We are following the terminology of [@Si]. \[le5.6\] $S$ preserves easy homotopy models. Let $A$ be an easy homotopy model of ${\mathcal {H}}$.
We have to show that $$t^{S(A)}:S(A)\{G,D\}\to\{G,S(A)D\}$$ is a trivial fibration for each $G:{\mathcal {D}}\to{\mathcal {V}}$ and $D:{\mathcal {D}}\to{\mathcal {T}}^\ast$. Following \[re4.5\], $$t^{S(A)}:\tilde{A}\overline{J}\{G,D\}\to\{G,\tilde{A}\overline{J}D\}\cong\tilde{A}\{G,\overline{J}D\}.$$ Since the weighted limit $\{G,D\}$ in the coreflective full subcategory ${\mathcal {T}}^\ast$ of $\tilde{{\mathcal {T}}}$ is calculated as the coreflection of the weighted limit $\{G,D\}$ in $\tilde{{\mathcal {T}}}$, $$t^{S(A)}=\tilde{A}\varepsilon_{\{G,\overline{J}D\}}$$ where $\varepsilon:\overline{J}\overline{P}\to{\operatorname{Id}}$ is the counit. Since $\varepsilon_X$ is fibrantly generated by $\varphi_l$ for $l\in L$ and $\tilde{A}$ preserves all limits, $t^{S(A)}=\tilde{A}\varepsilon_{\{G,\overline{J}D\}}$ is fibrantly generated by $\tilde{A}\varphi_l$, $l\in L$. Since $\tilde{A}\varphi_l=t^A_l$, $t^{S(A)}$ is a trivial fibration. Thus $S(A)$ is an easy homotopy model of ${\mathcal {H}}^\ast$. \[cor5.7\] Let ${\mathcal {V}}$ be a finitely combinatorial monoidal model category having all objects cofibrant and equipped with a fibrant approximation ${\mathcal {V}}$-functor $R:{\mathcal {V}}\to{\mathcal {V}}$ preserving finite weighted limits. Let ${\mathcal {H}}=({\mathcal {T}},L)$ be a normal finite weighted homotopy limit sketch. Then each easy homotopy model of ${\mathcal {H}}$ is weakly equivalent in $[{\mathcal {T}},{\mathcal {V}}]$ to a model of ${\mathcal {H}}$. Let $A$ be an easy homotopy model of ${\mathcal {H}}$. Following \[le5.6\], $S(A)$ is an easy homotopy model, and hence a homotopy model, of ${\mathcal {H}}^\ast$. Following \[th3.4\] and \[th5.1\], $S(A)$ is weakly equivalent to a model $B$ of ${\mathcal {H}}^\ast$. Thus $A$ is weakly equivalent to the model $U_1B$ of ${\mathcal {H}}$.
\[re5.8\] [ *A consequence of \[cor5.7\] is that a normal finite weighted homotopy limit sketch ${\mathcal {H}}$ admits a rigidification if and only if it admits an *easyfication*, i.e., if and only if each homotopy model of ${\mathcal {H}}$ is weakly equivalent to an easy homotopy model of ${\mathcal {H}}$. The same is true in the more general context of \[th3.3\].* ]{}

J. Adámek and J. Rosický, [*Locally Presentable and Accessible Categories*]{}, Cambridge University Press 1994.

B. Badzioch, [*Algebraic theories in homotopy theory*]{}, Ann. Math. 155 (2002), 895-913.

C. Barwick, [*On left and right model categories and left and right Bousfield localizations*]{}, Homology, Homotopy Appl. 1 (2010), 1-76.

J. E. Bergner, [*Rigidification of algebras over multi-sorted theories*]{}, Alg. Geom. Topology 6 (2005), 1925-1955.

J. E. Bergner, [*Simplicial monoids and Segal categories*]{}, Contemp. Math. 431 (2007), 59-83.

J. Bourke, [*A colimit decomposition for homotopy algebras in CAT*]{}, arXiv:1206.1203.

B. I. Dundas, O. Röndigs and P. A. Østvaer, [*Enriched functors and stable homotopy theory*]{}, Doc. Math. 8 (2003), 409-488.

P. G. Goerss and J. F. Jardine, [*Simplicial Homotopy Theory*]{}, Birkhäuser 1999.

P. S. Hirschhorn, [*Model Categories and Their Localizations*]{}, Amer. Math. Soc. 2003.

A. Hirschowitz and C. Simpson, [*Descente pour les $n$-champs*]{}, arXiv:math.AG/9807049.

M. Hovey, [*Model Categories*]{}, Amer. Math. Soc. 1999.

P. T. Johnstone, [*Sketches of an Elephant: A Topos Theory Compendium*]{}, Vol. 1, Oxford Univ. Press 2002.

G. M. Kelly, [*Structures defined by finite limits in the enriched context, I*]{}, Cah. Top. Géom. Diff. XXIII (1982), 3-42.

G. M. Kelly, [*Basic Concepts of Enriched Category Theory*]{}, Cambridge Univ. Press 1982.

G. M. Kelly and V. Schmitt, [*Notes on enriched categories with colimits of some class*]{}, Th. Appl. Categ. 14 (2005), 399-423.

S. Lack and J. Rosický, [*Enriched weakness*]{}, J. Pure Appl. Alg. 216 (2012), 1807-1822.

S. Lack and J. Rosický, [*Homotopy locally presentable enriched categories*]{}, preprint 2012.

J. Lurie, [*Higher Topos Theory*]{}, Princeton Univ. Press 2009.

S. Mac Lane, [*Categories for the Working Mathematician*]{}, Springer 1998.

M. Makkai and R. Paré, [*Accessible Categories: The Foundations of Categorical Model Theory*]{}, AMS 1989.

M. Makkai, J. Rosický and L. Vokřínek, [*On a fat small object argument*]{}, arXiv:1304.6974.

J. Power, [*Enriched Lawvere theories*]{}, Th. Appl. Cat. 6 (1999), 83-93.

J. Rosický, [*On combinatorial model categories*]{}, Appl. Cat. Struct. 17 (2009), 303-316.

J. Rosický and W. Tholen, [*Left-determined model categories and universal homotopy theories*]{}, Trans. AMS 355 (2003), 3611-3623.

S. Schwede, [*Stable homotopical algebra and $\Gamma$-spaces*]{}, Math. Proc. Cambr. Phil. Soc. 126 (1999), 329-356.

C. Simpson, [*A closed model structure for $n$-categories, internal $Hom$, $n$-stacks and generalized Seifert-Van Kampen theorem*]{}, arXiv:alg-geom/9704006.

M. Shulman, [*Homotopy limits and colimits and enriched homotopy theory*]{}, arXiv:math.AT/0610194.

L. Vokřínek, [*Homotopy weighted colimits*]{}, arXiv:1201.2970.

[^1]: $^*$ Supported by the Grant Agency of the Czech Republic under grant 201/11/0528.
--- abstract: 'Center-based clustering is a fundamental primitive for data analysis and becomes very challenging for large datasets. In this paper, we focus on the popular $k$-center variant which, given a set $S$ of points from some metric space and a parameter $k<|S|$, requires identifying a subset of $k$ centers in $S$ minimizing the maximum distance of any point of $S$ from its closest center. A more general formulation, introduced to deal with noisy datasets, features a further parameter $z$ and allows up to $z$ points of $S$ (outliers) to be disregarded when computing the maximum distance from the centers. We present coreset-based 2-round MapReduce algorithms for the above two formulations of the problem, and a 1-pass Streaming algorithm for the case with outliers. For any fixed ${\varepsilon}>0$, the algorithms yield solutions whose approximation ratios are a mere additive term ${\varepsilon}$ away from those achievable by the best known polynomial-time sequential algorithms, a result that substantially improves upon the state of the art. Our algorithms are rather simple and adapt to the intrinsic complexity of the dataset, captured by the doubling dimension $D$ of the metric space. Specifically, our analysis shows that the algorithms become very space-efficient for the important case of small (constant) $D$. These theoretical results are complemented with a set of experiments on real-world and synthetic datasets of up to over a billion points, which show that our algorithms yield better-quality solutions than the state of the art while featuring excellent scalability, and that they also lend themselves to sequential implementations much faster than existing ones.'
author: - Matteo Ceccarello - Andrea Pietracaprina - Geppino Pucci bibliography: - 'references.bib' title: '[Solving k-center Clustering (with Outliers) in MapReduce and Streaming, almost as Accurately as Sequentially.]{}' ---

Introduction {#sec-introduction}
============

Center-based clustering is a fundamental unsupervised learning primitive for data management, with applications in a variety of domains such as database search, bioinformatics, pattern recognition, networking, facility location, and many more [@HennigMMR15]. Its general goal is to partition a set of data items into groups according to a notion of similarity, captured by closeness to suitably chosen group representatives, called centers. There is an ample and well-established literature on sequential strategies for different instantiations of center-based clustering [@AwasthiB15]. However, the explosive growth of the data that needs to be processed often rules out the use of these strategies, which are efficient on small-sized datasets but impractical on large ones. Therefore, it is of paramount importance to devise efficient clustering strategies tailored to the typical computational frameworks for big data processing, such as MapReduce and Streaming [@LeskovecRU14]. In this paper, we focus on the *$k$-center* problem, formally defined as follows. Given a set $S$ of points in a metric space and a positive integer $k < |S|$, find a subset $T \subseteq S$ of $k$ points, called *centers*, so that the maximum distance from any point of $S$ to its closest center in $T$ is minimized. (Note that the association of each point to its closest center naturally defines a clustering of $S$.)
Along with $k$-median and $k$-means, which require minimizing, respectively, the sum of all distances and of all squared distances to the closest centers, $k$-center is a very popular instantiation of center-based clustering which has recently proved to be a pivotal primitive for data and graph analytics [@IndykMMM14; @AghamolaeiFZ15; @CeccarelloPPU15; @CeccarelloPPU16; @CeccarelloPPU17; @CeccarelloFPPV17], and whose efficient solution in the realm of big data has attracted a lot of attention in the literature [@Charikar2001; @McCutchen2008; @EneIM11; @MalkomesKCWM15]. The $k$-center problem is NP-hard [@Gonzalez85], therefore one has to settle for approximate solutions. Also, since its objective function involves a maximum, the solution is at risk of being severely influenced by a few “distant” points, called *outliers*. In fact, the presence of outliers is inherent in many datasets, since these points are often artifacts of data collection, or represent noisy measurements, or simply erroneous information. To cope with this problem, $k$-center admits a formulation that takes outliers into account [@Charikar2001]: when computing the objective function, up to $z$ points are allowed to be discarded, where $z$ is a user-defined input parameter. A natural approach to computing approximate solutions to large instances of combinatorial optimization problems entails efficiently extracting a much smaller subset of the input, dubbed *coreset*, which contains a good approximation to the global optimum, and then applying a standard sequential approximation algorithm to such a coreset. The benefits of this approach are evident when the coreset construction is substantially more efficient than running the (possibly very expensive) sequential approximation algorithm directly on the whole input, so that significant performance improvements are attained by confining the execution of such an algorithm to a small subset of the data.
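The outlier-aware objective can be made concrete with a small sketch (our own illustration, not code from the paper; the function name is hypothetical and Euclidean distance is assumed): the cost of a set of centers is the largest distance to a closest center that survives after discarding the $z$ farthest points.

```python
import math

def kcenter_cost_with_outliers(points, centers, z):
    # Distance of each point to its closest center, in nondecreasing order.
    dists = sorted(min(math.dist(p, c) for c in centers) for p in points)
    # Drop the z largest distances (the outliers) and take the max of the rest.
    return dists[-(z + 1)] if z < len(dists) else 0.0

pts = [(0, 0), (1, 0), (0, 1), (100, 100)]
cost_no_outliers = kcenter_cost_with_outliers(pts, [(0, 0)], 0)
cost_one_outlier = kcenter_cost_with_outliers(pts, [(0, 0)], 1)
```

With $z=1$, the far-away point $(100,100)$ no longer dominates the maximum, illustrating how the relaxed objective shields the solution from a few distant points.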
Using coresets much smaller than the input, the authors of [@MalkomesKCWM15] present MapReduce algorithms for the $k$-center problem with and without outliers, whose (constant) approximation factors are, however, substantially larger than those of their best sequential counterparts. In this work, we further leverage the coreset approach and unveil interesting tradeoffs between the coreset size and the approximation quality, showing that better approximation is achievable through larger coresets. The obtainable tradeoffs are regulated by the doubling dimension of the underlying metric space and allow us to obtain improved MapReduce and Streaming algorithms for the two formulations of the $k$-center problem, whose approximation ratios can be made arbitrarily close to the one featured by the best sequential algorithms. Also, as a by-product, we obtain a sequential algorithm for the case with outliers which is considerably faster than existing ones.

Related work {#subsec:relwork}
------------

Back in the 80’s, Gonzalez [@Gonzalez85] developed a very popular 2-approximation sequential algorithm for the $k$-center problem running in ${O\left(k|S|\right)}$ time, which is referred to as [<span style="font-variant:small-caps;">gmm</span>]{} in the recent literature. In the same paper, the author showed that it is impossible to achieve an approximation factor $2-{\varepsilon}$, for fixed ${\varepsilon}>0$, in general metric spaces, unless $P=NP$. To deal with noise in the dataset, Charikar et al. [@Charikar2001] introduced the $k$-center problem with $z$ outliers, where the clustering is allowed to ignore $z$ points of the input. For this problem, they gave a 3-approximation algorithm which runs in ${O\left(k|S|^2 \log |S|\right)}$ time. Furthermore, they proved that, for this problem, it is impossible to achieve an approximation factor $3-{\varepsilon}$, for fixed ${\varepsilon}>0$, in general metric spaces, unless $P=NP$.
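Gonzalez's greedy strategy can be sketched in a few lines of Python (our own sketch, not code from the paper; `math.dist` is the Euclidean distance, so points in the plane are assumed for illustration). Each iteration adds the point currently farthest from the selected centers; the final maximum distance is the radius of the resulting clustering, which is within a factor 2 of the optimal $k$-center cost.

```python
import math

def gmm(points, k):
    # Farthest-first traversal: start from an arbitrary point, then
    # repeatedly add the point farthest from the centers chosen so far.
    centers = [points[0]]
    # dist[i] = distance from points[i] to its closest chosen center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers, max(dist)  # centers and the clustering radius

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 8)]
centers, radius = gmm(pts, 3)
```

Maintaining the closest-center distances incrementally is what yields the ${O\left(k|S|\right)}$ running time mentioned above.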
With the advent of big data, a lot of attention has been devoted to the MapReduce model of computation, where a set of processors with limited-size local memories process data in a sequence of parallel rounds [@DeanG08; @PietracaprinaPRSU12; @LeskovecRU14]. The $k$-center problem under this model was first studied by Ene et al. [@EneIM11], who provided a 10-approximation randomized algorithm. This result was subsequently improved in [@MalkomesKCWM15] with a deterministic 4-approximation algorithm requiring an ${O\left(\sqrt{|S|k}\right)}$-size local memory. As for the $k$-center problem with $z$ outliers, a deterministic $13$-approximation MapReduce algorithm was presented in [@MalkomesKCWM15], requiring an ${O\left(\sqrt{|S|(k+z)}\right)}$-size local memory. We remark that randomized multi-round MapReduce algorithms for the two formulations of the $k$-center problem, with approximation ratios $2$ and $4$, respectively, have been claimed but not described in the short communication [@ImM15]. While, theoretically, the MapReduce algorithms proposed in our work seem competitive with the algorithms announced in [@ImM15] with respect to both round complexity and space requirements, any comparison is clearly subject to the availability of more details. As mentioned before, the algorithms in [@MalkomesKCWM15] are based on the use of (*composable*) *coresets*, a very useful tool in the MapReduce setting [@Agarwal2004; @IndykMMM14]. For a given objective function, a coreset is a small subset extracted from the input which embodies a solution whose cost is close to the cost of the optimal solution on the whole set. The additional property of composability requires that, if coresets are extracted from distinct subsets of a given partition of the input, their union embodies a close-to-optimal solution for the whole input.
Composable coresets enable the development of parallel algorithms, where each processor computes the coreset relative to one subset of the partition, and the computation of the final solution is then performed by one processor that receives the union of the coresets. Composable coresets have been used for a number of problems, including diversity maximization [@IndykMMM14; @AghamolaeiFZ15; @CeccarelloPPU17; @CeccarelloPP18], submodular maximization [@Mirrokni2015], and graph matching and vertex cover [@Assadi2017]. In [@BadoiuHI02] the authors provide a coreset-based $(1+{\varepsilon})$-approximation sequential algorithm for the $k$-center problem in $d$-dimensional Euclidean spaces, whose running time is exponential in $k$ and $(1/{\varepsilon})^2$ and linear in $d$ and $|S|$. However, the coreset construction is rather involved and not easily parallelizable, and the resulting algorithm seems to be mainly of theoretical interest. Another option when dealing with large amounts of data is to process the data in a streaming fashion. In the Streaming model, algorithms use a single processor with limited working memory and are allowed only a few sequential passes over the input (ideally just one) [@HenzingerRR98; @LeskovecRU14]. Originally developed for the external-memory setting, this model also captures the scenario in which data is generated on the fly and must be analyzed in real time, for instance in a streaming DBMS or in a social media platform (e.g., Twitter trend detection). Under this model, Charikar et al. [@CharikarCFM04] developed a 1-pass algorithm for the $k$-center problem which requires ${\Theta\left(k\right)}$ working memory and computes an 8-approximation, deterministically, or a 5.43-approximation, probabilistically. Later, this result was improved in [@McCutchen2008], attaining a $(2+{\varepsilon})$-approximation, deterministically, with a working memory of size ${\Theta\left(k{\varepsilon}^{-1}\log({\varepsilon}^{-1})\right)}$.
In the same paper, the authors give a deterministic $(4+{\varepsilon})$-approximation Streaming algorithm for the formulation with $z$ outliers, which requires ${O\left(kz{\varepsilon}^{-1}\right)}$ working memory.

Our contribution {#sec:contribution}
----------------

The coreset-based MapReduce algorithms of [@MalkomesKCWM15] for $k$-center, with and without outliers, use the [<span style="font-variant:small-caps;">gmm</span>]{} sequential approximation algorithm for $k$-center in a “bootstrapping” fashion: namely, in a first phase, a set of $k$ centers ($k+z$ centers in the case with $z$ outliers) is determined in each subset of an arbitrary partition of the input dataset, and then the final solution is computed on the coreset provided by the union of these centers, using a sequential approximation algorithm for the specific problem formulation. Our work is motivated by the following natural question: what if we select more centers from each subset of the partition in the first phase? Intuitively, we should get a better solution than if we just selected $k$ (resp., $k+z$) centers. In fact, selecting more and more centers from each subset should yield a solution progressively closer to the one returned by the best sequential algorithm on the whole input, at the expense of larger space requirements. This paper provides a thorough characterization of the space-accuracy tradeoffs achievable by exploiting the aforementioned idea for both formulations of the $k$-center problem (with and without outliers). We present improved MapReduce and Streaming algorithms which leverage a judicious selection of larger (composable) coresets to boost the quality of the solution embodied in the (union of the) coresets.
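The bootstrapping idea can be sketched as a single-machine simulation of the two MapReduce rounds (an illustrative sketch with our own naming, not the paper's implementation; `k_prime` plays the role of the number of representatives selected per partition in the first round, and Euclidean distance is assumed):

```python
import math

def farthest_first(points, m):
    # Gonzalez's greedy farthest-first traversal, selecting m representatives.
    centers = [points[0]]
    dist = [math.dist(p, centers[0]) for p in points]
    for _ in range(m - 1):
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers

def coreset_kcenter(partitions, k, k_prime):
    # Round 1: each partition independently selects k_prime >= k representatives.
    coreset = [c for part in partitions for c in farthest_first(part, k_prime)]
    # Round 2: a sequential algorithm runs on the union of the coresets.
    return farthest_first(coreset, k)
```

Increasing `k_prime` enlarges the coreset handled in the second round, which is exactly the space-accuracy knob discussed above: a denser set of representatives per partition loses less information about the original dataset.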
We analyze the memory requirements of our algorithms in terms of the desired approximation quality, captured by a precision parameter ${\varepsilon}$, and of the *doubling dimension* $D$ of the underlying metric space, a parameter that generalizes the dimensionality of Euclidean spaces to arbitrary metric spaces and is thus related to the difficulty of spotting good clusterings. We remark that this kind of parametrized analysis is particularly relevant in the realm of big data, where distortions introduced to account for worst-case scenarios may be too extreme to provide meaningful insights on actual algorithm performance, and it has been employed in a variety of contexts including diversity maximization, clustering, nearest neighbour search, routing, machine learning, and graph analytics (see [@CeccarelloPPU17] and references therein). Our specific results are the following:

- A deterministic 2-round, $(2+{\varepsilon})$-approximation MapReduce algorithm for the $k$-center problem, which requires ${O\left(\sqrt{|S|k}(4/{\varepsilon})^D\right)}$ local memory.

- A deterministic 2-round, $(3+{\varepsilon})$-approximation MapReduce algorithm for the $k$-center problem with $z$ outliers, which requires ${O\left(\sqrt{|S|(k+z)}(24/{\varepsilon})^D\right)}$ local memory.
- A randomized 2-round, $(3+{\varepsilon})$-approximation MapReduce algorithm for the $k$-center problem with $z$ outliers, which reduces the local memory requirements to ${O\left(\left(\sqrt{|S|(k+\log|S|)}+z\right)(24/{\varepsilon})^D\right)}$.

- A deterministic 1-pass, $(3+{\varepsilon})$-approximation Streaming algorithm for the $k$-center problem with $z$ outliers, which requires ${O\left((k+z)(96/{\varepsilon})^D\right)}$ working memory.

Using our coreset constructions we can also attain a $(2+{\varepsilon})$-approximation Streaming algorithm for $k$-center without outliers, which however would not improve on the state-of-the-art algorithm [@McCutchen2008]. Nonetheless, for the sake of completeness, we will compare these two algorithms experimentally in Section \[sec:experiments\]. Observe that, for both formulations of the problem, our algorithms feature approximation guarantees which are a mere additive term ${\varepsilon}$ larger than the best achievable sequential guarantee, and yield substantial quality improvements over the state of the art [@MalkomesKCWM15; @McCutchen2008]. Moreover, the randomized MapReduce algorithm for the formulation with outliers features smaller coresets, thus attaining a reduction in the local memory requirements which becomes substantial in plausible scenarios where the number of outliers $z$ (e.g., due to noise) is considerably larger than the target number $k$ of clusters, although much smaller than the input size. While our algorithms are applicable to general metric spaces, on spaces of constant doubling dimension $D$ and for constant ${\varepsilon}$, their local space/working memory requirements are polynomially sublinear in the dataset size in the MapReduce setting, and independent of the dataset size in the Streaming setting.
Moreover, a very desirable feature of our MapReduce algorithms is that they are *oblivious to $D$*, in the sense that the value $D$ (which may not be known in advance and may be hard to evaluate) is not used explicitly in the algorithms but only in their analysis. In contrast, the 1-pass Streaming algorithm makes explicit use of $D$, although we will show that it can be made oblivious to $D$ at the expense of one extra pass on the input stream. As a further important result, the MapReduce algorithm for the case with outliers admits a direct sequential implementation which substantially improves the time performance of the state-of-the-art algorithm by [@Charikar2001] while essentially preserving the approximation quality. We also provide experimental evidence of the competitiveness of our algorithms on real-world and synthetic datasets of up to over a billion points, comparing with baselines set by the algorithms in [@MalkomesKCWM15] for MapReduce, and [@McCutchen2008] for Streaming. In the MapReduce setting, the experiments show that tighter approximations over the algorithms in [@MalkomesKCWM15] are indeed achievable with larger coresets. In fact, while our theoretical bounds on the space requirements embody large constant factors, the improvements in the approximation quality are already noticeable with a modest increase of the coreset size. In the Streaming setting, for $k$-center without outliers we show that the $(2+{\varepsilon})$-approximation algorithm based on our techniques is comparable to [@McCutchen2008], whereas for $k$-center with outliers we obtain solutions of better quality using significantly less memory and time. The experiments also show that the Streaming algorithms feature high throughput, and that the MapReduce algorithms exhibit high scalability. 
Finally, we show that, indeed, implementing our coreset strategy sequentially yields a substantial running time improvement with respect to the state-of-the-art algorithm [@Charikar2001], while preserving the approximation quality.\ [**Organization of the paper**]{} The rest of the paper is organized as follows. A formal definition of the problems, together with some key properties and a description of the two computational frameworks, is presented in Section \[sec:prelim\]. Section \[sec:MR\] and Section \[sec:streaming\] present, respectively, our MapReduce and Streaming algorithms. The experimental results are reported in Section \[sec:experiments\]. Finally, Section \[sec:conclusions\] offers some concluding remarks. Preliminaries {#sec:prelim} ============= Consider a metric space $\mathcal{S}$ with distance function $d(\cdot, \cdot)$. For a point $u \in \mathcal{S}$, the *ball of radius $r$ centered at $u$* is the set of points at distance at most $r$ from $u$. The *doubling dimension* of $\mathcal{S}$ is the smallest $D$ such that for any radius $r$ and point $u \in \mathcal{S}$, all points in the ball of radius $r$ centered at $u$ are included in the union of at most $2^D$ balls of radius $r/2$ centered at suitable points. It immediately follows that, for any $0 < {\varepsilon}\le 1$, a ball of radius $r$ can be covered by at most $(1/{\varepsilon})^D$ balls of radius ${\varepsilon}r$. Notable examples of metric spaces with bounded doubling dimension are Euclidean spaces and spaces induced by shortest-path distances in mildly-expanding topologies. Also, the notion of doubling dimension can be defined for an individual dataset, and it may turn out much lower than that of the underlying metric space (e.g., for a set of collinear points in $\Re^2$). In fact, the space-accuracy tradeoffs of our algorithms only depend on the doubling dimension of the input dataset. 
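To see the covering property at work, the following minimal Python sketch (the function name, the Euclidean setting, and the sample data are purely illustrative) greedily extracts an ${\varepsilon}r$-net of a ball: since the chosen centers are pairwise more than ${\varepsilon}r$ apart, a standard packing argument bounds their number in terms of the doubling dimension.

```python
import math
import random

def greedy_cover(points, center, r, eps):
    """Greedily build an (eps*r)-net of the ball of radius r around
    `center`: repeatedly pick a not-yet-covered point as a new
    small-ball center until every point of the ball lies within
    eps*r of some chosen center. In a space of doubling dimension D,
    a packing argument bounds the number of centers by (c/eps)^D for
    a small constant c."""
    ball = [p for p in points if math.dist(p, center) <= r]
    centers = []
    uncovered = ball
    while uncovered:
        c = uncovered[0]
        centers.append(c)
        # points within eps*r of the new center become covered
        uncovered = [p for p in uncovered if math.dist(p, c) > eps * r]
    return centers
```

By construction, every chosen center is at distance more than ${\varepsilon}r$ from all previously chosen ones, which is exactly the separation that the packing argument exploits.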
Define the distance between a point $s \in \mathcal{S}$ and a set $X \subseteq \mathcal{S}$ as $d(s, X) = \min_{x \in X}d(s, x)$. Consider now a dataset $S\subseteq \mathcal{S}$ and a subset $T\subseteq S$. We define the *radius of $S$ with respect to $T$* as $$r_T(S) = \max_{s \in S}d(s, T).$$ The *$k$-center problem* requires to find a subset $T \subseteq S$ of size $k$ such that $r_{T}(S)$ is minimized. We define $r_k^*(S)$ as the radius achieved by the optimal solution to the problem. Note that $T$ induces immediately a partition of $S$ into $k$ clusters by assigning each point to its closest center, and we say that $r_T(S)$ is the radius of such a clustering. In Section \[subsec:relwork\] we mentioned the [<span style="font-variant:small-caps;">gmm</span>]{}algorithm [@Gonzalez85], which provides a sequential 2-approximation to the $k$-center problem. Here we briefly review how [<span style="font-variant:small-caps;">gmm</span>]{}works. Given a set $S$, [<span style="font-variant:small-caps;">gmm</span>]{}builds a set of centers $T$ incrementally in $k$ iterations. An arbitrary point of $S$ is selected as the first center and is added to $T$. Then, the algorithm iteratively selects the next center as the point with maximum distance from $T$, and adds it to $T$, until $T$ contains $k$ centers. Note that, rather than setting $k$ *a priori*, [<span style="font-variant:small-caps;">gmm</span>]{}can be used to grow the set $T$ until a target radius is achieved. In fact, the radius of $S$ with respect to the set of centers $T$ incrementally built by [<span style="font-variant:small-caps;">gmm</span>]{}is a non-increasing function of the iteration number. In this paper, we will make use of the following property of [<span style="font-variant:small-caps;">gmm</span>]{}which bounds its accuracy when run on a subset of the data. \[lem:gmm-subset\] Let $X \subseteq S$. 
For a given $k$, let $T_X$ be the output of [<span style="font-variant:small-caps;">gmm</span>]{}when run on $X$. We have $r_{T_X}(X) \le 2 \cdot {r_k^*(S)}$. We prove this lemma by rephrasing the proof by Gonzalez [@Gonzalez85] in terms of subsets. We need to prove that, $\forall x \in X$, $d(x, T_X) \le 2 \cdot {r_k^*(S)}$. Assume by contradiction that this is not the case. Then, for some $y \in X$ it holds that $d(y, T_X) > 2 \cdot {r_k^*(S)}$. By the greedy choice of [<span style="font-variant:small-caps;">gmm</span>]{}, we have that for any pair $t_1, t_2 \in T_X$, $d(t_1, t_2) \ge d(y, T_X)$, otherwise $y$ would have been included in $T_X$. So we have that $d(t_1, t_2) > 2 \cdot {r_k^*(S)}$. Therefore, the set $\{y\} \cup T_X$ consists of $k+1$ points at distance $> 2\cdot{r_k^*(S)}$ from each other. Consider now the optimal solution to $k$-center on the set $S$. Since $(\{y\} \cup T_X) \subseteq S$, two of the $k+1$ points of $\{y\} \cup T_X$, say $x_1$ and $x_2$, must be closest to the same optimal center $o^*$. By the triangle inequality we have $2\cdot{r_k^*(S)}< d(x_1, x_2) \le d(x_1, o^*) + d(o^*, x_2) \le 2 \cdot {r_k^*(S)}$, a contradiction. For a given set $S \subseteq \mathcal{S}$, the *$k$-center problem with $z$ outliers* requires to identify a set $T$ of $k$ centers which minimizes $$r_{T,Z_T}(S) = \max_{s \in S \setminus Z_T} d(s, T),$$ where $Z_T$ is the set of $z$ points in $S$ with largest distance from $T$ (ties broken arbitrarily). In other words, the problem allows discarding up to the $z$ farthest points when computing the radius of the set of centers, hence of its associated clustering. For given $S$, $k$, and $z$, we denote the radius of the optimal solution of this problem by $r_{k, z}^*(S)$. It is straightforward to argue that the optimal solution of the problem without outliers with $k+z$ centers has a radius no larger than that of the optimal solution of the problem with $k$ centers and $z$ outliers, that is $$r_{k+z}^*(S) \le r_{k,z}^*(S). 
\label{eq:radius-relation}$$ Computational frameworks ------------------------ A *MapReduce* algorithm [@DeanG08; @PietracaprinaPRSU12; @LeskovecRU14] executes in a sequence of parallel *rounds*. In a round, a multiset $X$ of key-value pairs is first transformed into a new multiset $X'$ of key-value pairs by applying a given *map function* (simply called *mapper*) to each individual pair, and then into a final multiset $Y$ of pairs by applying a given *reduce function* (simply called *reducer*) independently to each subset of pairs of $X'$ having the same key. The model features two parameters, $M_L$, the *local memory* available to each mapper/reducer, and $M_A$, the *aggregate memory* across all mappers/reducers. In our algorithms, mappers are straightforward constant-space transformations, thus the memory requirements will be related to the reducers. We remark that the MapReduce algorithms presented in this paper also afford an immediate implementation and similar analysis in the *Massively Parallel Computation* (MPC) model [@BeameKS13], which is popular in the database community. In the *Streaming* framework [@HenzingerRR98; @LeskovecRU14] the computation is performed by a single processor with a small working memory, and the input is provided as a continuous stream of items which is usually too large to fit in the working memory. Multiple passes on the input stream may be allowed. Key performance indicators are the size of the working memory and the number of passes. The holy grail of big data algorithmics is the development of MapReduce (resp., Streaming) algorithms which work in as few rounds (resp., passes) as possible and require substantially sublinear local memory (resp., working memory) and linear aggregate memory. MapReduce algorithms {#sec:MR} ==================== The following subsections present our MapReduce algorithms for the $k$-center problem (Subsection \[sec-noout\]) and the $k$-center problem with $z$ outliers (Subsection \[sec-out\]). 
The algorithms are based on the use of composable coresets, which were reviewed in the introduction, and can be viewed as improved variants of those by [@MalkomesKCWM15]. The main novelty of our algorithms is that they leverage a judiciously increased coreset size to attain approximation qualities that are arbitrarily close to the ones featured by the best known sequential algorithms. Also, in the analysis, we relate the required coreset size to the doubling dimension of the underlying metric space (whose explicit knowledge, however, is not required by the algorithms), showing that coreset sizes stay small for spaces of bounded doubling dimension. MapReduce algorithm for $k$-center {#sec-noout} ---------------------------------- Consider an instance $S$ of the $k$-center problem and fix a precision parameter ${\varepsilon}\in (0, 1]$, which will be used to regulate the approximation ratio. The MapReduce algorithm works in two rounds. In the first round, $S$ is partitioned into $\ell$ subsets $S_i$ of equal size, for $1 \leq i \leq \ell$. In parallel, on each $S_i$ we run [<span style="font-variant:small-caps;">gmm</span>]{}incrementally and call $T_i^j$ the set of $j$ centers selected in the first $j$ iterations of the algorithm. Let $r_{T_i^k}(S_i)$ denote the radius of the set $S_i$ with respect to the first $k$ centers. We continue to run [<span style="font-variant:small-caps;">gmm</span>]{}until the first iteration $\tau_i \geq k$ such that $r_{T_i^{\tau_i}}(S_i) \le {\varepsilon}/2 \cdot r_{T_i^k}(S_i)$, and define the coreset $T_i = T_i^{\tau_i}$. In the second round, the union of the coresets $T = \bigcup_{i=1}^\ell T_i$ is gathered into a single reducer and [<span style="font-variant:small-caps;">gmm</span>]{} is run on $T$ to compute the final set of $k$ centers. In what follows, we show that these centers are a good solution to the $k$-center problem on $S$. 
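The first-round coreset construction on a single subset $S_i$ can be sketched as follows in Python (the function names are ours, and Euclidean points are used for concreteness; in the actual algorithm this runs in parallel on each $S_i$): the [<span style="font-variant:small-caps;">gmm</span>]{}greedy farthest-point selection is run incrementally, the radius after the first $k$ centers is recorded, and centers keep being added until the current radius drops to $({\varepsilon}/2)$ times it.

```python
import math

def dist(p, q):
    return math.dist(p, q)

def coreset_round1(S_i, k, eps):
    """Round 1 on one subset S_i: run GMM's greedy farthest-point
    selection, record the radius r_k reached after k centers, and
    keep adding centers until the current radius drops to
    (eps/2) * r_k (first iteration tau_i >= k meeting the rule)."""
    T = [S_i[0]]                          # arbitrary first center
    d = [dist(s, T[0]) for s in S_i]      # d[j] = distance of S_i[j] from T
    r_k = None
    while True:
        r = max(d)                        # current radius r_T(S_i)
        if len(T) == k:
            r_k = r                       # radius after the first k centers
        if r_k is not None and r <= (eps / 2) * r_k:
            return T                      # stopping rule met
        far = max(range(len(S_i)), key=d.__getitem__)
        T.append(S_i[far])                # next GMM center: farthest point
        d = [min(d[j], dist(S_i[j], S_i[far])) for j in range(len(S_i))]
```

By the stopping rule, every point of $S_i$ ends up within distance $({\varepsilon}/2)\cdot r_{T_i^k}(S_i)$ of the returned coreset, which is exactly the proxy bound used in the analysis.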
The analysis relies on the following two lemmas which state that each input point has a close-by representative in $T$ and that $T$ has small size. We define a *proxy function* $p: S \to T$ that maps each $s \in S_i$ into the closest point in $T_i$, for every $1 \leq i \leq \ell$. The following lemma is an easy consequence of Lemma \[lem:gmm-subset\]. \[lem:k-center-proxy\] For each $s \in S$, $d(s, p(s)) \le {\varepsilon}\cdot {r_k^*(S)}$. Fix $i \in [1, \ell]$, and consider $S_i \subseteq S$, and the set $T_i^k$ computed by the first $k$ iterations of [<span style="font-variant:small-caps;">gmm</span>]{}. Since $S_i$ is a subset of $S$, by Lemma \[lem:gmm-subset\] we have that $r_{T_i^k}(S_i) \le 2\cdot {r_k^*(S)}$. By construction, we have that $r_{T_i}(S_i) \le {\varepsilon}/2 \cdot r_{T_i^k}(S_i)$, hence $r_{T_i}(S_i) \le {\varepsilon}{r_k^*(S)}$. Consider now the proxy function $p$. For every $1 \leq i \leq \ell$ and $s \in S_i$, it holds that $d(s, p(s)) \le r_{T_i}(S_i) \le {\varepsilon}{r_k^*(S)}$. We can conveniently bound the size of $T$, the union of the coresets, as a function of the doubling dimension of the underlying metric space. \[lem:k-center-size\] If $S$ belongs to a metric space of doubling dimension $D$, then $$|T| \le \ell\cdot k \cdot \left( \frac{4}{{\varepsilon}} \right)^D.$$ Fix an $i\in[1, \ell]$. We prove an upper bound on the number $\tau_i$ of iterations of [<span style="font-variant:small-caps;">gmm</span>]{}needed to obtain $r_{T_i^{\tau_i}}(S_i) \le ({\varepsilon}/2) r_{T_i^k}(S_i)$, which in turn bounds the size of $T_i$. Consider the $k$-center clustering of $S_i$ induced by the $k$ centers in $T_i^k$, with radius $r_{T_i^k}(S_i)$. By the doubling dimension property, we have that each of the $k$ clusters can be covered using at most $(4/{\varepsilon})^D$ balls of radius $\le ({\varepsilon}/4) \cdot r_{T_i^k}(S_i)$, for a total of at most $h=k(4/{\varepsilon})^D$ such balls. 
Consider now the execution of $h$ iterations of the [<span style="font-variant:small-caps;">gmm</span>]{}algorithm on $S_i$. Let $T_i^h$ be the set of returned centers and let $x \in S_i$ be the farthest point of $S_i$ from $T_i^h$. The center selection process of the [<span style="font-variant:small-caps;">gmm</span>]{}algorithm ensures that any two points in $T_i^h \cup \{x\} $ are at distance at least $r_{T_i^h}(S_i)$ from one another. Thus, since two of these points must fall into one of the $h$ aforementioned balls of radius $\le ({\varepsilon}/4) \cdot r_{T_i^k}(S_i)$, this implies immediately (by the triangle inequality) that $$r_{T_i^h}(S_i) \le 2 ({\varepsilon}/4) \cdot r_{T_i^k}(S_i) = ({\varepsilon}/2) \cdot r_{T_i^k}(S_i).$$ Hence, after $h$ iterations we are guaranteed that [<span style="font-variant:small-caps;">gmm</span>]{}finds a set $T_i^h$ which meets the stopping condition. Therefore, $|T_i|=\tau_i \leq h = k(4/{\varepsilon})^D$, for every $i \in [1,\ell]$, and the lemma follows. We now state the main result of this subsection. Let $0 < {\varepsilon}\le 1$. If the points of $S$ belong to a metric space of doubling dimension $D$, then the above 2-round MapReduce algorithm computes a $(2+{\varepsilon})$-approximation for the $k$-center problem with local memory $M_L = {O\left(|S|/\ell + \ell\cdot k \cdot (4/{\varepsilon})^D\right)}$ and linear aggregate memory. Let $X$ be the solution found by [<span style="font-variant:small-caps;">gmm</span>]{}on $T$. Since $T \subseteq S$, from Lemma \[lem:gmm-subset\] it follows that $r_X(T) \le 2 \cdot {r_k^*(S)}$. Consider an arbitrary point $s \in S$, along with its proxy $p(s) \in T$, as defined before. By Lemma \[lem:k-center-proxy\] we know that $d(s, p(s)) \le {\varepsilon}\cdot{r_k^*(S)}$. Let $x\in X$ be the center closest to $p(s)$. It holds that $d(x, p(s)) \le 2\cdot {r_k^*(S)}$. 
By applying the triangle inequality, we have that $d(x, s) \le d(x, p(s)) + d(p(s), s) \le 2\cdot {r_k^*(S)}+ {\varepsilon}\cdot {r_k^*(S)}= (2+{\varepsilon}){r_k^*(S)}$. The bound on $M_L$ follows since in the first round each processor needs to store $|S|/\ell$ points of the input and computes a coreset of size ${O\left(k \cdot (4/{\varepsilon})^D\right)}$, as per Lemma \[lem:k-center-size\], while in the second round, one processor needs enough memory to store $\ell$ such coresets. Finally, it is immediate to see that aggregate memory proportional to the input size suffices. By setting $\ell = {\Theta\left(\sqrt{|S|/k}\right)}$ in the above theorem we obtain: Our 2-round MapReduce algorithm computes a $(2+{\varepsilon})$-approximation for the $k$-center problem with local memory $M_L = {O\left(\sqrt{|S|k} (4/{\varepsilon})^D\right)}$ and linear aggregate memory. For constant ${\varepsilon}$ and $D$, the local memory bound becomes $M_L = {O\left(\sqrt{|S|k}\right)}$. MapReduce algorithm for $k$-center with $z$ outliers {#sec-out} ---------------------------------------------------- Consider an instance $S$ of the $k$-center problem with $z$ outliers and fix a precision parameter $\hat{{\varepsilon}} \in (0, 1]$ intended, as before, to regulate the approximation ratio. We propose the following 2-round MapReduce algorithm for the problem. In the first round, $S$ is partitioned into $\ell$ equally-sized subsets $S_i$, with $1 \leq i \leq \ell$, and for each $S_i$, in parallel, [<span style="font-variant:small-caps;">gmm</span>]{}is run incrementally. Let $T_i^j$ be the set of the first $j$ selected centers. We continue to run [<span style="font-variant:small-caps;">gmm</span>]{} until the first iteration $\tau_i \geq k+z$ such that $r_{T_i^{\tau_i}}(S_i) \le \hat{{\varepsilon}}/2 \cdot r_{T_i^{k+z}}(S_i)$. Define the coreset $T_i=T_i^{\tau_i}$. 
As before, for each point $s \in S_i$ we define its *proxy* $p(s)$ to be the point of $T_i$ closest to $s$, but, furthermore, we attach to each $t \in T_i$ a *weight* $w_t \geq 1$, which is the number of points of $S_i$ with proxy $t$. In the second round, the union of the weighted coresets $T = \cup_{i=1}^\ell T_i$ is gathered into a single reducer. Before describing the details of this second round, we need to introduce a sequential algorithm, dubbed [[OutliersCluster]{}]{} (described below), for solving a weighted variant of the $k$-center problem with outliers, which is a modification of the one presented in [@MalkomesKCWM15] (in turn, based on the unweighted algorithm of [@Charikar2001]). [[OutliersCluster]{}]{}$(T,k,r,\hat{{\varepsilon}})$ returns two subsets $X, T' \subseteq T$ such that $X$ is a set of (at most) $k$ centers, and $T'$ is a set of points referred to as *uncovered points*. The algorithm starts with $T'=T$ and builds $X$ incrementally in $|X| \le k$ iterations as follows. In each iteration, the next center $x$ is chosen as the point maximizing the aggregate weight of uncovered points in its ball of radius $(1+2\hat{{\varepsilon}})\cdot r$ (note that $x$ need not be an uncovered point). Then, all uncovered points at distance at most $(3+4\hat{{\varepsilon}})\cdot r$ from $x$ are removed from $T'$. The algorithm terminates when either $|X|=k$ or $T'=\emptyset$. By construction, the final $T'$ consists of all points at distance greater than $(3+4\hat{{\varepsilon}})\cdot r$ from $X$. Let us return to the second round of our MapReduce algorithm. The reducer that gathered $T$ runs [[OutliersCluster]{}]{}$(T,k,r,\hat{{\varepsilon}})$ multiple times to estimate the minimum value $r_{\rm min}$ such that the aggregate weight of the points in the set $T'$ returned by [[OutliersCluster]{}]{}$(T,k,r_{\rm min},\hat{{\varepsilon}})$ is at most $z$. 
More specifically, the computed estimate, say $\tilde{r}_{\rm min}$, is within a multiplicative tolerance $(1+\delta)$ from the true $r_{\rm min}$, with $\delta = \hat{{\varepsilon}}/(3+4\hat{{\varepsilon}})$, and it is obtained through a binary search over all possible ${O\left(|T|^2\right)}$ distances between points of $T$ combined with a geometric search with step $(1+\delta)$. To avoid storing all ${O\left(|T|^2\right)}$ distances, the value of $r$ at each iteration of the binary search can be determined in space linear in $T$ by the median-finding Streaming algorithm in [@MunroP80]. The output of the MapReduce algorithm is the set of centers computed by [[OutliersCluster]{}]{}$(T,k,\tilde{r}_{\rm min},\hat{{\varepsilon}})$. We now analyze our 2-round MapReduce algorithm. The following lemma bounds the distance between a point and its proxy. \[lem:proxy-outliers\] For each $s \in S$, $d(s, p(s)) \le \hat{{\varepsilon}}\cdot{r_{k, z}^*(S)}$. Consider any subset $S_i$ of the partition $S_1, \dots, S_\ell$ of $S$. By construction, we have that for each $s \in S_i$, $d(s, p(s)) \le (\hat{{\varepsilon}}/2)\cdot r_{T_i^{k+z}}(S_i)$. Since $S_i$ is a subset of $S$, Lemma \[lem:gmm-subset\] ensures that $r_{T_i^{k+z}}(S_i) \le 2 r^*_{k+z}(S)$. Hence, $d(s, p(s)) \le \hat{{\varepsilon}} r^*_{k+z}(S)$. Since $r^*_{k+z}(S) \le {r_{k, z}^*(S)}$, as observed before in Eq. \[eq:radius-relation\], we have $d(s, p(s)) \le \hat{{\varepsilon}}\cdot{r_{k, z}^*(S)}$. Next, we characterize the quality of the solution returned by [[OutliersCluster]{}]{} when run on $T$, the union of the weighted coresets, and with a radius $r\geq {r_{k, z}^*(S)}$. 
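The core selection loop of [[OutliersCluster]{}]{}can be sketched as follows (a minimal Python illustration with our own naming; points and weights are plain lists, and the radius search described above is omitted):

```python
import math

def outliers_cluster(T, w, k, r, eps):
    """Sketch of OutliersCluster on a weighted coreset: T is a list
    of points, w[i] the weight of T[i]. Returns at most k centers X
    and the list of coreset points left uncovered."""
    uncovered = set(range(len(T)))
    X = []
    while len(X) < k and uncovered:
        # aggregate weight of uncovered points within (1+2*eps)*r of
        # candidate T[i]; the candidate itself need not be uncovered
        def ball_weight(i):
            return sum(w[j] for j in uncovered
                       if math.dist(T[i], T[j]) <= (1 + 2 * eps) * r)
        x = max(range(len(T)), key=ball_weight)
        X.append(T[x])
        # every uncovered point within (3+4*eps)*r of the new center
        # becomes covered
        uncovered = {j for j in uncovered
                     if math.dist(T[x], T[j]) > (3 + 4 * eps) * r}
    return X, [T[j] for j in sorted(uncovered)]
```

On a toy instance with two tight clusters and one far-away low-weight point, the loop selects one center per cluster and leaves only the far point uncovered, matching the intended behavior for $z=1$.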
[Figure: the chain of distance bounds $\le (1+2\hat{{\varepsilon}})r$, $\le \hat{{\varepsilon}}{r_{k, z}^*(S)}$, $\le {r_{k, z}^*(S)}$, $\le {r_{k, z}^*(S)}$, $\le \hat{{\varepsilon}}{r_{k, z}^*(S)}$ relating the balls $B_{x_j}$, $C_{o_i}$, and $E_{x_j}$ used in the charging argument below.] \[lem:outliers-approx\] For $r\geq {r_{k, z}^*(S)}$, let $X,T' \subseteq T$ be the sets returned by [[OutliersCluster]{}]{}$(T,k,r,\hat{{\varepsilon}})$, and define $S_{T'} = \{s\in S: p(s)\in T'\}$. Then, $$d(t, X) \le (3+4\hat{{\varepsilon}}) \cdot r \quad \forall t \in T \setminus T'$$ and $|S_{T'}| \le z$. The proof uses an argument akin to the one used for the analysis of the sequential algorithm by [@Charikar2001] and later adapted by [@MalkomesKCWM15] to the weighted coreset setting. The first claim follows immediately from the workings of the algorithm, since each point in $T \setminus T'$ belongs to some $E_x$, the set of points removed from $T'$ when center $x\in X$ is selected. We are left to show that $|S_{T'}| \le z$. Suppose first that $|X|<k$. In this case, it must be $T' = \emptyset$, hence $|S_{T'}|= 0$, and the proof follows. We now concentrate on the case $|X|=k$. Consider the $i$-th iteration of the while loop of [[OutliersCluster]{}]{}$(T,k,r,\hat{{\varepsilon}})$ and define $x_i$ as the center of $X$ selected in the iteration, and $T'_i$ as the set $T'$ of uncovered points at the beginning of the iteration. 
Recall that $x_i$ is the point of $T$ which maximizes the cumulative weight of the set $B_{x_i}$ of uncovered points in $T'_i$ at distance at most $(1+2\hat{{\varepsilon}})\cdot r$ from $x_i$, and that the set $E_{x_i}$ of all uncovered points at distance at most $(3+4\hat{{\varepsilon}})\cdot r$ from $x_i$ is removed from $T'_i$ at the end of the iteration. We now show that $$\label{eq:weight-outliers} \sum_{i=1}^k \sum_{t \in E_{x_i}} w_t \ge |S|-z,$$ which will immediately imply that $|S_{T'}| \le z$. For this purpose, let $O$ be an optimal set of $k$ centers for the problem instance under consideration, and let $Z$ be the set of at most $z$ outliers at distance greater than ${r_{k, z}^*(S)}$ from $O$. For each $o \in O$, define $C_o \subseteq S \setminus Z$ as the set of nonoutlier points which are closer to $o$ than to any other center of $O$, with ties broken arbitrarily. To prove (\[eq:weight-outliers\]), it is sufficient to exhibit an ordering $o_1, o_2, \ldots, o_k$ of the centers in $O$ so that, for every $1 \leq i \leq k$, it holds $$\sum_{j=1}^i \sum_{t \in E_{x_j}} w_t \ge |C_{o_1} \cup \dots \cup C_{o_i}|.$$ The proof uses an inductive charging argument to assign each point in $\bigcup_{j=1}^i C_{o_j}$ to a point in $\bigcup_{j=1}^i E_{x_j}$, where each $t$ in the latter set will be in charge of at most $w_t$ points. We define two charging rules. A point can be either charged to its own proxy (*Rule 1*) or to another point of $T$ (*Rule 2*). Fix some arbitrary $i$, with $1 \leq i \leq k$, and assume, inductively, that the points in $C_{o_1} \cup \dots \cup C_{o_{i-1}}$ have been charged to points in $\bigcup_{j=1}^{i-1} E_{x_j}$ for some choice of distinct optimal centers $o_1, o_2, \ldots, o_{i-1}$. We have two cases.\ [**Case 1.**]{} *There exists an optimal center $o$ still unchosen such that there is a point $v\in C_{o}$ with $p(v) \in B_{x_j}$, for some $1 \leq j \leq i$.* We choose $o_i$ as one such center. 
Hence $d(x_j,p(v)) \le (1 + 2\hat{{\varepsilon}}) \cdot r$. By repeatedly applying the triangle inequality we have that for each $u \in C_{o_i}$ $$\begin{aligned} d(x_j,p(u)) \leq & \;\; d(x_j,p(v)) + d(p(v),v) + d(v, o_i) + d(o_i,u) + \\ & + d(u,p(u)) \leq (3 + 4\hat{{\varepsilon}}) \cdot r\end{aligned}$$ hence, $p(u) \in E_{x_j}$. Therefore we can charge each point $u\in C_{o_i}$ to its proxy, by Rule 1.\ [**Case 2.**]{} *For each unchosen optimal center $o$ and each $v \in C_o$, $p(v) \not\in \bigcup_{j=1}^i B_{x_j}$.* We choose $o_i$ to be the unchosen optimal center $o$ which maximizes the cardinality of $\{p(u) : u \in C_{o}\} \cap T'_i$. We distinguish between points $u\in C_{o_i}$ with $p(u) \notin T'_i$, hence $p(u) \in \bigcup_{j=1}^{i-1} E_{x_j}$, and those with $p(u) \in T'_i$. We charge each $u\in C_{o_i}$ with $p(u) \notin T'_i$ to its own proxy by Rule 1. As for the other points, we now show that we can charge them to the points of $B_{x_i}$. To this purpose, we first observe that $B_{p(o_i)}$ (the set of uncovered points of $T'_i$ at distance at most $(1+2\hat{{\varepsilon}})\cdot r$ from $p(o_i)$) contains $\{p(u) : u \in C_{o_i}\} \cap T'_i$, since for each $u \in C_{o_i}$ $$\begin{aligned} d(p(o_i),p(u)) &\leq d(p(o_i),o_i) + d(o_i,u) + d(u,p(u)) \\ &\leq (1 + 2\hat{{\varepsilon}}) \cdot {r_{k, z}^*(S)}\leq (1 + 2\hat{{\varepsilon}}) \cdot r. \end{aligned}$$ Therefore the aggregate weight of $B_{p(o_i)}$ is at least $\left|\left\{u \in C_{o_i} : p(u) \in T'_i\right\}\right|$. Since Iteration $i$ selects $x_i$ as the center such that $B_{x_i}$ has maximum aggregate weight, we have that $$\sum_{t \in B_{x_i}} w_t \ge \sum_{t' \in B_{p(o_i)}} w_{t'} \ge \left|\left\{u \in C_{o_i} : p(u) \in T'_i\right\}\right|,$$ hence, the points in $B_{x_i}$ have enough weight to be charged with each point $u \in C_{o_i}$ with $p(u) \in T'_i$. Figure \[fig:case-2-charging-rule\] illustrates the charging under Case 2. 
[Figure \[fig:case-2-charging-rule\]: illustration of the charging argument under Case 2, showing covered and uncovered proxies and the chargings performed by Rule 1 and Rule 2.] Note that the points of $B_{x_i}$ did not receive any charging by Rule 1 in previous iterations, since they are uncovered at the beginning of Iteration $i$, and will not receive chargings by Rule 1 in subsequent iterations, since $B_{x_i}$ does not intersect the set $C_o$ of any optimal center $o$ yet to be chosen. Also, no further charging to points of $B_{x_i}$ by Rule 2 will happen in subsequent iterations, since Rule 2 will only target sets $B_{x_h}$ with $h > i$. These observations ensure that any point of $T$ receives charges through either Rule 1 or Rule 2, but not both, and never in excess of its weight, and the proof follows. The following lemma bounds the size of $T$, the union of the weighted coresets. 
\[lem:outliers-size\] If $S$ belongs to a metric space of doubling dimension $D$, then $$|T| \le \ell\cdot (k+z) \cdot \left( \frac{4}{\hat{{\varepsilon}}} \right)^D.$$ The proof proceeds similarly to the one of Lemma \[lem:k-center-size\], with the understanding that the definition of doubling dimension is applied to each of the $(k+z)$ clusters induced by the points of $T_i^{k+z}$ on $S_i$. Finally, we state the main result of this subsection. \[thm:outliers-mr\] Let $0 < {\varepsilon}\le 1$. If the points of $S$ belong to a metric space of doubling dimension $D$, then, when run with $\hat{{\varepsilon}}={\varepsilon}/6$, the above 2-round MapReduce algorithm computes a $(3+{\varepsilon})$-approximation for the $k$-center problem with $z$ outliers with local memory $M_L = {O\left(|S|/\ell + \ell\cdot (k+z) \cdot (24/{\varepsilon})^D\right)}$ and linear aggregate memory. The result of Lemma \[lem:outliers-approx\] combined with the stipulated tolerance of the search performed in the second round of the algorithm implies that the radius discovered by the search is $\tilde{r}_{\rm min} \le {r_{k, z}^*(S)}(1+\delta)$ with $\delta=\hat{{\varepsilon}}/(3+4\hat{{\varepsilon}})$. Also, by the triangle inequality, the distance between each non-outlier point in $S$ and its closest center will be at most $\hat{{\varepsilon}}{r_{k, z}^*(S)}+ (3+4\hat{{\varepsilon}}) {r_{k, z}^*(S)}(1+\delta) \leq (3+6 \hat{{\varepsilon}}) {r_{k, z}^*(S)}\leq (3+{\varepsilon}){r_{k, z}^*(S)}$, which proves the approximation bound. The bound on $M_L$ follows since in the first round each reducer needs enough memory to store $|S|/\ell$ points of the input, while in the second round the reducer computing the final solution requires enough memory to store the union of the $\ell$ coresets, each of which, by Lemma \[lem:outliers-size\], has size ${O\left((k+z)(4/\hat{{\varepsilon}})^D\right)} = {O\left((k+z)(24/{\varepsilon})^D\right)}$. 
Also, globally, the reducers need only sufficient memory to store the input, hence $M_A = {O\left(|S|\right)}$. By setting $\ell = {\Theta\left(\sqrt{|S|/(k+z)}\right)}$ in the above theorem we obtain: Our 2-round MapReduce algorithm computes a $(3+{\varepsilon})$-approximation for the $k$-center problem with $z$ outliers, with local memory $M_L = {O\left(\sqrt{|S|(k+z)}(24/{\varepsilon})^D\right)}$ and linear aggregate memory. For constant ${\varepsilon}$ and $D$, the local memory bound becomes $M_L = {O\left(\sqrt{|S|(k+z)}\right)}$. A simple analysis implies that, by setting $\ell=1$, our MapReduce strategy for the $k$-center problem with $z$ outliers yields an efficient sequential $(3+{\varepsilon})$-approximation algorithm whose running time is ${O\left(|S||T|+k |T|^2 \log |T|\right)}$, where $|T|=(k+z)(24/{\varepsilon})^D$ is the coreset size. For a wide range of values of $k,z, {\varepsilon}$ and $D$ this yields a substantially improved performance over the ${O\left(k |S|^2 \log |S|\right)}$-time state-of-the-art algorithm of [@Charikar2001], at the expense of a negligibly worse approximation. ### Higher space efficiency through randomization {#sec:improved-space} The analysis of very noisy datasets might require setting the number $z$ of outliers much larger than $k$, while still $o(|S|)$. In this circumstance, the size of the union of the coresets $T$ is proportional to $\sqrt{|S|z}$, and may turn out too large for practical purposes, due to the large local memory requirements and to the running time of the cubic sequential approximation algorithm run on $T$ in the second round, which may become the real performance bottleneck of the entire algorithm. In this subsection, we show that this drawback can be significantly ameliorated by simply partitioning the pointset at random in the first round, at the only expense of probabilistic rather than deterministic space and approximation guarantees. 
We say that an event related to a dataset $S$ occurs *with high probability* $p$ if $p \geq 1-1/|S|^c$, for some constant $c \geq 1$. The randomized variant of the algorithm works as follows. In the first round, the input set $S$ is partitioned into $\ell$ subsets $S_i$, with $1 \leq i \leq \ell$, by assigning each point to a random subset chosen uniformly and independently of the other points. Let $z'=6((z/\ell)+\log_2 |S|)$ and observe that, for large $z$ and $\ell$, we have that $z' \ll z$. Then, in parallel on each partition $S_i$, [<span style="font-variant:small-caps;">gmm</span>]{} is run to yield a set $T_i^{\tau_i}$ of $\tau_i$ centers, where $\tau_i \geq k+z'$ is the smallest value such that $r_{T_i^{\tau_i}}(S_i) \le (\hat{{\varepsilon}}/2) \cdot r_{T_i^{k+z'}}(S_i)$. Define the coreset $T_i=T_i^{\tau_i}$ and, again, for each point $s \in S_i$ define its *proxy* $p(s)$ to be the point of $T_i$ closest to $s$. The rest of the algorithm is exactly as before using these new $T_i$’s. The analysis proceeds as follows. Consider an optimal solution of the $k$-center problem with $z$ outliers for $S$, and let $O=\{o_1, o_2, \ldots, o_k\}$ be the set of $k$ centers and $Z_O$ the set of $z$ outliers, that is, the $z$ points of $S$ most distant from $O$. Recall that any point of $S\setminus Z_O$ is at distance at most ${r_{k, z}^*(S)}$ from $O$. The following lemma states that the outliers (set $Z_O$) are well distributed among the $S_i$’s. \[lem:occupancy\] With high probability, each $S_i$ contains no more than $z'=6((z/\ell)+\log_2 |S|)$ points of $Z_O$. The result follows by applying the Chernoff bound (4.3) of [@MitzemacherU17] and the union bound, which yield that the stated event occurs with probability at least $1-1/|S|^5$. The rest of the analysis mimics that of the deterministic version. \[lem:condensed\] The statements of both Lemmas \[lem:proxy-outliers\] and \[lem:outliers-approx\] hold with high probability. 
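The adaptive choice of $\tau_i$ can be sketched on top of the same farthest-first traversal: record the radius reached after $k+z'$ centers and keep adding centers until the current radius drops below $(\hat{{\varepsilon}}/2)$ times that value. The sketch below makes the obliviousness to $D$ concrete — the stopping test never mentions the doubling dimension (again, `dist` and the point representation are assumptions):

```python
def adaptive_coreset(points, k, z_prime, eps_hat, dist):
    """Farthest-first traversal stopped at the smallest tau_i >= k + z'
    with r_{T^{tau_i}}(S_i) <= (eps_hat/2) * r_{T^{k+z'}}(S_i)."""
    centers = [points[0]]
    d = [dist(p, centers[0]) for p in points]
    target = None
    while True:
        radius = max(d)
        if len(centers) == k + z_prime:
            target = (eps_hat / 2) * radius   # radius after k+z' centers
        if target is not None and radius <= target:
            return centers
        far = max(range(len(points)), key=lambda j: d[j])
        centers.append(points[far])
        d = [min(d[j], dist(points[j], points[far])) for j in range(len(points))]
```

The coreset size $\tau_i$ thus adapts to the data: in a low-dimensional pocket of the space the radius decays quickly and few extra centers are taken, while the worst case is bounded by the $(4/\hat{{\varepsilon}})^D$ factor of the analysis.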
We first prove that, with high probability, for each $s \in S$, $d(s, p(s)) \le \hat{{\varepsilon}}\cdot{r_{k, z}^*(S)}$ (same as Lemma \[lem:proxy-outliers\]). Consider $O$ and $Z_O$. We condition on the event that each $S_i$ contains at most $z'$ points of $Z_O$, which, by Lemma \[lem:occupancy\], occurs with high probability. Focus on an arbitrary subset $S_i$. For $1 \leq j \leq k$, let $C_j$ be the set of points of $S\setminus Z_O$ whose closest optimal center is $o_j$, and let $C_j(i) = C_j \cap S_i$. Consider the set $T_i^{k+z'}$ of centers determined by the first $k+z'$ iterations of the [<span style="font-variant:small-caps;">gmm</span>]{} algorithm and let $x \in S_i$ be the farthest point of $S_i$ from $T_i^{k+z'}$. By arguing as in the proof of Lemma \[lem:k-center-size\], it can be shown that any two points in $T_i^{k+z'} \cup \{x\}$ are at distance at least $r_{T_i^{k+z'}}(S_i)$ from one another. Since at most $z'$ of these $k+z'+1$ points belong to $Z_O$, at least $k+1$ of them lie in $\bigcup_{j} C_j(i)$, hence two of them must belong to the same $C_j(i)$ for some $j$; by the triangle inequality we have that $$r_{T_i^{k+z'}}(S_i) \le 2 {r_{k, z}^*(S)}.$$ Recall that the [<span style="font-variant:small-caps;">gmm</span>]{} algorithm on $S_i$ is stopped at the first iteration $\tau_i$ such that $r_{T_i^{\tau_i}}(S_i) \leq (\hat{{\varepsilon}}/2) \cdot r_{T_i^{k+z'}}(S_i)$, hence $$r_{T_i^{\tau_i}}(S_i) \leq (\hat{{\varepsilon}}/2) \cdot r_{T_i^{k+z'}}(S_i) \leq (\hat{{\varepsilon}}/2) \cdot 2 {r_{k, z}^*(S)}= \hat{{\varepsilon}} \cdot {r_{k, z}^*(S)}.$$ The desired bound on $d(s,p(s))$ immediately follows. Conditioning on this bound, the proof of Lemma \[lem:outliers-approx\] can be repeated identically, hence the stated property holds. 
By repeating the same argument used in Lemma \[lem:outliers-size\], one can easily argue that, if $S$ belongs to a metric space of doubling dimension $D$, then the size of the weighted coreset $T$ is $$|T| \le \ell\cdot (k+z') \cdot \left( \frac{4}{\hat{{\varepsilon}}} \right)^D.$$ This bound, together with the results of the preceding lemma, immediately implies the analogue of Theorem \[thm:outliers-mr\] stating that, with high probability, the randomized algorithm computes a $(3+{\varepsilon})$-approximation for the $k$-center problem with $z$ outliers with local memory $M_L = {O\left(|S|/\ell + \ell\cdot (k+z') \cdot (24/{\varepsilon})^D\right)}$ and linear aggregate memory. Observe that $z$ is now replaced by (the much smaller) $z'$ in the local memory bound. By choosing $\ell = {\Theta\left(\sqrt{|S|/(k+\log |S|)}\right)}$ we obtain: With high probability, our 2-round MapReduce algorithm computes a $(3+{\varepsilon})$-approximation for the $k$-center problem with $z$ outliers, with local memory $M_L = {O\left(\left(\sqrt{|S|(k+\log |S|)}+z\right)(24/{\varepsilon})^D\right)}$ and linear aggregate memory. For constant ${\varepsilon}$ and $D$, the local memory bound becomes $M_L = {O\left(\sqrt{|S|(k+\log |S|)}+z\right)}$. With respect to the deterministic version, for large values of $z$ a substantial improvement in the local memory requirements is achieved.\ [**Remark.**]{} Thanks to the incremental nature of [<span style="font-variant:small-caps;">gmm</span>]{}, our coreset-based MapReduce algorithms for the $k$-center problem, both without and with outliers, need not know the doubling dimension $D$ of the underlying metric space in order to attain the claimed performance bounds. This is a very desirable property, since, in general, $D$ may not be known in advance. 
Moreover, if $D$ were known, a factor $\sqrt{(c/{\varepsilon})^D}$ in local memory (where $c=4$ for $k$-center, and $c=24$ for $k$-center with $z$ outliers) could be saved by setting $\ell$ to be a factor ${\Theta\left(\sqrt{(c/{\varepsilon})^D}\right)}$ smaller. Streaming algorithm for $k$-center with $z$ outliers {#sec:streaming} ==================================================== As mentioned in the introduction, in the Streaming setting we will only consider the $k$-center problem with $z$ outliers. Consider an instance $S$ of the problem and fix a precision parameter $\hat{{\varepsilon}} \in (0, 1]$. Suppose that the points of $S$ belong to a metric space of known doubling dimension $D$. Our Streaming algorithm also adopts a coreset-based approach. Specifically, in a pass over the stream of points of $S$ a suitable weighted coreset $T$ is selected and stored in the working memory. Then, at the end of the pass, the final set of centers is determined through multiple runs of [[OutliersCluster]{}]{} on $T$ as was done in the second round of the MapReduce algorithm described in Subsection \[sec-out\]. Below, we will focus on the coreset construction. The algorithm computes a coreset $T$ of $\tau \geq k+z$ centers which represents a good approximate solution to the $\tau$-center problem on $S$ (without outliers). The value of $\tau$, which will be fixed later, depends on $\hat{{\varepsilon}}$ and $D$. The main difference from the MapReduce algorithm is that we cannot exploit the incremental approach provided by [<span style="font-variant:small-caps;">gmm</span>]{}, since no efficient implementation of [<span style="font-variant:small-caps;">gmm</span>]{} in the Streaming setting is known. Hence, for the computation of $T$ we resort to a novel weighted variant of the *doubling algorithm* by Charikar et al. [@CharikarCFM04], which is described below. 
For a given stream of points $S$ and a target number of centers $\tau$, the algorithm maintains a weighted set $T$ of centers selected among the points of $S$ processed so far, and a lower bound $\phi$ on $r_\tau^*(S)$. $T$ is initialized with the first $\tau+1$ points of $S$, with each $t \in T$ assigned weight $w_t=1$, while $\phi$ is initialized to half the minimum distance between the points of $T$. For the sake of the analysis, we will define a proxy function $p: S \to T$ which, however, will not be explicitly stored by the algorithm. Initially, each point of $T$ is proxy for itself. The remaining points of $S$ are processed one at a time maintaining the following invariants: (a) $T$ contains at most $\tau$ centers. (b) $\forall t_1, t_2 \in T$ we have $d(t_1, t_2) > 4\phi$. (c) $\forall s \in S$ processed so far, $d(s,p(s)) \le 8\phi$. (d) $\forall t\in T$, $w_t = |\{s \in S \mbox{ processed so far} : \; p(s) = t\}|$. (e) $\phi \le r_\tau^*(S)$. The following two rules are applied to process each new point $s \in S$. The *update rule* checks if $d(s,T) \le 8\phi$. If this is the case, the center $t \in T$ closest to $s$ is identified and $w_t$ is incremented by one, defining $p(s) = t$. If instead $d(s,T) > 8\phi$, then $s$ is added as a new center to $T$, setting $w_s$ to 1 and defining $p(s)=s$. Note that in this latter case, the size of $T$ may exceed $\tau$, thus violating invariant (a). When this happens, the following *merge rule* is invoked repeatedly until invariant (a) is re-established. Each invocation of this rule first sets $\phi$ to $2\phi$, which, in turn, may lead to a violation of invariant (b). If this is the case, for each pair of points $u, v \in T$ violating invariant (b), we discard $u$ and set $w_v \leftarrow w_v + w_u$. Conceptually, this corresponds to the update of the proxy function which redefines $p(x)=v$, for each point $x$ for which $p(x)$ was equal to $u$. 
Observe that, at the end of the initialization, invariants (a) and (b) do not hold, while invariants (c)–(e) do hold. Thus, we prescribe that the merge rule and the enforcement of invariant (b) are applied at the end of the initialization before any new point is processed. This will ensure that all invariants hold before the $(\tau+2)$nd point of $S$ is processed. The following lemma shows that the above rules maintain all invariants. \[lem:doubling-algorithm\] After the initialization, at the end of the processing of each point $s \in S$, all invariants hold. As explained above, all invariants are enforced at the end of the initialization. Consider the processing of a new point $s$. It is straightforward to see that the combination of update and merge rules maintains invariants (a)–(d). We now show that invariant (e) is also maintained. After the update rule is applied, only invariant (a) can be violated. Suppose that this is the case, hence $|T| = \tau+1$. Each pair of centers in $T$ is at distance at least $4\phi$ from one another (invariant (b)). Let $\phi'$ be the new value of $\phi$ resulting after the required applications of the merge rule. It is easy to see that until the penultimate application of the merge rule, $T$ still contains $\tau+1$ points. Therefore each pair of these points must be at distance at least $4(\phi'/2)=2\phi'$ from one another. This implies that $\phi'$ is still a lower bound on $r_\tau^*(S)$. As an immediate corollary of the previous lemma, we have that after all points of $S$ have been processed, $d(s, p(s)) \le 8 \cdot r_\tau^*(S)$ for every $s \in S$. Moreover, it is immediate to see that the working memory required by the algorithm has size ${\Theta\left(\tau\right)}$. Now fix $\tau = (k+z)(16/\hat{{\varepsilon}})^D$ and let $T$ be the weighted coreset of size $\tau$ returned by the above algorithm. The following lemma is the counterpart of Lemma \[lem:proxy-outliers\] in the Streaming setting. 
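The processing of one stream point by the weighted doubling construction can be sketched as follows (a sketch under simplifying assumptions: points are hashable values, `dist` is supplied by the caller, and the initialization with the first $\tau+1$ points is omitted):

```python
def process_point(s, T, w, phi, tau, dist):
    """Apply the update rule to a new point s, then the merge rule
    (doubling phi) until at most tau centers remain; returns the new phi.
    T is the mutable center list, w the center -> weight map."""
    nearest = min(T, key=lambda t: dist(s, t))
    if dist(s, nearest) <= 8 * phi:
        w[nearest] += 1              # s's proxy is its closest center
    else:
        T.append(s)                  # s becomes a new center
        w[s] = 1
    while len(T) > tau:              # merge rule: restore invariant (a)
        phi *= 2
        kept = []
        for t in T:
            for v in kept:
                if dist(t, v) <= 4 * phi:
                    w[v] += w[t]     # t's proxies are redirected to v
                    break
            else:
                kept.append(t)
        T[:] = kept
    return phi
```

Note how the merge rule mirrors the analysis: after doubling $\phi$, any center within $4\phi$ of an already-kept center is discarded and its weight folded into the kept one, so invariants (b) and (d) are restored together.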
\[lem:proxy-outliers-streaming\] For every $s \in S$, $d(s, p(s)) \le \hat{{\varepsilon}}\cdot {r_{k, z}^*(S)}$. Observe that $S$ can be covered using $k+z$ balls of radius $r_{k+z}^*(S)$. Since $S$ comes from a space of doubling dimension $D$, we know that $S$ can also be covered using $\tau = (k+z)(16/\hat{{\varepsilon}})^{D}$ balls (not necessarily centered at points in $S$) of radius $\le\hat{{\varepsilon}}/16 \cdot r_{k+z}^*(S)$. Picking an arbitrary point of $S$ from each such ball induces a $\tau$-clustering of $S$ with radius at most $\hat{{\varepsilon}}/8 \cdot r_{k+z}^*(S)$. Hence, $$r_\tau^*(S) \le \hat{{\varepsilon}}/8 \cdot r_{k+z}^*(S).$$ Since $r_{k+z}^*(S) \le {r_{k, z}^*(S)}$, it follows that $r_\tau^*(S) \le \hat{{\varepsilon}}/8 \cdot {r_{k, z}^*(S)}$. By invariants (c) and (e) we have that for every $s \in S$ $$d(s, p(s)) \le 8 \phi \le 8\cdot r_\tau^*(S) \le \hat{{\varepsilon}} \cdot {r_{k, z}^*(S)}. \qed$$ The following theorem states the main result of this section. \[thm:streaming-outliers\] Let $0 < {\varepsilon}\le 1$. If the points of $S$ belong to a metric space of doubling dimension $D$, then, when run with $\hat{{\varepsilon}}={\varepsilon}/6$, the above 1-pass Streaming algorithm computes a $(3+{\varepsilon})$-approximation for the $k$-center problem with $z$ outliers with working memory of size ${O\left((k+z)(96/{\varepsilon})^D\right)}$. Given the result of Lemma \[lem:proxy-outliers-streaming\], the approximation factor can be established in exactly the same way as done for the MapReduce algorithm (refer to Lemma \[lem:outliers-approx\] and Theorem \[thm:outliers-mr\]), while the bound on the working memory size follows directly from the choice of $\hat{{\varepsilon}}$, the fact that $|T|=\tau=(k+z)(16/\hat{{\varepsilon}})^D$, and the fact that the Streaming algorithm needs memory proportional to $|T|$. 
\[corol:streaming-outliers\] For constant ${\varepsilon}$ and $D$, the above Streaming algorithm computes a $(3+{\varepsilon})$-approximation for the $k$-center problem with $z$ outliers with working memory of size ${O\left(k+z\right)}$, independent of $|S|$. A few remarks are in order. For simplicity, to compute the weighted coreset $T$ we preferred to adapt the 8-approximation algorithm by [@CharikarCFM04] rather than the more complex $(2+{\varepsilon})$-approximation algorithm by [@McCutchen2008], since this choice does not affect the approximation guarantee of our algorithm but comes only at the expense of a slight increase in the coreset size. Also, by applying similar techniques, we can obtain a Streaming algorithm for the $k$-center problem without outliers which uses ${O\left(k(1/{\varepsilon})^D\right)}$ space and features the same $(2+{\varepsilon})$-approximation as [@McCutchen2008]. In Section \[sec:experiments\] we compare the two algorithms experimentally.\ [**A 2-pass Streaming algorithm oblivious to $D$.**]{} As explained before, thanks to its incremental nature, the MapReduce coreset construction does not require explicit knowledge of the doubling dimension $D$ of the metric space. However, this is not the case for the 1-pass Streaming algorithm described above, which requires the a priori knowledge of $D$ to determine the proper value of $\tau$. 
While in practice one can set $\tau$ to exercise suitable tradeoffs between running time, working memory space and approximation quality, it is of theoretical interest to observe that a simple two-pass algorithm oblivious to $D$ with roughly the same bounds on the size of the working memory can be obtained by “simulating” the 2-round MapReduce algorithm for $\ell=1$. In what follows, we just sketch the main idea behind this simulation, omitting the details for brevity and referring the reader to the full version of this paper [@CeccarelloPP18]. In the first pass, we run the doubling algorithm of [@CharikarCFM04] for the $(k+z)$-center problem, thus obtaining a radius value $\hat{r} \leq 8 r^*_{k+z}\leq 8r^*_{k,z}$. Using $\hat{r}$ as an estimate for $r^*_{k,z}$, in the second pass we determine a maximal weighted coreset $T$ of points whose mutual distances are greater than $({\varepsilon}/48)\hat{r}$. During the pass, each point $s \in S-T$ is virtually assigned to a proxy in $T$ at distance at most $({\varepsilon}/48)\hat{r}$, and for every $x \in T$ a weight is computed as the number of points for which $x$ is proxy. Finally, our weighted variant of the algorithm of [@Charikar2001] is run on $T$. It is easy to see that $|T| \leq (k+z)(96/{\varepsilon})^D$ and that each point of $S$ is at distance at most $({\varepsilon}/6)r^*_{k,z}$ from its proxy. This immediately implies that this two-pass strategy returns a $(3+{\varepsilon})$-approximate solution to the $k$-center problem with $z$ outliers with the same working memory bounds as those stated in Theorem \[thm:streaming-outliers\] and Corollary \[corol:streaming-outliers\]. Conclusions {#sec:conclusions} =========== We presented MapReduce and Streaming algorithms for the $k$-center problem (with and without outliers) based on flexible coreset constructions. 
These constructions yield a spectrum of space-accuracy tradeoffs regulated by the doubling dimension $D$ of the underlying space, and afford approximation guarantees arbitrarily close to those of the best sequential strategies, using moderate space in the case of small $D$. Indeed, coresets provide an effective way of processing large amounts of data by building a succinct summary of the input which can then be processed with the sequential algorithm of choice. The theoretical analysis of the algorithms is complemented by experimental evidence of their practicality. Future avenues of research include further improvements of the local memory requirements of the MapReduce algorithms, the development of a 1-pass Streaming algorithm oblivious to the doubling dimension $D$ of the metric space, and the extension of our approach to other (center-based) clustering problems.
--- abstract: | In this paper, we study the following fractional Schrödinger-Poisson system $$\left\{ \begin{array}{ll} \varepsilon^{2s}(-\Delta)^su+V(x)u+\phi u=g(u) & \hbox{in $\mathbb{R}^3$,} \\ \varepsilon^{2t}(-\Delta)^t\phi=u^2,\,\, u>0& \hbox{in $\mathbb{R}^3$,} \end{array} \right.$$ where $s,t\in(0,1)$, $\varepsilon>0$ is a small parameter. Under suitable assumptions on the potential function $V(x)$ and the critical nonlinearity $g(u)$, we construct a family of positive solutions $u_{\varepsilon}\in H^s(\mathbb{R}^3)$ which concentrates around the global minima of $V$ as $\varepsilon\rightarrow0$. address: 'Kaimin Teng (Corresponding Author), Department of Mathematics, Taiyuan University of Technology, Taiyuan, Shanxi 030024, P. R. China' author: - Kaimin Teng title: 'Concentrating phenomenon for fractional nonlinear Schrödinger-Poisson system with critical nonlinearity' --- \[section\] \[theorem\][Lemma]{} \[theorem\][Definition]{} \[theorem\][Remark]{} \[theorem\][Proposition]{} \[theorem\][Corollary]{} Introduction ============ In this paper, we study the following fractional Schrödinger-Poisson system $$\label{main} \left\{ \begin{array}{ll} \varepsilon^{2s}(-\Delta)^su+V(x)u+\phi u=g(u) & \hbox{in $\mathbb{R}^3$,} \\ \varepsilon^{2t}(-\Delta)^t\phi=u^2,\,\, u>0& \hbox{in $\mathbb{R}^3$,} \end{array} \right.$$ where $s,t\in(0,1)$ and $\varepsilon>0$ is a small parameter. The potential $V:\mathbb{R}^3\rightarrow\mathbb{R}$ is a bounded continuous function satisfying\ $(V_0)$ $\inf\limits_{x\in\mathbb{R}^3}V(x)=V_0>0$;\ $(V_1)$ There is a bounded domain $\Lambda\subset\mathbb{R}^3$ such that $$V_0<\min_{\partial\Lambda}V(x),\quad \mathcal{M}=\{x\in\Lambda\,\,|\,\, V(x)=V_0\}\neq\emptyset.$$ Without loss of generality, we may assume that $0\in\mathcal{M}$. The nonlinearity $g:\mathbb{R}\rightarrow\mathbb{R}$ is a function of class $C^1$. 
The non-local operator $(-\Delta)^s$ ($s\in(0,1)$), which is called the fractional Laplace operator, can be defined by $$(-\Delta)^su(x)=C_s\,P.V.\int_{\mathbb{R}^3}\frac{u(x)-u(y)}{|x-y|^{3+2s}}\,{\rm d}y=C_s\lim_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3\backslash B_{\varepsilon}(x)}\frac{u(x)-u(y)}{|x-y|^{3+2s}}\,{\rm d}y$$ for $u\in\mathcal{S}(\mathbb{R}^3)$, where $\mathcal{S}(\mathbb{R}^3)$ is the Schwartz space of rapidly decaying $C^{\infty}$ functions, $B_{\varepsilon}(x)$ denotes the open ball of radius $\varepsilon$ centered at $x$ and the normalization constant $C_s=\Big(\int_{\mathbb{R}^3}\frac{1-\cos(\zeta_1)}{|\zeta|^{3+2s}}\,{\rm d}\zeta\Big)^{-1}$. The fractional Laplacian appears in many real-world models, such as fractional quantum mechanics [@La; @La1], anomalous diffusion [@MK], finance [@CT], obstacle problems [@S], conformal geometry and minimal surfaces [@CM]. In recent years, progress on nonlinear equations involving the fractional Laplacian can be found in [@AM; @BCPS; @BV; @CS; @CSi; @CT; @CW; @DMV; @DPDV; @DPW; @FL; @HZ1; @NPV; @PP; @S; @SZ; @Teng-1; @Teng-2] and so on. For $u\in\mathcal{S}(\mathbb{R}^3)$, the fractional Laplace operator $(-\Delta)^s$ can be expressed as an inverse Fourier transform $$(-\Delta)^su=\mathcal{F}^{-1}\Big((2\pi|\xi|)^{2s}\mathcal{F}u(\xi)\Big),$$ where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and inverse Fourier transform, respectively. If $u$ is sufficiently smooth, it is known (see [@NPV]) that this is equivalent to $$(-\Delta)^su(x)=-\frac{1}{2}C_s\int_{\mathbb{R}^3}\frac{u(x+y)+u(x-y)-2u(x)}{|y|^{3+2s}}\,{\rm d}y.$$ By a classical solution of , we mean a pair of continuous functions such that $(-\Delta)^su$ is well defined for all $x\in\mathbb{R}^3$ and satisfies in the pointwise sense. Since we are looking for positive solutions, we may assume that $g(s)=0$ for $s<0$. 
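As an illustration of the Fourier characterization, the symbol $(2\pi|\xi|)^{2s}$ can be applied numerically with an FFT. The 1D periodic sketch below is an assumption-laden toy (function and grid names are illustrative, and periodicity is an assumption not made in the paper); it reproduces the eigenrelation $(-\Delta)^s\sin(2\pi x)=(2\pi)^{2s}\sin(2\pi x)$:

```python
import numpy as np

def fractional_laplacian_1d(u, length, s):
    """Spectral (-Delta)^s of samples u on a periodic interval [0, length):
    multiply each Fourier coefficient by the symbol (2*pi*|xi|)**(2*s)."""
    n = len(u)
    xi = np.fft.fftfreq(n, d=length / n)          # frequencies in cycles per unit
    symbol = (2 * np.pi * np.abs(xi)) ** (2 * s)
    return np.real(np.fft.ifft(symbol * np.fft.fft(u)))

x = np.linspace(0.0, 1.0, 256, endpoint=False)
u = np.sin(2 * np.pi * x)
v = fractional_laplacian_1d(u, 1.0, 0.5)          # should equal (2*pi)**(2*0.5) * u
```

For $s=1$ the symbol reduces to $(2\pi|\xi|)^2$ and the routine computes the ordinary $-\Delta$, consistent with the limit behavior of the operator.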
Furthermore, we need the following conditions:\ $(g_0)$ $\lim\limits_{\tau\rightarrow0^{+}}\frac{g(\tau)}{\tau}=0$;\ $(g_1)$ $\lim\limits_{\tau\rightarrow+\infty}\frac{g(\tau)}{\tau^{2_s^{\ast}-1}}=\kappa>0$;\ $(g_2)$ there exists $\lambda>0$ such that $g(\tau)\geq\lambda \tau^{q-1}+\tau^{2_s^{\ast}-1}$ for some $\frac{4s+2t}{s+t}<q<2_s^{\ast}$ and all $\tau\geq0$. The hypotheses $(g_0)$-$(g_2)$ are the so-called critical Berestycki-Lions type conditions, which were introduced in [@ZCZ]. For simplicity, we may assume that $\kappa=1$ and $g(\tau)=f(\tau)+|\tau|^{2_s^{\ast}-2}\tau$, for $\tau>0$. Then system is equivalent to the following one $$\label{main-0} \left\{ \begin{array}{ll} \varepsilon^{2s}(-\Delta)^su+V(x)u+\phi u=f(u)+u^{2_s^{\ast}-1} & \hbox{in $\mathbb{R}^3$,} \\ \varepsilon^{2t}(-\Delta)^t\phi=u^2,\,\, u>0& \hbox{in $\mathbb{R}^3$} \end{array} \right.$$ where $f$ satisfies\ $(f_0)$ $\lim\limits_{\tau\rightarrow0^{+}}\frac{f(\tau)}{\tau}=0$;\ $(f_1)$ $\liminf\limits_{\tau\rightarrow+\infty}\frac{f'(\tau)}{\tau^{2_s^{\ast}-2}}=0$;\ $(f_2)$ there exists $\lambda>0$ such that $f(\tau)\geq\lambda \tau^{q-1}$, for $\tau>0$ and some $q\in(\frac{4s+2t}{s+t},2_s^{\ast})$. In recent years, the study of the existence, concentration and multiplicity of positive solutions of fractional Schrödinger-Poisson systems has just started. When $\varepsilon=1$, by using the Nehari-Pohozaev manifold, combining the monotone trick with a global compactness lemma, Teng [@Teng2] studied the existence of a positive ground state solution for the system $$\label{main1} \left\{ \begin{array}{ll} (-\Delta)^su+V(x)u+\phi u=|u|^{p-1}u+|u|^{2_s^{\ast}-2}u & \hbox{in $\mathbb{R}^3$,} \\ (-\Delta)^t\phi=u^2& \hbox{in $\mathbb{R}^3$.} \end{array} \right.$$ Using similar methods, in [@Teng3], positive ground state solutions for problem with $|u|^{p-1}u+|u|^{2_s^{\ast}-2}u$ replaced by $|u|^{p-1}u$ with $p\in(2,2_s^{\ast}-1)$ were established when $s=t$. 
In [@ZDS], the authors studied the existence of radial solutions for system with $|u|^{p-1}u+|u|^{2_s^{\ast}-2}u$ replaced by $f(u)$, where the nonlinearity $f(u)$ verifies the subcritical or critical assumptions of Berestycki-Lions type. When $0<\varepsilon<1$ is small, in [@MS], the authors studied the semiclassical states of the following system $$\left\{ \begin{array}{ll} \varepsilon^{2s}(-\Delta)^su+V(x)u+\phi u=f(u) & \hbox{in $\mathbb{R}^N$,} \\ \varepsilon^{\theta}(-\Delta)^{\frac{\alpha}{2}}\phi=\gamma_{\alpha}u^2& \hbox{in $\mathbb{R}^N$,} \end{array} \right.$$ where $s\in(0,1)$, $\alpha\in(0,N)$, $\theta\in(0,\alpha)$, $N\in(2s,2s+\alpha)$, $\gamma_{\alpha}$ is a positive constant, and $f(u)$ satisfies the following subcritical growth assumptions: $0<KF(t)\leq f(t)t$ with some $K>4$ for all $t\geq0$, where $F(t)=\int_0^tf(s)\,{\rm d}s$, and $\frac{f(t)}{t^3}$ is strictly increasing on $(0,+\infty)$. In [@LZ], by using the methods mentioned before, Liu and Zhang proved the existence and concentration of positive ground state solutions for problem . When the system satisfies $s=t$ and $f(u)+u^{2_s^{\ast}-1}$ is replaced by $K(x)|u|^{p-2}u$, where $V$ has a positive global minimum and $K(x)$ has a global maximum, the authors of [@YZZ] proved the existence of a positive ground state for $\varepsilon>0$ sufficiently small and the concentration behavior of these ground state solutions as $\varepsilon\rightarrow0$. In [@Teng2], we studied the system with competing potentials, i.e., $g(u)=K(x)f(u)+Q(x)|u|^{2_s^{\ast}-2}u$, where $f$ is a superlinear, subcritical nonlinearity of class $C^1$, and $V(x)$, $K(x)$ and $Q(x)$ are positive continuous functions. Under some suitable assumptions on $V$, $K$ and $Q$, we proved that there is a family of positive ground state solutions which concentrate on the set of minimal points of $V(x)$ and the sets of maximal points of $K(x)$ and $Q(x)$. 
For the local assumption on the potential $V(x)$, Teng [@Teng3] first applied the penalization methods developed in [@DF] to study the concentration phenomenon of system under the hypotheses made on $V(x)$ and $f$: - $\inf\limits_{x\in\mathbb{R}^3}V(x)=\alpha_0>0$ and there is a bounded domain $\Lambda\subset\mathbb{R}^3$ such that $$V_0=\inf_{\Lambda}V(x)<\min_{\partial\Lambda}V(x);$$ - $\lim\limits_{\tau\rightarrow0^{+}}\frac{f(\tau)}{\tau^3}=0$, there exist $\lambda>0$ and $C>0$ such that $f(\tau)\geq\lambda \tau^{q-1}$ for some $4\leq q<2_s^{\ast}$ and $|f'(\tau)|\leq C(1+|\tau|^{p-2})$, where $4<p<2_s^{\ast}$; - $\frac{f(\tau)}{\tau^{3}}$ is non-decreasing in $\tau\in (0,+\infty)$. The penalization methods have also been applied to fractional Schrödinger equations; see [@AM; @A; @HZ1]. To extend the result in [@Teng3], by modifying the penalization methods developed by Byeon and Wang [@BW], Teng [@Teng4] studied the concentration behavior of system with $V(x)$ satisfying $(V_0)$-$(V_1)$ and $g$ verifying - $\lim\limits_{\tau\rightarrow0^{+}}\frac{g(\tau)}{\tau}=0$, $\lim\limits_{\tau\rightarrow+\infty}\frac{g'(\tau)}{\tau^{2_s^{\ast}-2}}=0$; - there exists $\lambda>0$ such that $g(\tau)\geq\lambda \tau^{q-1}$ for some $\frac{4s+2t}{s+t}<q<2_s^{\ast}$ and all $\tau\geq0$; - $\frac{g(\tau)}{\tau^{q-1}}$ is non-decreasing in $\tau\in (0,+\infty)$. When $s=1$, system reduces to the following Schrödinger-Poisson system $$\label{main-1} \left\{ \begin{array}{ll} -\varepsilon^2\Delta u+V(x)u+\phi u=g(x,u) & \hbox{in $\mathbb{R}^3$,} \\ -\varepsilon^2\Delta \phi=u^2& \hbox{in $\mathbb{R}^3$.} \end{array} \right.$$ In recent years, there has been increasing attention to the existence of positive solutions, ground state solutions, multiple solutions and semiclassical states; see for example [@AR; @BF; @DW; @HZ; @Ruiz; @Ruiz1; @ZLZ] and the references therein. It is well known that system arises in quantum mechanics models (see e.g. 
[@LS]) and in semiconductor theory [@MRS]. In particular, systems like have been introduced in [@BF] as a model to describe solitary waves. The concentration phenomenon of solutions of Schrödinger-Poisson systems like has been the object of interest for many scholars. In [@H], the author studied the system with $g(x,v)=f(v)$ satisfying - $\frac{f(t)}{t^3}$ increasing in $(0,\infty)$, $\exists\theta>4$ such that $0<\theta F(t)\leq tf(t)$ for all $t>0$, where $F(t)=\int_0^tf(s)\,{\rm d}s$, - $f'(t)t^2-3f(t)t\geq Ct^{\sigma}$, $\sigma\in(4,6)$, and $f(t)=o(t^3)$ as $t\rightarrow0$. By using Ljusternik-Schnirelmann theory and the minimax method, the author obtained the multiplicity of positive solutions for $\varepsilon>0$ small which concentrate on the minima of $V(x)$. In [@WTXZ], Wang et al. studied the existence and concentration of positive ground state solutions for system with $g(x,v)=b(x)f(v)$ satisfying - $\frac{f(t)}{t^3}$ increasing in $(0,\infty)$, $\frac{F(t)}{t^4}\rightarrow+\infty$ as $t\rightarrow\infty$, - $|f(t)|\leq c(1+|t|^{p-1})$, $p\in(4,6)$, and $f(t)=o(t^3)$ as $t\rightarrow0$. In the critical case, He and Zou [@HZ] considered system with $g(x,v)=v^5+ f(v)$, where $f$ satisfies hypotheses similar to those in [@H], and proved that system has a ground state solution concentrating around a global minimum of $V(x)$ as $\varepsilon\rightarrow0$. In [@WTXZ1], the authors studied the system with $g(x,v)=b(x)f(v)+|v|^4v$, where $f$ satisfies - $\frac{f(t)}{t^3}$ increasing in $(0,\infty)$, $f(t)=o(t^3)$ as $t\rightarrow0$, - $f(t)\geq ct^{q-1}$, $|f(t)|\leq c(1+|t|^{p-1})$, $4<q\leq p<6$. Under some suitable assumptions on $V(x)$ and $b(x)$, Wang et al. [@WTXZ1] proved the existence of a least energy solution $(u_{\varepsilon},\phi_{\varepsilon})$ and then showed that $u_{\varepsilon}$ converges to the least energy solution of the associated limit problem and concentrates on some set in $\mathbb{R}^3$ depending on the potentials $V$ and $b$. 
The above assumptions on the potential $V(x)$ are global; under local assumptions, few results have been obtained in the literature. As far as we know, only [@HL] studied the Schrödinger-Poisson system with $V(x)$ satisfying the local condition $\inf\limits_{\Lambda}V(x)<\inf\limits_{\partial\Lambda}V(x)$ and $g(x,v)=\lambda|v|^{p-2}v+|v|^4v$ for $3< p\leq4$, where $\Lambda$ is an open set of $\mathbb{R}^3$ and $\lambda>0$; the authors constructed a family of positive solutions which concentrates around a local minimum of $V$ as $\varepsilon\rightarrow0$. The semiclassical states of the following Schrödinger-Poisson system $$\label{main-1-1} \left\{ \begin{array}{ll} -\varepsilon^2\Delta u+V(x)u+K(x)\phi u=u^p & \hbox{in $\mathbb{R}^3$,} \\ -\Delta \phi=u^2& \hbox{in $\mathbb{R}^3$} \end{array} \right.$$ have attracted many scholars’ attention. When $p\in(1,5)$, Ruiz and Vaira [@RV] proved the existence of multi-bump solutions of system , and these bumps concentrate around a local minimum of the potential $V$. Ianni and Vaira [@IV] obtained the existence of positive bound state solutions which concentrate on a non-degenerate local minimum or maximum of $V$ by using a Lyapunov-Schmidt reduction method. Ianni and Vaira [@IV1] also showed the existence of radially symmetric solutions, which concentrate on spheres. For the critical case, for system with $u^p$ replaced by $f(u)+u^5$, the authors of [@LGF] proved a multiplicity result for positive solutions: the number of positive solutions depends on the profile of the potential, and each solution concentrates around its corresponding global minimum point of the potential in the semiclassical limit. 
Under local assumptions on the potential $V(x)$, Seok [@S] studied the system with $u^p$ replaced by $f(u)$ satisfying - $f(t)=o(t)$ as $t\rightarrow0$, $\lim\limits_{t\rightarrow\infty}\frac{f(t)}{t^p}<\infty$ for some $p\in(1,5)$, - $\exists T>0$ such that $\frac{1}{2}mT^2<F(T)$, $F(t)=\int_0^tf(s)\,{\rm d}s$ and proved the existence of spike solutions by following a variational approach developed by Byeon and Jeanjean [@BJ; @BJ1]. Using ideas similar to those of Byeon-Jeanjean [@BJ], Zhang [@Z] considered the system with $u^p$ replaced by a general nonlinearity $f(u)$ satisfying the critical growth assumptions - $f(t)=o(t)$ as $t\rightarrow0$, $\lim\limits_{t\rightarrow\infty}\frac{f(t)}{t^5}=\kappa>0$, - $\exists C>0$ and $p<6$ such that $f(t)\geq\kappa t^5+Ct^{p-1}$ for $t\geq0$ and constructed a solution $(u_{\varepsilon},\phi_{\varepsilon})$, which concentrates at an isolated component of positive local minimum points of $V$ as $\varepsilon\rightarrow0$. From the above known results, we see that the monotonicity hypothesis on $\frac{f(t)}{t^3}$ is necessary to study the concentration behavior of system in both the critical and the subcritical case. The purpose of this paper is to weaken this monotonicity hypothesis to the following one:\ $(f_3)$ $\frac{f(\tau)}{\tau^{q-1}}$ is non-decreasing in $\tau\in (0,+\infty)$, where $q\in(\frac{4s+2t}{s+t},2_s^{\ast})$. To the best of our knowledge, except [@HL], there are few papers studying the concentration phenomenon of the Schrödinger-Poisson system with a local assumption on the potential $V(x)$, not to mention the fractional Schrödinger-Poisson system . Motivated by the above cited papers, the goal of this paper is to study the existence and concentration of positive bound state solutions for system under $(V_0)$-$(V_1)$ and $(f_0)$-$(f_3)$. Our main result is as follows. \[thm1-1\] Let $2s+2t>3$, $s,t\in(0,1)$. Suppose that $V$ satisfies $(V_0)$, $(V_1)$ and $g\in C(\mathbb{R}^{+},\mathbb{R})$ satisfies $(g_0)$–$(g_3)$. 
Then there exists $\varepsilon_0>0$ such that system possesses a positive solution $(u_{\varepsilon},\phi_{\varepsilon})\in H_{\varepsilon}\times\mathcal{D}^{t,2}(\mathbb{R}^3)$ for all $\varepsilon\in(0,\varepsilon_0)$. Moreover, there exists a maximum point $x_{\varepsilon}$ of $u_{\varepsilon}$ such that $\lim\limits_{\varepsilon\rightarrow0}{\rm dist}(x_{\varepsilon},\mathcal{M})=0$ and $$u_{\varepsilon}(x)\leq\frac{C\varepsilon^{3+2s}}{C_0\varepsilon^{3+2s}+|x-x_{\varepsilon}|^{3+2s}}\quad x\in\mathbb{R}^3,\,\,\text{and}\,\, \varepsilon\in(0,\varepsilon_0)$$ for some constants $C>0$ and $C_0\in\mathbb{R}$. Let us give some comments on our main result. The hypothesis $(V_1)$ is a special case of the local assumption $$\inf_{\Lambda}V(x)<\inf_{\partial\Lambda}V(x),$$ first introduced by M. del Pino and P. L. Felmer [@DF]; we impose it because local a priori estimates such as Theorem 8.17 in [@GT] are not available here. Compared with the results in [@H; @HZ; @WTXZ; @WTXZ1], the monotonicity hypothesis $(f_3)$ is weaker even in the case $s=t=1$ ($q-1>\frac{4s+2t}{s+t}-1=2$). Since the condition $(V_1)$ is local and the $(AR)$-condition for the fractional Schrödinger-Poisson system is not satisfied, we need to modify the penalization method developed by J. Byeon and Z. Q. Wang [@BJ; @BW] and combine it with the penalization method introduced by M. del Pino and P. L. Felmer [@DF], in order to overcome the obstacles caused by the lack of compactness due to the unboundedness of the domain and by the lack of the $(AR)$ condition. The paper is organized as follows. In Section 2, we give some preliminary results. In Section 3, we prove the existence of positive ground state solutions for the “limit problem”. In Section 4, we prove the main result, Theorem \[thm1-1\]. Variational Setting =================== In this section, we outline the variational framework for studying problem and list some preliminary lemmas which will be used later. 
In the sequel, we denote by $\|\cdot\|_{p}$ the usual norm of the space $L^p(\mathbb{R}^3)$, and the letters $c_i$ ($i=1,2,\ldots$) and $C$ denote positive constants. Function spaces ---------------- We define the homogeneous fractional Sobolev space $\mathcal{D}^{\alpha,2}(\mathbb{R}^3)$ as follows $$\mathcal{D}^{\alpha,2}(\mathbb{R}^3)=\Big\{u\in L^{2_{\alpha}^{\ast}}(\mathbb{R}^3)\,\,\Big|\,\,|\xi|^{\alpha}(\mathcal{F}u)(\xi)\in L^2(\mathbb{R}^3)\Big\},$$ which is the completion of $C_0^{\infty}(\mathbb{R}^3)$ under the norm $$\|u\|_{\mathcal{D}^{\alpha,2}}=\Big(\int_{\mathbb{R}^3}|(-\Delta)^{\frac{\alpha}{2}}u|^2\,{\rm d}x\Big)^{\frac{1}{2}}=\Big(\int_{\mathbb{R}^3}|\xi|^{2\alpha}|(\mathcal{F}u)(\xi)|^2\,{\rm d}\xi\Big)^{\frac{1}{2}}.$$ The fractional Sobolev space $H^{\alpha}(\mathbb{R}^3)$ can be described by means of the Fourier transform, i.e. $$H^{\alpha}(\mathbb{R}^3)=\Big\{u\in L^2(\mathbb{R}^3)\,\,\Big|\,\,\int_{\mathbb{R}^3}(|\xi|^{2\alpha}|(\mathcal{F}u)(\xi)|^2+|(\mathcal{F}u)(\xi)|^2)\,{\rm d}\xi<+\infty\Big\}.$$ In this case, the inner product and the norm are defined as $$(u,v)=\int_{\mathbb{R}^3}(|\xi|^{2\alpha}(\mathcal{F}u)(\xi)\overline{(\mathcal{F}v)(\xi)}+(\mathcal{F}u)(\xi)\overline{(\mathcal{F}v)(\xi)})\,{\rm d}\xi$$ and $$\|u\|_{H^{\alpha}}=\bigg(\int_{\mathbb{R}^3}(|\xi|^{2\alpha}|(\mathcal{F}u)(\xi)|^2+|(\mathcal{F}u)(\xi)|^2)\,{\rm d}\xi\bigg)^{\frac{1}{2}}.$$ From Plancherel’s theorem we have $\|u\|_2=\|\mathcal{F}u\|_2$ and $\||\xi|^{\alpha}\mathcal{F}u\|_2=\|(-\Delta)^{\frac{\alpha}{2}}u\|_2$. Hence $$\|u\|_{H^{\alpha}}=\bigg(\int_{\mathbb{R}^3}(|(-\Delta)^{\frac{\alpha}{2}}u(x)|^2+|u(x)|^2)\,{\rm d}x\bigg)^{\frac{1}{2}},\quad \forall u\in H^{\alpha}(\mathbb{R}^3).$$ We write $\|\cdot\|$ for $\|\cdot\|_{H^{\alpha}}$ in the sequel for convenience. 
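The identification of the Fourier-side and real-side $H^{\alpha}$ norms above rests on Plancherel's theorem; as a quick hedged illustration (not part of the paper's argument), the discrete analogue can be checked numerically with an orthonormal FFT:

```python
# Hedged illustration (not from the paper): with the orthonormal discrete
# Fourier transform, the discrete analogue of Plancherel's identity
# ||u||_2 = ||Fu||_2 holds, which is what allows the Fourier-side and
# real-side definitions of the H^alpha norm to be identified.
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(128)
U = np.fft.fft(u, norm="ortho")               # orthonormal DFT

l2_real = np.sqrt(np.sum(np.abs(u) ** 2))
l2_fourier = np.sqrt(np.sum(np.abs(U) ** 2))
assert np.isclose(l2_real, l2_fourier)        # discrete Plancherel identity
print(float(l2_real), float(l2_fourier))
```

The same unitarity is what makes $\||\xi|^{\alpha}\mathcal{F}u\|_2=\|(-\Delta)^{\frac{\alpha}{2}}u\|_2$ exact in the continuous setting.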
In terms of finite differences, the fractional Sobolev space $H^{\alpha}(\mathbb{R}^3)$ can also be defined as follows $$H^{\alpha}(\mathbb{R}^3)=\Big\{u\in L^2(\mathbb{R}^3)\,\,\Big|\,\,D_{\alpha}u\in L^2(\mathbb{R}^3)\Big\},\quad |D_{\alpha}u|^2=\int_{\mathbb{R}^3}\frac{|u(x)-u(y)|^2}{|x-y|^{3+2\alpha}}\,{\rm d}y,$$ endowed with the natural norm $$\|u\|_{H^{\alpha}}=\bigg(\int_{\mathbb{R}^3}|u|^2\,{\rm d}x+\int_{\mathbb{R}^3}|D_{\alpha}u|^2\,{\rm d}x\bigg)^{\frac{1}{2}}.$$ Also, in view of Proposition 3.4 and Proposition 3.6 in [@NPV], we have $$\label{equ2-1} \|(-\Delta)^{\frac{\alpha}{2}}u\|_2^2=\int_{\mathbb{R}^3}|\xi|^{2\alpha}|(\mathcal{F}u)(\xi)|^2\,{\rm d}\xi=\frac{1}{C_{\alpha}}\int_{\mathbb{R}^3}|D_{\alpha}u|^2\,{\rm d}x.$$ We define the Sobolev space $H_{\varepsilon}=\{u\in H^s(\mathbb{R}^3)\,\,|\,\, \int_{\mathbb{R}^3}V(\varepsilon x)u^2\,{\rm d}x<\infty\}$ endowed with the norm $$\|u\|_{H_{\varepsilon}}=\Big(\int_{\mathbb{R}^3}(|D_su|^2+V(\varepsilon x)u^2)\,{\rm d}x\Big)^{\frac{1}{2}}.$$ It is well known (see [@LM]) that $H^s(\mathbb{R}^3)$ is continuously embedded into $L^r(\mathbb{R}^3)$ for $2\leq r\leq 2_{s}^{\ast}$ ($2_{s}^{\ast}=\frac{6}{3-2s}$). Obviously, the conclusion also holds for $H_{\varepsilon}$. Formulation of the problem -------------------------- It is easily seen that, performing the change of variables $u(x)\rightarrow u(x/\varepsilon)$ and $\phi(x)\rightarrow \phi(x/\varepsilon)$ and taking $z=x/\varepsilon$, problem can be rewritten in the following equivalent form $$\label{main-2-1} \left\{ \begin{array}{ll} (-\Delta)^su+V(\varepsilon z)u+\phi u=f(u)+u^{2_s^{\ast}-1} & \hbox{in $\mathbb{R}^3$,} \\ (-\Delta)^t\phi=u^2,\,\, u>0& \hbox{in $\mathbb{R}^3$} \end{array} \right.$$ to which we will refer from now on. Observe that if $4s+2t\geq3$, there holds $2\leq\frac{12}{3+2t}\leq\frac{6}{3-2s}$ and thus $H_{\varepsilon}\hookrightarrow L^{\frac{12}{3+2t}}(\mathbb{R}^3)$. 
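The exponent chain at the end of this subsection is a purely algebraic fact; a small hedged numerical check (an illustration only, not part of the argument) confirms that, for $s,t\in(0,1)$, the chain $2\leq\frac{12}{3+2t}\leq\frac{6}{3-2s}$ holds exactly when $4s+2t\geq3$:

```python
# Hedged sanity check (not from the paper): on a grid of s, t in (0,1),
# the chain 2 <= 12/(3+2t) <= 6/(3-2s), which yields the embedding
# H_eps -> L^{12/(3+2t)}, holds exactly when 4s+2t >= 3.
import itertools

eps = 1e-9                               # guard against float round-off
grid = [i / 100 for i in range(1, 100)]  # s, t in {0.01, ..., 0.99}
for s, t in itertools.product(grid, grid):
    chain = (2 <= 12 / (3 + 2 * t) + eps) and \
            (12 / (3 + 2 * t) <= 6 / (3 - 2 * s) + eps)
    cond = 4 * s + 2 * t >= 3 - eps
    assert chain == cond
print("checked", len(grid) ** 2, "parameter pairs")
```

Indeed, $\frac{12}{3+2t}\leq\frac{6}{3-2s}$ rearranges to $4s+2t\geq3$, while the left inequality only needs $t\leq\frac{3}{2}$.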
Considering $u\in H_{\varepsilon}$, the linear functional $\widetilde{\mathcal{L}}_u:\mathcal{D}^{t,2}(\mathbb{R}^3)\rightarrow\mathbb{R}$ is defined by $\widetilde{\mathcal{L}}_u(v)=\int_{\mathbb{R}^3}u^2v\,{\rm d}x$. Using the Lax-Milgram theorem, there exists a unique $\phi_u^t\in\mathcal{D}^{t,2}(\mathbb{R}^3)$ such that $$\begin{aligned} C_t\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(\phi_u^t(z)-\phi_u^t(y))(v(z)-v(y))}{|z-y|^{3+2t}}\,{\rm d}y\,{\rm d}z&=\int_{\mathbb{R}^3}(-\Delta)^{\frac{t}{2}}\phi_u^t(-\Delta)^{\frac{t}{2}}v\,{\rm d}z\\ &=\int_{\mathbb{R}^3}u^2v\,{\rm d}z,\quad \forall v\in\mathcal{D}^{t,2}(\mathbb{R}^3),\end{aligned}$$ that is, $\phi_u^t$ is a weak solution of $(-\Delta)^t\phi_u^t=u^2$, and the representation formula $$\phi_u^t(x)=c_t\int_{\mathbb{R}^3}\frac{u^2(y)}{|x-y|^{3-2t}}\,{\rm d}y,\quad x\in\mathbb{R}^3,\quad c_t=\pi^{-\frac{3}{2}}2^{-2t}\frac{\Gamma(\frac{3-2t}{2})}{\Gamma(t)}$$ holds. Substituting $\phi_u^t$ in , the system reduces to a single fractional Schrödinger equation $$\label{R-1} (-\Delta)^su+V(\varepsilon z)u+\phi_u^tu=f(u)+(u^{+})^{2_s^{\ast}-1}\quad z\in\mathbb{R}^3.$$ Solutions of can be found by looking for critical points of the associated energy functional $J_{\varepsilon}: H_{\varepsilon}\rightarrow\mathbb{R}$ defined by $$\begin{aligned} J_{\varepsilon}(u)&=\frac{1}{2}\int_{\mathbb{R}^3}|D_su|^2\,{\rm d}z+\frac{1}{2}\int_{\mathbb{R}^3}V(\varepsilon z)u^2\,{\rm d}z+\frac{1}{4}\int_{\mathbb{R}^3}\phi_u^tu^2\,{\rm d}z-\int_{\mathbb{R}^3}F(u)\,{\rm d}z\\ &-\frac{1}{2_s^{\ast}}\int_{\mathbb{R}^3}(u^{+})^{2_s^{\ast}}\,{\rm d}z.\end{aligned}$$ Let us summarize some properties of the function $\phi_u^t$. By direct computation, it is easy to check the following conclusions. \[lem2-1\] For every $u\in H_{\varepsilon}$ with $4s+2t\geq3$, define $\Phi(u)=\phi_u^t\in \mathcal{D}^{t,2}(\mathbb{R}^3)$, where $\phi_u^t$ is the unique solution of equation $(-\Delta)^t\phi=u^2$. 
Then there hold:\ $(i)$ If $u_n\rightharpoonup u$ in $H_{\varepsilon}$, then $\Phi(u_n)\rightharpoonup\Phi(u)$ in $\mathcal{D}^{t,2}(\mathbb{R}^3)$;\ $(ii)$ $\Phi(\lambda u)=\lambda^2\Phi(u)$ for any $\lambda\in\mathbb{R}$;\ $(iii)$ For $u\in H_{\varepsilon}$, one has $$\|\Phi(u)\|_{\mathcal{D}^{t,2}}\leq C\|u\|_{\frac{12}{3+2t}}^2\leq C\|u\|_{H_\varepsilon}^2,\quad \int_{\mathbb{R}^3}\Phi(u)u^2\,{\rm d}x\leq C\|u\|_{\frac{12}{3+2t}}^4\leq C\|u\|_{H_\varepsilon}^4,$$ where the constant $C$ is independent of $u$;\ $(iv)$ Let $2s+2t>3$. If $u_n\rightharpoonup u$ in $H_{\varepsilon}$ and $u_n\rightarrow u$ a.e. in $\mathbb{R}^3$, then for any $v\in H_{\varepsilon}$, $$\int_{\mathbb{R}^3}\phi_{u_n}^tu_nv\,{\rm d}z\rightarrow\int_{\mathbb{R}^3}\phi_{u}^tuv\,{\rm d}z\quad\text{and}\quad\int_{\mathbb{R}^3}f(u_n)v\,{\rm d}z\rightarrow\int_{\mathbb{R}^3}f(u)v\,{\rm d}z$$ and $$\int_{\mathbb{R}^3}(u_n^{+})^{2_s^{\ast}-1}v\,{\rm d}z\rightarrow\int_{\mathbb{R}^3}(u^{+})^{2_s^{\ast}-1}v\,{\rm d}z,$$ and thus $u$ is a solution of problem . In the following, we collect some useful lemmas. We define $$\mu_{\infty}=\lim_{R\rightarrow\infty}\limsup_{n\rightarrow\infty}\int_{\{|x|>R\}}|D_su_n|^2\,{\rm d}z,\quad \nu_{\infty}=\lim_{R\rightarrow\infty}\limsup_{n\rightarrow\infty}\int_{\{|x|>R\}}|u_n|^{2_s^{\ast}}\,{\rm d}z.$$ \[lem2-2\]([@PP; @ZZX]) Let $\{u_n\}\subset H^s(\mathbb{R}^3)$ be such that $u_n\rightharpoonup u$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$, $|D_su_n|^2\rightharpoonup\mu$ and $|u_n|^{2_s^{\ast}}\rightharpoonup\nu$ weakly$-\ast$ in $\mathcal{M}(\mathbb{R}^3)$ as $n\rightarrow\infty$. Here $\mathcal{M}(\mathbb{R}^3)$ is the space of finite nonnegative Borel measures on $\mathbb{R}^3$. 
Then\ $(i)$ there exist an (at most countable) set of distinct points $\{x_j\}_{j\in J}\subset\mathbb{R}^3$ and numbers $\mu_j\geq0$, $\nu_j\geq0$ with $\mu_j+\nu_j>0$ ($j\in J$) such that $$\mu\geq|D_su|^2+\sum_{j\in J}\mu_j\delta_{x_j},\quad \nu=|u|^{2_s^{\ast}}+\sum_{j\in J}\nu_j\delta_{x_j},\quad \mu_j=\mu(\{x_j\}),\,\, \nu_j=\nu(\{x_j\});$$ $(ii)$ $\mu_{\infty}$ and $\nu_{\infty}$ are well defined and satisfy $$\limsup_{n\rightarrow\infty}\int_{\mathbb{R}^3}|D_su_n|^2\,{\rm d}z=\int_{\mathbb{R}^3}\,{\rm d}\mu+\mu_{\infty},\quad \limsup_{n\rightarrow\infty}\int_{\mathbb{R}^3}|u_n|^{2_s^{\ast}}\,{\rm d}z=\int_{\mathbb{R}^3}\,{\rm d}\nu+\nu_{\infty};$$ $(iii)$ $$\nu_j\leq(\mathcal{S}_s^{-1}\mu_j)^{\frac{2_s^{\ast}}{2}}\,\, \text{for any}\,\, j\in J\,\, \text{and}\,\,\nu_{\infty}\leq(\mathcal{S}_s^{-1}\mu_{\infty})^{\frac{2_s^{\ast}}{2}}.$$ \[pro2-1\] ([@Teng]) Let $\{u_n\}$ be a bounded sequence in $H^s(\mathbb{R}^3)$. If $$\lim_{n\rightarrow\infty}\sup_{y\in\mathbb{R}^3}\int_{B_R(y)}|u_n|^{2_s^{\ast}}\,{\rm d}x=0,$$ where $R$ is a positive number, then $u_n\rightarrow0$ in $L^{2_s^{\ast}}(\mathbb{R}^3)$ as $n\rightarrow\infty$. \[pro2-3\] Let $\{u_k\}\subset \mathcal{D}^{s,2}(\mathbb{R}^3)$ be a bounded sequence such that $u_k\rightharpoonup0$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$. Suppose that there exist a bounded open set $Q\subset\mathbb{R}^3$ and a positive number $\gamma>0$ such that $$\label{equ2-6} \int_{Q}|u_k|^{2_s^{\ast}}\,{\rm d}x\geq\gamma>0.$$ Moreover, suppose that $$\label{equ2-7} (-\Delta)^su_k=|u_k|^{2_s^{\ast}-2}u_k-\chi_k\quad x\in\mathbb{R}^3,$$ where $\chi_k\in (H^s(\mathbb{R}^3))'$, and $|\langle\chi_k,\varphi\rangle|\leq\varepsilon_k\|\varphi\|$ for any $\varphi\in C_0^{\infty}(V)$, where $V$ is an open neighborhood of $Q$ and $\varepsilon_k\rightarrow0$ as $k\rightarrow\infty$. 
Then there exist a sequence of points $\{z_k\}\subset\mathbb{R}^3$ and a sequence of positive numbers $\{\sigma_k\}$ such that $v_k(x)=\sigma_k^{\frac{3-2s}{2}}u_k(\sigma_k x+z_k)$ converges weakly in $\mathcal{D}^{s,2}(\mathbb{R}^3)$ to a nontrivial solution $v$ of $$\label{equ2-8} (-\Delta)^sv=|v|^{2_s^{\ast}-2}v\quad x\in\mathbb{R}^3.$$ Moreover, $z_k\rightarrow z\in \overline{Q}$ and $\sigma_k\rightarrow0$ as $k\rightarrow\infty$. Since $\{u_k\}$ is bounded in $\mathcal{D}^{s,2}(\mathbb{R}^3)$ and $u_k\rightharpoonup0$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$, by Prokhorov’s theorem (Theorem 8.6.2 in [@B]), there exist $\mu,\nu\in\mathcal{M}(\mathbb{R}^3)$ such that $$|D_su_k|^2\rightharpoonup\mu\,\,\text{and}\,\, |u_k|^{2_s^{\ast}}\rightharpoonup\nu\,\, \text{weakly-}\ast\,\, \text{in}\,\, \mathcal{M}(\mathbb{R}^3)\,\,\text{as}\,\, k\rightarrow\infty.$$ By Lemma \[lem2-2\], there exist an at most countable index set $J$, a sequence $\{x_j\}_{j\in J}\subset\mathbb{R}^3$ and $\{\nu_j\}\subset(0,\infty)$ such that $$\nu=\sum_{j\in J}\nu_j\delta_{x_j}.$$ We claim that there is at least one $j_0\in J$ such that $x_{j_0}\in \overline{Q}$ with $\nu_{j_0}>0$. If not, then $x_j\not\in \overline{Q}$ for all $j\in J$ with $\nu_{j}>0$, and $$\int_{\mathbb{R}^3}|u_k|^{2_s^{\ast}}\varphi(x)\,{\rm d}x\rightarrow\int_{\mathbb{R}^3}\sum_{j\in J}\nu_j\delta_{x_j}\varphi\,{\rm d}x=\sum_{j\in J}\nu_j\varphi(x_j), \quad \forall \varphi\in C_0(\mathbb{R}^3).$$ Taking $\varphi$ supported in a small neighborhood of $\overline{Q}$ with $\varphi\equiv1$ on $\overline{Q}$, we see that $\int_{Q}|u_k|^{2_s^{\ast}}\,{\rm d}x\rightarrow0$, which contradicts . Thus, the claim is true. We define the Lévy concentration function $$Q_k(r)=\sup_{x\in\overline{Q}}\int_{B_r(x)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z;$$ then $Q_k$ is a non-decreasing and bounded function. 
Fixing a small $\tau\in(0,\mathcal{S}_s^{\frac{3}{2s}})$, we can find $\sigma_k:=\sigma_k(\tau)\in\mathbb{R}_{+}$ and $z_k\in \overline{Q}$ such that $$\int_{B_{\sigma_k}(z_k)}|u_k|^{2_s^{\ast}}\,{\rm d}z=Q_k(\sigma_k)=\tau.$$ Setting $v_k(x)=\sigma_k^{\frac{3-2s}{2}}u_k(\sigma_kx+z_k)$, we have that $$\widetilde{Q}_k(r):=\sup_{x\in\bar{Q}_k}\int_{B_r(x)}|v_k|^{2_s^{\ast}}\,{\rm d}z=\sup_{x\in\bar{Q}}\int_{B_{\sigma_k r}(x)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z=Q_k(\sigma_k r),$$ where $\bar{Q}_k=\{x\in\mathbb{R}^3\,\,|\,\, \sigma_kx+z_k\in\overline{Q}\}$. Hence, we obtain that $$\widetilde{Q}_k(1)=\tau=Q_k(\sigma_k)=\int_{B_{\sigma_k}(z_k)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z=\int_{B_1(0)}|v_k(z)|^{2_s^{\ast}}\,{\rm d}z.$$ Now, we prove that there is a small $\tau_0\in(0,\mathcal{S}_s^{\frac{3}{2s}})$ such that $\sigma_k(\tau_0)\rightarrow0$ as $k\rightarrow\infty$. Otherwise, for any $\varepsilon>0$, there exists $r_{\varepsilon}>0$ such that $\sigma_k(\varepsilon)>r_{\varepsilon}$. Hence, for any $x\in\overline{Q}$, there holds $$\int_{B_{r_{\varepsilon}}(x)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z\leq\sup_{x\in\overline{Q}}\int_{B_{\sigma_k(\varepsilon)}(x)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z=Q_k(\sigma_k(\varepsilon))=\varepsilon.$$ Furthermore, $$\nu_{j_0}\leq\int_{B_{r_{\varepsilon}}(x_{j_0})}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z+o_k(1)\leq\varepsilon+o_k(1),\quad \forall\varepsilon>0.$$ Letting $k\rightarrow+\infty$ and then $\varepsilon\rightarrow0$, we get $\nu_{j_0}\leq0$, which is a contradiction. For the above $\tau_0$, we still denote $\sigma_k:=\sigma_k(\tau_0)$ and the corresponding sequence $z_k\in\overline{Q}$. 
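The identity $\int_{B_{\sigma_k}(z_k)}|u_k|^{2_s^{\ast}}\,{\rm d}z=\int_{B_1(0)}|v_k|^{2_s^{\ast}}\,{\rm d}z$ used above holds because the rescaling exponent $\frac{3-2s}{2}$ is adapted to the critical power; explicitly (a one-line change of variables, recorded here for the reader's convenience), since $\frac{3-2s}{2}\cdot2_s^{\ast}=3$,

```latex
\[
\int_{B_1(0)}|v_k(x)|^{2_s^{\ast}}\,{\rm d}x
=\sigma_k^{\frac{3-2s}{2}\cdot 2_s^{\ast}}\int_{B_1(0)}|u_k(\sigma_kx+z_k)|^{2_s^{\ast}}\,{\rm d}x
=\sigma_k^{3}\cdot\sigma_k^{-3}\int_{B_{\sigma_k}(z_k)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z
=\int_{B_{\sigma_k}(z_k)}|u_k(z)|^{2_s^{\ast}}\,{\rm d}z,
\]
```

after the substitution $z=\sigma_kx+z_k$, ${\rm d}z=\sigma_k^{3}\,{\rm d}x$; an analogous computation shows that the Gagliardo seminorm $\int_{\mathbb{R}^3}|D_sv_k|^2\,{\rm d}x$ is invariant under this rescaling.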
Thus $v_k(x)=\sigma_k^{\frac{3-2s}{2}}u_k(\sigma_kx+z_k)$ satisfies $$\label{equ2-9} \widetilde{Q}_k(1)=\int_{B_1(0)}|v_k|^{2_s^{\ast}}\,{\rm d}z=\tau_0>0.$$ Note that $$\int_{\mathbb{R}^3}|D_sv_k|^2\,{\rm d}z=\int_{\mathbb{R}^3}|D_su_k|^2\,{\rm d}z;$$ by the boundedness of $\{u_k\}$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$, up to a subsequence, we may assume that there exists $v\in\mathcal{D}^{s,2}(\mathbb{R}^3)$ such that $v_k\rightharpoonup v$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$. For any $\phi\in C_0^{\infty}(\mathbb{R}^3)$, denote $\phi_k(x)=\phi((x-z_k)/\sigma_k)$. By the fact that $z_k\in \overline{Q}$ and $\sigma_k\rightarrow0$, we see that for $k$ large enough, ${\rm supp}\phi_k\subset B_{\sigma_k}(z_k)\subset V$, and then $\phi_k\in C_0^{\infty}(V)$. From , we have that $$\begin{aligned} &\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(v_k(z)-v_k(y))(\phi(z)-\phi(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z-\int_{\mathbb{R}^3}|v_k|^{2_s^{\ast}-2}v_k\phi\,{\rm d}z\\ &=\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(u_k(z)-u_k(y))(\phi_k(z)-\phi_k(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z-\int_{\mathbb{R}^3}|u_k|^{2_s^{\ast}-2}u_k\phi_k\,{\rm d}z\\ &=o(1)\|\phi_k\|=o(1).\end{aligned}$$ Thus, $v$ is a solution of equation . Next, we will prove that $v$ is nontrivial. By virtue of , we only need to show that $$\label{equ2-10} \int_{B_1(0)}|v_k|^{2_s^{\ast}}\,{\rm d}z\rightarrow\int_{B_1(0)}|v|^{2_s^{\ast}}\,{\rm d}z.$$ If holds true, from , we know that $$\int_{B_1(0)}|v|^{2_s^{\ast}}\,{\rm d}z=\tau_0>0,$$ which implies that $v$ is nontrivial. 
By the boundedness of $\{v_k\}$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$ and $v_k\rightharpoonup v$ in $\mathcal{D}^{s,2}(\mathbb{R}^3)$, by Prokhorov’s theorem (Theorem 8.6.2 in [@B]) there exist $\mu,\nu\in\mathcal{M}(\mathbb{R}^3)$ such that $$|D_sv_k|^2\rightharpoonup\mu\,\,\text{and}\,\, |v_k|^{2_s^{\ast}}\rightharpoonup\nu\,\, \text{weakly-}\ast\,\, \text{in}\,\, \mathcal{M}(\mathbb{R}^3)\,\,\text{as}\,\, k\rightarrow\infty.$$ By Lemma \[lem2-2\], there exist an at most countable index set $J$, a sequence $\{x_j\}_{j\in J}\subset\mathbb{R}^3$ and $\{\nu_j\}\subset(0,\infty)$ such that $$\label{equ2-11} \mu\geq|D_sv|^2+\sum_{j\in J}\mu_j\delta_{x_j},\quad \nu=|v|^{2_s^{\ast}}+\sum_{j\in J}\nu_j\delta_{x_j},$$ and $$\label{equ2-12} \nu_j\leq(\mathcal{S}_s^{-1}\mu_j)^{\frac{2_s^{\ast}}{2}}\,\, \text{for any}\,\, j\in J.$$ Next, we show that $\{x_j\}_{j\in J}\cap\overline{B_1(0)}=\emptyset$. Suppose by contradiction that there exists $j_0\in J$ such that $x_{j_0}\in\overline{B_1(0)}$, and define the function $\phi_{\rho}:=\phi(\frac{x-x_{j_0}}{\rho})$, where $\phi$ is a smooth cut-off function such that $\phi=1$ on $B_1(0)$, $\phi=0$ on $\mathbb{R}^3\backslash B_2(0)$, $0\leq\phi\leq1$ and $|\nabla\phi|\leq C$. Denote $\phi_{k,\rho}(x)=\phi_{\rho}(\frac{x-z_k}{\sigma_k})$; by the fact that $z_k\in\overline{Q}$, $x_{j_0}\in\overline{B_1(0)}$ and $\sigma_k\rightarrow0$ as $k\rightarrow\infty$, we see that for $k$ large, ${\rm supp}\phi_{k,\rho}\subset B_{2\sigma_k\rho}(z_k+\sigma_kx_{j_0})\subset V$. By direct computation, it can be checked that $\phi_{k,\rho}u_k\in H^s(\mathbb{R}^3)$. 
Indeed, by Hölder’s inequality, we have that $$\begin{aligned} \int_{\mathbb{R}^3}|D_s(\phi_{\rho,k}u_k)|^2\,{\rm d}z&\leq2\int_{\mathbb{R}^3}\phi_{\rho,k}^2|D_su_k|^2\,{\rm d}z+2\int_{\mathbb{R}^3}u_k^2|D_s\phi_{\rho,k}|^2\,{\rm d}z\\ &\leq2\int_{\mathbb{R}^3}|D_su_k|^2\,{\rm d}z+\Big(\int_{\mathbb{R}^3}|u_k|^{2_s^{\ast}}\,{\rm d}z\Big)^{\frac{2}{2_s^{\ast}}}\Big(\int_{\mathbb{R}^3}|D_s\phi_{\rho,k}|^{\frac{3}{2s}}\Big)^{\frac{2s}{3}}\end{aligned}$$ and, by direct computation, we get $$\begin{aligned} &\int_{\mathbb{R}^3}\Big|\int_{\mathbb{R}^3}\frac{|\phi_{\rho,k}(z)-\phi_{\rho,k}(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z=\int_{\mathbb{R}^3}\Big|\int_{\mathbb{R}^3}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\\ &=\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{\mathbb{R}^3}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z+\int_{B_2(0)}\Big|\int_{\mathbb{R}^3}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\\ &=\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{B_2(0)}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z+\int_{B_2(0)}\Big|\int_{\mathbb{R}^3}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\\ &\leq C\Big[\int_{B_3(0)}\Big|\int_{|z-y|\leq1}\frac{1}{|z-y|^{1+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z+\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{|z-y|>1,y\in B_2(0)}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\\ &+\int_{B_2(0)}\Big|\int_{|z-y|\leq1}\frac{1}{|z-y|^{1+2s}}\,{\rm d}y+\int_{|z-y|>1}\frac{1}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\Big]\\ &\leq C\Big(1+\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{|z-y|>1,y\in B_2(0)}\frac{|\phi(z)-\phi(y)|^2}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\Big)\\ &=C\Big(1+\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{|z-y|>\frac{|z|}{2},y\in B_2(0)}\frac{1}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\\ &+\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{1<|z-y|\leq\frac{|z|}{2},y\in B_2(0)}\frac{1}{|z-y|^{1+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\Big)\\ &=C\Big(1+\int_{\mathbb{R}^3\backslash B_2(0)}\Big|\int_{|z-y|>\frac{|z|}{2},y\in B_2(0)}\frac{1}{|z-y|^{3+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z+\int_{B_4(0)}\Big|\int_{1<|z-y|\leq\frac{|z|}{2}}\frac{1}{|z-y|^{1+2s}}\,{\rm d}y\Big|^{\frac{3}{2s}}\,{\rm d}z\Big)\\ &\leq C\Big(1+\int_{\mathbb{R}^3\backslash B_2(0)}\frac{1}{|z|^{(3+2s)\frac{3}{2s}}}\,{\rm d}z\Big)\leq C,\end{aligned}$$ which implies that $\phi_{\rho,k}u_k\in H^s(\mathbb{R}^3)$. Taking $\phi_{k,\rho}u_k$ as a test function in , we obtain $$\begin{aligned} \label{equ2-13} &\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(v_k(z)-v_k(y))(\phi_{\rho}(z)v_k(z)-\phi_{\rho}(y)v_k(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z-\int_{\mathbb{R}^3}|v_k|^{2_s^{\ast}-2}v_k(\phi_{\rho}v_k)\,{\rm d}z\nonumber\\ &=\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(u_k(z)-u_k(y))(\phi_{k,\rho}(z)u_k(z)-\phi_{k,\rho}(y)u_k(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z-\int_{\mathbb{R}^3}|u_k|^{2_s^{\ast}-2}u_k(\phi_{k,\rho}u_k)\,{\rm d}z\nonumber\\ &=o_k(1)\|\phi_{k,\rho} u_k\|=o_k(1).\end{aligned}$$ Since $$\begin{aligned} &\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(v_k(z)-v_k(y))(\phi_{\rho}(z)v_k(z)-\phi_{\rho}(y)v_k(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\\ &=C_s\int_{\mathbb{R}^3}|D_sv_k|^2\phi_{\rho}\,{\rm d}z+C_s\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(\phi_{\rho}(z)-\phi_{\rho}(y))(v_k(z)-v_k(y))v_k(y)}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z,\end{aligned}$$ and $$\int_{\mathbb{R}^3\times\mathbb{R}^3}\frac{(\phi_{\rho}(z)-\phi_{\rho}(y))(v_k(z)-v_k(y))v_k(y)}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\leq\|D_sv_k\|_{L^2}\|v_kD_s\phi_{\rho}\|_{L^2},$$ we claim that $$\label{equ2-13-0} \lim_{\rho\rightarrow0^{+}}\limsup_{k\rightarrow\infty}\int_{\mathbb{R}^3}v_k^2|D_s\phi_{\rho}|^2\,{\rm d}z=0.$$ Indeed, we decompose $$\begin{aligned} \mathbb{R}^3\times\mathbb{R}^3&=(B_{2\rho}^{c}(x_j)\times B_{2\rho}^{c}(x_j))\cup (B_{2\rho}(x_j)\times B_{2\rho}(x_j))\cup(B_{2\rho}^{c}(x_j)\times B_{\rho}(x_j))\cup (B_{\rho}(x_j)\times B_{2\rho}^{c}(x_j))\\ &\cup( B_{2\rho}(x_j)\backslash B_{\rho}(x_j)\times B_{2\rho}^{c}(x_j))\cup(B_{2\rho}^{c}(x_j)\times B_{2\rho}(x_j)\backslash B_{\rho}(x_j)),\end{aligned}$$ where $B_{\rho}^{c}(x_j)=\mathbb{R}^3\backslash B_{\rho}(x_j)$ and $B_{2\rho}^{c}(x_j)=\mathbb{R}^3\backslash B_{2\rho}(x_j)$, and discuss the six cases on the above domains, respectively. $\bullet$ $(y,z)\in B_{2\rho}^{c}(x_j)\times B_{2\rho}^{c}(x_j)$. Clearly $|\phi_{\rho}(z)-\phi_{\rho}(y)|=0$ and so $$\int_{B_{2\rho}^{c}(x_j)\times B_{2\rho}^{c}(x_j)}v_k^2(y)\frac{|\phi_{\rho}(z)-\phi_{\rho}(y)|^2}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y=0.$$ $\bullet$ $(y,z)\in B_{2\rho}(x_j)\times B_{2\rho}(x_j)$. Since $|\phi_{\rho}(z)-\phi_{\rho}(y)|\leq \frac{C}{\rho}|z-y|$ and $|y-z|\leq|y-x_j|+|z-x_j|\leq4\rho$, we have $$\begin{aligned} \int_{B_{2\rho}(x_j)}v_k^2(y)\int_{B_{2\rho}(x_j)}\frac{|\phi_{\rho}(z)-\phi_{\rho}(y)|^2}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq\frac{C}{\rho^2}\int_{B_{2\rho}(x_j)}v_k^2(y)\int_{|y-z|\leq4\rho}\frac{1}{|y-z|^{1+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{2\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ $\bullet$ $(y,z)\in B_{\rho}(x_j)\times B_{2\rho}^{c}(x_j)$. There holds $|z-y|\geq|z-x_j|-|y-x_j|\geq\rho$ and thus $$\begin{aligned} \int_{B_{\rho}(x_j)}v_k^2(y)\int_{|z-y|\geq\rho, z\in B_{2\rho}^{c}(x_j)}\frac{|\phi_{\rho}(y)-\phi_{\rho}(z)|^2}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq C\int_{B_{\rho}(x_j)}v_k^2\int_{|z-y|\geq\rho}\frac{1}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ $\bullet$ $(y,z)\in B_{2\rho}^{c}(x_j)\times B_{\rho}(x_j)$. Obviously, $|y-z|\geq\rho$. Observe that for any fixed $K\geq4$, $ B_{2\rho}^c(x_j)\times B_{\rho}(x_j)\subset B_{K\rho}(x_j)\times B_{\rho}(x_j)\cup B_{K\rho}^c(x_j)\times B_{\rho}(x_j)$. 
Hence, if $|y-z|>\rho$ and $(y,z)\in B_{K\rho}(x_j)\times B_{\rho}(x_j)$, we have $$\begin{aligned} \int_{B_{K\rho}(x_j)}v_k^2\int_{|y-z|\geq\rho,z\in B_{\rho}(x_j)}\frac{|\phi_{\rho}(y)-\phi_{\rho}(z)|^2}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq C\int_{B_{K\rho}(x_j)}v_k^2\int_{|y-z|\geq\rho}\frac{1}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{K\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ If $(y,z)\in B_{K\rho}^c(x_j)\times B_{\rho}(x_j)$, then $|y-z|\geq|y-x_j|-|z-x_j|\geq\frac{3|y-x_j|}{4}+\frac{K}{4}\rho-\rho>\frac{3|y-x_j|}{4}$. By Hölder’s inequality, we have $$\begin{aligned} &\int_{B_{K\rho}^c(x_j)}v_k^2\int_{|z-y|\geq\rho, z\in B_{\rho}(x_j)}\frac{|\phi_{\rho}(z)-\phi_{\rho}(y)|^2}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y\leq C\rho^3\int_{B_{K\rho}^c(x_j)}v_k^2\frac{1}{|y-x_j|^{3+2s}}\,{\rm d}y\\ &\leq C\rho^3\Big(\int_{B_{K\rho}^c(x_j)}|v_k|^{2_s^{\ast}}\,{\rm d}y\Big)^{\frac{3-2s}{3}}\Big(\int_{B_{K\rho}^c(x_j)}\frac{1}{|y-x_j|^{(3+2s)\frac{3}{2s}}}\,{\rm d}y\Big)^{\frac{2s}{3}}\\ &\leq \frac{C}{K^3}\Big(\int_{B_{K\rho}^c(x_j)}|v_k|^{2_s^{\ast}}\,{\rm d}y\Big)^{\frac{3-2s}{3}}\leq\frac{C}{K^3}.\end{aligned}$$ $\bullet$ $(y,z)\in B_{2\rho}^c(x_j)\times B_{2\rho}(x_j)\backslash B_{\rho}(x_j)$. If $|y-z|\leq\rho$, then $|y-x_j|\leq|y-z|+|z-x_j|\leq3\rho$ and thus $$\begin{aligned} \int_{ B_{2\rho}^c(x_j)}v_k^2\int_{|y-z|\leq\rho}\frac{|\phi_{\rho}(y)-\phi_{\rho}(z)|^2}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq\frac{C}{\rho^2}\int_{B_{3\rho}(x_j)}v_k^2\int_{|y-z|\leq\rho}\frac{|y-z|^2}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{3\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ Observe that for any fixed $K\geq4$, $ B_{2\rho}^c(x_j)\times B_{2\rho}(x_j)\backslash B_{\rho}(x_j) \subset B_{K\rho}(x_j)\times B_{2\rho}(x_j)\cup B_{K\rho}^c(x_j)\times B_{2\rho}(x_j)$. 
Hence, if $|y-z|>\rho$ and $(y,z)\in B_{K\rho}(x_j)\times B_{2\rho}(x_j)$, we have $$\begin{aligned} \int_{B_{K\rho}(x_j)}v_k^2\int_{|y-z|>\rho,z\in B_{2\rho}(x_j)}\frac{|\phi_{\rho}(y)-\phi_{\rho}(z)|^2}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq C\int_{B_{K\rho}(x_j)}v_k^2\int_{|y-z|>\rho}\frac{1}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{K\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ If $(y,z)\in B_{K\rho}^c(x_j)\times B_{2\rho}(x_j)$, then $|y-z|\geq|y-x_j|-|z-x_j|\geq\frac{|y-x_j|}{2}+\frac{K}{2}\rho-2\rho\geq\frac{|y-x_j|}{2}$. By Hölder’s inequality, we have $$\begin{aligned} &\int_{B_{K\rho}^c(x_j)}v_k^2\int_{|z-y|>\rho, z\in B_{2\rho}(x_j)}\frac{|\phi_{\rho}(z)-\phi_{\rho}(y)|^2}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y\leq C\rho^3\int_{B_{K\rho}^c(x_j)}v_k^2\frac{1}{|y-x_j|^{3+2s}}\,{\rm d}y\\ &\leq C\rho^3\Big(\int_{B_{K\rho}^c(x_j)}|v_k|^{2_s^{\ast}}\,{\rm d}y\Big)^{\frac{3-2s}{3}}\Big(\int_{B_{K\rho}^c(x_j)}\frac{1}{|y-x_j|^{(3+2s)\frac{3}{2s}}}\,{\rm d}y\Big)^{\frac{2s}{3}}\\ &\leq \frac{C}{K^3}\Big(\int_{B_{K\rho}^c(x_j)}|v_k|^{2_s^{\ast}}\,{\rm d}y\Big)^{\frac{3-2s}{3}}\leq\frac{C}{K^3}.\end{aligned}$$ $\bullet$ $(y,z)\in B_{2\rho}(x_j)\backslash B_{\rho}(x_j)\times B_{2\rho}^{c}(x_j)$. If $|y-z|\leq\rho$, we have $$\begin{aligned} \int_{ B_{2\rho}(x_j)\backslash B_{\rho}(x_j)}v_k^2\int_{|y-z|\leq\rho,z\in B_{2\rho}^{c}(x_j)}\frac{|\phi_{\rho}(y)-\phi_{\rho}(z)|^2}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq\frac{C}{\rho^2}\int_{B_{2\rho}(x_j)}v_k^2\int_{|y-z|\leq\rho}\frac{|y-z|^2}{|y-z|^{3+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{2\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ If $|y-z|>\rho$, then $|y-z|\geq\frac{|y-x_j|}{2}$. 
One has $$\begin{aligned} \int_{B_{2\rho}(x_j)\backslash B_{\rho}(x_j)}v_k^2(y)\int_{|z-y|>\rho, z\in B_{2\rho}^{c}(x_j)}\frac{|\phi_{\rho}(y)-\phi_{\rho}(z)|^2}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y&\leq C\int_{B_{2\rho}(x_j)}v_k^2\int_{|z-y|>\rho}\frac{1}{|z-y|^{3+2s}}\,{\rm d}z\,{\rm d}y\\ &\leq\frac{C}{\rho^{2s}}\int_{B_{2\rho}(x_j)}v_k^2\,{\rm d}y.\end{aligned}$$ From all the above estimates and using Hölder’s inequality, we get that $$\begin{aligned} &\limsup_{k\rightarrow\infty}\int_{\mathbb{R}^3}v_k^2|D_s\phi_{\rho}|^2\,{\rm d}x\\ &\leq \frac{C}{\rho^{2s}}\Big(\int_{B_{2\rho}(x_j)}v^2\,{\rm d}x+\int_{B_{3\rho}(x_j)}v^2\,{\rm d}x+\int_{B_{K\rho}(x_j)}v^2\,{\rm d}x\Big)+\frac{C}{K^3}\\ &\leq C\Big[\Big(\int_{B_{2\rho}(x_j)}|v|^{2_s^{\ast}}\,{\rm d}x\Big)^{\frac{2}{2_s^{\ast}}}+\Big(\int_{B_{3\rho}(x_j)}|v|^{2_s^{\ast}}\,{\rm d}x\Big)^{\frac{2}{2_s^{\ast}}}\Big] +CK^{2s}\Big(\int_{B_{K\rho}(x_j)}|v|^{2_s^{\ast}}\,{\rm d}x\Big)^{\frac{2}{2_s^{\ast}}}+\frac{C}{K^3}.\end{aligned}$$ Letting $\rho\rightarrow0^{+}$ and then letting $K\rightarrow+\infty$, follows. Thus, from , and , we get that $$\begin{aligned} \mu_{j_0}&=\lim_{\rho\rightarrow0^{+}}\int_{B_{2\rho}(x_{j_0})}\phi_{\rho}\,{\rm d}\mu=\lim_{\rho\rightarrow0^{+}}\lim_{k\rightarrow\infty}\int_{\mathbb{R}^3}|D_sv_k|^2\phi_{\rho}\,{\rm d}x\\ &=\lim_{\rho\rightarrow0^{+}}\lim_{k\rightarrow\infty}\int_{\mathbb{R}^3}|v_k|^{2_s^{\ast}}\phi_{\rho}\,{\rm d}x=\lim_{\rho\rightarrow0^{+}}\int_{\mathbb{R}^3}|v|^{2_s^{\ast}}\phi_{\rho}\,{\rm d}x+\nu_{j_0}=\nu_{j_0},\end{aligned}$$ which leads to $$\nu_{j_0}\geq\mathcal{S}_s^{\frac{3}{2s}}.$$ But, by , we have that $$\mathcal{S}_s^{\frac{3}{2s}}\leq\nu_{j_0}\leq\int_{B_1(0)}|v_k|^{2_s^{\ast}}\,{\rm d}x+o(1)=\tau_0+o(1),$$ which contradicts $\tau_0<\mathcal{S}_s^{\frac{3}{2s}}$. Hence, $\{x_j\}_{j\in J}\cap \overline{B_1(0)}=\emptyset$ and then holds. This completes the proof. 
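As a sanity check on the case analysis above, one can verify numerically that the six products of balls used in the decomposition of $\mathbb{R}^3\times\mathbb{R}^3$ really cover every pair $(y,z)$. Membership in each product depends only on the radii $|y-x_j|$ and $|z-x_j|$, so it suffices to sample radii (a hedged illustration only, with $\rho=1$ and $x_j=0$):

```python
# Hedged numerical check (illustration only, not part of the proof): the six
# products of balls in the decomposition of R^3 x R^3 cover every pair (y, z).
import random

random.seed(1)
rho = 1.0

def inside(r, R):
    return r < R          # open ball of radius R centred at x_j

for _ in range(10000):
    ry = random.uniform(0.0, 6.0 * rho)      # |y - x_j|
    rz = random.uniform(0.0, 6.0 * rho)      # |z - x_j|
    regions = [
        not inside(ry, 2*rho) and not inside(rz, 2*rho),   # B^c_{2rho} x B^c_{2rho}
        inside(ry, 2*rho) and inside(rz, 2*rho),           # B_{2rho}   x B_{2rho}
        not inside(ry, 2*rho) and inside(rz, rho),         # B^c_{2rho} x B_rho
        inside(ry, rho) and not inside(rz, 2*rho),         # B_rho      x B^c_{2rho}
        rho <= ry < 2*rho and not inside(rz, 2*rho),       # (B_{2rho}\B_rho) x B^c_{2rho}
        not inside(ry, 2*rho) and rho <= rz < 2*rho,       # B^c_{2rho} x (B_{2rho}\B_rho)
    ]
    assert any(regions)    # every sampled pair lies in at least one region
print("covering verified on 10000 sampled pairs")
```

Enumerating the nine combinations of $|y-x_j|,|z-x_j|$ relative to $\rho$ and $2\rho$ gives the same conclusion deterministically.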
\[pro2-2\]([@Teng3], Proposition 5.1) Assume that $u_n$ are nonnegative weak solutions of $$\left\{ \begin{array}{ll} (-\Delta)^su+V_n(x) u+ \phi u=f_n(x,u) & \hbox{in $\mathbb{R}^3$,} \\ (-\Delta)^t\phi=u^2& \hbox{in $\mathbb{R}^3$,} \end{array} \right.$$ where $\{V_n\}$ satisfies $V_n(x)\geq V_0>0$ for all $x\in\mathbb{R}^3$ and $f_n(x,\tau)$ is a Carathéodory function satisfying that for any $\delta>0$, there exists $C_{\delta}>0$ such that $$|f_n(x,\tau)|\leq\delta|\tau|+C_{\delta}|\tau|^{2_s^{\ast}-1},\quad \forall (x,\tau)\in\mathbb{R}^3\times\mathbb{R}.$$ Assume further that $u_n$ converges strongly in $H^s(\mathbb{R}^3)$ or in $L^{2_s^{\ast}}(\mathbb{R}^3)$. Then there exists $C>0$ such that $$\|u_n\|_{L^{\infty}}\leq C\quad \text{for all}\,\, n.$$ \[lem2-3\]([@S], Proposition 2.9) Let $w=(-\Delta)^su$. Assume $w\in L^{\infty}(\mathbb{R}^n)$ and $u\in L^{\infty}(\mathbb{R}^n)$ for $s>0$.\ If $2s\leq1$, then $u\in C^{0,\alpha}(\mathbb{R}^n)$ for any $\alpha\leq2s$. Moreover $$\|u\|_{C^{0,\alpha}(\mathbb{R}^n)}\leq C\Big(\|u\|_{L^{\infty}(\mathbb{R}^n)}+\|w\|_{L^{\infty}(\mathbb{R}^n)}\Big)$$ for some constant $C$ depending only on $n$, $\alpha$ and $s$.\ If $2s>1$, then $u\in C^{1,\alpha}(\mathbb{R}^n)$ for any $\alpha<2s-1$. Moreover $$\|u\|_{C^{1,\alpha}(\mathbb{R}^n)}\leq C\Big(\|u\|_{L^{\infty}(\mathbb{R}^n)}+\|w\|_{L^{\infty}(\mathbb{R}^n)}\Big)$$ for some constant $C$ depending only on $n$, $\alpha$ and $s$. Limiting problem ================ In this section, we consider the “limiting problem” associated with problem $$\label{equ3-1} \left\{ \begin{array}{ll} (-\Delta)^su+\mu u+\phi u=f(u)+u^{2_s^{\ast}-1} & \hbox{in $\mathbb{R}^3$,} \\ (-\Delta)^t\phi=u^2,\,\, u>0& \hbox{in $\mathbb{R}^3$} \end{array} \right.$$ for $\mu>0$. 
The energy functional for the limiting problem is given by $$\begin{aligned} \mathcal{I}_{\mu}(u)&=\frac{1}{2}\int_{\mathbb{R}^3}|D_su|^2\,{\rm d}x+\frac{\mu}{2}\int_{\mathbb{R}^3}|u|^2\,{\rm d}x+\frac{1}{4}\int_{\mathbb{R}^3}\phi_u^tu^2\,{\rm d}x-\int_{\mathbb{R}^3}F(u)\,{\rm d}x\\ &-\frac{1}{2_s^{\ast}}\int_{\mathbb{R}^3}(u^{+})^{2_s^{\ast}}\,{\rm d}x,\quad u\in H^s(\mathbb{R}^3).\end{aligned}$$ Let $$\begin{aligned} \mathcal{P}_{\mu}(u)=&\frac{3-2s}{2}\int_{\mathbb{R}^3}|D_su|^2\,{\rm d}x+\frac{3}{2}\int_{\mathbb{R}^3}\mu|u|^2\,{\rm d}x+\frac{3+2t}{4}\int_{\mathbb{R}^3}\phi_{u}^tu^2\,{\rm d}x\\ &-3\int_{\mathbb{R}^3}F(u)\,{\rm d}x-\frac{3}{2_s^{\ast}}\int_{\mathbb{R}^3}(u^{+})^{2_s^{\ast}}\,{\rm d}x\end{aligned}$$ and $$\begin{aligned} \mathcal{G}_{\mu}(u)&=(s+t)\langle \mathcal{I}'_{\mu}(u),u\rangle-\mathcal{P}_{\mu}(u)\\ &=\frac{4s+2t-3}{2}\int_{\mathbb{R}^3}|D_su|^2\,{\rm d}x+\frac{2s+2t-3}{2}\mu\int_{\mathbb{R}^3}|u|^2\,{\rm d}x\\ &+\frac{4s+2t-3}{4}\int_{\mathbb{R}^3}\phi_u^tu^2\,{\rm d}x+\int_{\mathbb{R}^3}\Big(3F(u)-(s+t)f(u)u\Big)\,{\rm d}x\\ &+\frac{3-(s+t)2_s^{\ast}}{2_s^{\ast}}\int_{\mathbb{R}^3}(u^{+})^{2_s^{\ast}}\,{\rm d}x.\end{aligned}$$ We define the Nehari-Pohozaev manifold $$\mathcal{M}_{\mu}=\{u\in H^s(\mathbb{R}^3)\backslash\{0\}\,\,\Big|\,\, \mathcal{G}_{\mu}(u)=0\}$$ and set $b_{\mu}=\inf\limits_{u\in\mathcal{M}_{\mu}}\mathcal{I}_{\mu}(u)$. By standard arguments, we can show the following properties of $\mathcal{M}_{\mu}$. \[pro3-2\] The set $\mathcal{M}_{\mu}$ possesses the following properties:\ $(i)$ $0\not\in\partial\mathcal{M}_{\mu}$;\ $(ii)$ for any $u\in H^s(\mathbb{R}^3)\backslash\{0\}$, there exists a unique $\tau_0:=\tau(u)>0$ such that $u_{\tau_0}\in\mathcal{M}_{\mu}$, where $u_{\tau}=\tau^{s+t}u(\tau x)$. Moreover, $$\mathcal{I}_{\mu}(u_{\tau_0})=\max_{\tau\geq0}\mathcal{I}_{\mu}(u_{\tau});$$ Now, it is easy to check that $\mathcal{I}_{\mu}$ satisfies the mountain pass geometry. 
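The functional $\mathcal{G}_{\mu}$ defined above is precisely the derivative of $\mathcal{I}_{\mu}$ along the scaling fibers $u_{\tau}=\tau^{s+t}u(\tau x)$ appearing in Proposition \[pro3-2\]; a sketch of the computation (added here for the reader's convenience): changing variables term by term,

```latex
\[
\mathcal{I}_{\mu}(u_{\tau})
=\frac{\tau^{4s+2t-3}}{2}\int_{\mathbb{R}^3}|D_su|^2\,{\rm d}x
+\frac{\tau^{2s+2t-3}}{2}\mu\int_{\mathbb{R}^3}|u|^2\,{\rm d}x
+\frac{\tau^{4s+2t-3}}{4}\int_{\mathbb{R}^3}\phi_u^tu^2\,{\rm d}x
-\tau^{-3}\int_{\mathbb{R}^3}F(\tau^{s+t}u)\,{\rm d}x
-\frac{\tau^{(s+t)2_s^{\ast}-3}}{2_s^{\ast}}\int_{\mathbb{R}^3}(u^{+})^{2_s^{\ast}}\,{\rm d}x,
\]
```

and differentiating at $\tau=1$ recovers every term of $\mathcal{G}_{\mu}(u)$, so $\mathcal{G}_{\mu}(u)=\frac{{\rm d}}{{\rm d}\tau}\mathcal{I}_{\mu}(u_{\tau})\big|_{\tau=1}$, which explains why $\mathcal{M}_{\mu}$ is a natural Nehari-Pohozaev manifold.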
\[lem3-1\] $(i)$ there exist $\rho_0,\beta_0>0$ such that $\mathcal{I}_{\mu}(u)\geq\beta_0$ for all $u\in H^s(\mathbb{R}^3)$ with $\|u\|=\rho_0$;\ $(ii)$ there exists $u_0\in H^s(\mathbb{R}^3)$ such that $\mathcal{I}_{\mu}(u_0)<0$. From Lemma \[lem3-1\], the mountain-pass level of $\mathcal{I}_{\mu}$, defined by $$c_{\mu}=\inf_{\gamma\in\Gamma_{\mu}}\sup_{t\in[0,1]}\mathcal{I}_{\mu}(\gamma(t))$$ where $$\Gamma_{\mu}=\Big\{\gamma\in C([0,1],H^s(\mathbb{R}^3))\,\,\Big|\,\, \gamma(0)=0,\,\, \mathcal{I}_{\mu}(\gamma(1))<0\Big\},$$ satisfies $c_{\mu}>0$. Furthermore, by $(f_3)$, it is easy to verify that $$\label{equ3-2} f'(\tau)\tau-(q-1)f(\tau)>0\,\,\text{and}\,\, f(\tau)\tau-qF(\tau)>0 \,\,\text{for any}\,\,\tau>0.$$ By using Lemma \[lem3-1\] and , we can show the following equivalent characterization of the mountain-pass level $c_{\mu}$. \[lem3-2\] $$c_{\mu}=b_{\mu}.$$ In order to obtain the boundedness of the $(PS)$ sequence, we will construct a $(PS)$ sequence $\{u_n\}$ for $\mathcal{I}_{\mu}$ at the level $c_{\mu}$ that satisfies $\mathcal{G}_{\mu}(u_n)\rightarrow0$ as $n\rightarrow+\infty$; that is: \[lem3-3\] There exists a sequence $\{u_n\}$ in $H^s(\mathbb{R}^3)$ such that as $n\rightarrow+\infty$, $$\label{equ3-5} \mathcal{I}_{\mu}(u_n)\rightarrow c_{\mu},\quad\mathcal{I}'_{\mu}(u_n)\rightarrow0,\quad \mathcal{G}_{\mu}(u_n)\rightarrow0.$$ \[lem3-4\] Every sequence $\{u_n\}\subset H^s(\mathbb{R}^3)$ satisfying is bounded in $H^s(\mathbb{R}^3)$.
By , and $q>\frac{4s+2t}{s+t}$, we have that $$\begin{aligned} &c_{\mu}+o_n(1)=\mathcal{I}_{\mu}(u_n)-\frac{1}{q(s+t)-3}\mathcal{G}_{\mu}(u_n)\\ &=\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\int_{\mathbb{R}^3}|D_su_n|^2\,{\rm d}x+\frac{(q-2)(s+t)}{2(q(s+t)-3)}\mu\int_{\mathbb{R}^3}|u_n|^2\,{\rm d}x\\ &+\frac{(q-4)s+(q-2)t}{4(q(s+t)-3)}\int_{\mathbb{R}^3}\phi_{u_n}^tu_n^2\,{\rm d}x+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\int_{\mathbb{R}^3}(u_n^{+})^{2_s^{\ast}}\,{\rm d}x\\ &+\frac{s+t}{q(s+t)-3}\int_{\mathbb{R}^3}\Big(f(u_n)u_n-qF(u_n)\Big)\,{\rm d}x.\end{aligned}$$ Hence, the sequence $\{u_n\}$ is bounded in $H^s(\mathbb{R}^3)$. To obtain the compactness of the above bounded sequence $\{u_n\}$, we need an estimate of the mountain-pass level $c_{\mu}$, which is given in the following lemma. \[lem3-4\] $$c_{\mu}<\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}$$ provided that either $s>\frac{3}{4}$ and $q\in(2_s^{\ast}-2,2_s^{\ast})$ for all $\lambda>0$, or $s>\frac{3}{4}$ and $q\in(\frac{4s+2t}{s+t},2_s^{\ast}-2]$ for $\lambda>0$ large, or $\frac{1}{2}<s\leq\frac{3}{4}$ and $q\in(\frac{4s+2t}{s+t},2_s^{\ast})$ for any $\lambda>0$, where $\mathcal{S}_s$ is the best Sobolev constant for the embedding $\mathcal{D}^{s,2}(\mathbb{R}^3)\hookrightarrow L^{2_s^{\ast}}(\mathbb{R}^3)$. Let $$u_{\delta}(x)=\psi(x)U_{\delta}(x),\quad x\in\mathbb{R}^3,$$ where $U_{\delta}(x)=\delta^{-\frac{3-2s}{2}}u^{\ast}(x/\delta)$, $u^{\ast}(x)=\frac{\widetilde{u}(x/\mathcal{S}_s^{\frac{1}{2s}})}{\|\widetilde{u}\|_{2_s^{\ast}}}$, $\kappa\in\mathbb{R}\backslash\{0\}$, $\mu>0$ and $x_0\in\mathbb{R}^3$ are fixed constants, $\widetilde{u}(x)=\kappa(\mu^2+|x-x_0|^2)^{-\frac{3-2s}{2}}$, and $\psi\in C^{\infty}(\mathbb{R}^3)$ such that $0\leq\psi\leq1$ in $\mathbb{R}^3$, $\psi(x)\equiv1$ in $B_{R}$ and $\psi\equiv0$ in $\mathbb{R}^3\backslash B_{2R}$.
From Propositions 21 and 22 in [@SV] and Lemma 3.3 in [@Teng], we know that $$\label{equ3-8} \int_{\mathbb{R}^{3}}|D_su_{\delta}(x)|^2\,{\rm d}x\leq\mathcal{S}_s^{\frac{3}{2s}}+O(\delta^{3-2s}),$$ $$\label{equ3-9} \int_{\mathbb{R}^{3}}|u_{\delta}(x)|^{2_s^{\ast}}\,{\rm d}x=\mathcal{S}_s^{\frac{3}{2s}}+O(\delta^3),$$ and $$\label{equ3-10} \int_{\mathbb{R}^3}|u_{\delta}(x)|^p\,{\rm d}x=\left\{ \begin{array}{ll} O(\delta^{\frac{(2-p)3+2sp}{2}})&\hbox{$p>\frac{3}{3-2s}$,} \\ O(\delta^{\frac{(2-p)3+2sp}{2}}|\log\delta|) & \hbox{$p=\frac{3}{3-2s}$,} \\ O(\delta^{\frac{3-2s}{2}p}) & \hbox{$p<\frac{3}{3-2s}$.} \end{array} \right.$$ Here $a_{\delta}=O(b_{\delta})$ means that $C_1\leq \frac{a_{\delta}}{b_{\delta}}\leq C_2$ for some $C_1, C_2>0$, independent of $\delta$. Set $u_{\delta}^{\tau}(x)=\tau^{s+t} u_{\delta}(\tau x)$ for any $\tau\geq0$; by $(f_2)$, we deduce that $$\begin{aligned} &\mathcal{I}_{\mu}(u_{\delta}^{\tau})\leq h_{\delta}(\tau):=\frac{\tau^{4s+2t-3}}{2}\int_{\mathbb{R}^3}|D_su_{\delta}|^2\,{\rm d}x+\frac{\tau^{2s+2t-3}}{2}\int_{\mathbb{R}^3}\mu |u_{\delta}|^2\,{\rm d}x\\ &+\frac{\tau^{4s+2t-3}}{4}\int_{\mathbb{R}^3}\phi_{u_{\delta}}^tu_{\delta}^2\,{\rm d}x-\lambda\frac{\tau^{q(s+t)-3}}{q}\int_{\mathbb{R}^3}|u_{\delta}|^q\,{\rm d}x-\frac{\tau^{2_s^{\ast}(s+t)-3}}{2_s^{\ast}}\int_{\mathbb{R}^3}|u_{\delta}|^{2_s^{\ast}}\,{\rm d}x.\end{aligned}$$ Since $h_{\delta}(\tau)\rightarrow-\infty$ as $\tau\rightarrow+\infty$, we have that $\sup\{h_{\delta}(\tau):\,\, \tau\geq0\}=h_{\delta}(\tau_{\delta})$ for some $\tau_{\delta}>0$.
Hence, $\tau_{\delta}$ verifies the following equality: $$\begin{aligned} \label{equ3-9} &\frac{4s+2t-3}{2}\tau_{\delta}^{2s}\int_{\mathbb{R}^3}|D_su_{\delta}|^2\,{\rm d}x+\frac{2s+2t-3}{2}\int_{\mathbb{R}^3}\mu |u_{\delta}|^2\,{\rm d}x\nonumber\\ &+\frac{4s+2t-3}{4}\tau_{\delta}^{2s}\int_{\mathbb{R}^3}\phi_{u_{\delta}}^tu_{\delta}^2\,{\rm d}x\nonumber\\ &=\frac{\lambda(q(s+t)-3)}{q}\tau_{\delta}^{(q-2)(s+t)}\int_{\mathbb{R}^3}|u_{\delta}|^q\,{\rm d}x+\frac{(2_s^{\ast}(s+t)-3)}{2_s^{\ast}}\tau_{\delta}^{(2_s^{\ast}-2)(s+t)}\int_{\mathbb{R}^3}|u_{\delta}|^{2_s^{\ast}}\,{\rm d}x.\end{aligned}$$ We claim that $\{\tau_{\delta}\}$ is bounded from below by a positive constant for $\delta$ small. Otherwise, there exists a sequence $\delta_n\rightarrow0$ such that $\tau_{\delta_n}\rightarrow0$ as $n\rightarrow+\infty$. Thus $0<c_{\mu}\leq\sup_{\tau\geq0}\mathcal{I}_{\mu}(u_{\delta_n}^{\tau})\leq\sup_{\tau\geq0}h_{\delta_n}(\tau)=h_{\delta_n}(\tau_{\delta_n})\rightarrow0$ as $n\rightarrow\infty$, a contradiction. So there exists a constant $C_0>0$ independent of $\delta$ such that $\tau_{\delta}\geq C_0$. Using a similar argument as in , we can show that the sequence $\{\tau_{\delta}\}$ is bounded from above by a constant $C$ independent of $\delta$. Thus $0<C_0\leq\tau_{\delta}\leq C$ for $\delta$ small.
Let $g_{\delta}(\tau)=\frac{\tau^{4s+2t-3}}{2}\int_{\mathbb{R}^3}|D_su_{\delta}|^2\,{\rm d}x-\frac{\tau^{2_s^{\ast}(s+t)-3}}{2_s^{\ast}}\int_{\mathbb{R}^3}|u_{\delta}|^{2_s^{\ast}}\,{\rm d}x$; then, for some universal constant $C>0$, we get $$\begin{aligned} \label{equ3-10} \mathcal{I}_{\mu}(u_{\delta}^{\tau})&\leq \sup_{\tau\geq0}g_{\delta}(\tau)+C\int_{\mathbb{R}^3}\mu|u_{\delta}|^2\, {\rm d}x+C\int_{\mathbb{R}^3}\phi_{u_{\delta}}^tu_{\delta}^2\,{\rm d}x-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x.\end{aligned}$$ By direct computation, we get that $\frac{4s+2t-3}{2}\frac{2_s^{\ast}}{2_s^{\ast}(s+t)-3}=1$ and $\frac{2_s^{\ast}(s+t)-3}{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}=\frac{3}{2s}$. Thus, by , we deduce that $$\begin{aligned} \sup_{\tau\geq0}g_{\delta}(\tau)&=g_{\delta}(\tau_0)=\frac{s}{3}\frac{\Big(\int_{\mathbb{R}^3}|D_su_{\delta}|^2\,{\rm d}x\Big)^{\frac{3}{2s}}}{\Big(\int_{\mathbb{R}^3}|u_{\delta}|^{2_s^{\ast}}\,{\rm d}x\Big)^{\frac{3-2s}{2s}}}\leq\frac{s}{3}\frac{(\mathcal{S}_s^{\frac{3}{2s}}+O(\delta^{3-2s}))^{\frac{3}{2s}}}{(\mathcal{S}_s^{\frac{3}{2s}}+O(\delta^{3}))^{\frac{3-2s}{2s}}}\\ &\leq\frac{s}{3} \mathcal{S}_s^{\frac{3}{2s}}+O(\delta^{3-2s}),\end{aligned}$$ where $$\begin{aligned} \tau_0&=\Big(\frac{4s+2t-3}{2}\frac{2_s^{\ast}}{2_s^{\ast}(s+t)-3}\frac{\int_{\mathbb{R}^3}|D_su_{\delta}|^2\,{\rm d}x}{\int_{\mathbb{R}^3}|u_{\delta}|^{2_s^{\ast}}\,{\rm d}x}\Big)^{\frac{1}{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}}\\ &=\Big(\frac{\int_{\mathbb{R}^3}|D_su_{\delta}|^2\,{\rm d}x}{\int_{\mathbb{R}^3}|u_{\delta}|^{2_s^{\ast}}\,{\rm d}x}\Big)^{\frac{1}{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}}.\end{aligned}$$ By $(i)$ of Lemma \[lem2-1\], and , using the elementary inequality $(a+b)^{\alpha}\leq a^{\alpha}+\alpha (a+b)^{\alpha-1}b$, $\alpha\geq1$ and $a,b\geq0$, we deduce that $$\begin{aligned} \mathcal{I}_{\mu}(u_{\delta}^{\tau})&\leq \frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}+C \delta^{3-2s}+C\int_{\mathbb{R}^3}\mu|u_{\delta}|^2\, {\rm d}x+C\|u_{\delta}\|_{L^{\frac{12}{3+2t}}}^4-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x.\end{aligned}$$ $\bullet$ In the case $s>\frac{3}{4}$, by , we deduce that $$\mathcal{I}_{\mu}(u_{\delta}^{\tau})\leq\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}+C\delta^{3-2s}+C\|u_{\delta}\|_{L^{\frac{12}{3+2t}}}^4-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x.$$ In view of , we have that $$\begin{aligned} \lim_{\delta\rightarrow0^{+}}\frac{\Big(\int_{\mathbb{R}^3}|u_{\delta}|^{\frac{12}{3+2t}}\,{\rm d}x\Big)^{\frac{3+2t}{3}}}{\delta^{3-2s}}\leq\left\{ \begin{array}{ll} \lim\limits_{\delta\rightarrow0^{+}}\frac{O(\delta^{2t+4s-3})}{\delta^{3-2s}}=0, & \hbox{$\frac{12}{3+2t}>\frac{3}{3-2s}$,} \\ \lim\limits_{\delta\rightarrow0^{+}}\frac{O(\delta^{2t+4s-3}|\log\delta|^{\frac{3+2t}{3}})}{\delta^{3-2s}}=0, & \hbox{$\frac{12}{3+2t}=\frac{3}{3-2s}$,} \\ \lim\limits_{\delta\rightarrow0^{+}} \frac{O(\delta^{2(3-2s)})}{\delta^{3-2s}}=0, & \hbox{$\frac{12}{3+2t}<\frac{3}{3-2s}$,} \end{array} \right.\end{aligned}$$ and noting that $2s-\frac{3-2s}{2}q<0$ if $\frac{4s}{3-2s}< q<\frac{6}{3-2s}$, we have $$\begin{aligned} \lim_{\delta\rightarrow0^{+}}\lambda\frac{\int_{\mathbb{R}^3}|u_{\delta}|^q\,{\rm d}x}{\delta^{3-2s}}= \left\{ \begin{array}{ll} \lim\limits_{\delta\rightarrow0}\lambda\frac{O(\delta^{3-\frac{3-2s}{2}q})}{\delta^{3-2s}}=+\infty, & \hbox{$\frac{4s}{3-2s}<q<\frac{6}{3-2s}$,} \\ \lim\limits_{\delta\rightarrow0}\lambda\frac{O(\delta^{3-\frac{3-2s}{2}q})}{\delta^{3-2s}}, & \hbox{$\frac{3}{3-2s}<q\leq\frac{4s}{3-2s}$,} \\ \lim\limits_{\delta\rightarrow0}\lambda\frac{O(\delta^{3-\frac{3-2s}{2}q}|\log\delta|)}{\delta^{3-2s}}, & \hbox{$q=\frac{3}{3-2s}$,} \\ \lim\limits_{\delta\rightarrow0}\lambda\frac{O(\delta^{\frac{3-2s}{2}q})}{\delta^{3-2s}}, & \hbox{$\frac{4s+2t}{s+t}<q<\frac{3}{3-2s}$.} \end{array} \right.\end{aligned}$$ We can choose $\lambda$ large enough so that the last three limits also equal $+\infty$; for instance, take $\lambda=\delta^{-2s}$.
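For completeness, the algebraic identities obtained above by direct computation can be checked from $2_s^{\ast}=\frac{6}{3-2s}$. Indeed, $$2_s^{\ast}(s+t)-3=\frac{6(s+t)-3(3-2s)}{3-2s}=\frac{3(4s+2t-3)}{3-2s},$$ so that $$\frac{4s+2t-3}{2}\cdot\frac{2_s^{\ast}}{2_s^{\ast}(s+t)-3}=\frac{4s+2t-3}{2}\cdot\frac{6}{3-2s}\cdot\frac{3-2s}{3(4s+2t-3)}=1,$$ and, since $(2_s^{\ast}-4)s+(2_s^{\ast}-2)t=\big(2_s^{\ast}(s+t)-3\big)-(4s+2t-3)=\frac{2s(4s+2t-3)}{3-2s}$, $$\frac{2_s^{\ast}(s+t)-3}{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}=\frac{3(4s+2t-3)}{3-2s}\cdot\frac{3-2s}{2s(4s+2t-3)}=\frac{3}{2s}.$$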
$\bullet$ If $s=\frac{3}{4}$, it follows from that $$\begin{aligned} \mathcal{I}_{\mu}(u_{\delta}^{\tau})&\leq\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}+C\delta^{\frac{3}{2}}+C\delta^{\frac{3}{2}}|\log\delta|+C\|u_{\delta}\|_{L^{\frac{12}{3+2t}}}^4-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x\\ &\leq\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}+C\delta^{\frac{3}{2}}|\log\delta|+C\|u_{\delta}\|_{L^{\frac{12}{3+2t}}}^4-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x.\end{aligned}$$ Since $\frac{12}{3+2t}>2=\frac{3}{3-2s}$, we get that $$\begin{aligned} \lim_{\delta\rightarrow0^{+}}\frac{\Big(\int_{\mathbb{R}^3}|u_{\delta}|^{\frac{12}{3+2t}}\,{\rm d}x\Big)^{\frac{3+2t}{3}}}{\delta^{2s}|\log\delta|}\leq \lim\limits_{\delta\rightarrow0^{+}}\frac{O(\delta^{2t+4s-3})}{\delta^{2s}|\log\delta|}=0,\quad\frac{12}{3+2t}>\frac{3}{3-2s}=2\end{aligned}$$ and, since $\frac{3}{3-2s}=2<\frac{4s+2t}{s+t}<q$, for any $\lambda>0$ we obtain that $$\begin{aligned} \lim_{\delta\rightarrow0^{+}}\lambda\frac{\int_{\mathbb{R}^3}|u_{\delta}|^q\,{\rm d}x}{\delta^{2s}|\log\delta|}= \lim\limits_{\delta\rightarrow0^{+}}\lambda\frac{O(\delta^{3-\frac{3-2s}{2}q})}{\delta^{2s}|\log\delta|}=+\infty,\quad \frac{4s+2t}{s+t}<q<\frac{6}{3-2s}.\end{aligned}$$ $\bullet$ In the case $\frac{1}{2}<s<\frac{3}{4}$, by means of , we get $$\begin{aligned} \mathcal{I}_{\mu}(u_{\delta}^{\tau})&\leq\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}+C\delta^{3-2s}+C\delta^{2s}+C\|u_{\delta}\|_{L^{\frac{12}{3+2t}}}^4-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x\\ &\leq\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}+C\delta^{2s}+C\|u_{\delta}\|_{L^{\frac{12}{3+2t}}}^4-C\lambda\int_{\mathbb{R}^3}|u_{\delta}|^{q}\, {\rm d}x.\end{aligned}$$ Observe that $\frac{3}{3-2s}\in(\frac{3}{2},2)$; thus $\frac{12}{3+2t}>\frac{3}{3-2s}$ and $\frac{3}{3-2s}<\frac{4s+2t}{s+t}<q<\frac{6}{3-2s}$.
Hence $$\begin{aligned} \lim_{\delta\rightarrow0^{+}}\frac{\Big(\int_{\mathbb{R}^3}|u_{\delta}|^{\frac{12}{3+2t}}\,{\rm d}x\Big)^{\frac{3+2t}{3}}}{\delta^{2s}}\leq \lim\limits_{\delta\rightarrow0^{+}}\frac{O(\delta^{2t+4s-3})}{\delta^{2s}}=0,\quad \frac{12}{3+2t}>\frac{3}{3-2s} \end{aligned}$$ and for any $\lambda>0$, we have $$\begin{aligned} \lim_{\delta\rightarrow0^{+}}\lambda\frac{\int_{\mathbb{R}^3}|u_{\delta}|^q\,{\rm d}x}{\delta^{2s}}= \lim\limits_{\delta\rightarrow0^{+}}\lambda\frac{O(\delta^{3-\frac{3-2s}{2}q})}{\delta^{2s}}=+\infty,\quad \frac{4s+2t}{s+t}<q<\frac{6}{3-2s}.\end{aligned}$$ From the above arguments, we conclude the proof. From the estimate of the mountain-pass level, using the vanishing lemma, it is not difficult to deduce that the bounded sequence $\{u_n\}\subset H^s(\mathbb{R}^3)$ given in is non-vanishing. That is: \[lem3-5\] There exists a sequence $\{x_n\}\subset\mathbb{R}^3$ and $R>0$, $\beta>0$ such that $\int_{B_R(x_n)}|u_n|^2\,{\rm d}x\geq\beta$. Combining Lemma \[lem3-3\] with Lemma \[lem3-5\], we can show the existence of a positive ground state solution for the limiting problem . \[pro3-3\] Problem possesses a positive ground state solution $u\in H^s(\mathbb{R}^3)$. Let $\{u_n\}$ be the sequence given in . Set $\widehat{u}_n(x)=u_n(x+x_n)$, where $\{x_n\}$ is the sequence obtained in Lemma \[lem3-5\]. Thus $\{\widehat{u}_n\}$ is still bounded in $H^s(\mathbb{R}^3)$ and so, up to a subsequence, still denoted by $\{\widehat{u}_n\}$, we may assume that there exists $\widehat{u}\in H^s(\mathbb{R}^3)$ such that $$\left\{ \begin{array}{ll} \widehat{u}_n\rightharpoonup \widehat{u} & \hbox{in $H^s(\mathbb{R}^3)$,} \\ \widehat{u}_n\rightarrow \widehat{u} & \hbox{in $L_{loc}^p(\mathbb{R}^3)$ for all $1\leq p<2_s^{\ast}$,}\\ \widehat{u}_n\rightarrow \widehat{u} & \hbox{a.e. in $\mathbb{R}^3$.} \end{array} \right.$$ It follows from Lemma \[lem3-5\] that $\widehat{u}$ is nontrivial.
Moreover, using $(iv)$ of Lemma \[lem2-1\], it is not difficult to verify that $\widehat{u}$ is a nontrivial solution of problem , and since $f\in C^1(\mathbb{R})$, standard arguments lead to $\mathcal{G}_{\mu}(\widehat{u})=0$. By Fatou’s Lemma and , we have $$\begin{aligned} c_{\mu}&=b_{\mu}\leq\mathcal{I}_{\mu}(\widehat{u})=\mathcal{I}_{\mu}(\widehat{u})-\frac{1}{4s+2t-3}\mathcal{G}_{\mu}(\widehat{u})=\frac{s}{4s+2t-3}\int_{\mathbb{R}^3}\mu|\widehat{u}|^2\,{\rm d}x\\ &+\frac{s+t}{4s+2t-3}\int_{\mathbb{R}^3}\Big(f(\widehat{u})\widehat{u}-\frac{4s+2t}{s+t}F(\widehat{u})\Big)\,{\rm d}x+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{\mathbb{R}^3}(\widehat{u}^{+})^{2_s^{\ast}}\,{\rm d}x\\ &\leq\liminf_{n\rightarrow\infty}\Big[\frac{s+t}{4s+2t-3}\int_{\mathbb{R}^3}\Big(f(\widehat{u}_n)\widehat{u}_n-\frac{4s+2t}{s+t}F(\widehat{u}_n)\Big)\,{\rm d}x\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{\mathbb{R}^3}(\widehat{u}_n^{+})^{2_s^{\ast}}\,{\rm d}x+\frac{s}{4s+2t-3}\int_{\mathbb{R}^3}\mu|\widehat{u}_n|^2\,{\rm d}x\Big]\\ &=\liminf_{n\rightarrow\infty}\Big[\mathcal{I}_{\mu}(\widehat{u}_n)-\frac{1}{4s+2t-3}\mathcal{G}_{\mu}(\widehat{u}_n)\Big]\\ &=\liminf_{n\rightarrow\infty}\Big[\mathcal{I}_{\mu}(u_n)-\frac{1}{4s+2t-3}\mathcal{G}_{\mu}(u_n)\Big]=c_{\mu},\end{aligned}$$ which implies that $\widehat{u}_n\rightarrow \widehat{u}$ in $H^s(\mathbb{R}^3)$.
Indeed, from the above inequality, we get that $$\int_{\mathbb{R}^3}\widehat{u}_n^2\,{\rm d}x\rightarrow\int_{\mathbb{R}^3}\widehat{u}^2\,{\rm d}x,\quad \int_{\mathbb{R}^3}(\widehat{u}_n^{+})^{2_s^{\ast}}\,{\rm d}x\rightarrow\int_{\mathbb{R}^3}(\widehat{u}^{+})^{2_s^{\ast}}\,{\rm d}x.$$ By virtue of the Brezis–Lieb lemma and an interpolation argument, we conclude that $$\widehat{u}_n\rightarrow\widehat{u}\quad \text{in}\,\, L^r(\mathbb{R}^3)\,\, \text{for all}\,\, 2\leq r\leq 2_s^{\ast}.$$ Hence, by standard arguments, it follows that $\widehat{u}_n\rightarrow \widehat{u}$ in $H^s(\mathbb{R}^3)$. Therefore, by Lemma \[lem3-2\], we conclude that $\mathcal{I}_{\mu}(\widehat{u})=c_{\mu}$ and $\mathcal{I}'_{\mu}(\widehat{u})=0$. Next, we show that the ground state solution of is positive. Indeed, by a standard argument as in the proof of Proposition 4.4 in [@Teng2], applying Lemma \[lem2-3\] twice, we have that $\widehat{u}\in C^{2,\alpha}(\mathbb{R}^3)$ for some $\alpha\in(0,1)$ for $s>\frac{1}{2}$. Using $-\widehat{u}^{-}$ as a test function, it is easy to see that $\widehat{u}\geq0$. Since $\widehat{u}\in C^{2,\alpha}(\mathbb{R}^3)$, by Lemma 3.2 in [@NPV], we have that $$(-\Delta)^s\widehat{u}(x)=-\frac{1}{2}C_s\int_{\mathbb{R}^3}\frac{\widehat{u}(x+y)+\widehat{u}(x-y)-2\widehat{u}(x)}{|y|^{3+2s}}\,{\rm d}y,\quad \forall\,\, x\in\mathbb{R}^3.$$ Assume that there exists $x_0\in\mathbb{R}^3$ such that $\widehat{u}(x_0)=0$; then, from $\widehat{u}\geq0$ and $\widehat{u}\not\equiv0$, we get $$(-\Delta)^s\widehat{u}(x_0)=-\frac{1}{2}C_s\int_{\mathbb{R}^3}\frac{\widehat{u}(x_0+y)+\widehat{u}(x_0-y)}{|y|^{3+2s}}\,{\rm d}y<0.$$ However, observe that $(-\Delta)^s\widehat{u}(x_0)=-\mu \widehat{u}(x_0)-(\phi_{\widehat{u}}^t\widehat{u})(x_0)+f(\widehat{u}(x_0))+\widehat{u}(x_0)^{2_s^{\ast}-1}=0$, a contradiction. Hence, $\widehat{u}(x)>0$ for every $x\in\mathbb{R}^3$. The proof is completed.
Let $\mathcal{L}_{\mu}$ be the set of ground state solutions $W$ of satisfying $W(0)=\max\limits_{\mathbb{R}^3}W(x)$. By a proof similar to that of Proposition 3.8 in [@Teng4], we can establish the following compactness property of $\mathcal{L}_{\mu}$. \[pro3-4\] $(i)$ For each $\mu>0$, $\mathcal{L}_{\mu}$ is compact in $H^s(\mathbb{R}^3)$.\ $(ii)$ $0<W(x)\leq\frac{C}{1+|x|^{3+2s}}$, for any $x\in\mathbb{R}^3$. The penalization scheme ======================= For the bounded domain $\Lambda$ given in $(V_1)$, fix $k>2$ and $a>0$ such that $f(a)+a^{2_s^{\ast}-1}=\frac{V_0}{k}a$, where $V_0$ is the constant in $(V_0)$, and consider a new problem $$\label{main-4-1} (-\Delta)^su+V(\varepsilon z)u+\phi_u^t u=g(\varepsilon z,u) \quad \text{in}\,\,\mathbb{R}^3$$ where $g(\varepsilon z,\tau)=\chi_{\Lambda_{\varepsilon}}(\varepsilon z)(f(\tau)+(\tau^{+})^{2_s^{\ast}-1})+(1-\chi_{\Lambda_{\varepsilon}}(\varepsilon z))\tilde{f}(\tau)$ with $$\tilde{f}(\tau)=\left\{ \begin{array}{ll} f(\tau)+(\tau^{+})^{2_s^{\ast}-1} & \hbox{if $\tau\leq a$,} \\ \frac{V_0}{k}\tau & \hbox{if $\tau>a$} \end{array} \right.$$ and $\chi_{\Lambda_{\varepsilon}}(\varepsilon z)=1$ if $z\in\Lambda_{\varepsilon}$, $\chi_{\Lambda_{\varepsilon}}(\varepsilon z)=0$ if $z\not\in\Lambda_{\varepsilon}$, where $\Lambda_{\varepsilon}=\Lambda/\varepsilon$.
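Note that the matching condition $f(a)+a^{2_s^{\ast}-1}=\frac{V_0}{k}a$ is exactly what makes the truncated nonlinearity continuous at the truncation level: $$\lim_{\tau\rightarrow a^{-}}\tilde{f}(\tau)=f(a)+a^{2_s^{\ast}-1}=\frac{V_0}{k}a=\lim_{\tau\rightarrow a^{+}}\tilde{f}(\tau),$$ while for $\tau>a$ one has the linear bound $\tilde{f}(\tau)\tau=\frac{V_0}{k}\tau^2$, which is the source of the penalization estimate $(g_3)$ below.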
It is easy to see that under the assumptions $(f_1)$-$(f_3)$, $g(z,\tau)$ is a Carathéodory function and satisfies the following properties:\ $(g_1)$ $g(z,\tau)=o(\tau^3)$ as $\tau\rightarrow0$ uniformly on $z\in\mathbb{R}^3$;\ $(g_2)$ $g(z,\tau)\leq f(\tau)+\tau^{2_s^{\ast}-1}$ for all $\tau\in\mathbb{R}^{+}$ and $z\in\mathbb{R}^3$, $g(z,\tau)=0$ for all $z\in\mathbb{R}^3$ and $\tau<0$, $g(z,\tau)=f(\tau)+(\tau^{+})^{2_s^{\ast}-1}$ for $z\in\mathbb{R}^3$, $\tau\in[0,a]$;\ $(g_3)$ $0<2\tilde{F}(\tau)\leq\tilde{f}(\tau)\tau\leq\frac{V_0}{k}\tau^2\leq\frac{V(x)}{k}\tau^2$ for all $\tau\geq0$ with the number $k>2$, where $\tilde{F}(\tau)$ is a primitive of $\tilde{f}$;\ $(g_4)$ $0<q G(z,\tau)\leq g(z,\tau)\tau$ for all $z\in\Lambda$, $\tau>0$ or $z\in\mathbb{R}^3\backslash\Lambda$, $\tau\leq a$, where $G(z,\tau)$ is a primitive of $g(z,\tau)$;\ $(g_5)$ $\frac{g(z,\tau)}{\tau}$ is nondecreasing in $\tau\in\mathbb{R}^{+}$ uniformly for $z\in\mathbb{R}^3$, $\frac{g(z,\tau)}{\tau^q}$ is nondecreasing in $\tau\in\mathbb{R}^{+}$ for $z\in\Lambda$, and $\frac{g(z,\tau)}{\tau^q}$ is nondecreasing in $\tau\in(0,a)$ for $z\in\mathbb{R}^3\backslash\Lambda$. Obviously, if $u_{\varepsilon}$ is a solution of satisfying $u_{\varepsilon}(z)\leq a$ for $z\in\mathbb{R}^3$, then $u_{\varepsilon}$ is indeed a solution of the original problem . For $u\in H_{\varepsilon}$, let $$P_{\varepsilon}(u)=\frac{1}{2}\int_{\mathbb{R}^3}(|D_su|^2+V(\varepsilon z)u^2)\,{\rm d}z+\frac{1}{4}\int_{\mathbb{R}^3}\phi_u^tu^2\,{\rm d}z-\int_{\mathbb{R}^3}G(\varepsilon z,u)\,{\rm d}z$$ and $$Q_{\varepsilon}(u)=\Big(\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u^2\,{\rm d}z-\varepsilon\Big)_{+}^2.$$ Let us define the functional $\mathcal{J}_{\varepsilon}: H_{\varepsilon}\rightarrow\mathbb{R}$ as follows $$\mathcal{J}_{\varepsilon}(u)=P_{\varepsilon}(u)+Q_{\varepsilon}(u).$$ Clearly, $\mathcal{J}_{\varepsilon}\in C^1(H_{\varepsilon},\mathbb{R})$.
To find solutions of which concentrate in $\Lambda$ as $\varepsilon\rightarrow0$, we shall search for critical points of $\mathcal{J}_{\varepsilon}$ such that $Q_{\varepsilon}$ is zero. Set $$\delta_0=\frac{1}{10}{\rm dist}(\mathcal{M},\mathbb{R}^3\backslash\Lambda),\quad \beta\in(0,\delta_0).$$ Fix a cut-off function $\varphi\in C_0^{\infty}(\mathbb{R}^3)$ such that $0\leq\varphi\leq1$, $\varphi=1$ for $|z|\leq\beta$, $\varphi=0$ for $|z|\geq 2\beta$ and $|\nabla\varphi|\leq C/\beta$. Set $\varphi_{\varepsilon}(z)=\varphi(\varepsilon z)$. For any $W\in \mathcal{L}_{V_0}$ and any point $y\in\mathcal{M}^{\beta}=\{y\in\mathbb{R}^3\,\,|\,\,\inf\limits_{z\in\mathcal{M}}|y-z|\leq\beta\}$, we define $$W_{\varepsilon}^{y}(z)=\varphi_{\varepsilon}(z-\frac{y}{\varepsilon})W(z-\frac{y}{\varepsilon}).$$ For $A\subset H_{\varepsilon}$, we use the notation $$A^{a}=\{u\in H_{\varepsilon}\,\,\Big|\,\,\inf_{v\in A}\|u-v\|_{H_{\varepsilon}}\leq a\}.$$ We want to find a solution near the set $$\mathcal{N}_{\varepsilon}=\{W_{\varepsilon}^{y}(z)\,\,\Big|\,\,y\in\mathcal{M}^{\beta}, \,\, W\in \mathcal{L}_{V_0}\}$$ for $\varepsilon>0$ sufficiently small.
By arguments similar to the proofs of Lemma 4.1, Lemma 4.2 and Lemma 4.3 in [@Teng4], we can show that:\ - $\mathcal{N}_{\varepsilon}$ is uniformly bounded in $H_{\varepsilon}$ and it is compact in $H_{\varepsilon}$ for any $\varepsilon>0$; - $$\sup_{\tau\in[0,\tau_0]}\Big|\mathcal{J}_{\varepsilon}(W_{\varepsilon,\tau})-\mathcal{I}_{V_0}(W^{\ast}_{\tau}(z))\Big|\rightarrow0\quad \text{as}\,\, \varepsilon\rightarrow0;$$ - $$\label{equ4-1} \lim_{\varepsilon\rightarrow0}\mathcal{C}_{\varepsilon}=\lim_{\varepsilon\rightarrow0}\mathcal{D}_{\varepsilon}:=\lim_{\varepsilon\rightarrow0}\max_{\tau\in[0,1]}\mathcal{J}_{\varepsilon}(\gamma_{\varepsilon}(\tau))=c_{V_0}$$ where $$\mathcal{C}_{\varepsilon}:=\inf_{\gamma\in\mathcal{A}_{\varepsilon}}\max_{\tau\geq0}\mathcal{J}_{\varepsilon}(\gamma(\tau)),$$ $\mathcal{A}_{\varepsilon}=\{\gamma\in C([0,1],H_{\varepsilon})\,\,|\,\, \gamma(0)=0,\,\,\gamma(1)=W_{\varepsilon,\tau_0}\}$, $\gamma_{\varepsilon}(\tau)=W_{\varepsilon,\tau\tau_0}$ for $\tau\in[0,1]$ and $c_{V_0}=\mathcal{I}_{V_0}(W^{\ast})$ for $W^{\ast}\in\mathcal{L}_{V_0}$. Moreover, $\mathcal{J}_{\varepsilon}$ possesses the mountain-pass geometry along the path $\tau\mapsto W_{\varepsilon,\tau}$.
\[lem4-1\] There exists a small $d_0>0$ such that for any $\{\varepsilon_i\}$, $\{u_{\varepsilon_i}\}$ satisfying $\lim\limits_{i\rightarrow\infty}\varepsilon_i=0$, $u_{\varepsilon_i}\in \mathcal{N}_{\varepsilon_i}^{d_0}$ and $$\lim_{i\rightarrow\infty}\mathcal{J}_{\varepsilon_i}(u_{\varepsilon_i})\leq c_{V_0}\quad \text{and}\quad \lim_{i\rightarrow\infty}\mathcal{J}_{\varepsilon_i}'(u_{\varepsilon_i})=0,$$ there exist, up to a subsequence, $\{x_i\}\subset\mathbb{R}^3$, $x_0\in\mathcal{M}$, $W\in \mathcal{L}_{V_0}$ such that $$\lim_{i\rightarrow\infty}|\varepsilon_ix_i-x_0|=0\quad \text{and}\quad \lim_{i\rightarrow\infty}\|u_{\varepsilon_i}-\varphi_{\varepsilon_i}(\cdot-x_{i})W(\cdot-x_{i})\|_{H_{\varepsilon_i}}=0.$$ In the proof we will drop the index $i$ and write $\varepsilon$ instead of $\varepsilon_i$ for simplicity, and we still use $\varepsilon$ after taking a subsequence. By the definition of $\mathcal{N}_{\varepsilon}^{d_0}$, there exist $\{W_{\varepsilon}\}\subset\mathcal{L}_{V_0}$ and $\{x_{\varepsilon}\}\subset\mathcal{M}^{\beta}$ such that $$\|u_{\varepsilon}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}}\leq\frac{3}{2}d_0.$$ Since $\mathcal{L}_{V_0}$ and $\mathcal{M}^{\beta}$ are compact, there exist $W_0\in\mathcal{L}_{V_0}$, $x_0\in\mathcal{M}^{\beta}$ such that $W_{\varepsilon}\rightarrow W_0$ in $H^s(\mathbb{R}^3)$ and $x_{\varepsilon}\rightarrow x_0$ as $\varepsilon\rightarrow0$.
Thus, for $\varepsilon>0$ small, $$\label{equ4-2} \|u_{\varepsilon}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}}\leq2d_0.$$ [**Step 1.**]{} We claim that $$\label{equ4-3} \lim_{\varepsilon\rightarrow0}\sup_{y\in A_{\varepsilon}}\int_{B_1(y)}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z=0,$$ where $A_{\varepsilon}=B_{3\beta/\varepsilon}(x_{\varepsilon}/\varepsilon)\backslash B_{\beta/2\varepsilon}(x_{\varepsilon}/\varepsilon)$. Suppose by contradiction that there exists $r>0$ such that $$\liminf_{\varepsilon\rightarrow0}\sup_{y\in A_{\varepsilon}}\int_{B_1(y)}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z=2r>0.$$ Thus, there exists $y_{\varepsilon}\in A_{\varepsilon}$ such that $\int_{B_1(y_{\varepsilon})}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\geq r>0$ for $\varepsilon>0$ small. Since $y_{\varepsilon}\in A_{\varepsilon}$, there exists $y^{\ast}\in\mathcal{M}^{4\beta}\subset\Lambda$ such that $\varepsilon y_{\varepsilon}\rightarrow y^{\ast}$ as $\varepsilon\rightarrow0$. Set $v_{\varepsilon}(z)=u_{\varepsilon}(z+y_{\varepsilon})$; then for $\varepsilon>0$ small, $$\label{equ4-4} \int_{B_1(0)}|v_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\geq r>0.$$ Thus, up to a subsequence, we may assume that there exists $v\in H^s(\mathbb{R}^3)$ such that $v_{\varepsilon}\rightharpoonup v$ in $H^s(\mathbb{R}^3)$, $v_{\varepsilon}\rightarrow v$ in $L_{loc}^p(\mathbb{R}^3)$ for $1\leq p<2_s^{\ast}$ and $v_{\varepsilon}\rightarrow v$ a.e. in $\mathbb{R}^3$.
It is easy to check that $v$ satisfies $$\label{equ4-4-0} (-\Delta)^s v+V(y^{\ast}) v+\phi_{v}^t v=f(v)+(v^{+})^{2_s^{\ast}-1} \quad \text{in}\,\,\mathbb{R}^3.$$ Indeed, by the definition of weak convergence, we have $$\begin{aligned} \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{(v_{\varepsilon}(z)-v_{\varepsilon}(y))(\varphi(z)-\varphi(y))}{|z-y|^{3+2s}}\,{\rm d }y\,{\rm d }z+\int_{\mathbb{R}^3}V(y^{\ast})v_{\varepsilon}\varphi\,{\rm d}z\rightarrow\\ \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{(v(z)-v(y))(\varphi(z)-\varphi(y))}{|z-y|^{3+2s}}\,{\rm d }y\,{\rm d }z+\int_{\mathbb{R}^3}V(y^{\ast})v\varphi\,{\rm d}z\end{aligned}$$ for any $\varphi\in C_0^{\infty}(\mathbb{R}^3)$. Now given $\varphi\in C_0^{\infty}(\mathbb{R}^3)$, we have $\|\varphi(\cdot-y_{\varepsilon})\|_{H_{\varepsilon}}\leq C$ and so $\langle\mathcal{J}_{\varepsilon}'(u_{\varepsilon}),\varphi(\cdot-y_{\varepsilon})\rangle\rightarrow0$ as $\varepsilon\rightarrow0$. Using the fact that $v_{\varepsilon}\rightarrow v$ in $L_{loc}^p(\mathbb{R}^3)$ for $1\leq p<2_s^{\ast}$, the Lebesgue dominated convergence theorem, the boundedness of ${\rm supp}(\varphi)$ and $(g_1)$–$(g_2)$, it follows that $$\begin{aligned} \int_{\mathbb{R}^3}(V(\varepsilon z+\varepsilon y_{\varepsilon})-V(y^{\ast}))v_{\varepsilon}\varphi\,{\rm d }z\rightarrow0,\quad \int_{\mathbb{R}^3}(\phi_{v_{\varepsilon}}^tv_{\varepsilon}-\phi_v^tv)\varphi\,{\rm d }z\rightarrow0,\end{aligned}$$ $$\begin{aligned} \int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u_{\varepsilon}(z)\varphi(z-y_{\varepsilon})\,{\rm d}z=\int_{(\mathbb{R}^3\backslash\Lambda_{\varepsilon})-y_{\varepsilon}}v_{\varepsilon}(z)\varphi(z)\,{\rm d}z\rightarrow0\end{aligned}$$ and $$\begin{aligned} \int_{\mathbb{R}^3}\Big[g(\varepsilon z+\varepsilon y_{\varepsilon},v_{\varepsilon})-f(v)-(v^{+})^{2_s^{\ast}-1}\Big]\varphi\,{\rm d }z\rightarrow0\end{aligned}$$ for any $\varphi\in C_0^{\infty}(\mathbb{R}^3)$.
Therefore, we get that $$\begin{aligned} \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{(v(z)-v(y))(\varphi(z)-\varphi(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z&+\int_{\mathbb{R}^3}V(y^{\ast})v\varphi\,{\rm d}z+\int_{\mathbb{R}^3}\phi_v^tv\varphi\,{\rm d}z\\ &-\int_{\mathbb{R}^3}\Big(f(v)+(v^{+})^{2_s^{\ast}-1}\Big)\varphi\,{\rm d}z=0\end{aligned}$$ for any $\varphi\in C_0^{\infty}(\mathbb{R}^3)$. Since $\varphi$ is arbitrary and $C_0^{\infty}(\mathbb{R}^3)$ is dense in $H^s(\mathbb{R}^3)$, it follows that $v$ satisfies . [**Case 1.**]{} If $v\neq0$, then $$\begin{aligned} &c_{V(y^{\ast})}\leq \mathcal{I}_{V(y^{\ast})}(v)=\mathcal{I}_{V(y^{\ast})}(v)-\frac{1}{4s+2t-3}\mathcal{G}_{V(y^{\ast})}(v)\\ &=\frac{s}{4s+2t-3}\int_{\mathbb{R}^3}V(y^{\ast})|v|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{\mathbb{R}^3}[f(v)v-\frac{4s+2t}{s+t}F(v)]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{\mathbb{R}^3}(v^{+})^{2_s^{\ast}}\,{\rm d}z\\ &\leq \frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{\mathbb{R}^3}|v|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{\mathbb{R}^3}[f(v)v-\frac{4s+2t}{s+t}F(v)]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{\mathbb{R}^3}(v^{+})^{2_s^{\ast}}\,{\rm d}z.\end{aligned}$$ Hence, for sufficiently large $R>0$, by Fatou’s Lemma, we have that $$\begin{aligned} &\liminf_{\varepsilon\rightarrow0}\Big[\frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{B_R(y_{\varepsilon})}|u_{\varepsilon}|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{B_R(y_{\varepsilon})}[f(u_{\varepsilon})u_{\varepsilon}-\frac{4s+2t}{s+t}F(u_{\varepsilon})]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{B_R(y_{\varepsilon})}(u_{\varepsilon}^{+})^{2_s^{\ast}}\,{\rm d}z\Big]\\ &=\liminf_{\varepsilon\rightarrow0}\Big[\frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{B_R(0)}|v_{\varepsilon}|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{B_R(0)}[f(v_{\varepsilon})v_{\varepsilon}-\frac{4s+2t}{s+t}F(v_{\varepsilon})]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{B_R(0)}(v_{\varepsilon}^{+})^{2_s^{\ast}}\,{\rm d}z\Big]\\ &\geq\Big[\frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{B_R(0)}|v|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{B_R(0)}[f(v)v-\frac{4s+2t}{s+t}F(v)]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{B_R(0)}(v^{+})^{2_s^{\ast}}\,{\rm d}z\Big]\\ &\geq\frac{1}{2}\Big[\frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{\mathbb{R}^3}|v|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{\mathbb{R}^3}[f(v)v-\frac{4s+2t}{s+t}F(v)]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{\mathbb{R}^3}(v^{+})^{2_s^{\ast}}\,{\rm d}z\Big]\geq\frac{1}{2}c_{V(y^{\ast})}>0.\end{aligned}$$ On the other hand, by the Sobolev embedding theorem and , one has $$\begin{aligned} &\frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{B_R(y_{\varepsilon})}|u_{\varepsilon}|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{B_R(y_{\varepsilon})}[f(u_{\varepsilon})u_{\varepsilon}-\frac{4s+2t}{s+t}F(u_{\varepsilon})]\,{\rm d}z\\ &+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{B_R(y_{\varepsilon})}(u_{\varepsilon}^{+})^{2_s^{\ast}}\,{\rm d}z\\ &\leq Cd_0+C\int_{B_R(y_{\varepsilon})}\Big|\varphi(\varepsilon z-x_{\varepsilon})W_0(z-\frac{x_{\varepsilon}}{\varepsilon})\Big|^2\,{\rm d}z+C\int_{B_R(y_{\varepsilon})}\Big|\varphi(\varepsilon z-x_{\varepsilon})W_0(z-\frac{x_{\varepsilon}}{\varepsilon})\Big|^{2_s^{\ast}}\,{\rm d}z\\ &\leq Cd_0+C\int_{B_R(y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon})}|W_0|^2\,{\rm d}z+C\int_{B_R(y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon})}|W_0|^{2_s^{\ast}}\,{\rm d}z.\end{aligned}$$ Observing that $y_{\varepsilon}\in A_{\varepsilon}$ implies $|y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon}|\geq\frac{\beta}{2\varepsilon}$, for $\varepsilon>0$ small enough there holds $$\int_{B_R(y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon})}|W_0|^2\,{\rm d}z+\int_{B_R(y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon})}|W_0|^{2_s^{\ast}}\,{\rm d}z=o(1),$$ where $o(1)\rightarrow0$ as $\varepsilon\rightarrow0$. Thus, we have proved that $$\begin{aligned} &\frac{1}{2}c_{V(y^{\ast})}\leq \frac{s}{4s+2t-3}\|V\|_{L^{\infty}(\overline{\Lambda})}\int_{B_R(y_{\varepsilon})}|u_{\varepsilon}|^2\,{\rm d}z+\frac{s+t}{4s+2t-3}\int_{B_R(y_{\varepsilon})}[f(u_{\varepsilon})u_{\varepsilon}\\ &-\frac{4s+2t}{s+t}F(u_{\varepsilon})]\,{\rm d}z+\frac{(2_s^{\ast}-4)s+(2_s^{\ast}-2)t}{2_s^{\ast}(4s+2t-3)}\int_{B_R(y_{\varepsilon})}(u_{\varepsilon}^{+})^{2_s^{\ast}}\,{\rm d}z\\ &\leq Cd_0+o(1).\end{aligned}$$ This leads to a contradiction if $d_0$ is small enough. [**Case 2.**]{} If $v=0$, then $v_{\varepsilon}\rightharpoonup0$ in $H^s(\mathbb{R}^3)$, $v_{\varepsilon}\rightarrow 0$ in $L_{loc}^p(\mathbb{R}^3)$ for $1\leq p<2_s^{\ast}$ and $v_{\varepsilon}\rightarrow 0$ a.e. in $\mathbb{R}^3$. Now we claim that $$\label{equ4-5} \lim_{\varepsilon\rightarrow0}\sup_{\varphi\in C_0^{\infty}(B_2(0)),\|\varphi\|=1}|\langle\rho_{\varepsilon},\varphi\rangle|=0,$$ where $\rho_{\varepsilon}=-(-\Delta)^sv_{\varepsilon}+(v_{\varepsilon}^{+})^{2_s^{\ast}-1}\in \Big(H^s(\mathbb{R}^3)\Big)'$. It is easy to check that for $\varepsilon>0$ small, $\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u_{\varepsilon}\varphi(z-y_{\varepsilon})\,{\rm d}z=0$ uniformly for any $\varphi\in C_0^{\infty}(B_2(0))$.
Thus, for any $\varphi\in C_0^{\infty}(B_2(0))$ with $\|\varphi\|=1$, we deduce that $$\begin{aligned} &\langle\rho_{\varepsilon},\varphi\rangle=-\int_{\mathbb{R}^3}\frac{(v_{\varepsilon}(z)-v_{\varepsilon}(y))(\varphi(z)-\varphi(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z+\int_{\mathbb{R}^3}(v_{\varepsilon}^{+})^{2_s^{\ast}-1}\varphi\,{\rm d}z\\ &=-\langle\mathcal{J}'_{\varepsilon}(u_{\varepsilon}),\varphi(\cdot-y_{\varepsilon})\rangle+\int_{\mathbb{R}^3}V(\varepsilon z)u_{\varepsilon}\varphi(z-y_{\varepsilon})\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon}}^tu_{\varepsilon}\varphi(z-y_{\varepsilon})\,{\rm d}z\\ &+\int_{\mathbb{R}^3}[(u_{\varepsilon}^{+})^{2_s^{\ast}-1}-g(\varepsilon z, u_{\varepsilon})]\varphi(z-y_{\varepsilon})\,{\rm d}z\\ &:=A_1+A_2+A_3+A_4.\end{aligned}$$ By the facts that $\lim\limits_{\varepsilon\rightarrow0}\mathcal{J}'_{\varepsilon}(u_{\varepsilon})=0$, ${\rm supp}\varphi\subset B_2$, $\sup\limits_{z\in B_2(0)}V(\varepsilon z+\varepsilon y_{\varepsilon})\leq C$ uniformly for all $\varepsilon>0$ small, $v_{\varepsilon}\rightarrow0$ in $L_{loc}^p(\mathbb{R}^3)$ for $1\leq p<2_s^{\ast}$, $\frac{12}{3+2t}<2_s^{\ast}$, and using Hölder’s inequality, we deduce that $$\begin{aligned} |A_1|\leq\|\mathcal{J}'_{\varepsilon}(u_{\varepsilon})\|_{(H_{\varepsilon})'}\|\varphi(\cdot-y_{\varepsilon})\|_{H_{\varepsilon}}\leq o(1)\|\varphi(\cdot-y_{\varepsilon})\|=o(1)\|\varphi\|\rightarrow0,\end{aligned}$$ $$\begin{aligned} |A_2|\leq\sup_{x\in B_2(0)}V(\varepsilon x+\varepsilon y_{\varepsilon})\Big(\int_{B_2(0)}|v_{\varepsilon}|^2\,{\rm d}z\Big)^{\frac{1}{2}}\Big(\int_{B_2(0)}|\varphi|^2\,{\rm d}z\Big)^{\frac{1}{2}}\rightarrow0,\end{aligned}$$ $$\begin{aligned} |A_3|&\leq\Big(\int_{\mathbb{R}^3}|\phi_{v_{\varepsilon}}^t|^{2_t^{\ast}}\,{\rm d}z\Big)^{\frac{1}{2_t^{\ast}}}\Big(\int_{B_2(0)}|v_{\varepsilon}|^{\frac{12}{3+2t}}\,{\rm d}z\Big)^{\frac{3+2t}{12}}\Big(\int_{B_2(0)}|\varphi|^{\frac{12}{3+2t}}\,{\rm 
d}z\Big)^{\frac{3+2t}{12}}\rightarrow0,\end{aligned}$$ uniformly for $\varphi\in C_0^{\infty}(B_2(0))$ with $\|\varphi\|=1$. By $(f_0)$ and $(f_1)$, for any $\eta>0$, there exists $C_{\eta}>0$ such that $$f(\tau)\leq C_{\eta}|\tau|+\eta|\tau|^{2_s^{\ast}-1}\quad \text{for any}\,\, \tau\geq0$$ and $$\lim_{\tau\rightarrow+\infty}\frac{g(\varepsilon z+\varepsilon y_{\varepsilon},\tau)-f(\tau)-(\tau^{+})^{2_s^{\ast}-1}}{(\tau^{+})^{2_s^{\ast}-1}}=0$$ uniformly for $z\in B_2(0)$ and for $\varepsilon$ small enough; then we have $$\begin{aligned} |A_4|&=\Big|\int_{\mathbb{R}^3}[(v_{\varepsilon}^{+})^{2_s^{\ast}-1}-g(\varepsilon z+\varepsilon y_{\varepsilon}, v_{\varepsilon})]\varphi\,{\rm d}z\Big|=\Big|-\int_{\mathbb{R}^3}f(v_{\varepsilon})\varphi\,{\rm d}z\\ &+\int_{\mathbb{R}^3}[(v_{\varepsilon}^{+})^{2_s^{\ast}-1}+f(v_{\varepsilon})-g(\varepsilon z+\varepsilon y_{\varepsilon}, v_{\varepsilon})]\varphi\,{\rm d}z\Big|\\ &\leq\eta\Big(\int_{\mathbb{R}^3}|v_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\Big)^{\frac{2_s^{\ast}-1}{2_s^{\ast}}}\Big(\int_{\mathbb{R}^3}|\varphi|^{2_s^{\ast}}\,{\rm d}z\Big)^{\frac{1}{2_s^{\ast}}}+C_{\eta}\Big(\int_{B_2(0)}|v_{\varepsilon}|^2\,{\rm d}z\Big)^{\frac{1}{2}}\Big(\int_{B_2(0)}|\varphi|^{2}\,{\rm d}z\Big)^{\frac{1}{2}}\\ &+\Big|\int_{B_2(0)}[(v_{\varepsilon}^{+})^{2_s^{\ast}-1}+f(v_{\varepsilon})-g(\varepsilon z+\varepsilon y_{\varepsilon}, v_{\varepsilon})]\varphi\,{\rm d}z\Big|\\ &\leq2\eta\Big(\int_{\mathbb{R}^3}|v_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\Big)^{\frac{2_s^{\ast}-1}{2_s^{\ast}}}\Big(\int_{\mathbb{R}^3}|\varphi|^{2_s^{\ast}}\,{\rm d}z\Big)^{\frac{1}{2_s^{\ast}}}+2C_{\eta}\Big(\int_{B_2(0)}|v_{\varepsilon}|^2\,{\rm d}z\Big)^{\frac{1}{2}}\Big(\int_{B_2(0)}|\varphi|^{2}\,{\rm d}z\Big)^{\frac{1}{2}}\end{aligned}$$ for $\varepsilon$ sufficiently small.
Letting $\varepsilon\rightarrow0$ and then $\eta\rightarrow0$ in the above inequality, we see that $A_4\rightarrow0$ as $\varepsilon\rightarrow0$ uniformly for $\varphi\in C_0^{\infty}(B_2(0))$ with $\|\varphi\|=1$. Hence holds. By Proposition \[pro2-3\], taking $Q=B_1(0)$ and $V=B_2(0)$, from and , it follows that there exist $\widetilde{z}_{\varepsilon}\in\mathbb{R}^3$ and $\sigma_{\varepsilon}>0$ with $\widetilde{z}_{\varepsilon}\rightarrow \widetilde{z}\in\overline{B_1(0)}$, $\sigma_{\varepsilon}\rightarrow0$ as $\varepsilon\rightarrow0$, such that $$\widetilde{v}_{\varepsilon}(z):=\sigma_{\varepsilon}^{\frac{3-2s}{2}}v_{\varepsilon}(\sigma_{\varepsilon}z+\widetilde{z}_{\varepsilon})\rightharpoonup \widetilde{v}\quad \text{in}\,\, \mathcal{D}^{s,2}(\mathbb{R}^3)$$ and $\widetilde{v}\geq0$ is a nontrivial solution of $$\label{equ4-6} (-\Delta)^sv=v^{2_s^{\ast}-1},\quad v\in\mathcal{D}^{s,2}(\mathbb{R}^3).$$ By Theorem 1.1 in [@CKT] and Claim 6 in [@SV], we see that $$\widetilde{v}(z)=\frac{\bar{u}(z/\mathcal{S}_s^{\frac{1}{2s}})}{\|\bar{u}\|_{L^{2_s^{\ast}}}},\quad \bar{u}(z)=\kappa(\mu^2+|z-x_0|^2)^{-\frac{3-2s}{2}}$$ for some $\kappa>0$, $\mu>0$, $x_0\in\mathbb{R}^3$, and $$\label{equ4-6-0} \int_{\mathbb{R}^3}|D_s\widetilde{v}|^2\,{\rm d}z=\int_{\mathbb{R}^3}|\widetilde{v}|^{2_s^{\ast}}\,{\rm d}z=\mathcal{S}_s^{\frac{3}{2s}}.$$ Thus, there exists $R>0$ such that $$\int_{B_R(0)}|\widetilde{v}|^{2_s^{\ast}}\,{\rm d}z\geq\frac{1}{2}\int_{\mathbb{R}^3}|\widetilde{v}|^{2_s^{\ast}}\,{\rm d}z=\frac{1}{2}\mathcal{S}_s^{\frac{3}{2s}}>0.$$ On the other hand, using the facts that $\sigma_{\varepsilon}\rightarrow0$ and $\widetilde{z}_{\varepsilon}\rightarrow \widetilde{z}\in\overline{B_1(0)}$ (which imply that $B_{\sigma_{\varepsilon}R}(\widetilde{z}_{\varepsilon}+y_{\varepsilon})\subset B_2(y_{\varepsilon})$ for $\varepsilon$ small), we have that $$\begin{aligned} \label{equ4-7} &\int_{B_R(0)}\widetilde{v}^{2_s^{\ast}}\,{\rm
d}z\leq\liminf_{\varepsilon\rightarrow0}\int_{B_R(0)}\widetilde{v}_{\varepsilon}^{2_s^{\ast}}\,{\rm d}z=\liminf_{\varepsilon\rightarrow0}\int_{B_{\sigma_{\varepsilon}R}(\widetilde{z}_{\varepsilon})}v_{\varepsilon}^{2_s^{\ast}}\,{\rm d}z\nonumber\\ &=\liminf_{\varepsilon\rightarrow0}\int_{B_{\sigma_{\varepsilon}R}(\widetilde{z}_{\varepsilon}+y_{\varepsilon})}u_{\varepsilon}^{2_s^{\ast}}\,{\rm d}z\leq\liminf_{\varepsilon\rightarrow0}\int_{B_2(y_{\varepsilon})}u_{\varepsilon}^{2_s^{\ast}}\,{\rm d}z.\end{aligned}$$ But, by the Sobolev embedding theorem and , we get $$\begin{aligned} \int_{B_2(y_{\varepsilon})}u_{\varepsilon}^{2_s^{\ast}}\,{\rm d}z&\leq Cd_0+C\int_{B_2(y_{\varepsilon})}(\varphi(\varepsilon z-x_{\varepsilon})W_0(z-\frac{x_{\varepsilon}}{\varepsilon}))^{2_s^{\ast}}\,{\rm d}z\\ &\leq Cd_0+C\int_{B_2(y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon})}W_0^{2_s^{\ast}}\,{\rm d}z=Cd_0+o(1)\end{aligned}$$ since $|y_{\varepsilon}-\frac{x_{\varepsilon}}{\varepsilon}|\geq\frac{\beta}{2\varepsilon}$. This leads to a contradiction with for $d_0>0$ small. Hence the claim holds. From , by Proposition \[pro2-1\], we conclude that $$\label{equ4-8} \lim_{\varepsilon\rightarrow0}\int_{A_{\varepsilon}^1}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z=0,$$ where $A_{\varepsilon}^1=B_{2\beta/\varepsilon}(\frac{x_{\varepsilon}}{\varepsilon})\backslash B_{\beta/\varepsilon}(\frac{x_{\varepsilon}}{\varepsilon})$. Indeed, take a smooth cut-off function $\psi_{\varepsilon}\in C_0^{\infty}(\mathbb{R}^3)$ such that $\psi_{\varepsilon}=1$ on $B_{2\beta/\varepsilon}(\frac{x_{\varepsilon}}{\varepsilon})\backslash B_{\beta/\varepsilon}(\frac{x_{\varepsilon}}{\varepsilon})$ and $\psi_{\varepsilon}=0$ on $\mathbb{R}^3\backslash A_{\varepsilon}^2$, where $A_{\varepsilon}^2=B_{3\beta/\varepsilon-1}(\frac{x_{\varepsilon}}{\varepsilon})\backslash B_{\beta/2\varepsilon+1}(\frac{x_{\varepsilon}}{\varepsilon})$.
Since $u_{\varepsilon}\in H_{\varepsilon}$, using $(V_0)$ it is easy to check that $u_{\varepsilon}\psi_{\varepsilon}\in H^s(\mathbb{R}^3)$. Moreover, $$\sup_{y\in A_{\varepsilon}}\int_{B_1(y)}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\geq\sup_{y\in\mathbb{R}^3}\int_{B_1(y)}|u_{\varepsilon}\psi_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z.$$ By Proposition \[pro2-1\], we have $$\int_{\mathbb{R}^3}|u_{\varepsilon}\psi_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\rightarrow0\quad \text{as}\,\, \varepsilon\rightarrow0.$$ Since $A_{\varepsilon}^1\subset A_{\varepsilon}^2$ for $\varepsilon>0$ small, so holds. Therefore, by the interpolation inequality, it is not difficult to verify that $$\label{equ4-9} \lim_{\varepsilon\rightarrow0}\int_{A_{\varepsilon}^1}|u_{\varepsilon}|^p\,{\rm d}z=0\quad \text{for all} \,\, p\in(2,2_s^{\ast}].$$ Set $u_{\varepsilon,1}(z)=\varphi(\varepsilon z-x_{\varepsilon})u_{\varepsilon}(z)$, $u_{\varepsilon,2}(z)=(1-\varphi(\varepsilon z-x_{\varepsilon}))u_{\varepsilon}(z)$. Arguing as in the proof of (26) in [@Teng3], we obtain that $$\begin{aligned} \label{equ4-9-0} \int_{\mathbb{R}^3}|D_s u_{\varepsilon}|^2\,{\rm d}z&=\int_{\mathbb{R}^3}|D_s u_{\varepsilon,1}|^2\,{\rm d}z+\int_{\mathbb{R}^3}|D_s u_{\varepsilon,2}|^2\,{\rm d}z\nonumber\\ &+2\int_{\mathbb{R}^3}\frac{(u_{\varepsilon,1}(z)-u_{\varepsilon,1}(y))(u_{\varepsilon,2}(z)-u_{\varepsilon,2}(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\nonumber\\ &\geq\int_{\mathbb{R}^3}|D_su_{\varepsilon,1}|^2\,{\rm d}z+\int_{\mathbb{R}^3}|D_su_{\varepsilon,2}|^2\,{\rm d}z+o(1).\end{aligned}$$ By , we deduce that $$\begin{aligned} \int_{\mathbb{R}^3}V(\varepsilon z)|u_{\varepsilon}|^2\,{\rm d}z\geq\int_{\mathbb{R}^3}V(\varepsilon z)|u_{\varepsilon,1}|^2\,{\rm d}z+\int_{\mathbb{R}^3}V(\varepsilon z)|u_{\varepsilon,2}|^2\,{\rm d}z\end{aligned}$$ $$\begin{aligned} \int_{\mathbb{R}^3}\phi_{u_{\varepsilon}}^t|u_{\varepsilon}|^2\,{\rm d}z\geq\int_{\mathbb{R}^3}\phi_{u_{\varepsilon,1}}^t|u_{\varepsilon,1}|^2\,{\rm
d}z+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon,2}}^t|u_{\varepsilon,2}|^2\,{\rm d}z\end{aligned}$$ $$\begin{aligned} \int_{\mathbb{R}^3}G(\varepsilon z, u_{\varepsilon})\,{\rm d}z=\int_{\mathbb{R}^3}G(\varepsilon z, u_{\varepsilon,1})\,{\rm d}z+\int_{\mathbb{R}^3}G(\varepsilon z, u_{\varepsilon,2})\,{\rm d}z+o(1)\,\, \text{as}\,\,\varepsilon\rightarrow0\end{aligned}$$ $$\begin{aligned} \int_{\mathbb{R}^3}(u_{\varepsilon}^{+})^{2_s^{\ast}}\,{\rm d}z=\int_{\mathbb{R}^3}(u_{\varepsilon,1}^{+})^{2_s^{\ast}}\,{\rm d}z+\int_{\mathbb{R}^3}(u_{\varepsilon,2}^{+})^{2_s^{\ast}}\,{\rm d}z+o(1)\,\, \text{as}\,\,\varepsilon\rightarrow0\end{aligned}$$ and $$Q_{\varepsilon}(u_{\varepsilon,1})=0, \quad Q_{\varepsilon}(u_{\varepsilon,2})=Q_{\varepsilon}(u_{\varepsilon})\geq0.$$ Hence, we get $$\label{equ4-10} \mathcal{J}_{\varepsilon}(u_{\varepsilon})\geq P_{\varepsilon}(u_{\varepsilon,1})+P_{\varepsilon}(u_{\varepsilon,2})+o(1),$$ where $o(1)\rightarrow0$ as $\varepsilon\rightarrow0$. We now estimate $P_{\varepsilon}(u_{\varepsilon,2})$. 
It follows from that $$\begin{aligned} \|u_{\varepsilon,2}\|_{H_{\varepsilon}}&\leq\|u_{\varepsilon,1}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}}+2d_0\\ &=\|u_{\varepsilon,1}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}\\ &+\|u_{\varepsilon,1}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}(\mathbb{R}^3\backslash B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+2d_0\\ &=\|u_{\varepsilon,1}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+2d_0+o(1)\\ &\leq\|u_{\varepsilon,2}\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+4d_0+o(1)\\ &=\|u_{\varepsilon,2}\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon)\backslash B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+\|u_{\varepsilon,2}\|_{H_{\varepsilon}(B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+4d_0+o(1)\\ &=\|u_{\varepsilon,2}\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon)\backslash B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+4d_0+o(1)\\ &\leq C\|u_{\varepsilon}\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon)\backslash B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+4d_0+o(1)\\ &\leq C\|\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon)\backslash B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+6d_0+o(1)\\ 
&\leq\|W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon)\backslash B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}+6d_0+o(1)\\ &=\|W_0\|_{H_{\varepsilon}(B_{2\beta/\varepsilon}(0)\backslash B_{\beta/\varepsilon}(0))}+6d_0+o(1)\\ &\leq6d_0+o(1),\end{aligned}$$ where $o(1)\rightarrow0$ as $\varepsilon\rightarrow0$ and using the similar arguments as , we can prove that $$\|u_{\varepsilon,1}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})W_0(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\|_{H_{\varepsilon}(\mathbb{R}^3\backslash B_{2\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}=o(1)$$ and $$\|u_{\varepsilon,2}\|_{H_{\varepsilon}(B_{\beta/\varepsilon}(x_{\varepsilon}/\varepsilon))}=o(1).$$ Furthermore, the above inequality implies that $$\label{equ4-11} \limsup\limits_{\varepsilon\rightarrow0}\|u_{\varepsilon,2}\|_{H_{\varepsilon}}\leq 6d_0.$$ Then, we get $$\begin{aligned} \label{equ4-12} P_{\varepsilon}(u_{\varepsilon,2})&\geq\frac{1}{2}\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2-\int_{\mathbb{R}^3}F(u_{\varepsilon,2})\,{\rm d}z-\frac{1}{2_s^{\ast}}\int_{\mathbb{R}^3}(u_{\varepsilon,2}^{+})^{2_s^{\ast}}\,{\rm d}z\nonumber\\ &\geq\frac{1}{4}\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2-C\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^{2_s^{\ast}}\nonumber\\ &=\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2(\frac{1}{4}-C\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^{2_s^{\ast}-2})\nonumber\\ &\geq\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2(\frac{1}{4}-C(6d_0)^{2_s^{\ast}-2}).\end{aligned}$$ In particular, taking $d_0>0$ small enough, we can assume that $P_{\varepsilon}(u_{\varepsilon,2})\geq0$. 
Hence, from , it holds that $$\label{equ4-13} \mathcal{J}_{\varepsilon}(u_{\varepsilon})\geq P_{\varepsilon}(u_{\varepsilon,1})+o(1).$$ Furthermore, by , it is easy to check that $$\begin{aligned} \int_{\mathbb{R}^3}\phi_{u_{\varepsilon}}^tu_{\varepsilon,1}u_{\varepsilon,2}\,{\rm d}z\leq\int_{\mathcal{T}_{\varepsilon}^1}\phi_{u_{\varepsilon}}^t|u_{\varepsilon}|^2\,{\rm d}z\leq\|\phi_{u_{\varepsilon}}^t\|_{2_t^{\ast}}\|u_{\varepsilon}\|_{L^{\frac{12}{3+2t}}(\mathcal{T}_{\varepsilon}^1)}^2\rightarrow0\end{aligned}$$ and $$\begin{aligned} \int_{\mathbb{R}^3}\frac{(u_{\varepsilon,1}(z)-u_{\varepsilon,1}(y))(u_{\varepsilon,2}(z)-u_{\varepsilon,2}(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\geq o(1).\end{aligned}$$ Hence, using the facts that $\langle\mathcal{J}_{\varepsilon}'(u_{\varepsilon}),u_{\varepsilon,2}\rangle\rightarrow0$ as $\varepsilon\rightarrow0$ and $\langle Q_{\varepsilon}'(u_{\varepsilon}),u_{\varepsilon,2}\rangle\geq0$, we have that $$\begin{aligned} &\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2+o(1)\\ &\leq\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2+\int_{\mathbb{R}^3}\frac{(u_{\varepsilon,1}(z)-u_{\varepsilon,1}(y))(u_{\varepsilon,2}(z)-u_{\varepsilon,2}(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\\ &+\int_{\mathbb{R}^3}V(\varepsilon z)u_{\varepsilon,1}u_{\varepsilon,2}\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon}}^tu_{\varepsilon,1}u_{\varepsilon,2}\,{\rm d}z\\ &\leq\int_{\mathbb{R}^3}\frac{(u_{\varepsilon,1}(z)-u_{\varepsilon,1}(y))(u_{\varepsilon,2}(z)-u_{\varepsilon,2}(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z+\int_{\mathbb{R}^3}V(\varepsilon z)u_{\varepsilon}u_{\varepsilon,2}\,{\rm d}z+\langle Q_{\varepsilon}'(u_{\varepsilon}),u_{\varepsilon,2}\rangle\\ &+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon}}^tu_{\varepsilon}u_{\varepsilon,2}\,{\rm d}z+o(1)=\int_{\mathbb{R}^3}g(\varepsilon z, u_{\varepsilon})u_{\varepsilon,2}\,{\rm d}z+o(1)\\ &\leq \eta\int_{\mathbb{R}^3}|u_{\varepsilon}u_{\varepsilon,2}|\,{\rm
d}z+C\int_{\mathbb{R}^3}|u_{\varepsilon}|^{2_s^{\ast}-1}|u_{\varepsilon,2}|\,{\rm d}z+o(1)\\ &\leq\eta\|u_{\varepsilon,2}\|_{L^2}^2+C\int_{\mathbb{R}^3}\Big(|u_{\varepsilon,2}|^{2_s^{\ast}}+|u_{\varepsilon,1}|^{2_s^{\ast}-1}|u_{\varepsilon,2}|\Big)\,{\rm d}z+o(1)\\ &\leq\eta\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2+C\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^{2_s^{\ast}}+o(1),\end{aligned}$$ which yields $$\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^2\leq C\|u_{\varepsilon,2}\|_{H_{\varepsilon}}^{2_s^{\ast}}+o(1).$$ Combining with , we get that for $d_0>0$ sufficiently small, $$\label{equ4-13-0} \lim\limits_{\varepsilon\rightarrow0}\|u_{\varepsilon,2}\|_{H_{\varepsilon}}=0.$$ We next estimate $P_{\varepsilon}(u_{\varepsilon,1})$. Denote $\widehat{u}_{\varepsilon}(z)=u_{\varepsilon,1}(z+\frac{x_{\varepsilon}}{\varepsilon})=\varphi(\varepsilon z)u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})$; then $\{\widehat{u}_{\varepsilon}\}$ is bounded in $H^s(\mathbb{R}^3)$ by virtue of $(V_0)$. Thus, up to a subsequence, we may assume that there exists $\widehat{u}\in H^s(\mathbb{R}^3)$ such that $\widehat{u}_{\varepsilon}\rightharpoonup\widehat{u}$ in $H^s(\mathbb{R}^3)$, $\widehat{u}_{\varepsilon}\rightarrow\widehat{u}$ in $L_{loc}^p(\mathbb{R}^3)$ for $1\leq p<2_s^{\ast}$ and $\widehat{u}_{\varepsilon}\rightarrow\widehat{u}$ a.e. in $\mathbb{R}^3$.
We now claim that $$\label{equ4-14} \widehat{u}_{\varepsilon}\rightarrow\widehat{u}\quad \text{in}\,\, L^{2_s^{\ast}}(\mathbb{R}^3).$$ In view of Proposition \[pro2-1\], suppose to the contrary that there exists $r>0$ such that $$\liminf_{\varepsilon\rightarrow0}\sup_{y\in\mathbb{R}^3}\int_{B_1(y)}|\widehat{u}_{\varepsilon}-\widehat{u}|^{2_s^{\ast}}\,{\rm d}z=2r>0.$$ Thus, for $\varepsilon>0$ small, there exists $\widehat{y}_{\varepsilon}\in\mathbb{R}^3$ such that $$\label{equ4-15} \int_{B_1(\widehat{y}_{\varepsilon})}|\widehat{u}_{\varepsilon}-\widehat{u}|^{2_s^{\ast}}\,{\rm d}z\geq r>0.$$ $\bullet$ Suppose first that $\{\widehat{y}_{\varepsilon}\}$ is bounded in $\mathbb{R}^3$; then there exists $r_0>0$ such that $|\widehat{y}_{\varepsilon}|\leq r_0$. Let $\widehat{v}_{\varepsilon}=\widehat{u}_{\varepsilon}-\widehat{u}$; then $\widehat{v}_{\varepsilon}\rightharpoonup0$ in $H^s(\mathbb{R}^3)$ and, for $\varepsilon>0$ small, by , it holds that $$\label{equ4-16} \int_{B_{r_0+1}(0)}|\widehat{v}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\geq r>0.$$ We now prove that $$\label{equ4-17} \lim_{\varepsilon\rightarrow0}\sup_{\widehat{\varphi}\in C_0^{\infty}(B_{r_0+2}(0)),\|\widehat{\varphi}\|=1}|\langle \widehat{\rho}_{\varepsilon},\widehat{\varphi}\rangle|=0,$$ where $\widehat{\rho}_{\varepsilon}=-(-\Delta)^s\widehat{v}_{\varepsilon}+(\widehat{v}_{\varepsilon}^{+})^{2_s^{\ast}-1}\in (H^s(\mathbb{R}^3))'$. For $\varepsilon>0$ small, $\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u_{\varepsilon}\widehat{\varphi}(z-\frac{x_{\varepsilon}}{\varepsilon})\,{\rm d}z=0$ uniformly for all $\widehat{\varphi}\in C_0^{\infty}(B_{r_0+2}(0))$.
Hence, by virtue of $\lim\limits_{\varepsilon\rightarrow0}\|u_{\varepsilon,2}\|_{H_{\varepsilon}}=0$, we have $$\begin{aligned} o(1)&=\langle\mathcal{J}_{\varepsilon}'(u_{\varepsilon}),\widehat{\varphi}(\cdot-\frac{x_{\varepsilon}}{\varepsilon})\rangle=\int_{\mathbb{R}^3}\frac{(u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})-u_{\varepsilon}(y+\frac{x_{\varepsilon}}{\varepsilon}))(\widehat{\varphi}(z)-\widehat{\varphi}(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\\ &+\int_{\mathbb{R}^3}V(\varepsilon z+x_{\varepsilon})u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})\widehat{\varphi}\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})}^tu_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})\widehat{\varphi}\,{\rm d}z\\ &-\int_{\mathbb{R}^3}g(\varepsilon z+x_{\varepsilon},u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon}))\widehat{\varphi}\,{\rm d}z\\ &=\int_{\mathbb{R}^3}\frac{(\widehat{u}_{\varepsilon}(z)-\widehat{u}_{\varepsilon}(y))(\widehat{\varphi}(z)-\widehat{\varphi}(y))}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z+\int_{\mathbb{R}^3}V(\varepsilon z+x_{\varepsilon})\widehat{u}_{\varepsilon}\widehat{\varphi}\,{\rm d}z\\ &+\int_{\mathbb{R}^3}\phi_{\widehat{u}_{\varepsilon}}^t\widehat{u}_{\varepsilon}\widehat{\varphi}\,{\rm d}z-\int_{\mathbb{R}^3}g(\varepsilon z+x_{\varepsilon},\widehat{u}_{\varepsilon})\widehat{\varphi}\,{\rm d}z+o(1).\end{aligned}$$ Combining the above estimate with $x_{\varepsilon}\rightarrow x_0\in\mathcal{M}^{\beta}$, we see that $\widehat{u}\geq0$ is a solution of $$\label{equ4-18} (-\Delta)^s \widehat{u}+V(x_0) \widehat{u}+\phi_{\widehat{u}}^t \widehat{u}=f(\widehat{u})+(\widehat{u}^{+})^{2_s^{\ast}-1} \quad \text{in}\,\,\mathbb{R}^3.$$ On the other hand, the following Brezis-Lieb splitting properties hold: $$\begin{aligned}
\int_{\mathbb{R}^3}\frac{(\widehat{u}_{\varepsilon}(z)-\widehat{u}_{\varepsilon}(y))-(\widehat{v}_{\varepsilon}(z)-\widehat{v}_{\varepsilon}(y))-(\widehat{u}(z)-\widehat{u}(y))}{|z-y|^{3+2s}}(\widehat{\varphi}(z)-\widehat{\varphi}(y))\,{\rm d}y\,{\rm d}z=o(1),\end{aligned}$$ $$\begin{aligned} \int_{\mathbb{R}^3}\Big(V(\varepsilon z+x_{\varepsilon})\widehat{u}_{\varepsilon}-V(\varepsilon z+x_{\varepsilon})\widehat{v}_{\varepsilon}-V(x_0)\widehat{u}\Big)\widehat{\varphi}\,{\rm d}z=o(1),\end{aligned}$$ $$\begin{aligned} \int_{\mathbb{R}^3}\Big(\phi_{\widehat{u}_{\varepsilon}}^t\widehat{u}_{\varepsilon}-\phi_{\widehat{v}_{\varepsilon}}^t\widehat{v}_{\varepsilon}-\phi_{\widehat{u}}^t\widehat{u}\Big)\widehat{\varphi}\,{\rm d}z=o(1),\quad \int_{\mathbb{R}^3}\Big(f(\widehat{u}_{\varepsilon})-f(\widehat{v}_{\varepsilon})-f(\widehat{u})\Big)\widehat{\varphi}\,{\rm d}z=o(1)\end{aligned}$$ and $$\begin{aligned} \int_{\mathbb{R}^3}\Big((\widehat{u}_{\varepsilon}^{+})^{2_s^{\ast}-1}-(\widehat{v}_{\varepsilon}^{+})^{2_s^{\ast}-1}-(\widehat{u}^{+})^{2_s^{\ast}-1}\Big)\widehat{\varphi}\,{\rm d}z=o(1)\end{aligned}$$ uniformly for all $\widehat{\varphi}\in C_0^{\infty}(B_{r_0+2}(0))$ with $\|\widehat{\varphi}\|=1$. Thus, is proved. By Proposition \[pro2-3\], there exist $\widehat{z}_{\varepsilon}\in\mathbb{R}^3$ and $\widehat{\sigma}_{\varepsilon}>0$ such that $\widehat{z}_{\varepsilon}\rightarrow \widehat{z}\in B_{r_0+1}(0)$, $\widehat{\sigma}_{\varepsilon}\rightarrow0$ and $$\overline{w}_{\varepsilon}(z)=\widehat{\sigma}_{\varepsilon}^{\frac{3-2s}{2}}\widehat{v}_{\varepsilon}(\widehat{\sigma}_{\varepsilon} z+\widehat{z}_{\varepsilon})\rightharpoonup\overline{w} \quad \text{in}\,\, \mathcal{D}^{s,2}(\mathbb{R}^3),$$ where $\overline{w}\geq0$ is a nontrivial solution of and satisfies .
Since $$\begin{aligned} \int_{\mathbb{R}^3}|\overline{w}|^{2_s^{\ast}}\,{\rm d}z&\leq\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|\overline{w}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z=\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|\widehat{v}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\\ &=\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|\widehat{u}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z-\int_{\mathbb{R}^3}|\widehat{u}|^{2_s^{\ast}}\,{\rm d}z\\ &\leq\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z,\end{aligned}$$ while, by , we get $$\begin{aligned} \int_{\mathbb{R}^3}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z&\leq Cd_0+\int_{\mathbb{R}^3}|\varphi(\varepsilon z-x_{\varepsilon})W_0(z-\frac{x_{\varepsilon}}{\varepsilon})|^{2_s^{\ast}}\,{\rm d}z\\ &=Cd_0+\int_{B_{2\beta/\varepsilon}(\frac{x_{\varepsilon}}{\varepsilon})}|\varphi W_0|^{2_s^{\ast}}\,{\rm d}z.\end{aligned}$$ Thus, $$\label{equ4-20} \int_{\mathbb{R}^3}|\overline{w}|^{2_s^{\ast}}\,{\rm d}z\leq Cd_0+\int_{\mathbb{R}^3}|W_0|^{2_s^{\ast}}\,{\rm d}z.$$ On the other hand, by , we have $$\begin{aligned} \int_{\mathbb{R}^3}|D_su_{\varepsilon}|^2\,{\rm d}z&\leq Cd_0+\int_{\mathbb{R}^3}\Big|D_s\Big(\varphi(\varepsilon z-x_{\varepsilon}) W_0(z-\frac{x_{\varepsilon}}{\varepsilon})\Big)\Big|^2\,{\rm d}z\\ &\leq Cd_0+\int_{\mathbb{R}^3}|D_sW_0|^2\,{\rm d}z+o(1).\end{aligned}$$ Moreover, $$\begin{aligned} \int_{\mathbb{R}^3}|D_s\overline{w}|^2\,{\rm d}z&\leq\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|D_s\overline{w}_{\varepsilon}|^2\,{\rm d}z=\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|D_s\widehat{v}_{\varepsilon}|^2\,{\rm d}z\\ &=\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|D_s\widehat{u}_{\varepsilon}|^2\,{\rm d}z-\int_{\mathbb{R}^3}|D_s\widehat{u}|^2\,{\rm d}z\leq\liminf_{\varepsilon\rightarrow0}\int_{\mathbb{R}^3}|D_s\widehat{u}_{\varepsilon}|^2\,{\rm d}z,\end{aligned}$$ hence, we get $$\label{equ4-21} \int_{\mathbb{R}^3}|D_s\overline{w}|^2\,{\rm d}z\leq
Cd_0+\int_{\mathbb{R}^3}|D_sW_0|^2\,{\rm d}z+o(1).$$ Since $W_0\in\mathcal{L}_{V_0}$, by , , we have that $$\begin{aligned} c_{V_0}&=\mathcal{I}_{V_0}(W_0)-\frac{1}{q(s+t)-3}\mathcal{G}_{\mu}(W_0)>\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\int_{\mathbb{R}^3}|D_s W_0|^2\,{\rm d}z\\ &+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\int_{\mathbb{R}^3}|W_0|^{2_s^{\ast}}\,{\rm d}z\\ &>\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\int_{\mathbb{R}^3}|D_s\overline{w}|^2\,{\rm d}z+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\int_{\mathbb{R}^3}|\overline{w}|^{2_s^{\ast}}\,{\rm d}z-Cd_0\\ &>\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}-Cd_0,\end{aligned}$$ where we have used the fact that $\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}=\frac{s}{3}$, which can be checked directly using $2_s^{\ast}=\frac{6}{3-2s}$. For $d_0$ sufficiently small, we get a contradiction with Lemma \[lem3-4\]. $\bullet$ Suppose now that $\{\widehat{y}_{\varepsilon}\}$ is unbounded. Without loss of generality, we may assume that $\lim\limits_{\varepsilon\rightarrow0}|\widehat{y}_{\varepsilon}|=+\infty$. Then, by , we have that $$\label{equ4-22} \liminf_{\varepsilon\rightarrow0}\int_{B_1(\widehat{y}_{\varepsilon})}|\widehat{u}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\geq r>0,$$ i.e., $$\liminf_{\varepsilon\rightarrow0}\int_{B_1(\widehat{y}_{\varepsilon})}|\varphi(\varepsilon z)u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})|^{2_s^{\ast}}\,{\rm d}z\geq r>0.$$ Since $\varphi(z)=0$ for $|z|\geq2\beta$, we have $|\widehat{y}_{\varepsilon}|\leq\frac{3\beta}{\varepsilon}$ for $\varepsilon$ small. If $|\widehat{y}_{\varepsilon}|\geq\frac{\beta}{2\varepsilon}$, then $\widehat{y}_{\varepsilon}\in B_{3\beta/\varepsilon}(0)\backslash B_{\beta/2\varepsilon}(0)$, and by , we get $$\begin{aligned} \liminf_{\varepsilon\rightarrow0}\int_{B_1(\widehat{y}_{\varepsilon})}|\widehat{u}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z&\leq\liminf_{\varepsilon\rightarrow0}\sup_{y\in B_{3\beta/\varepsilon}(0)\backslash B_{\beta/2\varepsilon}(0)}\int_{B_1(y)}|u_{\varepsilon}(z+\frac{x_{\varepsilon}}{\varepsilon})|^{2_s^{\ast}}\,{\rm d}z\\ &\leq\liminf_{\varepsilon\rightarrow0}\sup_{y\in\mathcal{T}_{\varepsilon}}\int_{B_1(y)}|\widehat{u}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z=0,\end{aligned}$$ which contradicts . Thus $|\widehat{y}_{\varepsilon}|\leq\frac{\beta}{2\varepsilon}$ for $\varepsilon>0$ small. Without loss of generality, we may assume that $\varepsilon\widehat{y}_{\varepsilon}\rightarrow z_0\in \overline{B_{\beta/2}(0)}$ and $\widetilde{u}_{\varepsilon}\rightharpoonup\widetilde{u}$ in $H^s(\mathbb{R}^3)$, where $\widetilde{u}_{\varepsilon}(z):=\widehat{u}_{\varepsilon}(z+\widehat{y}_{\varepsilon})$. If $\widetilde{u}\neq0$, it is easy to check that $\widetilde{u}$ satisfies $$(-\Delta)^s v+V(x_0+z_0) v+\phi_{v}^tv=f(v)+v^{2_s^{\ast}-1}\quad \text{in}\,\,\mathbb{R}^3.$$ Arguing similarly as in the proof of the case $v\neq0$ of the claim , we get a contradiction for $d_0$ sufficiently small. Thus $\widetilde{u}=0$.
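As an aside, the normalization used repeatedly in this argument (equal $D^{s,2}$-seminorm, critical norm, and value $\mathcal{S}_s^{\frac{3}{2s}}$ for the limiting bubbles) follows from a short standard computation. This is a sketch, assuming only the variational characterization of the best fractional Sobolev constant $\mathcal{S}_s$ and the classification result quoted above, which guarantees that the nontrivial limit solution $v$ of the critical problem is an extremal for $\mathcal{S}_s$:

```latex
\begin{aligned}
% Pairing v with itself in (-\Delta)^s v = v^{2_s^{\ast}-1} gives
\int_{\mathbb{R}^3}|D_s v|^2\,{\rm d}z
  &=\int_{\mathbb{R}^3}v^{2_s^{\ast}}\,{\rm d}z,\\
% while extremality of v for \mathcal{S}_s gives
\int_{\mathbb{R}^3}|D_s v|^2\,{\rm d}z
  &=\mathcal{S}_s\Big(\int_{\mathbb{R}^3}v^{2_s^{\ast}}\,{\rm d}z\Big)^{2/2_s^{\ast}}
   =\mathcal{S}_s\Big(\int_{\mathbb{R}^3}|D_s v|^2\,{\rm d}z\Big)^{2/2_s^{\ast}},\\
% hence, since 2_s^{\ast}/(2_s^{\ast}-2)=3/(2s) when 2_s^{\ast}=\frac{6}{3-2s},
\int_{\mathbb{R}^3}|D_s v|^2\,{\rm d}z
  &=\mathcal{S}_s^{\frac{2_s^{\ast}}{2_s^{\ast}-2}}
   =\mathcal{S}_s^{\frac{3}{2s}}
   =\int_{\mathbb{R}^3}v^{2_s^{\ast}}\,{\rm d}z.
\end{aligned}
```

The same computation applies verbatim to $\widetilde{v}$, $\overline{w}$ and to the rescaled limit introduced below.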
Similarly to the proof of the case $v=0$ of the claim (using Proposition ), we find that there exist $\widetilde{x}_{\varepsilon}\in\mathbb{R}^3$ and $\widetilde{\sigma}_{\varepsilon}>0$ such that $\widetilde{x}_{\varepsilon}\rightarrow \widetilde{x}\in \overline{B_1(0)}$, $\widetilde{\sigma}_{\varepsilon}\rightarrow0$ and $$u_{\varepsilon}^{\ast}(\cdot):=\widetilde{\sigma}_{\varepsilon}^{\frac{3-2s}{2}}\widetilde{u}_{\varepsilon}(\widetilde{\sigma}_{\varepsilon} \cdot+\widetilde{x}_{\varepsilon})\rightharpoonup u^{\ast}\quad \text{in}\,\,\mathcal{D}^{s,2}(\mathbb{R}^3),$$ where $u^{\ast}$ is a nontrivial solution of and satisfies . Thus, there exists $R>0$ such that $$\int_{B_R(0)}|u^{\ast}|^{2_s^{\ast}}\,{\rm d}z\geq\frac{1}{2}\int_{\mathbb{R}^3}|u^{\ast}|^{2_s^{\ast}}\,{\rm d}z=\frac{1}{2}\mathcal{S}_s^{\frac{3}{2s}}>0.$$ On the other hand, we have that $$\begin{aligned} &\int_{B_R(0)}|u^{\ast}|^{2_s^{\ast}}\,{\rm d}z\leq\liminf_{\varepsilon\rightarrow0}\int_{B_R(0)}|u_{\varepsilon}^{\ast}|^{2_s^{\ast}}\,{\rm d}z=\liminf_{\varepsilon\rightarrow0}\int_{B_{\widetilde{\sigma}_{\varepsilon}R}(\widetilde{x}_{\varepsilon})}|\widetilde{u}_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\\ &=\liminf_{\varepsilon\rightarrow0}\int_{B_{\widetilde{\sigma}_{\varepsilon}R}(\widetilde{x}_{\varepsilon}+\widehat{y}_{\varepsilon}+\frac{x_{\varepsilon}}{\varepsilon})}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z\leq\liminf_{\varepsilon\rightarrow0}\int_{B_2(\widehat{y}_{\varepsilon}+\frac{x_{\varepsilon}}{\varepsilon})}|u_{\varepsilon}|^{2_s^{\ast}}\,{\rm d}z,\end{aligned}$$ which contradicts for $d_0$ small enough.
Hence the claim holds, and so, using the interpolation inequality, we deduce that $$\label{equ4-23} \widehat{u}_{\varepsilon}\rightarrow \widehat{u}\quad \text{in}\,\, L^p(\mathbb{R}^3), \,\, p\in(2,2_s^{\ast}].$$ By , recalling that $\widehat{u}_{\varepsilon}(z)=u_{\varepsilon,1}(z+\frac{x_{\varepsilon}}{\varepsilon})$, we have $$\begin{aligned} P_{\varepsilon}(\widehat{u}_{\varepsilon})\leq c_{V_0}+o(1).\end{aligned}$$ Letting $\varepsilon\rightarrow0$ and using , $(V_0)$, we get $$\begin{aligned} \mathcal{I}_{V(x_0)}(\widehat{u})\leq c_{V_0}.\end{aligned}$$ On the other hand, in view of $\langle\mathcal{J}'_{\varepsilon}(u_{\varepsilon}),u_{\varepsilon,1}\rangle\rightarrow0$ and , and $\langle Q_{\varepsilon}'(u_{\varepsilon}),u_{\varepsilon,1}\rangle=0$, we deduce that $$\begin{aligned} \int_{\mathbb{R}^3}|D_s\widehat{u}_{\varepsilon}|^2\,{\rm d}z&+\int_{\mathbb{R}^3}V(\varepsilon z+x_{\varepsilon})|\widehat{u}_{\varepsilon}|^2\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{\widehat{u}_{\varepsilon}}^t|\widehat{u}_{\varepsilon}|^2\,{\rm d}z\\ &=\int_{\mathbb{R}^3}g(\varepsilon z+x_{\varepsilon},\widehat{u}_{\varepsilon})\widehat{u}_{\varepsilon}\,{\rm d}z+o(1);\end{aligned}$$ then, by Fatou's lemma and , we have that $$\begin{aligned} &\int_{\mathbb{R}^3}|D_s\widehat{u}|^2\,{\rm d}z+\int_{\mathbb{R}^3}V(x_0)|\widehat{u}|^2\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{\widehat{u}}^t|\widehat{u}|^2\,{\rm d}z\\ &\leq\liminf_{\varepsilon\rightarrow0}\Big(\int_{\mathbb{R}^3}|D_s\widehat{u}_{\varepsilon}|^2\,{\rm d}z+\int_{\mathbb{R}^3}V(\varepsilon z+x_{\varepsilon})|\widehat{u}_{\varepsilon}|^2\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{\widehat{u}_{\varepsilon}}^t|\widehat{u}_{\varepsilon}|^2\,{\rm d}z\Big)\\ &=\liminf_{\varepsilon\rightarrow0}\Big(\int_{\mathbb{R}^3}g(\varepsilon z+x_{\varepsilon},\widehat{u}_{\varepsilon})\widehat{u}_{\varepsilon}\,{\rm d}z\Big)=\int_{\mathbb{R}^3}f(\widehat{u})\widehat{u}\,{\rm d}z+\int_{\mathbb{R}^3}(\widehat{u}^{+})^{2_s^{\ast}}\,{\rm d}z\\
&=\int_{\mathbb{R}^3}|D_s\widehat{u}|^2\,{\rm d}z+\int_{\mathbb{R}^3}V(x_0)|\widehat{u}|^2\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{\widehat{u}}^t|\widehat{u}|^2\,{\rm d}z,\end{aligned}$$ which implies that $$\int_{\mathbb{R}^3}|D_s\widehat{u}_{\varepsilon}|^2\,{\rm d}z\rightarrow\int_{\mathbb{R}^3}|D_s\widehat{u}|^2\,{\rm d}z,$$ and $$\int_{\mathbb{R}^3}V(\varepsilon z+x_{\varepsilon})|\widehat{u}_{\varepsilon}|^2\,{\rm d}z\rightarrow\int_{\mathbb{R}^3}V(x_0)|\widehat{u}|^2\,{\rm d}z.$$ Hence, by $(V_0)$, we can deduce that $$\label{equ4-24} \widehat{u}_{\varepsilon}\rightarrow\widehat{u}\quad \text{in}\,\, H^s(\mathbb{R}^3).$$ By , , it is easy to check that $\widehat{u}\neq0$. It follows from that $\mathcal{I}_{V(x_0)}(\widehat{u})\geq c_{V(x_0)}$. Hence $\mathcal{I}_{V(x_0)}(\widehat{u})=c_{V(x_0)}$. In view of $x_0\in\mathcal{M}^{\beta}\subset\Lambda$, we have that $V(x_0)=V_0$ and $x_0\in\mathcal{M}$. As a consequence, $\widehat{u}$ is, up to a translation in the $x$-variable, an element of $\mathcal{L}_{V_0}$; namely, there exist $W\in\mathcal{L}_{V_0}$ and $z_0\in\mathbb{R}^3$ such that $\widehat{u}(z)=W(z-z_0)$. Consequently, from , and , we have that $$\|u_{\varepsilon}-\varphi_{\varepsilon}(\cdot-\frac{x_{\varepsilon}}{\varepsilon}-z_0)W(\cdot-\frac{x_{\varepsilon}}{\varepsilon}-z_0)\|_{H_{\varepsilon}}\rightarrow0\quad \text{as}\,\, \varepsilon\rightarrow0.$$ Since $\varepsilon(\frac{x_{\varepsilon}}{\varepsilon}+z_0)\rightarrow x_0\in\mathcal{M}$ as $\varepsilon\rightarrow0$, the proof is complete. For $a\in\mathbb{R}$ we define the sublevel set of $\mathcal{J}_{\varepsilon}$ as follows: $$\mathcal{J}_{\varepsilon}^a=\{u\in H_{\varepsilon}\,\,\Big|\,\, \mathcal{J}_{\varepsilon}(u)\leq a\}.$$ We observe that the result of Lemma \[lem4-1\] holds for $d_0>0$ sufficiently small, independently of the sequences satisfying the assumptions. \[lem4-2\] Let $d_0$ be the number given in Lemma \[lem4-1\].
Then for any $d\in(0,d_0)$, there exist positive constants $\varepsilon_d>0$, $\rho_d>0$ and $\alpha_d>0$ such that $$\|\mathcal{J}'_{\varepsilon}(u)\|_{(H_{\varepsilon})'}\geq\alpha_d>0\quad \text{for every}\,\, u\in\mathcal{J}_{\varepsilon}^{c_{V_0}+\rho_d}\cap(\mathcal{N}_{\varepsilon}^{d_0}\backslash\mathcal{N}_{\varepsilon}^d)\,\,\text{and}\,\,\varepsilon\in(0,\varepsilon_d).$$ We recall the definition of $\gamma_{\varepsilon}(\tau)$. The following lemma holds. \[lem4-3\] There exists $M_0>0$ such that for any $\delta>0$ small, there exist $\alpha_{\delta}>0$ and $\varepsilon_{\delta}>0$ such that if $\mathcal{J}_{\varepsilon}(\gamma_{\varepsilon}(\tau))\geq c_{V_0}-\alpha_{\delta}$ and $\varepsilon\in(0,\varepsilon_{\delta})$, then $\gamma_{\varepsilon}(\tau)\in\mathcal{N}_{\varepsilon}^{M_0\delta}$. We are now ready to show that the penalized functional $\mathcal{J}_{\varepsilon}$ possesses a critical point for every $\varepsilon>0$ sufficiently small. Choose $\delta_1>0$ such that $M_0\delta_1<\frac{d_0}{4}$ in Lemma \[lem4-3\], and fix $d=\frac{d_0}{4}:=d_1$ in Lemma \[lem4-2\]. Similar to the proof of Lemma 4.6 in [@HL], we can prove the following result. \[lem4-4\] There exists $\overline{\varepsilon}>0$ such that for each $\varepsilon\in(0,\overline{\varepsilon})$, there exists a sequence $\{u_{\varepsilon,n}\}\subset \mathcal{J}_{\varepsilon}^{\widetilde{\mathcal{C}}_{\varepsilon}+\varepsilon}\cap\mathcal{N}_{\varepsilon}^{d_0}$ such that $\mathcal{J}'_{\varepsilon}(u_{\varepsilon,n})\rightarrow0$ in $(H_{\varepsilon})'$ as $n\rightarrow\infty$. \[lem4-5\] $\mathcal{J}_{\varepsilon}$ possesses a nontrivial critical point $u_{\varepsilon}\in\mathcal{N}_{\varepsilon}^{d_0}\cap\mathcal{J}_{\varepsilon}^{\mathcal{D}_{\varepsilon}+\varepsilon}$ for $\varepsilon\in(0,\bar{\varepsilon}]$.
By Lemma \[lem4-4\], there exists $\bar{\varepsilon}>0$ such that for each $\varepsilon\in(0,\bar{\varepsilon}]$, there exists a sequence $\{u_{\varepsilon,n}\}\subset \mathcal{J}_{\varepsilon}^{\mathcal{D}_{\varepsilon}+\varepsilon}\cap\mathcal{N}_{\varepsilon}^{d_0}$ such that $\mathcal{J}_{\varepsilon}'(u_{\varepsilon,n})\rightarrow0$ as $n\rightarrow\infty$ in $(H_{\varepsilon})'$. Since $\mathcal{N}_{\varepsilon}^{d_0}$ is bounded, $\{u_{\varepsilon,n}\}$ is bounded in $H_{\varepsilon}$ and, up to a subsequence, we may assume that there exists $u_{\varepsilon}\in H_{\varepsilon}$ such that $u_{\varepsilon,n}\rightharpoonup u_{\varepsilon}$ in $H_{\varepsilon}$, $u_{\varepsilon,n}\rightarrow u_{\varepsilon}$ in $L_{loc}^p(\mathbb{R}^3)$ for $1\leq p<2_s^{\ast}$ and $u_{\varepsilon,n}\rightarrow u_{\varepsilon}$ a.e. in $\mathbb{R}^3$. It is easy to check that $u_{\varepsilon}$ satisfies $$\label{equ4-26} (-\Delta)^su_{\varepsilon}+V(\varepsilon z)u_{\varepsilon}+\phi_{u_{\varepsilon}}^tu_{\varepsilon}=-4\mu_{\varepsilon}\chi_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u_{\varepsilon}+g(\varepsilon z,u_{\varepsilon})\quad \text{in}\,\,\mathbb{R}^3,$$ where $\mu_{\varepsilon,n}=\Big(\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u_{\varepsilon,n}^2\,{\rm d}z-\varepsilon\Big)_{+}\rightarrow\mu_{\varepsilon}$ as $n\rightarrow\infty$. We claim that $$\label{equ4-27} \lim_{R\rightarrow\infty}\sup_{n\geq1}\int_{|x|\geq R}(|D_su_{\varepsilon,n}|^2+V(\varepsilon z)|u_{\varepsilon,n}|^2)\,{\rm d}z=0.$$ Indeed, choose a cutoff function $\psi_{\rho}\in C^{\infty}(\mathbb{R}^3)$ such that $\psi_{\rho}(z)=1$ on $\mathbb{R}^3\backslash B_{2\rho}(0)$, $\psi_{\rho}(z)=0$ on $B_{\rho}(0)$, $0\leq\psi_{\rho}\leq1$ and $|\nabla \psi_{\rho}|\leq\frac{C}{\rho}$. Since $\psi_{\rho}u_{\varepsilon,n}\in H_{\varepsilon}$, we have $\langle\mathcal{J}_{\varepsilon}'(u_{\varepsilon,n}),\psi_{\rho}u_{\varepsilon,n}\rangle\rightarrow0$ as $n\rightarrow\infty$.
Thus, for sufficiently large $\rho$ such that $\Lambda_{\varepsilon}\subset B_{\rho}(0)$, we have $$\begin{aligned} &\int_{\mathbb{R}^3}(|D_su_{\varepsilon,n}|^2+V(\varepsilon z)|u_{\varepsilon,n}|^2)\psi_{\rho}\,{\rm d}z+\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{(u_{\varepsilon,n}(z)-u_{\varepsilon,n}(y))(\psi_{\rho}(z)-\psi_{\rho}(y))u_{\varepsilon, n}(y)}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\\ &=\int_{\mathbb{R}^3}g(\varepsilon z,u_{\varepsilon,n})u_{\varepsilon,n}\psi_{\rho}\,{\rm d}z-\int_{\mathbb{R}^3}\phi_{u_{\varepsilon,n}}^t|u_{\varepsilon,n}|^2\psi_{\rho}\,{\rm d}z-4\Big(\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}|u_{\varepsilon,n}|^2\,{\rm d}z-\varepsilon\Big)_{+}\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}|u_{\varepsilon,n}|^2\psi_{\rho}\,{\rm d}z\\ &\leq\int_{\mathbb{R}^3}g(\varepsilon z,u_{\varepsilon,n})u_{\varepsilon,n}\psi_{\rho}\,{\rm d}z\leq\frac{V_0}{k}\int_{\mathbb{R}^3}|u_{\varepsilon,n}|^2\psi_{\rho}\,{\rm d}z.\end{aligned}$$ In view of the fact that $|D_s\psi_{\rho}|^2\leq\frac{C}{\rho^{2s}}$ for any $z\in\mathbb{R}^3$ and Hölder’s inequality, we deduce that $$\begin{aligned} &\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{(u_{\varepsilon,n}(z)-u_{\varepsilon,n}(y))(\psi_{\rho}(z)-\psi_{\rho}(y))u_{\varepsilon, n}(y)}{|z-y|^{3+2s}}\,{\rm d}y\,{\rm d}z\\ &\leq\Big(\int_{\mathbb{R}^3}|D_su_{\varepsilon,n}|^2\,{\rm d}z\Big)^{\frac{1}{2}}\Big(\int_{\mathbb{R}^3}|D_s\psi_{\rho}|^2|u_{\varepsilon,n}|^2\,{\rm d}z\Big)^{\frac{1}{2}}\leq \frac{C}{\rho^s}\|u_{\varepsilon,n}\|_2\leq\frac{C}{\rho^s}.\end{aligned}$$ Therefore, from the estimates above, we obtain $$\int_{\mathbb{R}^3\backslash B_{2\rho}(0)}(|D_su_{\varepsilon,n}|^2+V(\varepsilon z)|u_{\varepsilon,n}|^2)\,{\rm d}z\leq\frac{C}{\rho^s}.$$ Thus, the claim follows. From , we see that $u_{\varepsilon,n}\rightarrow u_{\varepsilon}$ in $L^2(\mathbb{R}^3)$. Next, we claim that $u_{\varepsilon,n}\rightarrow u_{\varepsilon}$ in $L^{2_s^{\ast}}(\mathbb{R}^3)$ as $n\rightarrow\infty$.
Indeed, from Lemma \[lem2-2\], we may assume that $$|D_su_{\varepsilon,n}|^2\rightharpoonup\mu\,\,\text{and}\,\, (u_{\varepsilon,n})^{2_s^{\ast}}\rightharpoonup\nu\,\, \text{weakly-}\ast\,\, \text{in}\,\, \mathcal{M}(\mathbb{R}^3)\,\,\text{as}\,\, n\rightarrow\infty$$ and there exists an (at most countable) set of distinct points $\{x_j\}_{j\in J}\subset\mathbb{R}^3$, $\mu_j\geq0$, $\nu_j\geq0$ with $\mu_j+\nu_j>0$ ($j\in J$) such that $$\label{equ4-28} \mu\geq|D_su_{\varepsilon}|^2+\sum_{j\in J}\mu_j\delta_{x_j},\quad \nu=(u_{\varepsilon})^{2_s^{\ast}}+\sum_{j\in J}\nu_j\delta_{x_j},\quad\nu_j\leq(\mathcal{S}_s^{-1}\mu_j)^{\frac{2_s^{\ast}}{2}}\,\, \text{for any}\,\, j\in J.$$ We are going to show that $J=\emptyset$. Suppose by contradiction that $J\neq\emptyset$, i.e., there exists $x_{j_0}\in \mathbb{R}^3$ for some $j_0\in J$. Similar to the arguments in Proposition \[pro2-3\], we get $\nu_{j_0}\geq\mathcal{S}_s^{\frac{3}{2s}}$. On the other hand, since $\{u_{\varepsilon,n}\}\subset \mathcal{N}_{\varepsilon}^{d_0}$, by the definition of $\mathcal{N}_{\varepsilon}^{d_0}$, there exist $\{W_n\}\subset \mathcal{L}_{V_0}$, $\{x_n\}_{n=1}^{\infty}\subset\mathcal{M}^{\beta}$ such that $$\|u_{\varepsilon,n}-\varphi_{\varepsilon}(\cdot-\frac{x_n}{\varepsilon})W_n(\cdot-\frac{x_n}{\varepsilon})\|_{H_{\varepsilon}}\leq \frac{3}{2}d_0.$$ Since $\mathcal{L}_{V_0}$ and $\mathcal{M}^{\beta}$ are compact, there exist $W_0\in\mathcal{L}_{V_0}$, $x'\in\mathcal{M}^{\beta}$ such that $W_n\rightarrow W_0$ in $H^s(\mathbb{R}^3)$ and $x_n\rightarrow x'$ as $n\rightarrow\infty$.
Thus, for $\varepsilon>0$ small, $$\label{equ4-29} \|u_{\varepsilon,n}-\varphi_{\varepsilon}(\cdot-\frac{x'}{\varepsilon})W_0(\cdot-\frac{x'}{\varepsilon})\|_{H_{\varepsilon}}\leq2d_0.$$ It follows from , , $(f_3)$, $\nu_{j_0}\geq\mathcal{S}_s^{\frac{3}{2s}}$ and $W_0\in\mathcal{L}_{V_0}$ that $$\begin{aligned} \widetilde{\mathcal{C}}_{\varepsilon}+\varepsilon&\geq \mathcal{J}_{\varepsilon}(u_{\varepsilon,n})=\mathcal{J}_{\varepsilon}(u_{\varepsilon,n})-\frac{1}{q(s+t)-3}\mathcal{G}_{V_0}(W_0)\\ &\geq\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\int_{\mathbb{R}^3}|D_su_{\varepsilon,n}|^2\,{\rm d}z\\ &+\frac{4s+2t-3}{2(q(s+t)-3)}\Big(\int_{\mathbb{R}^3}|D_su_{\varepsilon,n}|^2\,{\rm d}z-\int_{\mathbb{R}^3}|D_sW_0|^2\,{\rm d}z\Big)\\ &+\frac{2s+2t-3}{2(q(s+t)-3)}\Big(\int_{\mathbb{R}^3}V(\varepsilon z)u_{\varepsilon,n}^2\,{\rm d}z-\int_{\mathbb{R}^3}V_0W_0^2\,{\rm d}z\Big)\\ &+\frac{4s+2t-3}{4(q(s+t)-3)}\Big(\int_{\mathbb{R}^3}\phi_{u_{\varepsilon,n}}^tu_{\varepsilon,n}^2\,{\rm d}z-\int_{\mathbb{R}^3}\phi_{W_0}^tW_0^2\,{\rm d}z\Big)\\ &+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\int_{\mathbb{R}^3}u_{\varepsilon,n}^{2_s^{\ast}}\,{\rm d}z+\frac{2_s^{\ast}(s+t)-3}{2_s^{\ast}(q(s+t)-3)}\Big(\int_{\mathbb{R}^3}u_{\varepsilon,n}^{2_s^{\ast}}\,{\rm d}z-\int_{\mathbb{R}^3}W_0^{2_s^{\ast}}\,{\rm d}z\Big)\\ &-\Big(\int_{\mathbb{R}^3}F(u_{\varepsilon,n})\,{\rm d}z-\int_{\mathbb{R}^3}F(W_0)\,{\rm d}z\Big)\\ &\geq\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\int_{\mathbb{R}^3}|D_su_{\varepsilon,n}|^2\,{\rm d}z+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\int_{\mathbb{R}^3}u_{\varepsilon,n}^{2_s^{\ast}}\,{\rm d}z-Cd_0+o(1)\\ &\geq\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\mu_{j_0}+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\nu_{j_0}-Cd_0+o(1)\\ &\geq\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}\mathcal{S}_s\nu_{j_0}^{2/2_s^{\ast}}+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\nu_{j_0}-Cd_0+o(1)\\
&\geq\Big(\frac{(q-4)s+(q-2)t}{2(q(s+t)-3)}+\frac{(2_s^{\ast}-q)(s+t)}{2_s^{\ast}(q(s+t)-3)}\Big)\mathcal{S}_s^{\frac{3}{2s}}-Cd_0+o(1)\\ &=\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}-Cd_0+o(1)\end{aligned}$$ where $o(1)\rightarrow0$ as $\varepsilon\rightarrow0$. Taking $\varepsilon\rightarrow0$ and $d_0\rightarrow0$, we have that $c_{V_0}\geq\frac{s}{3}\mathcal{S}_s^{\frac{3}{2s}}$, which contradicts Lemma \[lem3-4\]. Therefore, $u_{\varepsilon,n}^{+}\rightarrow u_{\varepsilon}^{+}$ in $L^{2_s^{\ast}}(\mathbb{R}^3)$. Together with $u_{\varepsilon,n}\rightarrow u_{\varepsilon}$ in $L^2(\mathbb{R}^3)$, the Lebesgue dominated convergence theorem implies that for $\varepsilon>0$ small, $$\label{equ4-30} \int_{\mathbb{R}^3}g(\varepsilon z,u_{\varepsilon,n})u_{\varepsilon,n}\,{\rm d}z\rightarrow\int_{\mathbb{R}^3}g(\varepsilon z,u_{\varepsilon})u_{\varepsilon}\,{\rm d}z,\quad n\rightarrow\infty.$$ From and $\mathcal{J}_{\varepsilon}'(u_{\varepsilon,n})\rightarrow0$ as $n\rightarrow\infty$, we get that $$\begin{aligned} \label{equ4-31} \int_{\mathbb{R}^3}|D_su_{\varepsilon}|^2\,{\rm d}z&+\int_{\mathbb{R}^3}V(\varepsilon z)|u_{\varepsilon}|^2\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon}}^t|u_{\varepsilon}|^2\,{\rm d}z\nonumber\\ &+4\mu_{\varepsilon}\int_{\mathbb{R}^3}\chi_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}|u_{\varepsilon}|^2\,{\rm d}z=\int_{\mathbb{R}^3}g(\varepsilon z,u_{\varepsilon})u_{\varepsilon}\,{\rm d}z\end{aligned}$$ and $$\begin{aligned} \label{equ4-32} \int_{\mathbb{R}^3}|D_su_{\varepsilon,n}|^2\,{\rm d}z&+\int_{\mathbb{R}^3}V(\varepsilon z)|u_{\varepsilon,n}|^2\,{\rm d}z+\int_{\mathbb{R}^3}\phi_{u_{\varepsilon,n}}^t|u_{\varepsilon,n}|^2\,{\rm d}z\nonumber\\ &+4\mu_{\varepsilon,n}\int_{\mathbb{R}^3}\chi_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}|u_{\varepsilon,n}|^2\,{\rm d}z=\int_{\mathbb{R}^3}g(\varepsilon z,u_{\varepsilon,n})u_{\varepsilon,n}\,{\rm d}z+o(1),\end{aligned}$$ where $o(1)\rightarrow0$ as $n\rightarrow\infty$.
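For the reader's convenience, we record the elementary algebraic identity used in the last two lines of the long estimate above (writing $p=2_s^{\ast}=\frac{6}{3-2s}$):

```latex
% Bring the two coefficients to the common denominator 2p(q(s+t)-3):
\[
\frac{(q-4)s+(q-2)t}{2\,(q(s+t)-3)}+\frac{(p-q)(s+t)}{p\,(q(s+t)-3)}
   =\frac{q(s+t)(p-2)-2ps}{2p\,(q(s+t)-3)} ,
\]
% and since p-2 = 4s/(3-2s) and 2p = 12/(3-2s), the numerator equals
% \frac{4s}{3-2s}(q(s+t)-3), so the expression reduces to
\[
\frac{4s\,(q(s+t)-3)}{12\,(q(s+t)-3)}=\frac{s}{3}.
\]
```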
From , and , we can deduce that $$u_{\varepsilon,n}\rightarrow u_{\varepsilon} \quad \text{in}\,\, H_{\varepsilon}\quad \text{and}\quad \mu_{\varepsilon}=\Big(\int_{\mathbb{R}^3\backslash\Lambda_{\varepsilon}}u_{\varepsilon}^2\,{\rm d}z-\varepsilon\Big)_{+}.$$ Since $0\not\in\mathcal{N}_{\varepsilon}^{d_0}$, we have $u_{\varepsilon}\neq0$ and $u_{\varepsilon}\in\mathcal{N}_{\varepsilon}^{d_0}\cap\mathcal{J}_{\varepsilon}^{\mathcal{D}_{\varepsilon}+\varepsilon}$. The proof is complete. [**Proof of Theorem \[thm1-1\].**]{} Using Lemma \[lem4-5\] and arguing as in the proof of Theorem 1.1 in [@Teng4], we can complete the proof of Theorem \[thm1-1\]. [**Acknowledgements.**]{} The work is supported by NSFC grant 11501403. [10]{} C. O. Alves, O. H. Miyagaki, Existence and concentration of solution for a class of fractional elliptic equation in $\mathbb{R}^N$ via penalization method, Calc. Var. Partial Differential Equations (2016) 55: 19. A. Ambrosetti and D. Ruiz, Multiple bound states for the Schrödinger-Poisson problem, Comm. Contemp. Math. 10 (2008) 391–404. V. Ambrosio, Multiplicity of positive solutions for a class of fractional Schrödinger equations via penalization method, Annali di Matematica Pura e Applicata, 196 (2017) 2043–2062. V. I. Bogachev, Measure Theory, Vol. II. Springer-Verlag: Berlin, 2007. C. Brandle, E. Colorado, A. de Pablo and U. Sánchez, A concave-convex elliptic problem involving the fractional Laplacian, Proc. Roy. Soc. Edinburgh Sect. A 143 (2013) 39–71. V. Benci and D. Fortunato, An eigenvalue problem for the Schrödinger-Maxwell equations, Topol. Methods Nonlinear Anal. 11 (1998) 283–293. J. Byeon and L. Jeanjean, Standing waves for nonlinear Schrödinger equations with a general nonlinearity, Arch. Ration. Mech. Anal. 185 (2007) 185–200. J. Byeon and L. Jeanjean, Multi-peak standing waves for nonlinear Schrödinger equations with a general nonlinearity, Discrete Contin. Dyn. Syst. 19 (2007) 255–269. C. Bucur and E.
Valdinoci, Nonlocal diffusion and applications, arXiv:1504.08292v1. J. Byeon and Z. Q. Wang, Standing waves with a critical frequency for nonlinear Schrödinger equations II, Calc. Var. Partial Differential Equations, 18 (2003) 207–219. A. Cotsiolis and N. K. Tavoularis, Best constants for Sobolev inequalities for higher order fractional derivatives, J. Math. Anal. Appl. 295 (2004) 225–236. S. Y. A. Chang and M. del Mar González, Fractional Laplacian in conformal geometry, Adv. Math. 226 (2011) 1410–1432. L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. Partial Differential Equations 32 (2007) 1245–1260. X. Cabré and Y. Sire, Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates, Ann. I. H. Poincaré-AN 31 (2014) 23–53. R. Cont and P. Tankov, Financial modeling with jump processes, Chapman Hall/CRC Financial Mathematics Series, Boca Raton, 2004. X. Chang and Z. Wang, Ground state of scalar field equations involving a fractional Laplacian with general nonlinearity, Nonlinearity 26 (2013) 479–494. M. del Pino and P. L. Felmer, Local mountain pass for semilinear elliptic problems in unbounded domains, Calc. Var. Partial Differential Equations 4 (1996) 121–137. S. Dipierro, M. Medina and E. Valdinoci, Fractional elliptic problems with critical growth in the whole of $\mathbb{R}^n$, arXiv:1506.01748v1. J. Davila, M. del Pino, S. Dipierro and E. Valdinoci, Concentration phenomena for the nonlocal Schrödinger equation with Dirichlet datum, Anal. PDE, 8 (2015) 1165–1235. J. Dávila, M. Del Pino and J. C. Wei, Concentrating standing waves for fractional nonlinear Schrödinger equation, J. Differential Equations, 256 (2014) 858–892. T. D’Aprile and J. C. Wei, On bound states concentrating on spheres for the Maxwell-Schrödinger equation, SIAM J. Math. Anal. 37 (2005) 321–342. R. Frank and E.
Lenzmann, Uniqueness of ground states for fractional Laplacians in $\mathbb{R}$, Acta Math. 210 (2013) 261–318. R. L. Frank, E. Lenzmann and L. Silvestre, Uniqueness of radial solutions for the fractional Laplacian, Comm. Pure Appl. Math. LXIX (2016) 1671–1726. P. Felmer, A. Quaas and J. Tan, Positive solutions of nonlinear Schrödinger equation with the fractional Laplacian, Proc. Royal Soc. Edinburgh A 142 (2012) 1237–1262. D. Gilbarg and N. S. Trudinger, Elliptic partial differential equations of second order, Springer-Verlag, New York, 1998. X. He, Multiplicity and concentration of positive solutions for the Schrödinger-Poisson equations, Z. Angew. Math. Phys. 62 (2011) 869–889. X. He and W. Zou, Existence and concentration of ground states for Schrödinger-Poisson equations with critical growth, J. Math. Phys. 53 (2012) 023702. X. He and W. Zou, Existence and concentration result for the fractional Schrödinger equations with critical nonlinearities, Calc. Var. Partial Differential Equations (2016) 55: 91. Y. He and G. B. Li, Standing waves for a class of Schrödinger-Poisson equations in $\mathbb{R}^3$ involving critical Sobolev exponents, Ann. Acad. Sci. Fenn. Math. 40 (2015) 729–766. I. Ianni and G. Vaira, Solutions of the Schrödinger-Poisson problem concentrating on spheres, Part I: Necessary conditions, Math. Models Meth. Appl. Sci. 19 (2009) 707–720. I. Ianni and G. Vaira, On concentration of positive bound states for the Schrödinger-Poisson problem with potentials, Adv. Nonlinear Stud. 8 (2008) 573–595. T. Jin, Y. Li and J. Xiong, On a fractional Nirenberg problem, part I: blow up analysis and compactness of solutions, J. Eur. Math. Soc. 16 (2014) 1111–1171. N. Laskin, Fractional quantum mechanics and Lévy path integrals, Physics Letters A 268 (2000) 298–305. N. Laskin, Fractional Schrödinger equation, Phys. Rev. E 66 (2002) 056108. J. L. Lions and E. Magenes, Non-homogeneous boundary value problems and applications. Vol. I. Translated from the French by P. Kenneth.
Die Grundlehren der mathematischen Wissenschaften, Band 181. Springer-Verlag, New York-Heidelberg, 1972. Z. Liu, S. Guo and Y. Fang, Multiple semiclassical states for coupled Schrödinger-Poisson equations with critical exponential growth, J. Math. Phys. 56 (2015) 041505. E. H. Lieb and B. Simon, The Thomas-Fermi theory of atoms, molecules and solids, Adv. Math. 23 (1977) 22–116. Z. S. Liu and J. J. Zhang, Multiplicity and concentration of positive solutions for the fractional Schrödinger-Poisson systems with critical growth, ESAIM: Control, Optim. Calc. Var., DOI: 10.1051/cocv/2016063, (2016). R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Phys. Rep. 339 (2000) 1–77. E. G. Murcia and G. Siciliano, Positive semiclassical states for a fractional Schrödinger-Poisson system, arXiv:1601.00485v1. P. Markowich, C. Ringhofer and C. Schmeiser, Semiconductor Equations, Springer-Verlag, Vienna, 1990. E. Di Nezza, G. Palatucci and E. Valdinoci, Hitchhiker’s guide to the fractional Sobolev spaces, Bulletin des Sciences Mathématiques 136 (2012) 521–573. G. Palatucci and A. Pisante, Improved Sobolev embeddings, profile decomposition, and concentration-compactness for fractional Sobolev spaces, Calc. Var. Partial Differential Equations 50 (2014) 799–829. R. Benguria, H. Brézis and E. H. Lieb, The Thomas-Fermi-von Weizsäcker theory of atoms and molecules, Comm. Math. Phys. 79 (1981) 167–180. D. Ruiz, The Schrödinger-Poisson equation under the effect of a nonlinear local term, J. Funct. Anal. 237 (2006) 655–674. D. Ruiz, Semiclassical states for coupled Schrödinger-Maxwell equations: Concentration around a sphere, Math. Models Methods Appl. Sci. 15 (2005) 141–164. D. Ruiz and G. Vaira, Cluster solutions for the Schrödinger-Poisson-Slater problem around a local minimum of potential, Rev. Mat. Iberoamericana 27 (2011) 253–271. L. Silvestre, Regularity of the obstacle problem for a fractional power of the Laplace operator, Comm.
Pure Appl. Math. 60 (2007) 67–112. X. D. Shang and J. H. Zhang, Ground states for fractional Schrödinger equations with critical growth, Nonlinearity 27 (2014) 187–207. R. Servadei and E. Valdinoci, The Brezis–Nirenberg result for the fractional Laplacian, Trans. Amer. Math. Soc. 367 (2015) 67–102. K. M. Teng, Multiple solutions for a class of fractional Schrödinger equation in $\mathbb{R}^N$, Nonlinear Anal. Real World Appl. 21 (2015) 76–86. K. M. Teng and X. M. He, Ground state solution for fractional Schrödinger equations with critical Sobolev exponent, Comm. Pure Appl. Anal. 15 (2016) 991–1008. K. M. Teng, Existence of ground state solutions for the nonlinear fractional Schrödinger-Poisson system with critical Sobolev exponent, J. Differential Equations 261 (2016) 3061–3106. K. M. Teng, Ground state solutions for the nonlinear fractional Schrödinger-Poisson system, Applicable Analysis, (2018) doi.org/10.1080/00036811.2018.1441998. K. M. Teng and R. P. Agarwal, Existence and concentration of positive ground state solutions for nonlinear fractional Schrödinger-Poisson system with critical growth, Math. Meth. Appl. Sci. 41 (2018) 8258–8293. K. M. Teng, Concentrating bound states for fractional Schrödinger-Poisson system involving critical Sobolev exponent, arXiv:1906.10802. K. M. Teng, Concentrating bound states for fractional Schrödinger-Poisson system, arXiv:1710.03495. J. Wang, L. X. Tian, J. X. Xu and F. B. Zhang, Existence and concentration of positive solutions for semilinear Schrödinger-Poisson systems in $\mathbb{R}^3$, Calc. Var. Partial Differential Equations, 48 (2013) 243–273. J. Wang, L. X. Tian, J. X. Xu and F. B. Zhang, Existence of multiple positive solutions for Schrödinger-Poisson systems with critical growth, Z. Angew. Math. Phys. 66 (2015) 2441–2471. Y. Yu, F. Zhao and L. Zhao, The concentration behavior of ground state solutions for a fractional Schrödinger-Poisson system, Calc. Var. Partial Differential Equations (2017) 56: 116. J.
Zhang, The existence and concentration of positive solutions for a nonlinear Schrödinger-Poisson system with critical growth, J. Math. Phys. 55 (2014) 031507. J. Zhang, J. M. do Ó and M. Squassina, Fractional Schrödinger-Poisson system with a general subcritical or critical nonlinearity, Adv. Nonlinear Stud. 16 (2016) 15–30. J. Zhang, Z. Chen, W. Zou, Standing waves for nonlinear Schrödinger equations involving critical growth, J. London Math. Soc. 90 (2014) 827–844. L. G. Zhao, H. D. Liu and F. K. Zhao, Existence and concentration of solutions for the Schrödinger-Poisson equations with steep well potential, J. Differential Equations 255 (2013) 1–23. X. Zhang, B. L. Zhang and M. Q. Xiang, Ground states for fractional Schrödinger equations involving a critical nonlinearity, Adv. Nonlinear Anal. 5 (2016) 293–314.
--- abstract: | We present a general relativistic accretion disc model and its application to the soft-state X-ray spectra of black hole binaries. The model assumes a flat, optically thick disc around a rotating Kerr black hole. The disc locally radiates away the dissipated energy as a blackbody. Special and general relativistic effects influencing photons emitted by the disc are taken into account. The emerging spectrum, as seen by a distant observer, is parametrized by the black hole mass and spin, the accretion rate, the disc inclination angle and the inner disc radius. We fit the [*ASCA*]{} soft state X-ray spectra of LMC X-1 and GRO J1655–40 by this model. We find that having additional limits on the black hole mass and inclination angle from optical/UV observations, we can constrain the black hole spin from X-ray data. In LMC X-1 the constraint is weak: we can only rule out the maximally rotating black hole. In GRO J1655–40 we can limit the spin much better, and we find $0.68 \leq a \leq 0.88$. Accretion discs in both sources are radiation pressure dominated. We do not find Compton reflection features in the spectra of either object. author: - | Marek Gierliński$^{1,2}$, Andrzej Maciołek-Niedźwiecki$^{3}$ and Ken Ebisawa$^{4}$\ $^1$Astronomical Observatory, Jagiellonian University, Orla 171, 30-244 Cracow, Poland\ $^2$N.
Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw, Poland\ $^3$Łódź University, Department of Physics, Pomorska 149/153, 90-236 Łódź, Poland\ $^4$Code 660.2, Laboratory for High Energy Astrophysics, NASA/Goddard Space Flight Center, Greenbelt, MD 20771, USA\ (also at Universities Space Research Association)\ date: Accepted on 2001 March 21 title: 'Application of a relativistic accretion disc model to X-ray spectra of LMC X-1 and GRO J1655–40' --- \[firstpage\] accretion, accretion discs – relativity – stars: individual (LMC X-1) – stars: individual (GRO J1655-40) – X-rays: stars Introduction {#sec:introduction} ============ Black hole binaries (BHB) are generally observed in one of the two spectral states, as determined by their X-ray/$\gamma$-ray energy spectra. In the hard state most of the energy is radiated above $\sim$ 10 keV. The spectrum is dominated by a power law with the photon spectral index $\Gamma < 2.0$ and a high-energy cutoff above 100 keV (Grove et al. 1998). A common interpretation of this emission is in terms of Comptonization of soft seed photons by thermal, hot, optically-thin electron plasma. Cyg X-1 (Gierliński et al. 1997) and GX339–4 (Zdziarski et al. 1998) are typical examples of the hard state BHB. In the soft state most of the energy is radiated in the soft X-rays, below $\sim$ 10 keV. This energy range is dominated by a blackbody-like component with characteristic temperature $\sim$ 1 keV. This component is usually attributed to the thermal emission of a cold, optically-thick accretion disc, extending down to the marginally stable orbit (Shakura & Sunyaev 1973). The disc spectrum is often accompanied by a high-energy tail, which can be described as a power law with a typical photon spectral index $\Gamma \sim 2.0$–$2.5$, extending into $\gamma$-rays without apparent break or cutoff (Grove et al. 1998). LMC X-1 (Schlegel et al. 1994), LMC X-3 (Ebisawa et al.
1993), GS 1124–68 just after the outburst (Ebisawa et al. 1994) or Cyg X-1 between May and August 1996 (Gierliński et al. 1999) are examples of the soft state BHB. To a first approximation the soft component in the soft state can be described by a multicolour disc model (hereafter MCD; Mitsuda et al. 1984), whose spectrum is a sum of blackbodies with the radial temperature distribution $T(r) \propto r^{-3/4}$. This model has been commonly used for spectral fitting by many authors. Though the MCD model approximates the disc spectral shape well, it ignores the inner torque-free boundary condition, so parameters derived from it are incorrect. An improvement of the MCD model is a multicolour disc model in the pseudo-Newtonian potential (Gierliński et al. 1999), which properly takes the boundary condition into account. In the vicinity of the black hole, relativistic effects become important. The effects of relativity on the emerging disc spectrum have been studied by Cunningham (1975), Laor, Netzer & Piran (1990), Asaoka (1989), Hanawa (1989), Fu & Taam (1990), Yamada et al. (1994) and others. Hanawa (1989) calculated disc spectra around a non-rotating black hole, creating a fast-computing model that has been used for spectral fitting several times (e.g. Ebisawa, Mitsuda & Hanawa 1991; Makishima et al. 2000). However, this model is limited to the Schwarzschild metric, and the effects of light bending were neglected. A more advanced model was developed by Laor et al. (1990), who studied accretion discs around a rotating black hole, taking into account both the relativistic effects affecting the radiation and the vertical structure of the disc. Laor (1990) applied this model to AGN spectra. In this paper we show an application of the general relativistic (hereafter GR) disc model to the soft state X-ray spectra of BHB. We focus on estimating the black hole spin. The model thoroughly treats relativistic effects in the deep gravitational potential of the black hole.
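As a quick illustration (added here, not part of the original analysis), the MCD temperature profile $T(r)\propto r^{-3/4}$ reproduces the well-known $F_\nu\propto\nu^{1/3}$ spectral slope at intermediate frequencies. The sketch below uses arbitrary illustrative parameters ($r_{\rm out}/r_{\rm in}=10^6$, frequencies in units of $kT_{\rm in}/h$):

```python
import numpy as np

def mcd_flux(nu, r_out=1e6, n_r=3000):
    """Multicolour disc spectrum: sum of blackbody annuli with T(r) = r^(-3/4).

    Frequencies are in units of k*T_in/h, radii in units of r_in;
    the overall normalization is arbitrary (shape only).
    """
    r = np.logspace(0.0, np.log10(r_out), n_r)   # annulus radii on a log grid
    T = r ** -0.75                               # MCD temperature profile
    x = nu[:, None] / T[None, :]
    b_nu = nu[:, None] ** 3 / np.expm1(x)        # Planck shape nu^3 / (e^x - 1)
    dlnr = np.log(r[1] / r[0])
    # area element 2*pi*r*dr -> r^2 dln(r) on the log grid (constants dropped)
    return (b_nu * r[None, :] ** 2).sum(axis=1) * dlnr

# pick frequencies with k*T_out << h*nu << k*T_in
nu = np.array([3e-3, 1e-2])
f_nu = mcd_flux(nu)
slope = np.log(f_nu[1] / f_nu[0]) / np.log(nu[1] / nu[0])
print(f"log-log slope = {slope:.3f}")  # close to 1/3
```

Near the low-frequency end the spectrum flattens to the Rayleigh–Jeans $\nu^2$ form, and it cuts off exponentially above $kT_{\rm in}/h$; the $\nu^{1/3}$ segment appears only between these regimes.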
In Section \[sec:model\] we give a description of the model and show examples of the model spectra. Then, we fit the X-ray spectral data of LMC X-1 (Section \[sec:lmc\_x-1\]) and GRO J1655–40 (Section \[sec:gro\_j1655-40\]) with the GR model. In Section \[sec:discussion\] we discuss the obtained results. Model description {#sec:model} ================= We consider an accretion disc around a black hole (e.g. Page & Thorne 1974). The model is based upon certain assumptions. We assume that the disc rotates in the equatorial plane around the Kerr black hole; that it is in a steady state; that it is axially symmetric; that it is flat and geometrically thin; that it is optically thick; that it locally radiates away the dissipated gravitational energy as a blackbody; and that there is no emission below the marginally stable orbit. A black hole is characterized by its mass, $M$, and angular momentum, $J$. We use the dimensionless spin $a \equiv Jc/GM^2$, where $0 \leq a \leq 1$ and $a = 0$ corresponds to the non-rotating (Schwarzschild) black hole. In accreting systems the radiation emitted by the disc produces a counteracting torque and the black hole cannot be spun up beyond $a = 0.998$ (Thorne 1974). Therefore we do not exceed this value in our model and refer to it as the maximally rotating black hole. The efficiency of accretion, $\eta$, varies from $\eta \approx 0.057$ for the non-rotating black hole to $\eta \approx 0.32$ for $a = 0.998$. The spectrum of the disc, $F_\nu$, seen at infinity is computed by means of a photon transfer function, $\cal T$, which describes the travel of photons from the point of origin to the distant observer. The photon of frequency $\nu_e$ is emitted at radius $r_e$, at cosine angle $\mu_e \equiv \cos \theta_e$ and then perceived at frequency $\nu_o$ by the observer at cosine angle $\mu_o \equiv \cos i$. The observed flux is $$\begin{aligned} \lefteqn{F_{\nu_o}(\mu_o) = \left( R_g \over D \right)^2 \nu_o \! \int \!\! {\rm d}g_{\rm eff} \!\!
\int \!\! {\rm d}r_e \!\! \int \!\! {\rm d}\mu_e \,} \nonumber \\ \lefteqn{~~~~\times r \,{\cal T}(a, r_e, \mu_e, g_{\rm eff}, \mu_o) \, N(r_e, \mu_e, {\nu_o \over g_{\rm eff}}).} \label{eq:trans_integ}\end{aligned}$$ Subscript ‘$e$’ denotes quantities measured in the local frame co-moving with the disc and subscript ‘$o$’ denotes quantities measured by the observer at infinity. $N(r_e, \mu_e, {\nu_o \over g_{\rm eff}}) \equiv N_{\nu_e}(r_e, \mu_e)$ is the specific photon number intensity of the disc emission, $g_{\rm eff} \equiv \nu_o / \nu_e$ is the effective redshift of the photon, $D$ is the distance to the source and $R_g \equiv GM/c^2$ is the gravitational radius. Numerical constants are included in the transfer function. The transfer function treats the special and general relativistic effects affecting the spectrum. It takes into account the Doppler energy shift from the fast moving matter in the disc, the gravitational shift and the light bending near the massive object. It includes integration over the azimuthal angle $\phi_e$. Each element of the transfer function, ${\cal T}(a, r_e, \mu_e, g_{\rm eff}, \mu_o)$, was computed by summing all photon trajectories for all angles $\phi_e$, at given ($r_e$, $\mu_e$), for which required ($g_{\rm eff}$, $\mu_o$) were obtained. The details of the transfer function computation are given in Appendix \[app:transfer\_function\]. A similar approach to computing the black hole disc spectra has been applied, e.g., by Laor et al. (1990) and Asaoka (1989). Note that our construction of $\cal T$ is slightly different than that of Cunningham (1975), as the latter one involves the relation of the specific intensities at the emitter and the observer, respectively, given by Liouville’s theorem, while our model is based on counting individual photons. 
The local gravitational energy release per unit area of the disc, per unit time is (Page & Thorne 1974) $$\begin{aligned} \lefteqn{Q(x) = {3 \dot{M}_{\rm d} c^6 \over 8 \pi G^2 M^2} \, {1 \over x^4(x^3 - 3x + 2a)} \left[ x - x_0 - {3 \over 2} a \ln \left({x \over x_0}\right) \right.} \nonumber \\ & & - {3(x_1 - a)^2 \over x_1(x_1 - x_2)(x_1 - x_3)} \ln \left({x - x_1 \over x_0 - x_1}\right) \nonumber \\ & & - {3(x_2 - a)^2 \over x_2(x_2 - x_1)(x_2 - x_3)} \ln \left({x - x_2 \over x_0 - x_2}\right) \nonumber \\ & & \left. - {3(x_3 - a)^2 \over x_3(x_3 - x_1)(x_3 - x_2)} \ln \left({x - x_3 \over x_0 - x_3}\right) \right],\end{aligned}$$ where $x = r_e^{1/2}$, $x_0 = r_{\rm ms}^{1/2}$, $x_1 = 2 \cos({1 \over 3} \arccos a - \pi / 3)$, $x_2 = 2 \cos({1 \over 3} \arccos a + \pi / 3)$ and $x_3 = -2 \cos({1 \over 3} \arccos a)$. $$r_{\rm ms} = 3 + A_2 - {\rm sign} \, a \, \left[(3 - A_1)(3 + A_1 + 2A_2)\right]^{1/2} \label{eq:rms}$$ is the marginally stable orbit radius, where $A_1 = 1 + (1 - a^2)^{1/3}[(1 + a)^{1/3} + (1 - a)^{1/3}]$ and $A_2 = (3a^2 + A_1^2)^{1/2}$. Both $r_e$ and $r_{\rm ms}$ are expressed in units of $R_g$. The photon number intensity, $N_{\nu_e}$, can be derived from the energy release rate, $Q$. However, the relation between these two quantities is not simply $Q(r_e)=\int h \nu_e N_{\nu_e}(r_e,\mu_e) {\rm d} \nu_e {\rm d} \mu_e$, as assumed in previous similar calculations (e.g., Laor et al. 1990). The photon number intensity which is used in equation (\[eq:trans\_integ\]), is defined in terms of the distant observer coordinate frame. On the other hand, $Q$ is defined as the energy dissipated per unit proper time per unit proper area, as measured by the observer orbiting with the disc. The corresponding dissipation rate measured by the distant observer will be affected by kinematic and gravitational time dilation and length contraction. 
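The marginally stable orbit radius of equation (\[eq:rms\]) and the efficiencies quoted above ($\eta \approx 0.057$ to $0.32$) are easy to evaluate directly. The following illustrative sketch (not the code used for the fits) assumes the standard prograde Kerr circular-geodesic specific energy $E_{\rm ms}=(r^{3/2}-2r^{1/2}+a)/[r^{3/4}(r^{3/2}-3r^{1/2}+2a)^{1/2}]$ at $r=r_{\rm ms}$, with $\eta=1-E_{\rm ms}$, which is not written out in the text:

```python
import math

def r_ms(a):
    """Marginally stable orbit radius, equation (rms), in units of R_g (prograde, 0 <= a <= 1)."""
    A1 = 1 + (1 - a * a) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    A2 = math.sqrt(3 * a * a + A1 * A1)
    return 3 + A2 - math.sqrt((3 - A1) * (3 + A1 + 2 * A2))

def efficiency(a):
    """Accretion efficiency eta = 1 - E_ms, assuming the standard Kerr specific
    energy on a prograde circular orbit evaluated at r_ms."""
    r = r_ms(a)
    sr = math.sqrt(r)
    return 1 - (r * sr - 2 * sr + a) / (r ** 0.75 * math.sqrt(r * sr - 3 * sr + 2 * a))

for a in (0.0, 0.25, 0.5, 0.75, 0.998):
    print(f"a = {a:5.3f}:  r_ms = {r_ms(a):6.3f} R_g,  eta = {efficiency(a):.4f}")
```

For $a=0$ this recovers $r_{\rm ms}=6\,R_g$ and $\eta=1-\sqrt{8/9}\approx0.057$; for $a=0.998$ it gives $r_{\rm ms}\approx1.24\,R_g$ and $\eta\approx0.32$.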
We emphasize that these time dilation and length contraction effects are not included in the calculation of the single photon trajectory, and are treated separately when applying the transfer function formalism. In Appendix \[app:intensity\_transformation\] we derive the transformation of the time and the disc surface area between the disc rest frame and the frame of the distant observer: $${\rm d}t'=\beta_t {\rm d}t,~~~~~~~~~~~{\rm d} r' {\rm d} \phi'=\beta_S {\rm d} r {\rm d} \phi,$$ where $(t,r,\phi)$ and $(t',r',\phi')$ are the coordinates of the reference frame of the distant observer and the disc, respectively. In order to find the locally emitted spectrum we would need to compute the vertical structure of the disc. However, this problem has not yet found a satisfactory general solution. Instead, we simply assume that each point of the disc radiates like a blackbody and introduce two corrections. First, we note that Thomson scattering dominates over absorption in the inner part of the disc, so the local spectrum is affected by Comptonization. At a given radius, $r_e$, we approximate the spectrum by a diluted blackbody, $$\begin{aligned} B_{\nu_e}^{\rm db} = {1 \over f_{\rm col}^4} B_{\nu_e}(f_{\rm col} T_{\rm eff}),\end{aligned}$$ where $B_{\nu_e}$ is the Planck function, $f_{\rm col} = T_{\rm col}/T_{\rm eff}$ is the spectral hardening factor, $T_{\rm col}$ is the locally observed colour temperature and $T_{\rm eff}$ is the effective temperature, related to the locally dissipated energy, $Q$, by $\sigma T_{\rm eff}^4 = Q$. Following Shimura & Takahara (1995) we use $f_{\rm col} = 1.7$. The second correction concerns the angular distribution of the local emission. We assume a linear limb darkening law of the form: $$\begin{aligned} I_{\nu_e}(\mu_e) = I_0 {1 + \delta\mu_e \over 1 + \delta} B_{\nu_e}^{\rm db},\end{aligned}$$ where $\delta = 2.06$ in the classical electron-scattering limit (e.g. Phillips & M[é]{}sz[á]{}ros 1986). $I_{\nu_e}$ is the specific intensity in the local frame, co-rotating with the disc.
The factor $I_0$ is found from the requirement of energy conservation. Namely, the power emitted by a limb-darkened surface element d$A$, $2\pi \int_0^1 I_{\nu_e}(\mu_e) \mu_e $d$\mu_e$d$A$, should be equal to the power emitted by the same surface element without limb darkening, $\pi B_{\nu_e}^{\rm db}\,$d$A$. From this we find $$\begin{aligned} I_0 = 3 {1 + \delta \over 3 + 2\delta},\end{aligned}$$ and $$\begin{aligned} I_{\nu_e}(\mu_e) = 3{1 + \delta\mu_e \over 3 + 2\delta} B_{\nu_e}^{\rm db}.\end{aligned}$$ Then, the specific photon number intensity of the disc, measured by the distant observer, is given by $$N_{\nu_e}(\mu_e) = \beta_S \beta_t {I_{\nu_e}(\mu_e) \over h \nu_e}.$$ There is, however, one effect that we have not taken into account. Due to gravitational focusing, some of the emitted photons return to the disc, where they are reprocessed or scattered. Since the returning photons alter the locally emitted spectrum recurrently, taking them into account would require enormous computing time and would make spectral fitting practically impossible. In our model returning photons are lost, so the disc luminosity and temperature are underestimated for spins close to maximal. This effect was studied by Cunningham (1976). He found that returning radiation is negligible for $a < 0.9$. Even for the maximally rotating black hole, the difference between the actual locally generated flux and that in the absence of returning radiation is only a few per cent over most of the inner disc; at lower spins, the effect is much less pronounced. In this paper we compute the GR disc spectra for $a$ = 0, 0.25, 0.5, 0.75 and 0.998. Though the $a = 0.998$ model underestimates the temperature and luminosity, interpolation between the models would give accurate results for $a < 0.9$, and a slightly overestimated spin otherwise. Figure \[fig:disc\_a\] shows the GR disc spectra for different values of the black hole spin.
We clearly see how the accretion efficiency grows with increasing spin, as energy is dissipated deeper in the gravitational potential. In Figure \[fig:discr\_vs\_pn\] we compare spectra of the GR model (with $a = 0$) and the pseudo-Newtonian (PN) model (Gierli[ń]{}ski et al. 1999). The PN model approximates the temperature distribution with good accuracy, but the effects of Doppler and gravitational shifts, light bending and focusing are neglected: light propagates through flat space. With increasing inclination angle, both the energy of the peak and the normalization of the relativistic disc spectrum (dashed curve) increase relative to the PN disc spectrum (dotted curve), mostly due to Doppler shifts and the increase of the apparent disc area. A similar effect was found by Fu & Taam (1990). On the same figure we also show the effect of limb darkening (solid curve), which enhances the spectrum for lower and diminishes it for higher inclination angles. Next, we check how the parameters obtained with the PN model relate to those from the GR model. To do this, we created fake [*ASCA*]{} spectra using the GR model with the following parameters: $M = 10$M$_\odot$, $a = 0$, $\dot{M}_{\rm d} = 10^{18}$ g s$^{-1}$, $D$ = 1 kpc, $r_{\rm in} = 6$, $r_{\rm out} = 10^4$, $f_{\rm col} = 1.7$. The spectra were computed for $\delta = 0$ and 2.06, for several inclination angles. Then, we fitted the PN disc model to the spectra, finding the black hole mass and the accretion rate. The results are presented in Figure \[fig:pn\_corr\]. If the limb darkening is taken into account in the generated data, the accretion rate obtained with the PN model is underestimated by a factor of $\sim$ 0.8, almost independently of the inclination angle. The mass estimate is exact at an inclination $i \approx 45^\circ$, underestimated for higher and overestimated for lower inclination angles. We stress that these fits were performed for a non-rotating black hole only.
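As a cross-check of the limb-darkening normalization used in these comparisons, the emitted flux $2\pi \int_0^1 I_{\nu_e}(\mu_e)\,\mu_e\,{\rm d}\mu_e$ must equal the isotropic value $\pi B_{\nu_e}^{\rm db}$. A short Python sketch (variable names are ours) verifies this numerically for $\delta = 2.06$:

```python
import math

DELTA = 2.06  # limb-darkening coefficient in the electron-scattering limit

def intensity(mu, b_db=1.0):
    """Normalized limb-darkening law, I = 3 (1 + delta*mu) / (3 + 2*delta) * B."""
    return 3.0 * (1.0 + DELTA * mu) / (3.0 + 2.0 * DELTA) * b_db

# Trapezoidal estimate of the emitted flux  2 pi int_0^1 I(mu) mu dmu
n = 10000
flux = 2.0 * math.pi * sum(intensity(i / n) * (i / n) for i in range(1, n)) / n
flux += math.pi * intensity(1.0) / n  # trapezoid endpoint (the mu = 0 term vanishes)
# flux equals pi * B (the isotropic value): energy is conserved
```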
Zhang, Cui and Chen (1997a) investigated accretion onto a rotating black hole in an approximate way. They used the MCD model and applied GR corrections due to the fractional change in the colour temperature and due to the change in the integrated flux. As a result, they were able to constrain the black hole spin of a few BHBs, including GRO J1655–40, which we will analyse in detail in Section \[sec:gro\_j1655-40\]. For comparison, we have applied their method to the spectra created by our GR model. The result depends on the GR corrections applied. Since Zhang et al. (1997a) tabulated these corrections for $a$ = 0 and 0.998 only, the discrepancies are largest for $a \sim 0.5$. In particular, for spectra generated by the GR model with $a$ = 0.998, 0.75 and 0.5, the approximate treatment yields $a$ = 0.95, 0.58 and 0.24, respectively. We note, however, that with GR corrections calculated precisely for all spin and inclination angle values the approximate method can provide reasonably accurate results. The relativistic disc model parameters are: the black hole mass, $M$, and its spin, $a$, the inner and outer disc radii, $r_{\rm in}$ and $r_{\rm out}$, respectively (in units of $R_g$), the accretion rate, $\dot{M}_{\rm d}$, and the inclination angle, $i$. Subscript ‘d’ denotes the accretion rate in the disc only, excluding all possible energy dissipation outside the disc. In our fits we always keep $r_{\rm out} = 10^4$, so the model has at most 5 free parameters. We underline that the overall model spectral shape does not change significantly when these parameters vary. At first approximation, there are only two quantities fixing the spectrum: the energy of the peak in the spectrum (or the temperature) and the total flux (normalization). These two observables are shared among five model parameters; therefore only two of the parameters can be established uniquely. In most of our fits we fix $i$, $a$ and $r_{\rm in}$, leaving $M$ and $\dot{M}_{\rm d}$ as free parameters.
As we show later, the inner disc radius as a free parameter can also be constrained to some extent. However, the model, though unable to establish all parameters simultaneously, yields tight correlations between them. In particular, for a given spectrum and $r_{\rm in} = r_{\rm ms}$, as $a$ increases, $\dot{M}_{\rm d}$ decreases (to compensate for the increased accretion efficiency) but $M$ increases (to keep the absolute inner disc radius, $R_{\rm in} \propto r_{\rm in} M$, constant as $r_{\rm in}$ decreases). For a given spectrum and fixed $a$, as $r_{\rm in}$ increases, $\dot{M}_{\rm d}$ increases but $M$ decreases. Given any additional constraints on the black hole mass and inclination angle, we can hope to estimate the black hole spin. Application to the data {#disc_data} ======================= For spectral fits, we use [xspec]{} v10 (Arnaud 1996). The confidence range of each model parameter is given for a 90 per cent confidence interval, i.e., $\Delta \chi^2= 2.7$ (e.g. Press et al. 1992). The X-ray spectra of both sources can be decomposed into two components: a soft, thermal emission peaking around a few keV and a high-energy tail. We interpret them in terms of an optically thick cold accretion disc and optically thin hot plasma. Therefore we fit the data by a model consisting of the GR disc and a high-energy tail, which we model either by a power law or by thermal Comptonization (see below). The model spectra are absorbed by the interstellar medium with a column density $N_{\rm H}$, for which we use the opacities of Ba[ł]{}uci[ń]{}ska-Church & McCammon (1992) and the abundances of Anders & Grevesse (1989). While constructing the disc model, we have tabulated the transfer function, $\cal T$, for five values of the black hole spin: $a$ = 0, 0.25, 0.5, 0.75 and 0.998.
Since the radius of the marginally stable orbit, which is the lower limit of $r_{\rm in}$, is a function of $a$, we cannot interpolate between transfer functions tabulated for different values of $a$ when $r_{\rm in}$ is close to $r_{\rm ms}$. Therefore, we fit the data only for these five fixed values of $a$. In all the fits we keep the spectral hardening factor, $f_{\rm col} = 1.7$, and the limb darkening factor, $\delta = 2.06$, fixed. LMC X-1 {#sec:lmc_x-1} ------- ### Source properties {#sec:lmc_x-1_properties} LMC X-1 is a luminous X-ray source in the Large Magellanic Cloud. There was a long-standing controversy about the optical counterpart of the X-ray source, since the position of LMC X-1 established by [*Einstein*]{} lay between two stars separated by only 6". Cowley et al. (1995) improved the position of LMC X-1 from [*ROSAT*]{}-HRI observations and confirmed that the optical counterpart is a peculiar O7 III Star \#32 with visual magnitude $V \sim 14.8$. Using optical spectroscopy, Hutchings et al. (1987) showed that Star \#32 is in a binary system with an orbital period of 4.2288 days. They found the mass function, $f_M = 0.14\pm0.05$M$_\odot$, and the lower limit for the mass of the compact object, $M > 4$M$_\odot$, which makes LMC X-1 a good stellar-mass black hole candidate. The inclination angle of the binary is constrained by the lack of X-ray eclipses to be $i < 63^\circ.5$. The mass ratio, $q \equiv M_{\ast} / M$, is greater than 2, which, together with the mass function, yields $$M \sin^3 i > 0.8{\rm M}_\odot.$$ The upper limit on the companion-star mass is 25M$_\odot$; therefore $M < 12.5$M$_\odot$ and $i > 24^\circ$. The distance to the Large Magellanic Cloud (LMC) is uncertain. It is usually published in terms of the distance modulus, $m - {\cal M} = 5\log D_{\rm pc} - 5$, where $m$ and ${\cal M}$ are the apparent and absolute visual magnitudes and $D_{\rm pc}$ is the distance expressed in parsecs.
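For reference, the distance modulus just defined inverts to $D_{\rm pc} = 10^{(m - {\cal M} + 5)/5}$. A two-line Python sketch (ours) shows that the extreme ends of the published error ranges reproduce the distance span quoted below:

```python
def modulus_to_kpc(mu):
    """Distance in kpc from the modulus mu = m - M = 5 log10(D_pc) - 5."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1000.0

# Extremes of the two published error ranges for the LMC distance modulus:
d_min = modulus_to_kpc(18.065 - 0.12)   # ~ 38.8 kpc
d_max = modulus_to_kpc(18.7 + 0.1)      # ~ 57.5 kpc
```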
Different determinations of the distance modulus give conflicting results, from $18.065\pm0.12$ (Stanek, Zaritsky & Harris 1998) to $18.7\pm0.1$ (Feast & Catchpole 1997). This corresponds to a span in distance from 38.8 kpc to 57.5 kpc. Since the compact object mass determined from the disc observations depends linearly on $D$, this significantly increases the uncertainty of the mass determination. The X-ray spectrum of LMC X-1 resembles the soft state spectrum of Cyg X-1 (Gierli[ń]{}ski et al. 1999). At first approximation it can be represented by a blackbody with temperature $\sim 0.7$ keV and a high-energy power-law tail beyond the blackbody (e.g. Ebisawa, Mitsuda & Inoue 1989; Schlegel et al. 1994; Treves et al. 2000; Schmidtke, Ponder & Cowley 1999; Nowak et al. 2001). No transition to the hard state has ever been observed. ### Spectral fits {#sec:lmc_x-1_fits} LMC X-1 was observed by [*ASCA*]{} on April 2–3, 1995. We have extracted the SIS spectrum with a net exposure time of 23.6 ks. In our fits we use the 0.7–10 keV spectrum, excluding the channels between 1.8 and 2.2 keV, which are strongly affected by the instrumental gold feature. Since we are interested in the shape of the disc continuum, this small gap in energy channels will not affect our results. Both the distance to LMC X-1, $D$, and the inclination angle of the system, $i$, are uncertain. In most of our fits we use $D = 50$ kpc and $i = 50^\circ$, but we also check our results within the acceptable ranges of $38.8$ kpc $< D < 57.5$ kpc and $24^\circ < i < 63^\circ.5$. We do not impose any constraints on the black hole mass, $M$, during the fits, but check its consistency with the limits obtained from the optical observations, $4.0$M$_\odot \leq M \leq 12.5$M$_\odot$, [*a posteriori*]{}. For the high-energy tail beyond the disc spectrum we chose a thermal Comptonization model (Zdziarski, Johnson & Magdziarz 1996). This model is fast and has few parameters, which makes it easy to use.
We would like to stress that one important difference between the Comptonization model and a power law is the low-energy cutoff. The cutoff should be present in the Comptonized spectrum around the maximum disc temperature if the seed photons come from the disc. The soft power law, extending down towards lower energies without cutoff, may significantly affect measurements of the disc parameters. The Comptonization model has the following parameters: the electron temperature, $kT_e$, the asymptotic power-law photon spectral index, $\Gamma$, and the seed photons temperature, $kT_{\rm s}$. We have found that the electron temperature has a negligible effect on the fit results, so we keep it at 50 keV during our fits. We note, that due to lack of the data above 10 keV we cannot constrain plasma parameters, and we treat the Comptonization model only as a phenomenological component supplementary to the disc model. It is not our intention to study the hot plasma properties in this paper. ------------------------------------------- ------------------------ ------------------------ ------------------------ ------------------------ ------------------------ $a$ 0.00 0.25 0.50 0.75 0.998 $N_{\rm H}$ ($10^{21}$ cm$^{-2}$) $5.35_{-0.19}^{+0.15}$ $5.33_{-0.18}^{+0.16}$ $5.34_{-0.19}^{+0.15}$ $5.35_{-0.19}^{+0.16}$ $5.41_{-0.20}^{+0.16}$ $\Gamma$ $3.29_{-0.50}^{+0.37}$ $3.30_{-0.44}^{+0.38}$ $3.36_{-0.39}^{+0.35}$ $3.43_{-0.34}^{+0.31}$ $3.48_{-0.22}^{+0.28}$ $kT_{\rm s}$ (keV) $0.53_{-0.07}^{+0.04}$ $0.52_{-0.06}^{+0.05}$ $0.52_{-0.05}^{+0.04}$ $0.51_{-0.04}^{+0.05}$ $0.51_{-0.04}^{+0.03}$ $M$ (M$_\odot$) $9.7_{-1.6}^{+1.5}$ $11.3_{-1.7}^{+2.1}$ $14.2_{-2.3}^{+2.7}$ $20.0_{-3.4}^{+3.8}$ $45_{-8}^{+9}$ $\dot{M}_{\rm d}$ ($10 ^{18}$ g s$^{-1}$) $3.7_{-0.8}^{+0.8}$ $3.1_{-0.7}^{+0.7}$ $2.5_{-0.5}^{+0.5}$ $1.9_{-0.4}^{+0.4}$ $1.0_{-0.19}^{+0.12}$ $\chi^2$/126 d.o.f. 
155.1 155.5 155.7 155.7 155.8 ------------------------------------------- ------------------------ ------------------------ ------------------------ ------------------------ ------------------------ \[tab:lmc\_x-1\_main\_fits\] We fit the data by a model consisting of the GR disc, Comptonization and interstellar absorption. First, we assume that the disc extends down to the marginally stable orbit, $r_{\rm in} = r_{\rm ms}$. The fit results are presented in Table \[tab:lmc\_x-1\_main\_fits\]. We notice that the goodness of the fit does not change with the black hole spin. However, the best-fitting mass varies from $\sim$ 10M$_\odot$ for the non-rotating black hole to $\sim$ 40M$_\odot$ for the maximally rotating black hole. As we see, only the fits with $a \la 0.5$ are consistent with the upper mass limit, $M < 12.5$M$_\odot$ (Section \[sec:lmc\_x-1\]). In order to better estimate the black hole spin, we fit the data in a wide range of inclination angles and compute 90 per cent confidence contours for the inclination-mass relation. The results are presented in Figure \[fig:lmc\_x-1\_cont\]. We find the fits with $a$ = 0, 0.25, 0.5 and 0.75 consistent with the limits on the mass and the inclination angle (shaded area); only the $a = 0.998$ contour lies outside the acceptable area. Even if we take into account the significant uncertainty in the LMC X-1 distance estimate, this contour is still outside the limits. Thus, though we cannot precisely find the upper limit for the black hole spin, we conclude that a spin close to maximal is ruled out. We stress, however, that this holds only when the disc extends down to the marginally stable orbit. If we allow for free $r_{\rm in} > r_{\rm ms}$ we cannot limit the black hole spin. In the case of a non-rotating black hole the spectral fits do not constrain the inner disc radius.
The best-fitting value is $r_{\rm in} \approx 100$, but with a fit improvement of only $\Delta\chi^2 = 1.9$ compared with the disc extending down to $r_{\rm ms}$. This value of $r_{\rm in}$, however, would require a very small mass of the compact object, $M \approx 1.5$M$_\odot$, and a super-Eddington accretion rate, $\dot{M}_{\rm d} \approx 80L_{\rm Edd} c^{-2}$. From the lower mass limit, $M > 4$M$_\odot$, we find the upper limit on the inner disc radius, $r_{\rm in} \la 30$. Since at this radius the black hole spin is not relevant, this limit holds for any spin, and we find no constraints on $a$ this time. Within the acceptable range of $M$, $a$, $D$ and $i$ the best-fitting relative accretion rate in the disc, $\dot{m}_{\rm d} \equiv \dot{M}_{\rm d}c^2/L_{\rm Edd}$, lies between $\sim$ 1 and $\sim$ 5. In terms of the radiated power it corresponds to a fraction $\eta\dot{m}_{\rm d} \sim$ 0.06–0.3 of the Eddington luminosity, where $\eta$ is the accretion efficiency at a given spin. The total accretion rate, including the hard tail luminosity, is $\eta\dot{m} \sim$ 0.1–0.5. The unabsorbed bolometric luminosity of LMC X-1, inferred from the model, is (2.5–3.7)$\times 10^{38}$ erg s$^{-1}$ ($3.2 \times 10^{38}$ erg s$^{-1}$ at $D$ = 50 kpc), of which a fraction $\sim 0.6$ is in the disc and the rest in the tail. The parameters of the Comptonizing tail are worth a comment. As can be seen from Table \[tab:lmc\_x-1\_main\_fits\], the photon spectral index we found, $\Gamma \sim$ 3.3–3.5, appears to be significantly softer than the typical values of 2.1–2.4 reported by Ebisawa et al. (1989) and Schlegel et al. (1994). However, the discrepancy arises from the different models applied. When we fit our [*ASCA*]{} data with the simple model of the MCD and a power law, we find $\Gamma = 2.26_{-0.16}^{+0.13}$ and $kT_{\rm in} = 0.807\pm{0.007}$ keV, which confirms that LMC X-1 was in the same spectral state during all these observations.
A different result, with a softer spectrum, was reported by Nowak et al. (2001), who fitted the 1996 December 6–8 [*RXTE*]{} observation with the multicolour disc + power law model and found $\Gamma \approx$ 2.9–3.1 and $kT_{\rm in} \approx 0.9$ keV. GRO J1655–40 {#sec:gro_j1655-40} ------------ ### Source properties {#sec:gro_j1655-40_properties} The X-ray transient GRO J1655–40 was discovered by the BATSE detector on board [*CGRO*]{} on July 27, 1994 (Zhang et al. 1994). The optical counterpart was soon discovered by Bailyn et al. (1995a). The inclination of the binary system is in the range 63$^\circ$.7 to 70$^\circ$.7 (van der Hooft et al. 1998). The distance to the source, $D = 3.2\pm0.2$ kpc, was derived from jet kinematics (Hjellming & Rupen 1995). The mass function was first obtained by Bailyn et al. (1995b) and then improved by Orosz & Bailyn (1997), who found $f_M = 3.24\pm0.09$M$_\odot$ and classified the companion star as F3 [iv]{}–F6 [iv]{}. In the same paper Orosz & Bailyn determined the mass of the compact object with unprecedented accuracy, finding $M_X = 7.02\pm0.22$M$_\odot$. However, Shahbaz et al. (1999) pointed out that, in calculating the radial velocity semi-amplitude, Orosz & Bailyn (1997) used observations both during X-ray quiescence and during an X-ray outburst, which could lead to an incorrect result. Shahbaz et al. (1999) used only the X-ray quiescence data, and found a different value of the mass function, $f_M = 2.73\pm0.09$M$_\odot$, the mass ratio, $q$ = 0.337–0.436, and the compact object mass, $M$ = 5.5–7.9M$_\odot$ (95 per cent confidence). With this mass, GRO J1655–40 is one of the most firmly established black hole candidates. Several authors have tried to estimate the black hole spin in GRO J1655–40; however, they came to conflicting conclusions. Zhang et al. (1997a) fitted the August 1995 [*ASCA*]{} data of the source with the MCD model with relativistic corrections.
They found an inner disc radius of 23 km for a 7M$_\odot$ black hole, significantly smaller than the marginally stable orbit radius in the Schwarzschild metric, $R_{\rm ms} = 62$ km. Their conclusion is that GRO J1655–40 contains a Kerr black hole rotating at 0.7–1.0 of the maximum rate. Sobczak et al. (1999) applied a similar approach to soft state [*RXTE*]{} observations, finding an average inner disc radius of $R_{\rm in} = 4.2R_g$. They associated this radius with the marginally stable orbit corresponding to a spin of $a = 0.5$. Taking into account uncertainties in the spectral hardening factor, $f_{\rm col}$, and the distance to the source, they concluded that $a < 0.7$. Makishima et al. (2000) applied the relativistic accretion disc model in the Schwarzschild metric to the same data and found that, if the black hole is not rotating, its mass, $M = 2.9\pm0.1$M$_\odot$, is incompatible with the optical constraints; therefore there must be a rotating black hole in GRO J1655–40. An alternative approach to estimating the black hole spin is the analysis of quasi-periodic oscillations (QPOs) observed in the power spectra. Remillard et al. (1999) analysed [*RXTE*]{} observations of GRO J1655–40 and found four characteristic QPOs. Three of them occupy relatively stable frequencies of about 0.1, 9 and 300 Hz. Cui, Zhang & Chen (1997) associated the highest frequency QPO with the nodal precession frequency of the tilted disc and derived $a = 0.95$. On the other hand, Stella, Vietri & Morsink (1999) interpreted the highest frequency QPO in terms of the periastron precession frequency, and the $\sim$ 9 Hz QPO as a second harmonic of the nodal precession frequency. This interpretation yielded $a \sim 0.1$. Gruzinov (1999) linked the $\sim$ 300 Hz QPO with the emission of a bright spot at the radius of maximal proper radiation flux, and inferred $a \sim 0.6$. In X-rays GRO J1655–40 is highly variable and undergoes spectral transitions similar to those of classical X-ray novae.
Ueda et al. (1998) analysed the four [*ASCA*]{} observations between August 1994 and March 1996. They distinguished four distinct states, named “high”, “low”, “dip” and “off”. The high state was observed by BATSE during the outburst in the energy range of 20–100 keV (Tavani et al. 1996; Zhang et al. 1997b), while the low state corresponds to times when the source was weak also in the BATSE range. This high state can be associated with the soft state, since its spectrum is dominated by the ultra-soft disc component with an additional high-energy power law. Zhang et al. (1997b) found a power-law photon spectral index of $\Gamma = 2.43\pm0.3$ from the BATSE data simultaneous with the [*ASCA*]{} observation analysed in this paper. Long-term [*RXTE*]{} monitoring (Sobczak et al. 1999) shows that the soft state can be described by the MCD model with $kT_{\rm in} \sim 0.7$ keV plus a power law with a photon spectral index of $\Gamma \sim$ 2–3. ### Spectral fits {#sec:gro_j1655-40_fits} GRO J1655–40 was observed by [*ASCA*]{} on August 15–16, 1995. This is the epoch III observation of Ueda et al. (1998), who associated it with the high X-ray state of the source. From this observation we have extracted the GIS spectrum with a net exposure of 3810 s after dead-time correction. We use the data in the 1.0–10 keV range. We assume the distance to the source, $D = 3.2$ kpc (Hjellming & Rupen 1995), and an inclination angle of $70^\circ$ (Orosz & Bailyn 1997; van der Hooft et al. 1998), unless stated otherwise. We do not impose any constraints on the black hole mass during the fits, but check its consistency with the limits obtained from the optical observations, $5.5$M$_\odot \leq M \leq 7.9$M$_\odot$ (Shahbaz et al. 1999; see Section \[sec:gro\_j1655-40\]), [*a posteriori*]{}.
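As a rough consistency check of the optical limits just quoted, the mass-function relation $f_M = M \sin^3 i/(1+q)^2$, combined with the extreme values of $f_M$, $q$ and $i$, approximately brackets the adopted mass range. This is a crude sketch of ours, not the published 95 per cent confidence calculation:

```python
import math

def compact_mass(f_m, q, incl_deg):
    """M = f_M (1 + q)^2 / sin^3(i), from the definition of the mass function."""
    return f_m * (1.0 + q) ** 2 / math.sin(math.radians(incl_deg)) ** 3

# Extreme combinations of f_M = 2.73 +/- 0.09 Msun, q = 0.337-0.436
# and i = 63.7-70.7 deg (Shahbaz et al. 1999; van der Hooft et al. 1998)
m_lo = compact_mass(2.73 - 0.09, 0.337, 70.7)   # ~ 5.6 Msun
m_hi = compact_mass(2.73 + 0.09, 0.436, 63.7)   # ~ 8.1 Msun
```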
The high-energy tail beyond the disc spectrum is less significant than in the case of LMC X-1 (see Figures \[fig:lmc\_x-1\_fit\] and \[fig:gro\_j1655\_fit\]), and we find that the particular choice between a Comptonization model and a power law does not affect the disc fit results, so we chose a power law with the photon spectral index fixed at $\Gamma = 2.4$ (Zhang et al. 1997b). On the other hand, the tail cannot be neglected. A fit without the tail is worse by $\Delta\chi^2 = +90$ (at 188 d.o.f.) compared with our best fit (see below) and shows increasing positive residuals towards the higher end of the energy band. We fit the data by a model consisting of the GR disc, a power law and interstellar absorption. First, we assume that the disc extends down to the marginally stable orbit, $r_{\rm in} = r_{\rm ms}$. The model fits the data fairly well, with $\chi^2 \sim 230/187$; however, there is a strong residual pattern below $\sim$ 3 keV (see panel $(b)$ in Figure \[fig:gro\_j1655\_fit\]). The pattern is likely to be generated by atomic processes. However, interstellar absorption, by either neutral or ionized matter, cannot explain this pattern. Varying elemental abundances in the absorber does not improve the fit. Also, the pattern cannot be fitted by ionized Compton reflection. We notice that a relativistically broadened edge is required to explain the observed spectrum. Therefore, we included a smeared absorption edge ([smedge]{} model in [xspec]{}; Ebisawa 1991) in our model spectrum. [smedge]{} is only a phenomenological formula invented to reproduce the relativistic smearing, and its parameters should not be taken as real physical quantities. In particular, when the smearing is introduced, the model edge threshold energy is [*lower*]{} than the rest-frame threshold energy of the physical edge. We have found that inclusion of the absorption edge improves the fit considerably, yielding a much better $\chi^2 \sim$ 165/185.
The smearing width of the edge is poorly constrained around $\sim 0.5$ keV, so we fix it at this value during all fits. The threshold energy, typically $1.25\pm0.05$ keV, might be consistent with the Ne-like Fe L-shell absorption edge (at 1.26 keV), originating in the ionized disc. Detailed analysis of this feature is however beyond the scope of this paper. =8.4cm Finally, our best-fitting model consists of the GR disc, the power law, the interstellar absorption and the absorption edge. Fit results for five values of the black hole spin are presented in Table \[tab:gro\_j1655\_main\_fits\]. The data and the best-fitting model for $a = 0.75$ are shown on Figure \[fig:gro\_j1655\_fit\]. ------------------------------------------- ------------------------ ------------------------ ------------------------ ------------------------ --------------------------- $a$ 0.00 0.25 0.50 0.75 0.998 $N_{\rm H}$ ($10^{21}$ cm$^{-2}$) $7.48_{-0.18}^{+0.17}$ $7.48_{-0.17}^{+0.17}$ $7.50_{-0.18}^{+0.17}$ $7.58_{-0.18}^{+0.16}$ $7.70_{-0.17}^{+0.15}$ $E_{\rm edge}$ (keV) $1.25_{-0.05}^{+0.05}$ $1.25_{-0.05}^{+0.05}$ $1.25_{-0.05}^{+0.05}$ $1.25_{-0.05}^{+0.04}$ $1.23_{-0.04}^{+0.04}$ $\tau_{\rm edge}$ $0.13_{-0.04}^{+0.03}$ $0.13_{-0.03}^{+0.04}$ $0.14_{-0.03}^{+0.04}$ $0.16_{-0.03}^{+0.04}$ $0.17_{-0.03}^{+0.03}$ $M$ (M$_\odot$) $2.65_{-0.02}^{+0.03}$ $3.19_{-0.03}^{+0.03}$ $4.07_{-0.04}^{+0.04}$ $5.84_{-0.05}^{+0.06}$ $16.2_{-0.2}^{+0.1}$ $\dot{M}_{\rm d}$ ($10 ^{18}$ g s$^{-1}$) 3.21$_{-0.03}^{+0.04}$ 2.71$_{-0.04}^{+0.04}$ 2.15$_{-0.02}^{+0.03}$ 1.51$_{-0.02}^{+0.02}$ 0.560$_{-0.007}^{+0.007}$ $\chi^2$/185 d.o.f. 
166.1 165.3 164.3 163.4 165.0 ------------------------------------------- ------------------------ ------------------------ ------------------------ ------------------------ --------------------------- \[tab:gro\_j1655\_main\_fits\] --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- $a$ 0.00 0.25 0.50 0.75 0.998 $r_{\rm in}$ $7.6_{-1.6}^{+12}$ $6.6_{-1.4}^{+9.0}$ $5.5_{-1.3}^{+6.1}$ $3.8_{-0.6}^{+4.7}$ $3.7_{-2.5}^{+4.4}$ $M$ (M$_\odot$) $2.7_{-1.1}^{+0.0}$ $3.2_{-0.4}^{+0.0}$ $4.0_{-1.4}^{+0.1}$ $5.9_{-1.9}^{+0.0}$ $10_{-6.4}^{+6.3}$ $\chi^2$/184 d.o.f. 165.7 165.0 164.1 163.2 163.7 --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- \[tab:gro\_j1655\_rin\_fits\] As it was in the case of LMC X-1, spectral fits do not constrain the black hole spin. However, the spin can be constrained taking into account mass limitations. Only the fit with $a = 0.75$, yielding $M = 5.84_{-0.05}^{+0.06}$M$_\odot$, is consistent with the mass constraint of (5.5–7.9)M$_\odot$, inferred from the optical observations by Shahbaz et al. (1999). Again, we fit the data in a range of inclination angles and compute 90 per cent confidential contours for inclination-mass relation. The result is presented on Figure \[fig:gro\_j1655-40\_cont\]. Due to better quality of the data and due to the fact that GRO J1655–40 spectrum is dominated by the disc emission, these contours are much narrower than in the case of LMC X-1. Only the $a = 0.75$ contour overlaps with the shaded area indicating allowable values of the inclination angle and the mass. More precise limits on the spin can be found by interpolating between the values found from the fits. If the GR effects are neglected, for the given spectrum the disc must keep constant inner radius while varying $M$ and $a$. 
Provided that $r_{\rm in} = r_{\rm ms}$, this leads to a simple relation between the mass and the spin, $M \propto r_{\rm ms}^{-1}(a)$, where $r_{\rm ms}(a)$ is given by equation (\[eq:rms\]). The GR effects modify this relation and we have found that a function $M = C_1 + C_2 r_{\rm ms}^{-1}(a)$ approximates the data well, so we use it for interpolation. Taking into account uncertainties in the distance to the source, we have found that for the given limits $5.5 \leq M/$M$_\odot \leq 7.9$ and $64^\circ \leq i \leq 71^\circ$ the black hole spin is $0.68 \leq a \leq 0.88$. In this range of spin values the returning radiation is negligible (see Section \[sec:model\]) and does not affect our result. Next, we fit the data allowing the inner disc radius, $r_{\rm in}$, to be a free parameter. The results are presented in Table \[tab:gro\_j1655\_rin\_fits\]. In all fits we find that the inner disc radius is consistent with $r_{\rm ms}$, though larger values are possible. Now the $a = 0.998$ fit becomes acceptable, with $M = 10_{-6.4}^{+6.3}$M$_\odot$. Since the upper limits for the mass remained virtually the same after freeing $r_{\rm in}$ (see Tables \[tab:gro\_j1655\_main\_fits\] and \[tab:gro\_j1655\_rin\_fits\]), the lower limit on the spin also remains the same, $a \geq 0.68$. Within the acceptable range of $M$, $D$ and $i$ (assuming $r_{\rm in} = r_{\rm ms}$) the relative accretion rate, $\dot{m}_{\rm d}$, is between $\sim$ 0.6 and $\sim$ 1.9. In terms of the radiated power it corresponds to a fraction $\eta\dot{m}_{\rm d} \sim$ 0.1–0.2 of the Eddington luminosity. The unabsorbed bolometric luminosity of the disc in GRO J1655–40, computed from the model, is $(1.2$–$1.6)\times 10^{38}$ erg s$^{-1}$ ($1.4 \times 10^{38}$ erg s$^{-1}$ at $D = 3.2$ kpc). In the observed energy range only a small fraction of the luminosity is radiated in the tail.
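The interpolation described in this section can be sketched directly from the numbers in Table \[tab:gro\_j1655\_main\_fits\]: fit $M = C_1 + C_2\, r_{\rm ms}^{-1}(a)$ to the five $(a, M)$ pairs and invert it at the optical mass limits. The Python sketch below is our own reconstruction; it neglects the distance and inclination uncertainties that are folded into the quoted spin range, so it only approximately reproduces it:

```python
import math

def r_ms(a):
    """Marginally stable orbit radius (units of R_g), equation (eq:rms)."""
    a1 = 1.0 + (1.0 - a * a) ** (1.0 / 3.0) * (
        (1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    a2 = math.sqrt(3.0 * a * a + a1 * a1)
    return 3.0 + a2 - math.copysign(1.0, a) * math.sqrt(
        (3.0 - a1) * (3.0 + a1 + 2.0 * a2))

# Best-fitting masses (Msun) for r_in = r_ms from Table (gro_j1655_main_fits)
fits = [(0.0, 2.65), (0.25, 3.19), (0.5, 4.07), (0.75, 5.84), (0.998, 16.2)]

# Least-squares fit of  M = c1 + c2 / r_ms(a)
u = [1.0 / r_ms(a) for a, _ in fits]
m = [mass for _, mass in fits]
ub, mb = sum(u) / len(u), sum(m) / len(m)
c2 = (sum((ui - ub) * (mi - mb) for ui, mi in zip(u, m))
      / sum((ui - ub) ** 2 for ui in u))
c1 = mb - c2 * ub

def spin_for_mass(mass):
    """Invert the monotonic M(a) relation by bisection on a."""
    lo, hi = 0.0, 0.998
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if c1 + c2 / r_ms(mid) < mass:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_lo = spin_for_mass(5.5)   # lower optical mass limit
a_hi = spin_for_mass(7.9)   # upper optical mass limit
# a_lo ~ 0.7 and a_hi ~ 0.9, close to the quoted 0.68 <= a <= 0.88
```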
Discussion and conclusions {#sec:discussion}
==========================

Black hole mass and spin
------------------------

Optical and UV observations of black hole binaries can provide the mass function, which in turn helps to estimate, to some extent, the black hole mass and the inclination angle of the binary. We know about a dozen Galactic X-ray binaries for which the mass function implies a compact object mass larger than 3M$_\odot$, making them good black hole candidates. However, the black hole spin cannot be estimated this way. In this paper we have shown an application of the GR disc model to the soft state X-ray spectra of LMC X-1 and GRO J1655–40 and demonstrated how it can constrain the black hole spin. The model depends on five basic parameters: the black hole mass and spin, the disc inclination angle, the accretion rate and the inner disc radius. Generally, the spectral shape changes only slightly with these parameters, and only two of them can be established uniquely by fitting the observational data. Additional assumptions about some of the parameters are therefore necessary. In most of our fits we have fixed $i$ and assumed $r_{\rm in} = r_{\rm ms}$. Then, for several fixed values of $a$, we have successfully fitted $M$ and $\dot{M}$. The results presented in Tables \[tab:lmc\_x-1\_main\_fits\] and \[tab:gro\_j1655\_main\_fits\] show that $\chi^2$ remained virtually the same over the whole range of $a$, so the black hole spin cannot be determined this way. On the other hand, for a given spectral shape there is a strong correlation between the parameters, in particular between $M$ and $a$ when $i$ and $r_{\rm in}$ are fixed. With an independent estimate of the mass and the inclination angle, we can constrain the black hole spin. The tighter the constraints on the mass and the inclination, the more accurate the spin estimate that can be obtained. The mass and the inclination angle of LMC X-1 are evaluated rather poorly.
Therefore, it is not possible to put a good constraint on the black hole spin; we can only rule out a black hole which is close to, or maximally, rotating. GRO J1655–40 offers a better chance: the mass and the inclination angle are measured with relatively high accuracy. Also, owing to the better quality of the X-ray data, we have obtained smaller statistical errors on $M$. As a result, we could limit the spin to $0.68 \leq a \leq 0.88$ (when $r_{\rm in} = r_{\rm ms}$). This result is only weakly affected by neglecting returning photons, which are negligible for spins less than 0.9. We stress that when we allow for a free $r_{\rm in} > r_{\rm ms}$, no upper limit on the black hole spin can be imposed. This is because, for a given spectral shape, an increase of $r_{\rm in}$ with fixed $M$ and $i$ leads to an increase of $a$. However, there are clues that the accretion disc in BHBs extends down to the marginally stable orbit at all times in the soft (high or very high) state. Long-term 1996–1997 [*RXTE*]{} monitoring of GRO J1655–40 (Sobczak et al. 1999) shows remarkable constancy of the MCD inner disc radius over a period of a few hundred days, while the source went through the high and very high spectral states. An exception is a few observations during the very high state, where the inner radius dropped suddenly by a factor of $\sim 3$. Merloni, Fabian & Ross (2000) interpret this as a drop in the disc accretion rate, which caused an increase of the hardening factor and a decrease of the [*apparent*]{} inner radius, while the actual $r_{\rm in}$ did not change. If the inner disc radius indeed remained constant, it is reasonable to assume that the disc extended down to the marginally stable orbit whenever the source was in the high or very high state, including the [*ASCA*]{} observation analysed in this paper. Long-term 1996–1998 [*RXTE*]{} monitoring of LMC X-1 (Wilms et al. 2001) shows, in turn, significant variations of the MCD inner disc radius.
However, it is not clear whether these variations can be attributed to real changes of $r_{\rm in}$. They might be, at least in part, due to systematic errors in the model. So, does $r_{\rm in}$ vary during the soft state or not? If the cold disc in either of the above observations was indeed truncated, it should turn into an optically thin inner flow below $r_{\rm in}$. Hot plasma in the inner flow could be a source of the high-energy tail beyond the disc spectrum. However, an optically thin solution can exist only for luminosities lower than a few per cent of the Eddington luminosity. At higher accretion rates (and higher densities) Coulomb transfer of energy from the protons to the electrons becomes efficient, so the optically thin flow becomes radiatively efficient and collapses to the optically thick Shakura-Sunyaev disc (Chen et al. 1995; Esin, McClintock & Narayan 1997). In both observations analysed in this paper the luminosity exceeded 10 per cent of $L_{\rm Edd}$; therefore, we should not expect any hot, optically thin flow. Instead, we propose that the cold disc extends all the way down to the marginally stable orbit, and the high-energy tail emission is produced in active regions above the disc. Hence, we find the assumption of $r_{\rm in} = r_{\rm ms}$ well founded, and our best constraint on the black hole spin in GRO J1655–40 is $0.68 \leq a \leq 0.88$, while in LMC X-1 we rule out a black hole rotating with spin close to maximal. The most spectacular feature of GRO J1655–40 is its radio jets (Tingay et al. 1995; Hjellming & Rupen 1995), observed also in another high-spin black hole candidate, GRS 1915+105 (Mirabel & Rodríguez 1994). The co-existence of relativistic jets and rotating black holes in BHBs makes an interesting link to the problem of the radio dichotomy of quasars, and supports the black hole “spin paradigm", according to which jets in quasars are powered by rotating black holes (e.g. Moderski, Sikora & Lasota 1998).
Since no strong radio jets were found in LMC X-1, a non-rotating black hole might be expected in this system. Unfortunately, our results do not allow us to make such a claim. Further systematic studies of several jet and non-jet sources could provide more evidence for the spin paradigm.

Compton reflection
------------------

The X-ray spectra of LMC X-1 and GRO J1655–40 are similar to the soft state spectrum of Cyg X-1 (Gierli[ń]{}ski et al. 1999), though the first two sources are significantly brighter. An important difference is that we do not find Compton reflection features here, while Cyg X-1 in the soft state showed a strong reflection component with covering angle $\Omega/2\pi \sim 0.7$ and a broad Fe K$\alpha$ line. This can be explained by instrumental limitations and/or by an intrinsic difference in the source geometry and energetics. Compton reflection from cold matter can be identified by Fe K$\alpha$ features around 7 keV and by the reflection continuum, which peaks around 30 keV in the $E F_E$ spectrum. Since [*ASCA*]{} can observe only up to $\sim$ 10 keV, detection of the reflection continuum with [*ASCA*]{} data only might be difficult, if not impossible. The iron K edge and line parameters are sensitive to the shape of the underlying continuum, which cannot be properly established without high-energy data. For example, Dotani et al. (1997) analysed the [*ASCA*]{} data of Cyg X-1 and did not find any Fe line. Gierli[ń]{}ski et al. (1999) re-analysed the same data jointly with the simultaneous [*RXTE*]{} observation, and found the presence of the iron line with high statistical significance. The LMC X-1 spectrum has poor statistics above $\sim$ 5 keV, so the reflection features can escape detection. The high-energy tail in GRO J1655–40 is much weaker than in Cyg X-1. At 7 keV the disc radiates $\sim 3.5$ times more energy than the tail.
Therefore, the Fe K-shell features will be very weak in the total spectrum and difficult to detect. In the soft state of Cyg X-1 the situation is the opposite: the disc emission at 7 keV is negligible (see Figure 8 in Gierli[ń]{}ski et al. 1999 and Figure \[fig:lmc\_x-1\_fit\] in this paper). Moreover, the Fe K-shell features coming from the disc around a fast-spinning black hole are significantly smeared, which additionally hinders their detection. We note that [*ASCA*]{} and [*RXTE*]{} detected characteristic iron features in GRO J1655–40 at different periods (Ueda et al. 1998; Tomsick et al. 1999; Ba[ł]{}uci[ń]{}ska-Church & Church 2000; Yamaoka et al. 2000). However, precise [*ASCA*]{}/[*RXTE*]{} observations show that these features can be consistently explained by resonant absorption lines in the corona above the disc [*without*]{} reflection components (Yamaoka et al. 2000).

Disc stability
--------------

An unresolved issue is the stability of the cold disc. Both sources are very bright, with luminosities being a significant fraction of the Eddington luminosity ($\dot{m}_{\rm d} \sim$ 1–5 is a very conservative limit for LMC X-1). A radiation pressure dominated region, which is thought to be unstable, arises in the disc above the critical accretion rate, $\dot{m}_{\rm crit}$. In the pseudo-Newtonian approximation, for a 10M$_\odot$ black hole and $\alpha = 0.1$, $\dot{m}_{\rm crit} \approx 0.64$ (Gierli[ń]{}ski et al. 1999; taking into account accretion in the cold disc only, i.e. $f = 0$). Therefore, the discs in both sources seem to be dominated by radiation pressure. Since we consider here rotating black holes, we checked this result by calculating the ratio of the radiation pressure to the gas pressure in the Kerr metric (Novikov & Thorne 1972), for all acceptable fit parameters in Tables \[tab:lmc\_x-1\_main\_fits\] and \[tab:gro\_j1655\_main\_fits\].
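As a quick sanity check on the quoted luminosities, one can compare a disc luminosity with the Eddington luminosity. The snippet below is illustrative only, using the standard $L_{\rm Edd} \simeq 1.26\times10^{38}\,(M/{\rm M}_\odot)$ erg s$^{-1}$ for pure hydrogen and the GRO J1655–40 numbers quoted earlier.

```python
L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass (pure hydrogen)

def eddington_ratio(L_disc, M_solar):
    """Fraction of the Eddington luminosity radiated by the disc."""
    return L_disc / (L_EDD_PER_MSUN * M_solar)

# GRO J1655-40: L_disc ~ 1.4e38 erg/s at D = 3.2 kpc, M ~ 7 Msun
ratio = eddington_ratio(1.4e38, 7.0)
```

For these numbers the ratio is about 0.16, consistent with the 0.1–0.2 range quoted for GRO J1655–40, and well above the few-per-cent threshold for an optically thin inner flow.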
We found that in both sources the inner part of the cold accretion disc, below several tens of $R_g$, is indeed dominated by radiation pressure. Thus, both discs should be unstable against secular and thermal instabilities. However, despite this inconvenience, the discs in LMC X-1 and GRO J1655–40 apparently [*do*]{} exist. This riddle probably arises from our lack of understanding of the microscopic mechanisms of accretion. The instability develops in the standard $\alpha$-prescription theory, where the $r\phi$-element of the viscous stress tensor, $T_{r\phi}$, is proportional to the total pressure, i.e. to the sum of the gas pressure and the radiation pressure. The viscosity parameter $\alpha$ encapsulates the poorly understood physics of viscosity, which is supposed to be due to chaotic magnetic fields and turbulence in the gas flow. It is not quite clear why $T_{r\phi}$ should be proportional to the total pressure. If we assume that $T_{r\phi}$ is proportional to the gas pressure only (the so-called $\beta$-disc; see e.g. Stella & Rosner 1984), the disc becomes stable in the radiation pressure dominated region (Figure \[fig:alpha\_beta\]). The existence of radiation pressure dominated discs should thus impose significant constraints on theoretical models of viscosity in the accretion flow.

Model reliability
-----------------

One important question we should ask concerns the reliability of the model and the inferred parameters. To model the disc spectrum in the “proper" way, one should solve the following four problems: (i) the radial disc structure, (ii) the vertical disc structure, (iii) the radiation transfer and (iv) the relativistic effects.
Our understanding of the first three problems is poor, and there are still many open questions concerning the viscosity prescription, the vertical distribution of gravitational energy dissipation, the effects of radiation pressure and irradiation of the disc, the importance of bound-free opacity and heavy elements, and the importance of Comptonization, to name only the most important issues. Consequently, every prediction available in the literature is based on simplifying assumptions and is model dependent. On the other hand, the simplest solution, i.e. the sum of blackbodies, gives the best description of the observational data. For example, a smooth multicolour spectrum with no ionization edges consistently reproduces the AGN observations, in which the Lyman edge (expected by more advanced models) is not observed. As we have shown in this paper, the multicolour model fits the black hole candidate spectra very well, too. We correct the emerging spectra for two effects that significantly affect measurements of the disc normalization and temperature. The first is the deep gravitational potential in the vicinity of a fast rotating black hole; all special and general relativistic effects are treated to high accuracy by means of the transfer function, as described in Section \[sec:model\]. The second important aspect is the effect of electron scattering, which we approximate by a single colour temperature correction factor, $f_{\rm col}$, using a diluted blackbody as the local spectrum (see Section \[sec:model\]). Under certain simplifying assumptions Shimura & Takahara (1995) computed the vertical disc structure around a Schwarzschild stellar-mass black hole and solved the radiation transfer. They found that for a viscosity parameter $\alpha = 0.1$ the local spectrum can be represented by a diluted blackbody when Comptonization is effective, which takes place in the inner disc region when $\dot{m}_{\rm d} \ga 1$.
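The colour-correction prescription just described can be sketched as follows. This is a generic illustration of a diluted blackbody, not the actual disc code: the local spectrum is a Planck function at the colour temperature $T_{\rm col} = f_{\rm col} T_{\rm eff}$, scaled by $f_{\rm col}^{-4}$ so that the frequency-integrated flux is preserved.

```python
import math

H = 6.626e-27    # erg s (Planck constant, cgs)
C = 2.998e10     # cm/s (speed of light)
K_B = 1.381e-16  # erg/K (Boltzmann constant)

def diluted_blackbody(nu, T_eff, f_col=1.7):
    """Specific intensity of a diluted blackbody: a Planck function at
    T_col = f_col * T_eff, divided by f_col**4 (the hardening-factor
    approximation to Comptonized local disc emission)."""
    T_col = f_col * T_eff
    planck = 2 * H * nu**3 / C**2 / (math.exp(H * nu / (K_B * T_col)) - 1.0)
    return planck / f_col**4
```

Relative to an undiluted blackbody of the same $T_{\rm eff}$, the diluted spectrum is suppressed near the peak and enhanced in the Wien tail, which is why the fitted colour temperature tracks $f_{\rm col} T_{\rm eff}$.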
For $M = 10$M$_\odot$ and $\dot{m}_{\rm d} = 1$ they found that the sum of diluted blackbodies fits the whole disc spectrum very well, with a best-fitting hardening factor of $f_{\rm col} = 1.7$ (see Figure 2 in Shimura & Takahara 1995). Merloni et al. (2000) made a similar comparison between the multicolour disc and a more realistic disc model, with the vertical temperature structure and radiation transfer solved, and came up with a comparable result. The multicolour disc spectra fit the realistic disc spectra well, and for $\dot{m}_{\rm d} \ga 1$ the hardening factor is $f_{\rm col} \approx 1.8$. In both observations in this paper $\dot{m}_{\rm d} \sim 1$; therefore, within the trustworthiness limits of the Shimura & Takahara and Merloni et al. calculations, we find our model reliable. A small variation in the hardening factor does not change our results significantly. For example, the GRO J1655–40 spin for $f_{\rm col} = 1.8$ is $0.60 \leq a \leq 0.83$.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Andrzej A. Zdziarski and Chris Done for discussions and valuable comments. This research has been supported in part by the Polish KBN grants 2P03D00514, 2P03D00614, 2P03C00619p0(1,2) and a grant from the Foundation for Polish Science.

Anders E., Grevesse N., 1989, Geochim. Cosmochim. Acta, 53, 197
Arnaud K. A., 1996, in Jacoby G. H., Barnes J., eds., Astronomical Data Analysis Software and Systems V. ASP Conf. Series Vol. 101, San Francisco, p. 17
Asaoka I., 1989, PASJ, 41, 763
Bailyn C. D., et al., 1995a, Nature, 374, 701
Bailyn C. D., Orosz J. A., McClintock J. E., Remillard R. A., 1995b, Nature, 378, 157
Ba[ł]{}uci[ń]{}ska-Church M., McCammon D., 1992, ApJ, 400, 699
Ba[ł]{}uci[ń]{}ska-Church M., Church M. J., 2000, MNRAS, 312, L55
Chen X., Abramowicz M., Lasota J.-P., Narayan R., Yi I., 1995, ApJ, 443, L61
Cowley A. P., Schmidtke P. C., Anderson A. L., McGrath T. K., 1995, PASP, 107, 145
Cui W., Zhang S.
N., Chen W., 1997, ApJ, 492, L53
Cunningham C. T., 1975, ApJ, 202, 788
Cunningham C. T., 1976, ApJ, 208, 534
Cunningham C. T., Bardeen J. M., 1973, ApJ, 183, 237
Dotani T., et al., 1997, ApJ, 485, L87
Ebisawa K., 1991, Ph.D. Thesis
Ebisawa K., Mitsuda K., Inoue H., 1989, PASJ, 41, 519
Ebisawa K., Mitsuda K., Hanawa T., 1991, ApJ, 367, 213
Ebisawa K., Makino F., Mitsuda K., Belloni T., Cowley A. P., Schmidtke P. C., Treves A., 1993, ApJ, 403, 684
Ebisawa K., et al., 1994, PASJ, 46, 375
Esin A. A., McClintock J. E., Narayan R., 1997, ApJ, 489, 865
Feast M. W., Catchpole R. M., 1997, MNRAS, 286, L1
Fu A., Taam R. E., 1990, ApJ, 349, 553
Gierli[ń]{}ski M., Zdziarski A. A., Done C., Johnson W. N., Ebisawa K., Ueda Y., Haardt F., Phlips B. F., 1997, MNRAS, 288, 958
Gierli[ń]{}ski M., Zdziarski A. A., Poutanen J., Coppi P. S., Ebisawa K., Johnson W. N., 1999, MNRAS, 309, 496
Grove J. E., Johnson W. N., Kroeger R. A., McNaron-Brown K., Skibo J. G., Phlips B. F., 1998, ApJ, 500, 899
Gruzinov A., 1999, astro-ph/9910335
Hanawa T., 1989, ApJ, 341, 948
Hjellming R. M., Rupen M. P., 1995, Nature, 375, 464
Hutchings J. B., Crampton D., Cowley A. P., Bianchi L., Thompson I. B., 1987, AJ, 94, 340
Laor A., 1990, MNRAS, 246, 369
Laor A., Netzer H., Piran T., 1990, MNRAS, 242, 560
Makishima K., et al., 2000, ApJ, 535, 632
Merloni A., Fabian A. C., Ross R. R., 2000, MNRAS, 313, 193
Mirabel I. F., Rodríguez L. F., 1994, Nature, 371, 46
Mitsuda K., et al., 1984, PASJ, 36, 741
Moderski R., Sikora M., Lasota J.-P., 1998, MNRAS, 301, 142
Novikov I. D., Thorne K. S., 1972, in DeWitt C., DeWitt B. S., eds, Black Holes, Gordon and Breach Science Publishers, p. 343
Nowak M. A., Wilms J., Heindl W. A., Pottschmidt K., Dove J. B., Begelman M. C., 2001, MNRAS, 320, 316
Orosz J. A., Bailyn C. D., 1997, ApJ, 477, 876
Page D. N., Thorne K. S., 1974, ApJ, 191, 499
Phillip K. C., M[é]{}sz[á]{}ros P., 1986, ApJ, 310, 284
Press W. H., Teukolsky S. A., Vetterling W. T., Flannery B. P., 1992, Numerical Recipes.
Cambridge Univ. Press, Cambridge
Remillard R. A., Morgan E. H., McClintock J. E., Bailyn C. D., Orosz J. A., 1999, ApJ, 522, 397
Schlegel E. M., Marshall F. E., Mushotzky R. F., Smale A. P., Weaver K. A., Serlemitsos P. J., Petre R., Jahoda K. M., 1994, ApJ, 422, 243
Schmidtke P. C., Ponder A. L., Cowley A. P., 1999, AJ, 117, 1292
Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337
Shahbaz T., van der Hooft F., Casares J., Charles P. A., van Paradijs J., 1999, MNRAS, 306, 89
Shimura T., Takahara F., 1995, ApJ, 445, 780
Sobczak G. J., McClintock J. E., Remillard R. A., Bailyn C. D., Orosz J. A., 1999, ApJ, 520, 776
Stanek K. Z., Zaritsky D., Harris J., 1998, ApJ, 500, L141
Stella L., Rosner R., 1984, ApJ, 277, 312
Stella L., Vietri M., Morsink S. M., 1999, ApJ, 524, L63
Tavani M., Fruchter A., Zhang S. N., Harmon B. A., Hjellming R. N., Rupen M. P., Bailyn C., Livio M., 1996, ApJ, 473, L103
Thorne K. S., 1974, ApJ, 191, 507
Tingay S. J., et al., 1995, Nature, 374, 141
Tomsick J. A., Kaaret P., Kroeger R. A., Remillard R. A., 1999, ApJ, 512, 892
Treves A., et al., 2000, Advances in Space Res., 25, 437
Ueda Y., Inoue H., Tanaka Y., Ebisawa K., Nagase F., Kotani T., Gehrels N., 1998, ApJ, 492, 782
van der Hooft F., Heemskerk M. H. M., Alberts F., van Paradijs J., 1998, A&A, 329, 538
Wilms J., Nowak M. A., Pottschmidt K., Heindl W. A., Dove J. B., Begelman M. C., 2001, MNRAS, 320, 327
Yamada T. T., Mineshige S., Ross R. R., Fukue J., 1994, PASJ, 46, 553
Yamaoka K., Ueda Y., Inoue H., Nagase F., Ebisawa K., Kotani T., Tanaka Y., Zhang S. N., 2000, ApJ, submitted
Zdziarski A. A., Johnson W. N., Magdziarz P., 1996, MNRAS, 283, 193
Zdziarski A. A., Poutanen J., Miko[ł]{}ajewska J., Gierli[ń]{}ski M., Ebisawa K., Johnson W. N., 1998, MNRAS, 301, 435
Zhang S. N., Wilson C. A., Harmon B. A., Fishman G. J., Wilson R. B., Paciesas W. S., Scott M., Rubin B. C., 1994, IAU Circ. 6046
Zhang S. N., Cui W., Chen W., 1997a, ApJ, 482, L155
Zhang S.
N., et al., 1997b, ApJ, 479, 381

Construction of the photon transfer function {#app:transfer_function}
============================================

We consider the gravitational field of a rotating black hole with mass $M$ and angular momentum $J$. In both appendices we use $G = c = 1$ units, the $(-+++)$ signature and the Boyer-Lindquist coordinates, $(t, R, \theta, \phi)$. The non-zero components of the metric tensor are given by $$\begin{aligned} \gtt & = & -(1-2r/\Sigma), \nonumber \\ \gtp & = & -2ar\sin^2\theta/\Sigma, \nonumber \\ \gpp & = & \left( r^2+a^2+{2a^2r\sin^2\theta \over \Sigma} \right)\sin^2 \theta, \nonumber \\ g_{rr} & = & \Sigma/\Delta, \nonumber \\ g_{\theta \theta} & = & \Sigma,\end{aligned}$$ where $$\begin{aligned} \lefteqn{\Delta = r^2+a^2-2r,} \nonumber \\ \lefteqn{\Sigma = r^2+a^2\cos^2\theta,} \end{aligned}$$ and we use the dimensionless distance and specific angular momentum parameter $$r={R \over M},~~~~a={J \over M^2}.$$ Our approach to the calculation of the disc photon trajectories mostly follows that of Cunningham (1975). A null geodesic in the Kerr metric is described by two constants of motion, $\lambda$ and $\xi$, defined as $$\lambda={L \over EM},~~~~\xi^2={C \over E^2M^2},$$ where $E$ is the photon energy at infinity, $L$ is the projection of the photon angular momentum on the symmetry axis and $C$ is the Carter constant (in the definition of $\xi$ we took into account the condition $C \ge 0$, satisfied by trajectories intersecting the equatorial plane). We make the usual assumption that the radial velocity in the disc can be neglected and that its motion is Keplerian. In the non-rotating frame, the angular velocity and the linear velocity, respectively, are given by $$\Omega_{\rm K}= {1 \over M} {1 \over r^{3/2}+a},$$ and $$V_{\rm K}={A \over r^2 \sqrt{\Delta}} \left( \omega_{\rm K}-{2 a r \over A} \right),$$ where $$A=r^4+a^2r^2+2a^2r,$$ and $\omega_{\rm K}=M \Omega_{\rm K}$.
Then, we obtain the following relation between the constants of motion and the emission angles in the disc rest frame (e.g., Cunningham & Bardeen 1973): $$\begin{aligned} \lambda & = & {\sin\theta_e \sin \pem+ V_{\rm K} \over (r_e^2 \Delta^{1/2}+ 2 a r_e V_{\rm K})/ A+\omega_{\rm K} \sin\theta_e \sin \pem}, \nonumber \\ \xi & = & \left( {A \over \Delta } \right)^{1/2} (1- V_{\rm K}^2)^{-1/2}(1-\lambda \omega_{\rm K}) \ct, \label{const}\end{aligned}$$ where $r_e$ is the initial photon radius, $\theta_e$ is the polar angle between the photon initial direction and the normal to the disc, and $\pem$ is the azimuthal angle, in the disc plane, with respect to the $r$-direction. Then, the redshift of the photon emitted from the disc at a distance $r_e$ is given by $$g_{\rm eff}=r_e \left( {\Delta \over A} \right)^{1/2} {(1-V_{\rm K}^2)^{1/2} \over 1-\omega_{\rm K} \lambda}, \label{g}$$ and the angle, $i$, at which the photon will be observed far from the disc in flat space-time is determined by the integral equation of motion $$\pm \int_{r_e}^\infty \Re^{-1/2}{\rm d} r=\int_{\pi/2}^{i} \Theta^{-1/2} {\rm d} \theta, \label{int}$$ where $\Re$ and $\Theta$ are the radial and polar effective potentials, respectively, $$\begin{aligned} \lefteqn{\Re(r) = (r^2+a^2-\lambda a)^2 - \Delta \left[ \xi^2 + (\lambda - a)^2 \right] ,} \nonumber \\ \lefteqn{\Theta(\theta) =\xi^2 + \cos^2 \theta \left( a^2-\lambda^2/{\sin^2 \theta} \right).}\end{aligned}$$ The photon transfer function is tabulated in discrete steps in five dimensions ($a$, $r_e$, $\cos i$, $g_{\rm eff}$, $\cos \theta_e$). The generated photons are summed into the appropriate elements of the photon transfer function table according to the following algorithm. 1. for given $a$ and $r_e$, generate the photon initial direction from a distribution uniform both in $\phi_e$ and $\ct$, 2. find the constants of motion from equation (\[const\]), 3.
solve equation (\[int\]) for $i$ (the initial sign of $\Re$ in equation (\[int\]) is negative for $\pi/2 < \phi_e < 3\pi/2$ and positive otherwise), 4. find $g_{\rm eff}$ from equation (\[g\]), 5. increment the element of the transfer function table corresponding to $a$, $r_e$, $\cos i$, $g_{\rm eff}$ and $\cos \theta_e$. In order to compute and tabulate the transfer function, we have calculated the trajectories of $10^9$ photons. The disc is axially symmetric, and the emission from a given point of the disc is also axially symmetric. Therefore, integration over $\phi_e$ for trajectories starting at a given point of the disc, observed at a given angle $i$ and collected at all angles $\phi_o$, is equivalent to integration over all trajectories starting at a given radius $r_e$ which are observed at a given angle $i$ and any particular $\phi_o$.

Transformation of the photon number intensity {#app:intensity_transformation}
=============================================

As pointed out in Section \[sec:model\], application of the photon transfer function requires a transformation of the photon number intensity between the reference frames of the distant observer (the Boyer-Lindquist coordinate frame) and the observer co-rotating with the disc. The latter is represented by the covariant tetrad (we show only the components used below): $$\begin{aligned} \lefteqn{e^{t'} = - r^{-1}\left( 1-V_{\rm K}^2 \right)^{-1/2} \left({ A \over \Delta}\right)^{1/2}} \nonumber \\ \lefteqn{~~~~\times\left( \gtt+\omega_{\rm K} \gtp,0,0,\gtp+\omega_{\rm K} \gpp \right),} \nonumber \\ \lefteqn{e^{r'}=(0,r/\sqrt{\Delta},0,0),} \nonumber \\ \lefteqn{e^{\phi'} = r^{-1} \left( 1-V_{\rm K}^2 \right)^{-1/2} \left({ A \over \Delta} \right)^{1/2} \left[ V_{\rm K} \gtt + \gtp \left( {r^2 \Delta^{1/2} \over A} \right. \right.} \nonumber \\ \lefteqn{~~~~\left. \left.
+ {2arV_{\rm K} \over A} \right),0,0,V_{\rm K}\gtp + \gpp \left( {r^2 \Delta^{1/2} \over A} + {2arV_{\rm K} \over A} \right)\right],} \end{aligned}$$ where the order of the vector components is $(t,r,\theta,\phi)$, and primes denote coordinates defined in the disc local rest frame. Then, the one-form ${\rm d} t'$ can be written in terms of the Boyer-Lindquist coordinates as $${\rm d} t'= e_t^{t'} {\rm d} t + e_\phi^{t'} {\rm d} \phi.$$ Since for an observer at rest in the disc ${\rm d} \phi'=0$, and $${\rm d} \phi'= e_t^{\phi'} {\rm d} t + e_\phi^{\phi'} {\rm d} \phi,$$ we obtain the following relation: $${\rm d} t'=\beta_t {\rm d} t,$$ where $$\beta_t= e_t^{t'}- e_\phi^{t'} e_t^{\phi'}/e_\phi^{\phi'}.$$ Similarly, for the disc unit area we find $${\rm d} r' {\rm d} \phi'=e_r^{r'} {\rm d} r \left( e_t^{\phi'} {\rm d} t+ e_\phi^{\phi'} {\rm d} \phi \right)=\beta_S {\rm d} r {\rm d} \phi$$ (in the latter equality we have used the condition ${\rm d} t'=0$, relevant for three-space measurements in the disc rest frame), where $$\beta_S=e_r^{r'}\left( e_\phi^{\phi'} - e_\phi^{t'} e_t^{\phi'} /e_t^{t'} \right).$$ The factor $\beta_S$ gives the correction for the integral element ${\rm d} r {\rm d} \phi$ in equation (\[eq:trans\_integ\]) (the integration over $\phi$ is hidden in the convolution), which should be taken into account in the calculation of the number of photons emitted from the accretion disc.

\[lastpage\]
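The tabulation algorithm of Appendix A can be sketched as follows. The grid sizes and the `trace` callback (which would contain the actual geodesic integration of equations (\[const\])–(\[int\])) are hypothetical placeholders; only the Monte Carlo draw and the histogram accumulation of steps 1 and 5 are shown.

```python
import math
import random

# Hypothetical grid sizes for the (cos i, g_eff, cos theta_e) axes
N_I, N_G, N_CT = 20, 50, 10

def tabulate(a, r_e, n_photons, trace):
    """Accumulate one (a, r_e) slice of the photon transfer function.
    `trace` maps emission angles (phi_e, cos theta_e) to the observed
    (cos i, g_eff); it stands in for the geodesic integration."""
    table = [[[0] * N_CT for _ in range(N_G)] for _ in range(N_I)]
    for _ in range(n_photons):
        # step 1: draw uniform in phi_e and in cos(theta_e)
        phi_e = random.uniform(0.0, 2.0 * math.pi)
        ct_e = random.uniform(0.0, 1.0)
        # steps 2-4: delegated to the ray tracer
        cos_i, g_eff = trace(a, r_e, phi_e, ct_e)
        # step 5: increment the corresponding table element
        i_bin = min(int(cos_i * N_I), N_I - 1)
        g_bin = min(int(g_eff / 2.0 * N_G), N_G - 1)  # assumes g_eff < 2
        ct_bin = min(int(ct_e * N_CT), N_CT - 1)
        table[i_bin][g_bin][ct_bin] += 1
    return table
```

Each stored count is later reweighted when the table is convolved with the local disc spectrum; the sketch only illustrates the bookkeeping, not the physics.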
---
abstract: 'We present and discuss high-resolution grating spectra of the quasar [PG1211+143]{} obtained over three years. Based on an early observation from 2001, we find an outflow component of about 3000 km s$^{-1}$, in contrast with the much higher velocity of about 24000 km s$^{-1}$ reported earlier for this source, based on the same data set. Subsequent grating spectra obtained for [PG1211+143]{} are consistent with the first observation in the broad-band sense, but not all of the narrow features used to identify the outflow are reproduced. We demonstrate that the poor S/N and the time variability seen during all existing observations of [PG1211+143]{} make any claims about the outflow precariously inconclusive.'
author:
- 'S. Kaspi'
- 'E. Behar'
title: 'The High-Velocity Outflow of PG 1211+143: An Unbiased View Based on Several Observations'
---

Introduction
============

Typical mass outflow velocities of a few hundred to a few thousand km s$^{-1}$ have by now been measured in numerous Active Galactic Nuclei (AGNs; Crenshaw et al. 2003 and references therein). Recent studies of the X-ray spectra of certain quasars have led to claims of much higher outflow velocities, reaching a significant fraction of the speed of light: for APM08279+5255, Chartas et al. (2002) claim speeds of $\sim0.2c$ and $\sim0.4c$[^1]; for PG1115+080, Chartas et al. (2003) find two X-ray absorption systems with outflow velocities of $\sim0.10c$ and $\sim0.34c$. These measurements, however, were carried out using spectra obtained with CCD cameras, and hence at moderate spectral resolving powers of $R\sim 50$. Using the [[*XMM-Newton*]{}]{} reflection gratings ($R$ up to 500), high-resolution X-ray spectra have been obtained for several quasars. For [PG1211+143]{}, Pounds et al. (2003a, 2005) find a rich, well resolved spectrum featuring absorption lines of several ions, which they interpret as due to an outflow of $\sim$ 24000 km s$^{-1}$.
A similar interpretation was applied to a similar observation of PG0844+349, for which Pounds et al. (2003b) report even higher velocities, reaching $\sim$ 60000 km s$^{-1}$. In NGC4051, Pounds et al. (2004) find a single absorption line at $\sim$ 7.1 keV, which they suggest may be [Fe]{}[xxvi]{} Ly$\alpha$ at an outflow velocity of $\sim$ 6500 km s$^{-1}$, or the He$\alpha$ resonance absorption line of [Fe]{}[xxv]{}, in which case the outflow velocity is $\sim$ 16500 km s$^{-1}$. Yet another ultra-high-velocity (UHV, i.e., sub-$c$) wind, of 50000 km s$^{-1}$, was reported by Reeves et al. (2003) for PDS 456. In all of these sources, the inferred hydrogen column density through the wind is of the order of 10$^{23}$ cm$^{-2}$, which is about an order of magnitude higher than the typical values measured for the nearby Seyfert sources. If UHV outflows are indeed common to bright quasars, this could have far-reaching implications for our understanding of AGN winds and AGNs in general. For instance, if these winds carry a significant amount of mass, as the high column densities may suggest, they would alter our estimates of the metal enrichment of the intergalactic medium by quasars. It remains to be shown theoretically what mechanism (e.g., radiation pressure) can drive such intense winds. Since the amount of mass in the wind is not well constrained, it is still unclear what effect it may have on the energy budget of the AGN. King & Pounds (2003) note that UHV winds have been found mostly for AGNs accreting near their Eddington limit. They provide a theory by which the UHV outflows are optically thick, producing an effective photosphere, which is also responsible for the UV blackbody and soft X-ray (excess) continuum emission observed for these sources.

[PG1211+143]{}  —  first observation  —  second view
====================================================

![The EPIC-pn data and a simple fitted model, which is discussed in § \[epicpn\].
The upper panel shows the model folded through the instrument response and compared with the data. The bottom panel shows the unfolded model.[]{data-label="pnmodel"}](pnmodel_all_thaw.eps){width="7.5cm"}

[PG1211+143]{} was observed with [*XMM-Newton*]{} on 2001 June 15 for about 55 ks. We retrieved the data for this observation from the [*XMM-Newton*]{} archive and reduced them using the Science Analysis System (SAS v5.3.0) with the standard processing chains, as described in the data analysis threads and the ABC Guide to [*XMM-Newton*]{} Data Analysis. Overall, our data reduction results agree well with those of Pounds et al. (2003a, 2005), except for a few minor features which appear slightly different between the two reductions. We attribute these discrepancies to the different binning methods used and to the averaging of RGS1 and RGS2 in this work (see below). The results described in this section are presented in detail in Kaspi & Behar (2006).

EPIC-pn {#epicpn}
-------

For the EPIC-pn data we first fitted the (line-free) rest-frame 2–5 keV energy range with a simple power law. The best-fitting power law has a photon index of $\Gamma = 1.55\pm 0.05$ and a normalization of $(6.6\pm 0.4)\times10^{-4}$ ph cm$^{-2}$ s$^{-1}$ keV$^{-1}$, and gives $\chi^{2}_{\nu}=0.74$ for 487 degrees of freedom (d.o.f.). Extrapolating this power law up to a rest-frame energy of 11 keV, we find a flux excess above the power law at around 6.4 keV, which is indicative of an iron K$\alpha$ line, and a flux deficit below the power law at energies above 7 keV. We add to the model a Gaussian emission line, to account for the Fe K$\alpha$ line, and a photoelectric absorption edge, to account for the deficit. Fitting for all parameters simultaneously, we find that the best-fitting Gaussian line centre is at $6.04\pm 0.04$ keV (or $6.53\pm 0.05$ keV in the rest frame), with a line width ($\sigma$) of $0.096\pm 0.067$ keV.
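For orientation, the 2–10 keV energy flux implied by the fitted power law follows from a simple analytic integral. The snippet below is an illustrative calculation using the best-fitting values quoted above, not part of the published analysis.

```python
GAMMA = 1.55     # photon index from the fit
NORM = 6.6e-4    # ph cm^-2 s^-1 keV^-1 at 1 keV
ERG_PER_KEV = 1.602e-9

def powerlaw_energy_flux(e1, e2, gamma=GAMMA, norm=NORM):
    """Integral of E * norm * E**-gamma over [e1, e2] keV, returned in
    erg cm^-2 s^-1 (closed form valid for gamma != 2)."""
    p = 2.0 - gamma
    return norm * (e2**p - e1**p) / p * ERG_PER_KEV

flux_2_10 = powerlaw_energy_flux(2.0, 10.0)
```

For these parameters the 2–10 keV flux comes out at a few times $10^{-12}$ erg cm$^{-2}$ s$^{-1}$.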
The total flux in the line is $(2.9\pm 1.4)\times 10^{-6}$ ph cm$^{-2}$ s$^{-1}$. For the edge, we find a threshold energy of $6.72\pm 0.10$ keV, which translates to a rest-frame energy of $7.27\pm 0.11$ keV. The optical depth at the edge is $\tau=0.56\pm 0.10$. The power-law model with the Gaussian line and the absorption edge gives $\chi^{2}_{\nu}=0.983$ for 613 d.o.f. This model is plotted in Figure \[pnmodel\], where we show the model both folded through the instrument and fluxed (i.e., unfolded). We stress that this edge does not necessarily contradict the presence of the line detected by Pounds et al. (2003a, 2005), since K$\alpha$ edges have lines right next to them. We also observe the lines at 2.68 keV and 1.47 keV claimed by Pounds et al. (2003a, 2005) to be from S and Mg, only we identify them as different lines at much lower velocities. The 2.68 keV line is identified here as S[xv]{} He$\beta$ and the 1.47 keV line is identified as Mg[xi]{} He$\beta$. RGS --- ![image](pgtrgs3pan.eps){width="16cm"} The RGS1 and RGS2 were operated in the standard spectroscopy mode resulting in an exposure time of $\sim 52$ ks. The spectra were extracted into uniform bins of $\sim$0.04 Å (which is about the RGS resolution and is 4 times the default bin width) in order to increase the signal-to-noise ratio (S/N). For the purpose of modeling narrow absorption lines, this rebinning method is better than the method used by Pounds et al. (2003a) of rebinning the spectrum to a minimum of 20 counts per bin, which distorts the spectrum, especially around low-count-rate absorption troughs. To flux calibrate the RGS spectra we divided the count spectrum of each instrument by its exposure time and its effective area at each wavelength. Each flux-calibrated spectrum was corrected for Galactic absorption and the two spectra were combined into an error-weighted mean. 
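For readers who want to reproduce the shape of the EPIC-pn model described above (power law times absorption edge, plus a Gaussian Fe K$\alpha$ line), a minimal sketch follows. The $E^{-3}$ scaling of the edge opacity above threshold is the standard phenomenological form and is an assumption of this sketch, not a statement from the fit; the parameter values are those quoted in the text.

```python
import math

def epicpn_model(e_kev, norm=6.6e-4, gamma=1.55,
                 e_line=6.04, sigma=0.096, line_flux=2.9e-6,
                 e_edge=6.72, tau=0.56):
    """Phenomenological EPIC-pn model sketch (observed frame):
    power law * absorption edge + Gaussian emission line.
    Flux units: ph cm^-2 s^-1 keV^-1; parameters from the fit in the text."""
    cont = norm * e_kev ** (-gamma)
    if e_kev >= e_edge:
        # photoelectric edge: opacity assumed to scale roughly as E^-3
        cont *= math.exp(-tau * (e_kev / e_edge) ** -3)
    gauss = (line_flux / (sigma * math.sqrt(2.0 * math.pi))
             * math.exp(-0.5 * ((e_kev - e_line) / sigma) ** 2))
    return cont + gauss

flux_3kev = epicpn_model(3.0)   # pure power-law region
flux_7kev = epicpn_model(7.0)   # above the edge: continuum suppressed
```

Evaluating the model just above the edge shows the continuum deficit that motivated adding the edge component in the first place.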
At wavelengths where the RGS2 bins did not match exactly the wavelength of the RGS1 bins, we interpolated the RGS2 data to enable the averaging. The sky-subtracted combined RGS spectrum has in total $\sim 8900$ counts and its S/N ranges from $\sim 2$ around 8 Å to $\sim 5$ around 18 Å with an average of 3. Statistics in the second order of diffraction are insufficient, hence we did not include it in our analysis. The combined RGS spectrum (RGS1 and RGS2) of [PG1211+143]{} is presented in Figure \[pgtrgs\]. Numerous absorption lines and several emission lines are detected. We identify K-shell lines of C, N, O and Mg and L-shell lines of O, Mg, Si, Ar, and Fe. The absorption line widths are consistent with the RGS resolution, and with the present S/N we are not able to resolve the intrinsic velocity widths. In emission, we identify significantly broadened lines of N[vii]{} Ly$\alpha$, O[viii]{} Ly$\alpha$, the forbidden line of O[vii]{} and its He$\alpha$ resonance line, the forbidden line of Ne[ix]{}, and the Mg[xi]{} He$\alpha$ resonance line, all in the rest frame of the source with no velocity shift. In order to quantitatively explore the emission and absorption lines, we have constructed a model for the entire RGS spectrum. The present method is an ion-by-ion fit to the data, similar to the approach used in Sako et al. (2001) and in Behar et al. (2003). We first use the continuum measured from the EPIC-pn data, but renormalized to the RGS flux level. This continuum is then absorbed using the full set of lines for each individual ion. Our absorption model includes the first 10 resonance lines of H- and He-like ions of C, N, O, Ne, and Mg as well as edges for these ions. The model also includes our own calculation for the L-shell absorption lines of Fe (Behar et al. 2001) as well as of Si, S, and Ar, corrected according to laboratory measurements (Lepson et al. 2003, 2005). 
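The flux calibration and error-weighted combination of RGS1 and RGS2 described above amount to the following per-bin operations (a minimal sketch; array handling, Galactic-absorption correction, and the bin interpolation are omitted):

```python
def flux_calibrate(counts, exposure_s, eff_area_cm2):
    """Counts -> flux density: divide by exposure time and effective area."""
    return counts / (exposure_s * eff_area_cm2)

def weighted_mean(flux1, err1, flux2, err2):
    """Error-weighted mean of two flux measurements in one wavelength bin
    (weights 1/err^2), as used to combine the RGS1 and RGS2 spectra."""
    w1, w2 = 1.0 / err1 ** 2, 1.0 / err2 ** 2
    mean = (w1 * flux1 + w2 * flux2) / (w1 + w2)
    err = (w1 + w2) ** -0.5
    return mean, err

# equal errors -> plain average; combined error shrinks by sqrt(2)
m, e = weighted_mean(2.0, 0.5, 4.0, 0.5)
```

With equal uncertainties the weighted mean reduces to the plain average and the combined error improves by a factor $\sqrt{2}$, which is the S/N gain motivating the combination.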
Finally, we include inner-shell K$\alpha$ absorption lines of O and Mg (Behar & Netzer 2002), which we detect in the spectrum. The absorbed spectrum is complemented by the emission lines mentioned above, which are observed in the RGS spectrum. By experimenting with the absorption line parameters, we find that the observed lines are all blueshifted by about 3000 km s$^{-1}$ with an uncertainty of 500 km s$^{-1}$. In the model we used a turbulence velocity of 1000 km s$^{-1}$ to broaden the absorption lines. This width includes the instrumental broadening, which, as noted above, we could not separate from the intrinsic broadening. Since the lines appear to be saturated, but no line goes to zero intensity in the trough, we obtain the best fit by assuming a covering factor of 0.7 for the X-ray continuum source. The best-fit column densities that we find for the different ions are consistent with a hydrogen column density of about $10^{21}$–$10^{22}$ cm$^{-2}$. The emission lines are modeled using Gaussians with uniform widths of $\sigma = 2500$ km s$^{-1}$ (resolved, but again, including the instrumental broadening), with no velocity shift, and assumed to be unabsorbed. These lines, with FWHM $\simeq 6000\pm 1200$ km s$^{-1}$, are even broader than those observed from the broad line region in the visible band ($\sim 2000$ km s$^{-1}$; Kaspi et al. 2000). The entire best-fit spectrum is shown in Figure \[pgtrgs\] (red curve). The spectrum beyond 25 Å is particularly challenging, as it comprises many unresolved lines from L-shell ions of Si, S, and Ar while the RGS effective area drops rapidly. Several predicted lines may be observed here (e.g., Ar[xiii]{} 28.92 Å, Si[xii]{} 30.71 Å, Ar[xii]{} 31.06 Å, S[xiii]{} 31.93 Å; these wavelengths include the 3000 km s$^{-1}$ shift). 
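A minimal sketch of how a single saturated, blueshifted absorption line behaves under the partial-covering picture described above. The line-centre optical depth `tau0` is an illustrative choice (not a fitted value), and the O[vii]{} example wavelength is just a convenient strong resonance line; the key point is that the trough bottoms out at the uncovered fraction $1-C = 0.3$ rather than at zero.

```python
import math

C_KMS = 2.99792458e5  # speed of light [km/s]

def transmission(lam, lam_rest, v_out_kms=3000.0, v_turb_kms=1000.0,
                 tau0=5.0, cov=0.7):
    """Partial-covering transmission for one absorption line:
    T = (1 - cov) + cov * exp(-tau).  The line centre is blueshifted by
    the outflow velocity and broadened by the turbulent velocity.
    tau0 is an illustrative line-centre optical depth (assumption)."""
    lam_c = lam_rest * (1.0 - v_out_kms / C_KMS)   # blueshifted centre
    dlam = lam_rest * v_turb_kms / C_KMS           # Doppler width
    tau = tau0 * math.exp(-0.5 * ((lam - lam_c) / dlam) ** 2)
    return (1.0 - cov) + cov * math.exp(-tau)

lam0 = 21.602  # O VII He-alpha resonance line [Angstrom]
core = transmission(lam0 * (1.0 - 3000.0 / C_KMS), lam0)  # trough bottom
```

At the shifted line centre the transmission saturates near $1-C\approx 0.3$, reproducing the observation that no line goes to zero intensity in the trough.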
We are still unable to explain several features seen in the data, e.g., around 8.5 Å, 10.4 Å, or 29.8 Å, but the model gives a good fit to the data overall. Conclusions - First Observation ------------------------------- We have provided a self-consistent model of the ionized outflow of [PG1211+143]{}, revealing an outflow velocity of approximately 3000 km s$^{-1}$. Our model reproduces many absorption lines in the RGS band, although the S/N of the present data set is rather poor and some of the noise might be confused with absorption lines. The present approach is distinct from the commonly used global fitting methods and also from the line-by-line approach used by Pounds et al. (2003a). It allows for a physically consistent fit to the spectrum and is particularly appropriate for a broad-ionization-distribution absorber as observed here for [PG1211+143]{}. The present model also features several broad (FWHM = 6000 km s$^{-1}$) emission lines, which are observed directly in the data. A broad and relatively flat ionization distribution is found throughout the X-ray outflow, consistent with a hydrogen column density of roughly $10^{21}$–$10^{22}$ cm$^{-2}$. This is reminiscent of the outflow parameters measured in other well studied Seyfert galaxies. We also detect Fe-K absorption, which was identified by Pounds et al. (2003a, 2005) as a strongly blueshifted Fe[xxvi]{} absorption line. We find that most of the Fe-K opacity can alternatively be attributed to several consecutive, low charge states of Fe, although it cannot be assessed whether the absorber is co-moving with the outflow or not. Future missions with microcalorimeter spectrometers on board might be able to address this interesting question. 
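As a back-of-the-envelope illustration of the kinematics discussed in this section: observed energies are converted to the source rest frame with the redshift of PG1211+143 ($z\approx 0.081$, an assumed value here), and identifying a rest-frame feature with a laboratory transition implies an outflow velocity. The 7.04 keV input below is a hypothetical example energy, chosen only to show the $\sim$1% blueshift scale corresponding to a 3000 km s$^{-1}$ flow.

```python
Z_SRC = 0.081          # redshift of PG1211+143 (assumed value)
C_KMS = 2.99792458e5   # speed of light [km/s]

def to_rest_frame(e_obs_kev, z=Z_SRC):
    """Observed -> source rest-frame energy."""
    return e_obs_kev * (1.0 + z)

def outflow_velocity_kms(e_rest_kev, e_lab_kev):
    """Non-relativistic outflow velocity implied by identifying a
    rest-frame feature at e_rest_kev with a lab transition at e_lab_kev."""
    return C_KMS * (e_rest_kev / e_lab_kev - 1.0)

edge_rest = to_rest_frame(6.72)           # 6.72 keV observed -> ~7.27 keV rest
v_mild = outflow_velocity_kms(7.04, 6.97)  # ~1% blueshift -> ~3000 km/s
```

The first line reproduces the observed-to-rest-frame conversion of the edge threshold quoted in the EPIC-pn section; the second shows how small an energy shift a 3000 km s$^{-1}$ outflow actually produces, which is why line identifications are so sensitive to the S/N.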
A Second Observation of [PG1211+143]{} ====================================== ![image](2rgs.eps){width="16.2cm"} A second [[*XMM-Newton*]{}]{} observation of [PG1211+143]{} for $\sim 50$ ks was carried out on 2004 June 21, three years after the first observation of 2001 June 15, which is described above. We have retrieved the data of this second observation from the [[*XMM-Newton*]{}]{} archive and reduced them in exactly the same way as described above for the first observation. The data of the second RGS observation are plotted (red line) in Figure \[2rgs\] over the first observation (black line). The broad-band spectra of the two observations are generally consistent. However, not all narrow features are consistently reproduced. The total flux in the RGS band is the same in the two observations, though the continuum slope of the second observation is somewhat harder. When inspecting the detailed narrow features in the spectrum, some have changed while others remain the same. For example, the second observation shows features which appear to be emission lines around 8 Å where the first observation had absorption lines. Also, around 15 Å the absorption lines seem to have disappeared in the second observation. Conversely, some features are the same in the two spectra, for example, the emission O[vii]{} triplet around 22 Å and the Ne[ix]{} triplet around 13.5 Å. From Figure \[2rgs\], it can be seen that due to the poor S/N in both spectra, it is extremely hard to determine whether the differences between the two spectra are real, or a mere result of the data’s poor S/N. Simultaneous observations of [[*XMM-Newton*]{}]{} and [*Chandra*]{} =================================================================== Simultaneously with the 2004 June 21 [[*XMM-Newton*]{}]{} observation, [PG1211+143]{} was also observed with the Low Energy Transmission Grating (LETG) on board the [*Chandra*]{} X-ray observatory. 
The LETG observation of $\sim 45$ ks made use of the ACIS CCDs as the detector. We retrieved the data of this observation from the [*Chandra*]{} archive and reduced them using CIAO 3.2.1 and CALDB version 3.01, according to the updated CIAO threads. We have combined the +1 and -1 orders of the LETG spectrum using a weighted mean, and the combined spectrum is represented in Figure \[rgs\_letg\] by a black line. The simultaneous RGS observation (the red data in Figure \[2rgs\]) is shown in red in Figure \[rgs\_letg\]. Although the simultaneous data from the two X-ray observatories are consistent overall, they differ in many details. For example, the RGS data between 7 and 9 Å show several emission-like features which the LETG data do not. Also, around 16.4 Å the LETG data show absorption-like features which are not present in the RGS data. These differences are consistent to within about 3$\sigma$ and are probably a result of the poor S/N of the observations. After the first $\sim$45 ks LETG observation of [PG1211+143]{} on 2004 June 21, which is described above, there were two more observations in consecutive orbits of the [*Chandra*]{} observatory. The second $\sim 45$ ks observation took place on 2004 June 23 and the third observation was on 2004 June 25. These data are not presented here, but their spectra are in overall agreement with the first LETG observation, [*except*]{} that the flux level in the last two observations was twice that of the first observation, i.e., during a period of $\sim 2$ days the flux level doubled. This is somewhat unexpected for a source that had retained its flux level of three years earlier (see Figure \[2rgs\]). Besides the change in flux between the three LETG observations, there is also a change in the absorption features seen between the first observation and the other two. 
Some of these features have disappeared between the first low-flux observation and the high-flux observations taken 2 days later, while other features seem to appear. The absorption features interpreted by Reeves et al. (2005) as evidence for sub-$c$ gravitational infall are seen only in the second observation and not in the first or third ones. The fact that, again, absorption features are not reproduced in different spectra is rather confusing. If these lines are statistically significant (see Reeves et al. 2005) then they represent a transient flow. Summary and Conclusions ======================= We claim in Kaspi & Behar (2006) that an outflowing absorber at a velocity of 3000 km s$^{-1}$ fits the first (2001) RGS data of [PG1211+143]{} better than a 24000 km s$^{-1}$ model. Admittedly though, the poor S/N of those data can tolerate more than one interpretation. A second RGS observation taken three years after the first shows general consistency with the first observation, but differs in important details of the absorption lines relevant to the outflow. Some features that appear in the first RGS observation disappear in the second one and vice versa. This could be a result of either short-time variability of the absorber (almost impossible to prove or refute) or the poor S/N of the data. Even more confusing is the fact that [*simultaneous*]{} observations of [PG1211+143]{} with RGS and LETG produce spectra that are partially incompatible in their absorption lines. This significantly reduces our confidence in the existence of the absorption lines, and even more so in their identification. The poor S/N of the data calls for extra caution and careful modeling. The three LETG observations indicate that the continuum source changes on a timescale of days. If the discrete features seen in these spectra are real, they too vary on short timescales. 
With the loss of the high-resolution X-ray spectrometer (XRS) on board [*Astro-E2*]{}, a very long observation of a good, bright UHV-wind source with [*Chandra*]{} or [[*XMM-Newton*]{}]{} gratings remains the most viable approach toward testing what we feel is still a putative phenomenon of high-velocity outflow. Acknowledgments {#acknowledgments .unnumbered} =============== This research was supported by the Israel Science Foundation (grant no. 28/03), and by a Zeff fellowship to S.K. ![image](rgs_letg.eps){width="17cm"} Behar, E., Cottam, J. C., & Kahn, S. M. 2001, , 548, 966 Behar, E., & Netzer, H. 2002, , 570, 165 Behar, E., Rasmussen, A. P., Blustin, A. J., Sako, M., Kahn, S. M., Kaastra, J. S., Branduardi-Raymont, G., & Steenbrugge, K. C. 2003, , 598, 232 Chartas, G., Brandt, W. N., Gallagher, S. C., & Garmire, G. P. 2002, , 579, 169 Chartas, G., Brandt, W. N., & Gallagher, S. C. 2003, , 595, 85 Crenshaw, D. M., Kraemer, S. B., & George, I. M. 2003, , 41, 117 Hasinger, G., Schartel, N., & Komossa, S. 2002, , 573, L77 Kaspi, S., & Behar, E. 2006, ApJ, 636, in press Kaspi, S., Smith, P. S., Netzer, H., Maoz, D., Jannuzi, B. T., & Giveon, U. 2000, , 533, 631 King, A. R., & Pounds, K. A. 2003, , 345, 657 Lepson, J. K., Beiersdorfer, P., Behar, E., & Kahn, S. M. 2003, , 590, 604 Lepson, J. K., Beiersdorfer, P., Behar, E., & Kahn, S. M. 2005, , 625, 1045 Pounds, K. A., Reeves, J. N., King, A. R., Page, K. L., O’Brien, P. T., & Turner, M. J. L. 2003a, , 345, 705 Pounds, K. A., King, A. R., Page, K. L., & O’Brien, P. T. 2003b, , 346, 1025 Pounds, K. A., Reeves, J. N., King, A. R., & Page, K. L. 2004, , 350, 10 Pounds, K. A., Reeves, J. N., King, A. R., Page, K. L., O’Brien, P. T., & Turner, M. J. L. 2005, , 356, 1599 Reeves, J. N., O’Brien, P. T., & Ward, M. J. 2003, , 593, 65 Reeves, J. N., Pounds, K., Uttley, P., Kraemer, S., Mushotzky, R., Yaqoob, T., George, I. M., & Turner, T. J. 2005, ApJL, 633, L81 Sako, M., et al. 
2001, , 365, L168 [^1]: Though Hasinger et al. (2002), using a different instrument, prefer a more conservative interpretation by which the X-ray wind is much slower and consistent with the well known UV broad absorption line wind of that source, outflowing at velocities of up to 12000 km s$^{-1}$.
--- author: - | G. Cullen\ Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany\ E-mail: - | H. van Deurzen, N. Greiner, G. Heinrich, G. Luisoni, E. Mirabella, T. Peraro, J. Reichel, J. Schlenk, J.F. von Soden-Fraunhofen\ Max Planck Institute for Physics, Föhringer Ring 6, 80805 Munich, Germany\ E-mail: - | P. Mastrolia\ Max Planck Institute for Physics, Föhringer Ring 6, 80805 Munich, Germany;\ Dipartimento di Fisica e Astronomia, Università di Padova, and INFN Sezione di Padova, via Marzolo 8, 35131 Padova, Italy\ E-mail: - | \ Physics Department, New York City College of Technology, The City University of New York, 300 Jay Street Brooklyn, NY 11201, USA;\ The Graduate School and University Center, The City University of New York, 365 Fifth Avenue, New York, NY 10016, USA\ E-mail: - | F. Tramontano\ Dipartimento di Scienze Fisiche, Università degli studi di Napoli and INFN, Sezione di Napoli, 80125 Napoli, Italy\ E-mail: title: NLO QCD Production of Higgs plus jets with GoSam --- Introduction ============ The large amount of data accumulated by the experimental collaborations at the Large Hadron Collider (LHC) allowed for a very detailed investigation of the Standard Model (SM) of particle physics. Moreover, the discovery of a Higgs boson with mass of about 126 GeV [@Aad:2012tfa] finally confirmed the validity of the electroweak symmetry breaking mechanism [@Englert:1964et]. In all these analyses, for example to further study the properties of the recently discovered Higgs boson, theory predictions play a fundamental role. They are not only needed for the signal, but also for the modeling of the relevant background processes, which share similar experimental signatures. Further, precise theory predictions are important in order to constrain model parameters in the event that a signal of New Physics is detected. 
Since leading-order (LO) results are affected by large uncertainties, theory predictions are not reliable without accounting for higher orders. Therefore, it is of primary interest to provide theoretical tools which are able to perform the comparison of LHC data to theory at NLO accuracy. In the past few years, the progress in the automation of NLO calculations for multi-particle final states has been tremendous and led to the so-called “NLO revolution” [@Salam:2011bj]. Several automated frameworks for one-loop calculations [@Berger:2008sj] have been presented, which are based on various new theoretical developments [@Ellis:2011cr]. It is fascinating to witness the number and quality of advanced automated NLO calculations that have been performed with different techniques. In this presentation, we review the main features of the [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} framework [@Cullen:2011ac] for the automated computation of one-loop amplitudes. [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} has recently been employed in several calculations at NLO QCD accuracy [@Greiner:2011mp; @Greiner:2012im; @vanDeurzen:2013rv; @Gehrmann:2013aga; @Gehrmann:2013bga; @Cullen:2013saa; @vanDeurzen:2013xla; @Dolan:2013rja] related to signal and backgrounds for Higgs boson production, as well as in the context of Beyond Standard Model (BSM) scenarios [@Cullen:2012eh; @Greiner:2013gca] and electroweak studies [@Chiesa:2013yma], and has been successfully interfaced with Monte Carlo programs to merge multiple NLO matrix elements with parton showers [@Luisoni:2013cuh; @Hoeche:2013mua]. We also briefly describe a selection of recent phenomenological results obtained with [[<span style="font-variant:small-caps;">GoSam</span>]{}]{}, with particular attention to the recent calculations of NLO QCD corrections to the production of a Higgs boson in conjunction with jets at the LHC. 
The [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} framework ========================================================================= [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} combines automated diagram generation and algebraic manipulation [@Nogueira:1991ex; @Vermaseren:2000nd; @Reiter:2009ts; @Cullen:2010jv] with integrand-level reduction techniques [@Ossola:2006us]. Amplitudes are generated via Feynman diagrams, using [[`QGRAF`]{}]{} [@Nogueira:1991ex], [[`FORM`]{}]{} [@Vermaseren:2000nd], [[`spinney`]{}]{} [@Cullen:2010jv] and [[`haggies`]{}]{} [@Reiter:2009ts]. The individual program tasks are managed by python scripts, so that the only task required from the user is the preparation of an input file in order to launch the generation of the source code and its compilation, without having to worry about the internal details. The input file contains specific information about: i) the [*process*]{}, such as a list of initial and final state particles, the order in the coupling constants, and the model; ii) the [*scheme*]{} employed, such as the regularization and renormalization schemes; iii) the [*system*]{}, such as paths to libraries or compiler options; iv) optional information to control the code generation. After the generation of all contributing diagrams, the virtual corrections are evaluated using the $d$-dimensional integrand-level reduction method, as implemented in the [[<span style="font-variant:small-caps;">Samurai</span>]{}]{} library [@Mastrolia:2010nb], which allows for the combined determination of cut-constructible and rational terms at once. Alternatively, the tensorial decomposition provided by [[`Golem95C`]{}]{} [@Binoth:2008uq; @Heinrich:2010ax; @Cullen:2011kv] is also available. Such reduction, which is numerically stable but more time consuming, is employed as a rescue system. After the reduction, all relevant master integrals can be computed by means of [[`Golem95C`]{}]{} [@Cullen:2011kv], [[`FF`]{}]{} [@vanOldenborgh:1990yc], or [[`OneLOop`]{}]{} [@vanHameren:2010cp]. 
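As an illustration of the input file described above, a gluon-fusion $H+\mathrm{jet}$ setup might look roughly as follows. This is a schematic sketch only: the keyword names and values below are assumptions made for illustration and should be checked against the [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} documentation rather than copied verbatim.

```ini
# schematic GoSam-style process card (illustrative; syntax not guaranteed)
process_name = ggHg
process_path = /path/to/generated/code   # (iii) system information
in    = g, g                             # (i) process: initial state
out   = H, g                             #     final state
order = gs, 2, 4                         #     LO ~ gs^2, virtual ~ gs^4
model = smehc                            #     SM with effective Hgg vertex
regularisation_scheme = dred             # (ii) scheme information
```

The three blocks mirror items i)–iii) of the list above; everything else is filled in by the python generation scripts.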
As a novel approach to the integrand reduction, the method proposed in [@Mastrolia:2012bu] allows for the extraction of all the coefficients in the integrand decomposition by performing a Laurent expansion, whenever the analytic form of the numerator function is known. This method has been implemented, within the [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} framework, in the C++ library [[`Ninja`]{}]{}, showing an improvement in the computational performance, both in terms of speed and precision, with respect to the standard algorithms. More details are provided in the talk of T. Peraro at this conference [@TizianoEPS]. The new library has recently been employed in the evaluation of NLO QCD corrections to $p p \to t {\bar t} H j $ [@vanDeurzen:2013xla]. [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} can be used to generate and evaluate one-loop corrections in both QCD and electroweak theory. Model files for BSM theories can be generated from a Universal FeynRules Output (`UFO`) [@Christensen:2008py] or `LanHEP` [@Semenov:2010qt] file. #### Code development {#code} New features have been recently implemented within [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} with respect to the current public version. In order to deal with the complexity level of calculations such as $pp\to Hjjj$ [@Cullen:2013saa], the code has been enhanced. On the one side, the generation algorithm has been improved by a more efficient diagrammatic layout: Feynman diagrams are grouped according to their topologies, namely global numerators are constructed by combining diagrams that have a common set, or subset, of denominators, irrespective of the specific particle content. On the other side, additional improvements in the performance of [GoSam]{} have been achieved by exploiting the optimized manipulation of polynomial expressions available in [Form 4.0]{} [@Kuipers:2012rf]. 
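For orientation, the integrand-level algorithms discussed above all target the standard decomposition of a one-loop amplitude onto a basis of scalar integrals,

```latex
\mathcal{A}^{\text{1-loop}}
  = \sum_{i<j<k<l} d_{ijkl}\, I_{ijkl}
  + \sum_{i<j<k} c_{ijk}\, I_{ijk}
  + \sum_{i<j} b_{ij}\, I_{ij}
  + \sum_{i} a_{i}\, I_{i}
  + R \, ,
```

where $I_{ij\ldots}$ denote scalar box, triangle, bubble and tadpole integrals and $R$ is the rational term; the Laurent-expansion method determines the coefficients $d_{ijkl}, c_{ijk}, b_{ij}, a_i$ from the asymptotic behaviour of the integrand on the corresponding multiple cuts.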
The possibility of employing numerical polarization vectors and the option to sum diagrams sharing the same propagators algebraically during the generation of the code led to an enormous gain in code generation time and a reduction of code size. Concerning the amplitude reduction, aside from the already mentioned new integrand-level reduction via Laurent expansion [@Mastrolia:2012bu], [GoSam]{} has been enhanced to reduce integrands that may exhibit numerators with rank larger than the number of denominators. This is indeed the case in the presence of effective couplings [@vanDeurzen:2013rv; @Cullen:2013saa], which appear in the large top-mass approximation, or when dealing with spin-2 particles [@Greiner:2013gca]. For these cases, within the context of integrand-reduction techniques, the parametrization of the residues at the multiple cuts has to be extended and the decomposition of any one-loop amplitude acquires new master integrals [@Mastrolia:2012bu]. The extended integrand decomposition has been implemented in the [[<span style="font-variant:small-caps;">Samurai</span>]{}]{} library [@Mastrolia:2012du]. The new developments regarding the improved generation and reduction algorithms will be publicly available in the [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} 2.0 release, which is currently in preparation. #### Interfacing with MC and BLHA The computation of physical observables at NLO accuracy, such as cross sections and differential distributions, requires combining the one-loop results for the virtual amplitudes obtained with [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} with other tools that can take care of the computation of the real-emission contributions and of the subtraction terms needed to control the cancellation of IR singularities. This can be achieved by embedding the calculation of the virtual corrections within a Monte Carlo (MC) framework, which takes care of the phase-space integration and of the combination of the different pieces of the calculation. 
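Schematically, the pieces that the MC framework has to assemble follow the standard NLO subtraction pattern,

```latex
\sigma^{\mathrm{NLO}}
 = \int_{m} d\sigma^{B}
 + \int_{m} \Big[ d\sigma^{V} + \int_{1} d\sigma^{A} \Big]
 + \int_{m+1} \Big[ d\sigma^{R} - d\sigma^{A} \Big] \, ,
```

where $d\sigma^{B}$, $d\sigma^{V}$ and $d\sigma^{R}$ are the Born, virtual and real-emission contributions, and the subtraction term $d\sigma^{A}$ renders the $m$-parton and $(m{+}1)$-parton integrals separately finite; the one-loop program supplies $d\sigma^{V}$, while the MC provides everything else.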
A table with a comprehensive list of interfaces with MC programs has been recently presented in [@Cullen:2013cka]. In order to facilitate the communication between the programs computing virtual one-loop amplitudes and the MC frameworks, a standard interface called the *Binoth Les Houches Accord* (BLHA) [@Binoth:2010xt] has been designed. Within the BLHA, the interaction between the One-loop Program (OLP) and the Monte Carlo framework (MC) proceeds in two phases. During the first phase, called the pre-runtime phase, the MC creates an order file, which contains information about the setup and the subprocesses needed from the OLP in order to perform the computation. The OLP reads the order file, checks the availability of each item, and returns a contract file telling the MC what it can provide. In the second stage, the MC requires from the OLP the values of the virtual one-loop amplitudes at specific phase-space points. Higgs boson production in Gluon Fusion ====================================== At the LHC, the dominant Higgs production mechanism proceeds via gluon fusion (GF), where the coupling of the Higgs to the gluons is mediated by a heavy-quark loop. For this reason, the calculation of higher-order corrections for the GF production of a Higgs boson in association with jets has received a lot of attention in the theory community over the past decade [@Dittmaier:2011ti]. The developments in [[<span style="font-variant:small-caps;">GoSam</span>]{}]{}, described in Section \[code\], allowed us to compute the NLO QCD corrections to the production of $H+2$ jets [@vanDeurzen:2013rv] and, for the first time, also $H+3$ jets [@Cullen:2013saa] in GF (in the large top-mass limit). 
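The large top-mass limit mentioned above amounts to replacing the heavy-quark loop by a point-like $Hgg$ vertex, governed by the effective Lagrangian (normalization conventions vary by an overall factor):

```latex
\mathcal{L}_{\mathrm{eff}}
 = -\,\frac{\alpha_s}{12\pi}\,\frac{H}{v}\,
   G^{a}_{\mu\nu}\, G^{a,\mu\nu} \, ,
```

with $v$ the Higgs vacuum expectation value and $G^{a}_{\mu\nu}$ the gluon field strength. Such effective vertices carry extra powers of the loop momentum, which is precisely why integrands with numerator rank larger than the number of propagators arise in these calculations.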
While a fully automated BLHA interface between [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} and [[<span style="font-variant:small-caps;">Sherpa</span>]{}]{} [@Gleisberg:2008ta] has been used for $pp \to H+2 j$, the complexity of the integration for the process $pp \to H+3 j$ forced us to employ a hybrid setup which combines [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} and the MadDipole/MadGraph4/MadEvent framework [@Frederix:2008hu]. This calculation is indeed challenging both on the side of the real-emission contributions and of the virtual corrections, which alone involve more than $10,000$ one-loop Feynman diagrams with up to rank-seven hexagons. In the calculation of $H+3$ jets the cteq6L1 and cteq6mE parton-distribution functions were used at LO and NLO, respectively, and a minimal set of cuts based on the anti-$k_T$ jet algorithm with $R=0.5$, $p_{T,min}>20$ GeV and $\left|\eta\right|<4.0$ was applied. Figure \[fig:H3j\] shows the $p_T$ distributions of the three jets and of the Higgs boson, respectively. The NLO corrections enhance all distributions for $p_T$ values lower than $150-200$ GeV, whereas their contribution is negative at higher $p_T$. This study also shows that the virtual contributions for $pp \to Hjjj$ generated by [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} are ready to be paired with available Monte Carlo programs for further phenomenological studies. Other Phenomenological results ============================== #### Diphotons+jets [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} in combination with MadDipole/MadGraph4/MadEvent has been used to calculate the NLO QCD corrections to $pp\to\gamma\gamma +1$ jet [@Gehrmann:2013aga] and $pp\to\gamma\gamma +2$ jets [@Gehrmann:2013bga], where the former also includes the fragmentation component. This calculation allowed for a first reliable prediction of the absolute normalization of this process, and demonstrated that the shape of important kinematical distributions is modified by higher-order effects. 
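The anti-$k_T$ clustering used in the $H+3$ jets selection above is driven by the distance measures of the algorithm; a minimal sketch of those measures follows (illustration only, not the production implementation used in the calculation):

```python
import math

def antikt_distances(particles, R=0.5):
    """Anti-kT distance measures:
    d_ij = min(1/kt_i^2, 1/kt_j^2) * DeltaR_ij^2 / R^2 ,  d_iB = 1/kt_i^2.
    particles: list of (pt, rapidity, phi) tuples.
    The smallest distance decides the next clustering step."""
    def dphi(p1, p2):
        d = abs(p1 - p2) % (2.0 * math.pi)
        return min(d, 2.0 * math.pi - d)
    dij = {}
    for i, (pti, yi, phii) in enumerate(particles):
        for j, (ptj, yj, phij) in enumerate(particles):
            if i < j:
                dr2 = (yi - yj) ** 2 + dphi(phii, phij) ** 2
                dij[(i, j)] = min(pti ** -2, ptj ** -2) * dr2 / R ** 2
    diB = {i: pt ** -2 for i, (pt, _, _) in enumerate(particles)}
    return dij, diB

# two nearby hard particles and one distant soft one
dij, diB = antikt_distances([(100.0, 0.0, 0.0),
                             (80.0, 0.1, 0.1),
                             (5.0, 2.0, 2.0)])
```

Because the measure weights pairs by the *harder* momentum inversely, the two hard, nearby particles cluster before anything involving the soft one, which is what makes anti-$k_T$ jets cone-like and infrared safe.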
#### Beyond the Standard Model [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} has been used to calculate the NLO Susy-QCD corrections to the production of a pair of the lightest neutralinos plus one jet at the LHC at $8$ TeV, which appears as a monojet signature in combination with missing energy. All non-resonant diagrams have been fully included, namely without using the assumption that production and decay factorize. We observe that the NLO corrections to the missing transverse energy are large, mainly due to additional channels opening up at NLO. The detailed setup can be found in [@Cullen:2012eh]. Another recent BSM result obtained with [[<span style="font-variant:small-caps;">GoSam</span>]{}]{}+MadDipole/MadGraph4/MadEvent is the calculation of NLO QCD corrections to the production of a graviton in association with one jet [@Greiner:2013gca], where the graviton decays into a photon pair, within ADD models of large extra dimensions [@ArkaniHamed:1998rs]. The calculation is quite complicated due to the tensor structure introduced by spin-2 particles, and the non-standard propagator of the graviton, coming from the summation over Kaluza-Klein modes. It is interesting to note that the $K$-factors of the invariant mass distribution of the photon pair emitted in the decay of the graviton are not uniform. Since the latter is used to derive exclusion limits, it is advisable to take into account the effect of NLO corrections. For details we refer to [@Greiner:2013gca]. #### Higgs+Vector Boson+jet [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} was interfaced with the POWHEG BOX to compute the associated production of a Higgs boson, a vector boson, and one jet [@Luisoni:2013cuh]. In this calculation, the improved MiNLO procedure [@Hamilton:2012rf] was used to obtain NLO-accurate predictions also when the jet is not resolved. Using this interface, the generation of any process is fully automated, except for the construction of the Born phase space. 
#### NLO QCD corrections to $p p \to t {\bar t} Hj $ The production rate for a Higgs boson associated with a top-antitop pair ($t \bar t H$) is particularly interesting for studying the properties of the newly discovered Higgs boson, since it is directly proportional to the SM Yukawa coupling of the Higgs boson to the top quark. We recently presented the complete NLO QCD corrections to the process $ pp \to t \bar t H + 1$ jet ($t \bar t H j$) at the LHC [@vanDeurzen:2013xla]. The goal of the calculation was twofold. On the one hand, it is important for the phenomenological analyses at the LHC, in particular for the high-$p_T$ region, where the presence of the additional jet can be relevant. On the other hand, from the technical point of view, due to the presence of two mass scales (the Higgs boson and the top quark) and internal massive particles, together with a high number of diagrams, $p p \to t {\bar t} H j $ constitutes a challenge for many reduction algorithms. This calculation represents the first application of the novel reduction algorithm, implemented in the library [[`Ninja`]{}]{}, based on integrand-level reduction via Laurent expansion [@Mastrolia:2012bu]. Conclusions =========== [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} is a flexible and widely applicable tool for the automated calculation of the virtual part of multi-particle scattering amplitudes at NLO accuracy. After interfacing it with MC programs, which perform the integration over phase space and combine the contributions coming from real emission and subtraction terms, total cross sections and differential distributions can be easily obtained for a variety of processes of interest at the LHC. 
Boosted by state-of-the-art techniques for the reduction of the scattering amplitudes, [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} provides a reliable answer for multi-leg amplitudes in the presence of massive internal and external legs and propagators, such as the production of a Higgs boson in conjunction with a top-quark pair, as well as in configurations with relatively high multiplicity, such as Higgs boson plus jets production. While the [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} code will be further improved, it will be interesting to observe whether the attempt to extend integrand-level techniques to higher orders [@Mastrolia:2011pr] will succeed and provide a comparable level of automation, at least for the calculation of the virtual parts. Other challenges for the near future involve interfacing [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} with MC programs for an automated generation of the full cross section including parton showering, and ultimately the production of codes and results to be used within experimental analyses. We believe that the number of recent calculations produced with the [[<span style="font-variant:small-caps;">GoSam</span>]{}]{} framework shows, both in terms of stability and precision, that it is an ideal multi-purpose tool for studying the physics of the LHC. Acknowledgments {#acknowledgments .unnumbered} --------------- The work of G.C. was supported by DFG Sonderforschungsbereich Transregio 9, Computergestützte Theoretische Teilchenphysik, and by the Research Executive Agency (REA) of the European Union under Grant Agreement number PITN-GA-2010-264564 (LHCPhenoNet). H.v.D., G.L., P.M., and T.P. are supported by the Alexander von Humboldt Foundation, in the framework of the Sofja Kovalevskaja Award Project “Advanced Mathematical Methods for Particle Physics”, endowed by the German Federal Ministry of Education and Research. The work of G.O. 
was supported in part by the National Science Foundation under Grant PHY-1068550 and PSC-CUNY Award No. 65188-00 43. This research work benefited from computing resources from the Rechenzentrum Garching and the CTP cluster of the New York City College of Technology. Aad G [*et al.*]{} (ATLAS Collaboration) 2012 [*Phys.Lett.*]{} [**B716**]{} 1–29; Chatrchyan S [*et al.*]{} (CMS Collaboration) 2012 [*Phys.Lett.*]{} [ **B716**]{} 30–61 Englert F and Brout R 1964 [*Phys.Rev.Lett.*]{} [**13**]{} 321–323; Higgs P W 1964 [*Phys.Lett.*]{} [**12**]{} 132–133 Salam G P 2010 [*PoS*]{} [**ICHEP2010**]{} 556; Perret-Gallix D 2013 [*J.Phys.Conf.Ser.*]{} [**454**]{} 012051 Berger C, Bern Z, Dixon L, Febres Cordero F, Forde D [*et al.*]{} 2008 [ *Phys.Rev.*]{} [**D78**]{} 036003; Bevilacqua G, Czakon M, Garzelli M, van Hameren A, Kardos A [*et al.*]{} 2013 [*Comput.Phys.Commun.*]{} [**184**]{} 986–997; Hirschi V, Frederix R, Frixione S, Garzelli M V, Maltoni F [*et al.*]{} 2011 [*JHEP*]{} [**1105**]{} 044; Cascioli F, Maierhofer P and Pozzorini S 2012 [*Phys.Rev.Lett.*]{} [**108**]{} 111601; Agrawal S, Hahn T and Mirabella E 2012 [*PoS*]{} [**LL2012**]{} 046; Badger S, Biedermann B, Uwer P and Yundin V 2013 [*Comput.Phys.Commun.*]{} [**184**]{} 1981–1998; Actis S, Denner A, Hofer L, Scharf A and Uccirati S 2013 [*JHEP*]{} [ **1304**]{} 037 Cullen G, Greiner N, Heinrich G, Luisoni G, Mastrolia P [*et al.*]{} 2012 [*Eur.Phys.J.*]{} [**C72**]{} 1889 Ellis R K, Kunszt Z, Melnikov K and Zanderighi G 2012 [*Phys.Rept.*]{} [ **518**]{} 141–250; Bern Z, Dixon L and Kosower D 2007 [*Annals Phys.*]{} [**322**]{} 1587; Ossola G 2013 (*Preprint* ) Greiner N, Guffanti A, Reiter T and Reuter J 2011 [*Phys.Rev.Lett.*]{} [ **107**]{} 102002 Greiner N, Heinrich G, Mastrolia P, Ossola G, Reiter T [*et al.*]{} 2012 [ *Phys.Lett.*]{} [**B713**]{} 277–283 van Deurzen H, Greiner N, Luisoni G, Mastrolia P, Mirabella E [*et al.*]{} 2013 
[*Phys.Lett.*]{} [**B721**]{} 74–81 Gehrmann T, Greiner N and Heinrich G 2013 [*JHEP*]{} [**1306**]{} 058 Gehrmann T, Greiner N and Heinrich G 2013 (*Preprint* ) Cullen G, van Deurzen H, Greiner N, Luisoni G, Mastrolia P [*et al.*]{} 2013 [*Phys.Rev.Lett.*]{} [**111**]{} 131801 van Deurzen H, Luisoni G, Mastrolia P, Mirabella E, Ossola G [*et al.*]{} 2013 (*Preprint* ) Dolan M J, Englert C, Greiner N and Spannowsky M 2013 (*Preprint* ) Cullen G, Greiner N and Heinrich G 2013 [*Eur.Phys.J.*]{} [**C73**]{} 2388 Greiner N, Heinrich G, Reichel J and von Soden-Fraunhofen J F 2013 (*Preprint* ) Chiesa M, Montagna G, Barze‘ L, Moretti M, Nicrosini O [*et al.*]{} 2013 [ *Phys.Rev.Lett.*]{} [**111**]{} 121801; Mishra K, Becher T, Barze L, Chiesa M, Dittmaier S [*et al.*]{} 2013 (*Preprint* ) Luisoni G, Nason P, Oleari C and Tramontano F 2013 (*Preprint* ) Hoeche S, Huang J, Luisoni G, Schoenherr M and Winter J 2013 [*Phys.Rev.*]{} [**D88**]{} 014040 Nogueira P 1993 [*J.Comput.Phys.*]{} [**105**]{} 279–289 Vermaseren J A M 2000 (*Preprint* ) Reiter T 2010 [*Comput.Phys.Commun.*]{} [**181**]{} 1301–1331 Cullen G, Koch-Janusz M and Reiter T 2011 [*Comput.Phys.Commun.*]{} [**182**]{} 2368–2387 Ossola G, Papadopoulos C G and Pittau R 2007 [*Nucl.Phys.*]{} [**B763**]{} 147–169; Ossola G, Papadopoulos C G and Pittau R 2007 [*JHEP*]{} [**0707**]{} 085; Ellis R K, Giele W T and Kunszt Z 2008 [*JHEP*]{} [**03**]{} 003; Ossola G, Papadopoulos C G and Pittau R 2008 [*JHEP*]{} [**0805**]{} 004; Mastrolia P, Ossola G, Papadopoulos C and Pittau R 2008 [*JHEP*]{} [**0806**]{} 030 Mastrolia P, Mirabella E and Peraro T 2012 [*JHEP*]{} [**1206**]{} 095 Mastrolia P, Ossola G, Reiter T and Tramontano F 2010 [*JHEP*]{} [**1008**]{} 080 Binoth T, Guillet J P, Heinrich G, Pilon E and Reiter T 2009 [ *Comput.Phys.Commun.*]{} [**180**]{} 2317–2330 Heinrich G, Ossola G, Reiter T and Tramontano F 2010 [*JHEP*]{} [**1010**]{} 105 Cullen G, Guillet J, Heinrich G, Kleinschmidt T, Pilon E [*et al.*]{} 
2011 [*Comput.Phys.Commun.*]{} [**182**]{} 2276–2284 van Oldenborgh G 1991 [*Comput.Phys.Commun.*]{} [**66**]{} 1–15; Ellis R K and Zanderighi G 2008 [*JHEP*]{} [**02**]{} 002 van Hameren A 2011 [*Comput.Phys.Commun.*]{} [**182**]{} 2427–2438 Peraro T, [*presentation at EPS-HEP 2013, in these proceedings*]{} Christensen N D and Duhr C 2009 [*Comput. Phys. Commun.*]{} [**180**]{} 1614–1641; Degrande C, Duhr C, Fuks B, Grellscheid D, Mattelaer O [*et al.*]{} 2011 (*Preprint* ); Alloul A, Christensen N D, Degrande C, Duhr C and Fuks B 2013 (*Preprint* ) Semenov A 2010 (*Preprint* ) Kuipers J, Ueda T, Vermaseren J and Vollinga J 2013 [*Comput.Phys.Commun.*]{} [**184**]{} 1453–1467 Mastrolia P, Mirabella E, Ossola G, Peraro T and van Deurzen H 2012 [*PoS*]{} [**LL2012**]{} 028 (*Preprint* ) Cullen G, van Deurzen H, Greiner N, Heinrich G, Luisoni G [*et al.*]{} 2013 (*Preprint* ) Binoth T, Boudjema F, Dissertori G, Lazopoulos A, Denner A [*et al.*]{} 2010 [*Comput.Phys.Commun.*]{} [**181**]{} 1612–1622; Alioli S, Badger S, Bellm J, Biedermann B, Boudjema F [*et al.*]{} 2013 (*Preprint* ) Dittmaier S [*et al.*]{} (LHC Higgs Cross Section Working Group) 2011 (*Preprint* ); Dittmaier S, Dittmaier S, Mariotti C, Passarino G, Tanaka R [*et al.*]{} 2012 (*Preprint* ); Heinemeyer S [*et al.*]{} (The LHC Higgs Cross Section Working Group) 2013 (*Preprint* ) Gleisberg T, Hoeche S, Krauss F, Schonherr M, Schumann S [*et al.*]{} 2009 [*JHEP*]{} [**0902**]{} 007 Frederix R, Gehrmann T and Greiner N 2008 [*JHEP*]{} [**0809**]{} 122; Frederix R, Gehrmann T and Greiner N 2010 [*JHEP*]{} [**1006**]{} 086; Stelzer T and Long W 1994 [*Comput.Phys.Commun.*]{} [**81**]{} 357–371; Maltoni F and Stelzer T 2003 [*JHEP*]{} [**0302**]{} 027; Alwall J, Demin P, de Visscher S, Frederix R, Herquet M [*et al.*]{} 2007 [*JHEP*]{} [**0709**]{} 028 Arkani-Hamed N, Dimopoulos S and Dvali G 1998 [*Phys.Lett.*]{} [**B429**]{} 263–272; Antoniadis I, Arkani-Hamed N, Dimopoulos S and Dvali G 1998 
[*Phys.Lett.*]{} [**B436**]{} 257–263 Hamilton K, Nason P, Oleari C and Zanderighi G 2013 [*JHEP*]{} [**1305**]{} 082 Mastrolia P and Ossola G 2011 [*JHEP*]{} [**1111**]{} 014 (*Preprint* ); Badger S, Frellesvig H and Zhang Y 2012 [*JHEP*]{} [**1204**]{} 055; Zhang Y 2012 [*JHEP*]{} [**1209**]{} 042; Mastrolia P, Mirabella E, Ossola G and Peraro T 2012 [*Phys.Lett.*]{} [ **B718**]{} 173–177; Feng B and Huang R 2013 [*JHEP*]{} [**1302**]{} 117; Mastrolia P, Mirabella E, Ossola G and Peraro T 2013 [*Phys.Rev.*]{} [ **D87**]{} 085026; Mastrolia P, Mirabella E, Ossola G and Peraro T 2013 (*Preprint* ); Badger S, Frellesvig H and Zhang Y 2013 (*Preprint* 1310.1051)
--- abstract: 'This paper is concerned with the comparison of semi-analytical and non-averaged propagation methods for Earth satellite orbits. We analyse the total integration error for semi-analytical methods and [propose a novel decomposition]{} into dynamical, model truncation, short-periodic, and numerical error components. The first three are attributable to distinct approximations required by the method of averaging, which fundamentally limit the attainable accuracy. In contrast, numerical error, the only component present in non-averaged methods, can be significantly mitigated by employing adaptive numerical algorithms and regularized formulations of the equations of motion. We present a collection of non-averaged methods based on the integration of [existing]{} regularized formulations of the equations of motion through an adaptive solver. [We implemented the collection in the orbit propagation code [`THALASSA`]{}, which we make publicly available, and we compared the non-averaged methods]{} to the semi-analytical method implemented in the orbit propagation tool [`STELA`]{} through numerical tests involving long-term propagations (on the order of decades) of LEO, GTO, and high-altitude HEO orbits. For the test cases considered, regularized non-averaged methods were found to be up to two times slower than semi-analytical for the LEO orbit, to have comparable speed for the GTO, and to be ten times as fast for the HEO (for the same accuracy). [We show for the first time that]{} efficient implementations of non-averaged regularized formulations of the equations of motion, and especially of non-singular element methods, are attractive candidates for the long-term study of high-altitude and highly elliptical Earth satellite orbits.' author: - Davide Amato - Claudio Bombardelli - Giulio Baù - Vincent Morand - 'Aaron J. 
Rosengren' bibliography: - 'references.bib' date: 'Received: date / Accepted: date' title: 'Non-averaged regularized formulations as an alternative to semi-analytical orbit propagation methods [^1] ' --- Introduction ============ Predicting the evolution of the population of Earth satellites requires fast orbit propagation techniques that are capable of efficiently taking into account a plethora of physical phenomena and dynamical configurations. A comprehensive picture of the evolution of the near-Earth environment is usually constructed through large-scale Monte Carlo simulations, which are computationally burdensome. These simulations are often performed either through general perturbations methods, which rely on approximate analytical solutions of the perturbed gravitational two-body problem, or through semi-analytical methods, in which averaged equations of motion are integrated numerically. Due to their decisive speed advantage, general perturbations theories are employed to propagate large ensembles of objects, especially when frequent orbital updates are available or when only statistical results are required, with relaxed accuracy requirements for individual objects. Perhaps the most widely used general perturbations theory is SGP4, a simplified version of the SGP (Simplified General Perturbations) theory. SGP4 was originally used to propagate the Two-Line-Element (TLE) catalog produced and maintained by the US Joint Space Operations Center [@Vallado2006]. SGP4 is based on the Brouwer analytical solution of the main satellite problem that includes zonal terms of the Earth’s gravity potential up to the fifth order, and drag computed through a power law for the atmospheric density [@Hoots2004]. The Debris Cloud Propagator (DCP) included in the SDM suite [@Rossi2009] is a general perturbations software used to study the evolution of the population of orbital debris. 
DCP adopts a faster and less sophisticated approach than SGP4, using analytical solutions for the semi-major axis, eccentricity, and inclination under the effect of atmospheric drag, and for the longitude of node and argument of perigee under the effect of the Earth’s oblateness [@Rossi2009; @Anselmo1997]. Recently, @Moeckel2015 introduced the analytical propagation code Ikebana, a parallelized version of the FLORA orbit propagator. Ikebana exploits the parallel computing capabilities offered by modern GPUs to accelerate the propagation of large object populations for applications to debris models, and uses single precision wherever possible to increase the performance of GPU calculations. Both FLORA and Ikebana were validated against a non-averaged code and by reproducing the overall long-term behavior of historic TLE data for the Vanguard-1 satellite. [Short-term analytical solutions (on the order of a few revolutions) using the Kustaanheimo-Stiefel regularization [@KS1965] have also been derived, for instance by including perturbations from zonal harmonics [@Sellamuthu2018 and references therein] and lunar gravitation [@Sellamuthu2017].]{} Semi-analytical techniques offer a compromise between general and special perturbations. In these approaches, the fast variables are eliminated from the equations through the *method of averaging*, which consists in the elimination of high-frequency components from the equations of motion by an analytical or numerical averaging over a short time scale. If averaging is correctly performed, the time scale of the problem changes from one characteristic of the orbital period to that of the long-periodic effects, thus enabling a numerical solver to take larger step sizes with respect to the osculating orbit. In essence, semi-analytical techniques employ information on the approximate analytical solutions provided by general perturbations to improve the efficiency of special perturbations, at the cost of accuracy. 
This process may permit significant savings in computational time with respect to the integration of the equations of motion in Cartesian coordinates, i.e., the Cowell formulation. Semi-analytical propagation algorithms are widely employed for the future prediction of the circumterrestrial debris environment. The European Space Agency MASTER debris model uses the semi-analytical orbit propagation code FOCUS (Fast Orbit Computation Utility Software), which integrates single-averaged Gauss equations for equinoctial elements. A faster propagation code called DELTOP is also used for the prediction of the future evolution of the MASTER model. DELTOP employs a simple Euler integration scheme to decrease computational time [@Klinkrad2006 chapter 5]. The software was validated against accurate solutions, suggesting that the employment of mean Keplerian elements compensates for the inaccuracy of the Euler integration scheme [@Dahlquist1974 chapter 1]. NASA uses the semi-analytical propagators PROP3D and GEOPROP to update its LEGEND debris population model [@Liou2004]. PROP3D is employed for all the orbital regimes except GEO. It includes perturbations from atmospheric drag, zonal harmonics of the geopotential up to $J_4$, lunisolar gravitational accelerations, and solar radiation pressure. GEOPROP is used for the GEO population and is based on the averaged approach by @vanderHa1986. The semi-analytical propagator FOP, which is included in the SDM suite by @Rossi2009, is an optimized version of the LOP code developed by @Kwok1986. Orbit propagators used for Space Situational Awareness (SSA) are, in general, more sophisticated than those used for debris modeling. 
The Draper Semi-analytic Satellite Theory (DSST) was one of the first propagators employed in SSA, and it includes all the principal perturbations affecting Earth satellite orbits (gravitational perturbations from the Earth’s zonal and tesseral harmonics, the Sun, and the Moon, and non-gravitational perturbations from atmospheric drag and solar radiation pressure) in the equations of motion for mean equinoctial elements. In addition, it retains long-periodic and secular trends arising from tesseral resonances and it includes higher-order cross-coupling terms between the geopotential and atmospheric drag. Although there is no principal reference for the DSST, a comprehensive summary of the theory and equations has been given by @Danielson1995. @Golikov2012 presented the numerical code THEONA, based on an elegant approach in which the equations of motion describe the deviations from an *intermediate* orbit corresponding to the exact solution of the generalized problem of two fixed centers. THEONA has been used for orbit prediction in Soyuz and Progress missions. The French space agency CNES (Centre National d’Études Spatiales) developed the semi-analytical propagator [`STELA`]{} (Semi-Analytical Tool for End-of-Life Analysis) to verify the compliance of spacecraft with the end-of-life regulations detailed in the French Space Operations Act [@LeFevre2014]. [`STELA`]{} integrates the equations of motion for mean equinoctial elements including all the main perturbations acting on Earth satellite orbits. The averaging theory is carried out at first order for all perturbations except $J_2$, for which it is carried out at second order. Cross-coupling between the oblateness and atmospheric drag perturbations is also considered. Short-periodic terms can be recovered for the $J_2$ and lunisolar perturbations. 
Due to its excellent state of validation and its public availability, we use [`STELA`]{} as the reference semi-analytical propagator against which to compare the code that we introduce in this paper. A thorough understanding of the dynamical characteristics of the system is required in order to obtain reliable results and optimize the performance of semi-analytical methods. If two or more angular frequencies are commensurable, the corresponding terms of the perturbing function change according to a long-periodic or secular behavior. This situation is ubiquitous in celestial mechanics, and is [known as a state of]{} *resonance* [@Murray1999]. For instance, the navigational satellites in MEO and the telecommunications satellites in GEO are in resonance with the tesseral harmonics of the Earth’s geopotential to satisfy [ground-track repeatability]{} requirements. Secular and semi-secular resonances with the Sun and the Moon can be exploited to drastically reduce orbit lifetimes [@Gkolias2016], while lunar mean-motion resonances are being exploited to achieve stable highly elliptical orbits [@Dichmann2013]. Specific provisions have to be adopted during the averaging process in the presence of resonances, otherwise long-periodic and secular trends in the orbital elements will be missed, resulting in an incorrect trajectory. In the case of tesseral resonances, some of the terms depending on the mean anomaly (that would otherwise be “averaged out”) have to be retained in the geopotential [@Kaula1966], and similar techniques are used for other types of resonances [@Morbidelli2002]. High-order averaging schemes are required to capture coupling effects between different perturbations. 
This is exemplified by the dynamics of GTOs, in which the coupling between $J_2$, solar [gravity]{}, and drag perturbations makes the trajectories extremely sensitive to the initial conditions and to uncertainty in the state and in physical parameters [@Lamy2011].[^2] Even if these dynamical configurations are duly accounted for, semi-analytical techniques are intrinsically limited in their accuracy due to the approximations introduced in the averaging process, and some parameters, such as the orders of truncation of the perturbing functions, have to be judiciously chosen before integrating the rates of change of the mean elements. Moreover, averaging over the fast variables necessarily involves the loss of information on the short-periodic variations in the orbital elements. The information can only be partially recovered by adding short-periodic terms (which are computed analytically) to the mean elements. As we will show in this paper, this process introduces errors in the calculation of the osculating elements from the mean elements. On the other hand, semi-analytical methods may be faster than the Cowell formulation by up to two orders of magnitude [@Setty2016]. In contrast to semi-analytical techniques, the accuracy of special perturbations is only limited by the physical model and by the available processing power and memory. Once the physical model is defined, the solver can be easily tuned by changing the integration time step. Despite these advantages, special perturbations (or *non-averaged*) methods have not found widespread use in the integration of large sets of initial conditions until recently, probably due to the lack of the necessary computational resources. However, these circumstances are changing. Already in 1997, @Coffey1998 demonstrated that maintaining the US space object catalog[^3] through special perturbations could easily be achieved, given enough computational hardware. 
In fact, today this catalog is fully maintained by the US Joint Space Operations Center using special perturbations. A wide range of single- and multi-step numerical solvers for the Cowell formulation has been implemented in the GMAT[[^4]]{} and Copernicus [@Williams2010] software suites. The Cowell formulation has some drawbacks, however. First, the solution of the equations of motion in Cartesian coordinates is unstable in the Lyapunov sense, even in the unperturbed problem [@Bond1996]. Second, since the gravitational potential is singular at collision, the right-hand side of the equations of motion exhibits strong oscillations when the particle is close to the main body. Furthermore, because the [physical time is used as the independent variable, the distribution of steps along the orbit is shifted towards the apoapsis for constant step size, a fact that is particularly detrimental for highly eccentric orbits.]{} These disadvantages can be mitigated by resorting to *regularized formulations*. A *regularization* of the two-body problem is an analytical procedure that removes the singularity of collision from the vector field. It usually consists of three steps: introducing a new independent variable instead of time (which is called *fictitious time*), transforming the Cartesian coordinates of position and velocity into new variables, and embedding the integrals of the motion into the transformed equations. As a result, the new differential equations are linear with constant coefficients when the motion is unperturbed. Additionally, depending upon the transformation of time and the spatial variables, the solution can be analytically continued through the collision. This is the case for the regularizations due to Moser [@MoserZehnder2005 section 1.6], @Sperling1961, and @KS1965. 
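The step-distribution argument above can be made concrete with a small numerical sketch (ours, for illustration only, under the assumption of a purely Keplerian orbit): with a Sundman-type transformation $\mathrm{d}t \propto r\,\mathrm{d}s$, the fictitious time $s$ is proportional to the eccentric anomaly, so nodes uniform in $s$ are split evenly between the periapsis and apoapsis halves of the orbit, whereas nodes uniform in physical time cluster near apoapsis.

```python
import numpy as np

# Illustrative eccentricity; distances are in units of the semi-major axis a.
e = 0.7
N = 100_000

# Nodes uniform in fictitious time: with dt proportional to r ds, the
# variable s is proportional to the eccentric anomaly E.
E = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
r_fict = 1.0 - e * np.cos(E)                  # r/a at each node

# Nodes uniform in physical time, i.e. uniform in mean anomaly M;
# Kepler's equation M = E - e sin E is solved by fixed-point iteration.
M = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
E_t = M.copy()
for _ in range(60):
    E_t = M + e * np.sin(E_t)
r_time = 1.0 - e * np.cos(E_t)

# Fraction of nodes on the periapsis side of the orbit (r < a).
frac_fict = np.mean(r_fict < 1.0)   # ~0.5: nodes spread evenly in E
frac_time = np.mean(r_time < 1.0)   # ~0.28: nodes cluster near apoapsis
```

With $e = 0.7$, only about a quarter of the time-uniform nodes fall on the periapsis half of the orbit, while the fictitious-time nodes split it evenly; a time-regularized scheme therefore resolves the fast periapsis passage without wasting steps at apoapsis.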
Other regularizations, like those developed by @Burdet1969 and @Ferrandiz1988, lead to formulations that still enjoy the linear form of the transformed system, but are not defined at collision. Regularized formulations are superior to Cowell’s method in terms of accuracy and computational cost. However, additional operations are required to obtain the position and velocity at a prescribed epoch starting from the new spatial variables and the fictitious time[, which could be the reason why they are not commonly implemented in orbit propagators.]{} Since regularizations transform the nonlinear Newtonian equations of the two-body problem into a set of linear equations, variation of parameters (VOP) methods can be easily developed from them. These formulations use a set of orbital elements, i.e., quantities that are constant along the unperturbed solution, to propagate the state also when perturbations are applied. The new elements inherit the advantageous properties of their parent variables, for example they may be regular at collision, with the advantage over the latter that, if perturbations are weak, they can reach the same accuracy with far fewer integration steps. This feature is particularly attractive because Earth satellite orbits can often be regarded as weakly perturbed.[^5] Recently, [@Bau2015] have proposed a VOP formulation, here referred to as EDromo, which appears to be particularly promising for efficient orbit propagation. It was noticed that, especially for highly eccentric orbits, this method shows an excellent performance when compared to many other regularized schemes. The reader is referred to the monograph by @Roa2017 for a comprehensive overview of regularization theory and its applications. In this paper, we show that special perturbation methods based on regularized formulations can compete with and even outperform semi-analytical methods for the long-term propagations (on the order of decades) of objects orbiting the Earth. 
Note that for this kind of application the Cowell formulation is never used because of the small step sizes required, which cause strong accumulation of round-off error and long computational times. In order to carry out [this]{} study we developed a Fortran code, named [`THALASSA`]{}, which includes Cowell’s method, EDromo, the Kustaanheimo-Stiefel (KS) regularization [@KS1965], and a set of regular elements that were obtained by @Stiefel1971 [section 19] from KS. A sophisticated numerical solver, named [`LSODAR`]{} (Livermore Solver for Ordinary Differential equations with Automatic Root-finding), has been included to integrate the differential equations of motion. Some results using a preliminary version of [`THALASSA`]{} for cis- and translunar orbits have been shown in @Amato2018. [`THALASSA`]{} is compared to the [`STELA`]{} orbit propagator through numerical experiments performed for a Low Earth Orbit (LEO), a Geostationary Transfer Orbit (GTO), and a high-altitude, Highly Elliptical Orbit (HEO). The test cases have been chosen so as to maximize the scientific interest and the intrinsic difficulty in obtaining accurate position and velocity on decadal timescales. Symplectic integration methods, which are based on the rigorous conservation of an approximate Hamiltonian of the problem, are commonly used in astrophysical research to perform extremely long integrations and may seem like feasible candidates for this study. Nevertheless, we do not take them into account in this work since previous research has shown that the conservation of the symplectic structure does not necessarily imply the reduction of errors in position and velocity [@Amato2017]. The paper is organized as follows. In \[sec:avg\_method\] we provide an overview of the method of averaging. In \[sec:avg\_err\_analysis\] we present an analytical breakdown of the errors arising in the integration of averaged equations of motion, which is also an original contribution of the paper. 
[`STELA`]{} and [`THALASSA`]{}, the two codes used in the study, are described in \[sec:software\]. The numerical results are presented in the penultimate section, and the final section contains the conclusions. Method of averaging {#sec:avg_method} =================== In this section, we summarize the theory of averaging in equinoctial elements as presented by [@Danielson1995], using the expressions for the perturbing functions given by [@Giacaglia1977]. This set of elements underlies many of the most widely known semi-analytical propagators, such as the Semi-analytical Tool for End-of-life Analysis ([`STELA`]{}) and the DSST. Osculating equations of motion and perturbing functions ------------------------------------------------------- We present the osculating (i.e., non-averaged) equations of motion for the set of equinoctial elements $\bm{E}$ derived by @Giacaglia1977 and @Nacozy1977, which is expressed in terms of the classical orbital elements as $$\renewcommand\arraystretch{1.8} \bm{E} = \begin{Bmatrix} a \\ h \\ k \\ P \\ Q \\ \lambda \end{Bmatrix} = \begin{Bmatrix} a \\ e \sin \left( \omega + \Omega \right) \\ e \cos \left( \omega + \Omega \right) \\ \sin \dfrac{i}{2} \cos \Omega \\[6pt] \sin \dfrac{i}{2} \sin \Omega \\ M + \omega + \Omega \end{Bmatrix}. \label{eq:equ_els}$$ This set presents a singularity only for $i = \SI{180}{\degree}$, a case that is of limited interest for applications to Earth satellites. 
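For concreteness, the map of \[eq:equ\_els\] and its inverse can be sketched as follows (a minimal Python illustration, with angles in radians; the function names are ours and do not belong to any of the codes discussed here):

```python
import numpy as np

def classical_to_equinoctial(a, e, i, Om, om, M):
    """Map classical elements (a, e, i, Omega, omega, M) to the
    equinoctial set (a, h, k, P, Q, lambda)."""
    return np.array([a,
                     e * np.sin(om + Om),
                     e * np.cos(om + Om),
                     np.sin(0.5 * i) * np.cos(Om),
                     np.sin(0.5 * i) * np.sin(Om),
                     M + om + Om])

def equinoctial_to_classical(E):
    """Invert the map; singular only for i = 180 deg (and the usual
    classical-element degeneracies for e = 0 or i = 0)."""
    a, h, k, P, Q, lam = E
    e  = np.hypot(h, k)
    i  = 2.0 * np.arcsin(np.hypot(P, Q))
    Om = np.arctan2(Q, P)
    om = np.arctan2(h, k) - Om     # longitude of perigee minus node
    M  = lam - om - Om
    return a, e, i, Om, om, M
```

Note that the inverse map recovers the classical angles through quadrant-correct `arctan2` calls, so no branch ambiguities arise away from the degenerate cases.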
Letting $\gamma = \sqrt{1 - h^2 - k^2} = \sqrt{1 - e^2}$, the rates of change of the osculating elements are $$\begin{aligned} {\frac{\mathrm{d}a}{\mathrm{d}t}} &= \dfrac{\strut 2}{\strut na} {\mathcal{R}}_\lambda \\ {\frac{\mathrm{d}h}{\mathrm{d}t}} &= - \dfrac{\gamma}{na^2 (1 + \gamma)} h {\mathcal{R}}_\lambda + \dfrac{\gamma}{na^2} {\mathcal{R}}_k + \dfrac{1}{2na^2 \gamma} k \left( P {\mathcal{R}}_P + Q {\mathcal{R}}_Q \right) \\ {\frac{\mathrm{d}k}{\mathrm{d}t}} &= - \dfrac{\gamma}{na^2 (1 + \gamma)} k {\mathcal{R}}_\lambda - \dfrac{\gamma}{na^2} {\mathcal{R}}_h - \dfrac{1}{\strut 2na^2 \gamma} h \left( P {\mathcal{R}}_P + Q {\mathcal{R}}_Q \right) \\ {\frac{\mathrm{d}P}{\mathrm{d}t}} &= -\dfrac{1}{2na^2 \gamma} P {\mathcal{R}}_\lambda - \dfrac{1}{4na^2 \gamma} {\mathcal{R}}_Q + \dfrac{\strut 1}{\strut 2na^2 \gamma} P \left( h {\mathcal{R}}_k - k {\mathcal{R}}_h \right) \\ {\frac{\mathrm{d}Q}{\mathrm{d}t}} &= -\dfrac{1}{2na^2 \gamma} Q {\mathcal{R}}_\lambda + \dfrac{1}{4na^2 \gamma} {\mathcal{R}}_P + \dfrac{ 1}{ 2na^2 \gamma} Q \left( h {\mathcal{R}}_k - k {\mathcal{R}}_h \right) \\ {\frac{\mathrm{d}\lambda}{\mathrm{d}t}} &= n - \dfrac{\strut 2}{na} {\mathcal{R}}_a + \dfrac{\gamma}{2na^2} \left(k {\mathcal{R}}_k + h {\mathcal{R}}_h \right) + \dfrac{1}{2na^2 \gamma} \left( P {\mathcal{R}}_P + Q {\mathcal{R}}_Q \right) \end{aligned}. \label{eq:osc_EoMs_Giacaglia}$$ Denoting with $E_i \: (i = 1, \ldots, 6)$ a generic equinoctial element, we take into account both conservative and dissipative perturbations in the quantity ${\mathcal{R}}_{E_i}$: $${\mathcal{R}}_{E_i} = \frac{\partial {\mathcal{R}}}{\partial E_i} + \bm{q} \cdot \frac{\partial \bm{r}}{\partial E_i},$$ where $\bm{q}$ is the vector sum of the dissipative perturbing accelerations. 
In the perturbing function ${\mathcal{R}}$, we consider perturbations due to the non-spherical gravity field of the Earth and to the Moon and the Sun considered as point masses, $${\mathcal{R}}= {\mathcal{R}_\Earth}+ {\mathcal{R}}_\Sun + {\mathcal{R}}_\leftmoon.$$ According to @Kaula1966 [p. 31], the perturbing function ${\mathcal{R}_\Earth}$ is expanded in spherical coordinates as $${\mathcal{R}_\Earth}= \sum_{l=1}^{\infty} \sum_{m=0}^{l} \frac{\mu_\Earth a^l_\Earth}{r^{l+1}} P_{lm} \left( \sin \phi \right) \left( C_{lm} \cos{m}L + S_{lm}\sin{m}L \right),$$ where $a_\Earth$ is the mean equatorial radius of the Earth, $P_{lm}$ are the associated Legendre functions of degree $l$ and order $m$, $\phi$ and $L$ are respectively the geographic latitude and longitude, and $C_{lm}$, $S_{lm}$ are coefficients that are determined empirically. Since the origin of the reference frame is placed at the Earth’s centre of mass, the degree-one coefficients vanish ($C_{10} = C_{11} = S_{11} = 0$); therefore, we will consider the outer sum to always start from $l = 2$ in the following. The expression for ${\mathcal{R}_\Earth}$ in equinoctial elements is $${\mathcal{R}_\Earth}= \sum_{l=2}^{\infty} \sum_{m=0}^{l} \sum_{p=0}^{l} \sum_{q=-\infty}^{\infty} \mathcal{R}_{lmpq}. \label{eq:R}$$ Each of the terms $\mathcal{R}_{lmpq}$ is written as $$\begin{split} \mathcal{R}_{lmpq} = \frac{\mu_\Earth a^l_\Earth}{a^{l+1}} J_{lmp}(c) K_{lpq}(\gamma) \times \\ \times \left[ \mathds{R}_{lmpq}(h,k,P,Q) \left( A_{lm} \cos{\psi_{lmpq}} + B_{lm} \sin{\psi_{lmpq}} \right) + \right. \\ \left. + \mathds{I}_{lmpq}(h,k,P,Q) \left( A_{lm} \sin{\psi_{lmpq}} - B_{lm} \cos{\psi_{lmpq}} \right) \right], \end{split} \label{eq:Rlmpq}$$ where $$A_{lm} = \left\lbrace \begin{aligned} C_{lm}, \quad &l-m \: \text{even} \\ -S_{lm}, \quad &l-m \: \text{odd}\end{aligned} \right. , \qquad B_{lm} = \left\lbrace \begin{aligned} S_{lm}, \quad &l-m \: \text{even} \\ C_{lm}, \quad &l-m \: \text{odd}\end{aligned} \right. ,$$ and $c = \sqrt{1 - P^2 - Q^2} = \cos \left(i/2\right)$. 
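As an illustration of Kaula's expansion above, the zonal ($m = 0$) part of ${\mathcal{R}_\Earth}$, for which the associated Legendre functions reduce to Legendre polynomials, can be evaluated directly (a hedged sketch with approximate, unnormalized low-degree coefficients $C_{l0} = -J_l$; it is not taken from any of the codes discussed here):

```python
import numpy as np
from scipy.special import eval_legendre

MU = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]
AE = 6378137.0        # Earth's mean equatorial radius [m]
# Approximate unnormalized zonal coefficients C_l0 = -J_l (illustrative values)
C_L0 = {2: -1.08263e-3, 3: 2.53266e-6, 4: 1.61962e-6}

def zonal_perturbing_function(r, phi, lmax=4):
    """Zonal (m = 0) part of Kaula's expansion of R_Earth:
    sum over l of (mu a_E^l / r^(l+1)) P_l(sin(phi)) C_l0,
    with r in metres and the latitude phi in radians."""
    return sum((MU * AE**l / r**(l + 1)) * eval_legendre(l, np.sin(phi)) * C_L0[l]
               for l in range(2, lmax + 1))
```

At the $J_2$-only truncation the expression is positive at the equator ($P_2(0) = -1/2$ and $C_{20} < 0$) and negative at the poles, reflecting the oblateness of the Earth.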
The quantities $J_{lmp}$ and $K_{lpq}$ are, respectively, a polynomial in $c$ and an infinite power series in $\gamma$. The functions $\mathds{R}_{lmpq}$ and $\mathds{I}_{lmpq}$ are finite power series of their arguments. The angle $\psi_{lmpq}$, $$\psi_{lmpq} = (l - 2p + q)\lambda - m\theta, \label{eq:psi_Earth}$$ is a linear combination of the mean longitude $\lambda$ and the Greenwich Mean Sidereal Time (GMST) $\theta$. The perturbing function due to the Moon and the Sun as point masses takes the same form ${\mathcal{R}'}$ for each of the bodies, $${\mathcal{R}'}= \sum_{l=2}^{\infty} \sum_{m=0}^{l} \sum_{p=0}^{l} \sum_{p'=0}^{l} \sum_{q=-\infty}^{\infty} \sum_{q'=-\infty}^{\infty} {\mathcal{R}'}_{lmpqp'q'}, \label{eq:R1}$$ where the expression for the terms ${\mathcal{R}'}_{lmpqp'q'}$ is the following: $$\begin{split} {\mathcal{R}'}_{lmpqp'q'} = \mu' (n')^2 \frac{a^l}{(a')^{l-2}} \varepsilon_m \frac{\left(l-m\right)!}{\left(l+m\right)!} J_{lmp}(c) L_{lpq}(\gamma) F_{lmp'}(i') G_{lp'q'}(e') \times \\ \times \left( \mathds{R}_{lmpq} \cos \psi'_{lmpqp'q'} + \mathds{I}_{lmpq} \sin \psi'_{lmpqp'q'} \right). \end{split} \label{eq:R1lmpq}$$ In \[eq:R1lmpq\], primed quantities pertain to the perturbing body. In particular, $\mu'$ is its gravitational parameter, and $n'$ is its mean motion. The eccentricity functions $L_{lpq}$ and $G_{lp'q'}$ are infinite power series, and the inclination function $F_{lmp'}$ is a polynomial in the trigonometric functions of the inclination. The factor $\varepsilon_m$ is equal to $1$ for $m = 0$, and to $2$ for $m \neq 0$. The angle $\psi'_{lmpqp'q'}$ is defined as $$\begin{split} \psi'_{lmpqp'q'} = \left(l - 2p + q \right)\lambda - \left( l - 2p' + q' \right) \lambda' + q'\left( \omega' + \Omega' \right) - \\ - \left(m + 2p' - l\right)\Omega'.
\end{split} \label{eq:psi_pert}$$ All of the primed quantities in the right-hand side of the above equation are usually considered as functions of time that are slow with respect to the mean motion. For instance, for a MEO satellite perturbed by the Moon, the period associated with $\lambda$ is on the order of hours, while that of $\lambda'$ is on the order of days. In the rest of the paper, primed quantities will always refer to the lunisolar gravitational perturbations. Averaging perturbing functions ------------------------------ We summarize here the core details of the method of averaging perturbations stemming from perturbing functions as presented in @Danielson1995. In the following, we denote osculating orbital elements with a hat. We define the *mean* elements $\bm{E}$ through their relation with the *osculating* elements $\hat{\bm{E}}$, $$\hat{E}_i = E_i + \sum_{j=1}^{\infty} \epsilon^j \eta_{i,j} \left( \bm{E}, t \right),\qquad i = 1, \ldots, 6, \label{eq:osc_mean_NI}$$ where $\epsilon$ is a small perturbation parameter. The terms $\epsilon^j \eta_{i,j}$ are small, short-periodic variations that are added to the mean elements $E_i$ to obtain the osculating elements $\hat{E}_i$. They depend explicitly on time, since perturbations such as the tesseral harmonics of the Earth’s gravity potential and those due to perturbing bodies are themselves explicit functions of time. The perturbed two-body problem is stated in osculating elements as $$\frac{\mathrm{d}\hat{E}_i}{\mathrm{d}t} = n(\hat{a}) \delta_{i,6} + \epsilon F_i (\hat{\bm{E}},t), \label{eq:osc_EoMs}$$ where $\delta_{i,6}$ is the Kronecker delta and $n$ is the mean motion, which is a function of the osculating semi-major axis $\hat{a}$. Note that $\epsilon F_i$ includes all the terms appearing on the right-hand side of \[eq:osc\_EoMs\_Giacaglia\] that contain the perturbing forces.
Our aim is to write the averaged equations of motion in the form $$\frac{\mathrm{d}{E}_i}{\mathrm{d}t} = n\left(a \right) \delta_{i,6} + \sum_{j=1}^{\infty} \epsilon^j A_{i,j} \left(a,h,k,P,Q,t\right), \label{eq:mean_EoMs}$$ where the right-hand side represents a power series in the perturbation parameter $\epsilon$ with the coefficients given by $A_{i,j}$. Since the high-frequency components are relegated to the short-periodic terms, the rates of change of the mean elements are small, that is $$\begin{aligned} \frac{1}{na} \left| \frac{\mathrm{d}a}{\mathrm{d}t} \right| &\ll 1, \\ \frac{1}{n} \left| \frac{\mathrm{d}E_i}{\mathrm{d}t} \right| &\ll 1, \quad \text{for} \: i = 2, \ldots, 5, \\ \left| \frac{1}{n} \frac{\mathrm{d}\lambda}{\mathrm{d}t} - 1 \right| &\ll 1.\end{aligned}$$ To find a suitable form for the coefficients $A_{i,j}$ in \[eq:mean\_EoMs\], we first express $n(\hat{a})$ and $F_i(\hat{\bm{E}},t)$ as functions of the mean elements by expanding them in power series of $\epsilon$, $$\begin{aligned} n\left(\hat{a} \right) &= n \left( a \right) + \sum_{j=1}^{\infty} \epsilon^j N_j \left( a \right) \label{eq:n_mean} \\ F_i ( \hat{\bm{E}}, t ) &= F_i \left(\bm{E}, t \right) + \sum_{j=1}^{\infty} \epsilon^j f_{i,j} \left( \bm{E}, t \right). \label{eq:F_mean}\end{aligned}$$ Then, we differentiate \[eq:osc\_mean\_NI\] with the use of \[eq:mean\_EoMs\] and set the resulting expressions for the derivatives equal to those in \[eq:osc\_EoMs\], where we have used \[eq:n\_mean,eq:F\_mean\]. The *equations of averaging* take the form $$\sum_{j=1}^{\infty} \epsilon^j \left( A_{i,j} + \frac{\partial \eta_{i,j}}{\partial \bm{E}} \cdot \frac{\mathrm{d} \bm{E}}{\mathrm{d}t} + \frac{\partial \eta_{i,j}}{\partial t} \right) = \epsilon F_i + \sum_{j=1}^{\infty} \epsilon^j \left( \delta_{i,6} N_j \left( a \right) + \epsilon f_{i,j} \right).
\label{eq:averaging}$$ By finding $A_{i,j}$ and $\eta_{i,j}$ such that they satisfy \[eq:averaging\] up to the $M$-th order, we can build an $M$-th order averaged theory. We will limit ourselves to the first order in this study; more details on how to derive higher order theories are found in @Danielson1995. We have $$A_{i,1} + \frac{\partial \eta_{i,1}}{\partial \lambda} n\left( a \right) + \frac{\partial \eta_{i,1}}{\partial t} = F_i + \delta_{i,6} N_1 (a). \label{eq:avg_first}$$ We now define the *single-averaging operator* $\langle \bullet \rangle$, $$\langle f \left(\bm{E},t\right) \rangle \triangleq \frac{1}{2\pi} \int_{-\pi}^{\pi} f \left(\bm{E},t\right) \mathrm{d}\lambda, \label{eq:singleavg}$$ and apply it to both sides of \[eq:avg\_first\], yielding $$A_{i,1} = \langle F_i\left( \bm{E},t \right) \rangle. \label{eq:avg_A1}$$ The above equation states that, to first order, the rates of change of the mean elements are the *averaged* rates of change of the osculating elements. In the practical calculation of the $A_{i,1}$, we have to take into account their dependence on the total perturbing function $\mathcal{R} = \mathcal{R}_\Earth + \mathcal{R}'$. Thus we make the dependence of $F_i$ on the disturbing function explicit in \[eq:avg\_A1\], $$A_{i,1} = \left \langle F_i \left( \frac{\partial \mathcal{R}}{\partial \bm{E}} \left(\bm{E},t \right) , \bm{E}, t \right) \right \rangle. \label{eq:avg_oper_A1}$$ We can bring the averaging operator inside the parentheses since we keep the mean elements constant during the averaging operation. Thus, the mean element rates are obtained by plugging into the osculating equations of motion the averaged total perturbing function, $$A_{i,1} = F_i \left( \left \langle \frac{\partial \mathcal{R}}{\partial \bm{E}} \right \rangle, \bm{E}, t \right) = F_i \left( \frac{\partial}{\partial \bm{E}}\langle\mathcal{R}\rangle , \bm{E}, t \right).
\label{eq:avg_pertF_A1}$$ It can be shown that applying the averaging operator to the disturbing function is equivalent to setting $q = 2p - l$ in \[eq:R,eq:R1\] to eliminate the terms depending on the fast angle [@Giacaglia1977]. Additional perturbations can be superimposed by considering $$\begin{aligned} \epsilon A_{i,1} &= \sum_\alpha \nu_\alpha A_{i,1\alpha}, \\ \epsilon F_{i} &= \sum_\alpha \nu_\alpha F_{i\alpha},\end{aligned}$$ where the index $\alpha$ varies over all the perturbations to be considered, and the $\nu_\alpha$ are the small parameters of each of the perturbations. Each coefficient $A_{i,1\alpha}$ is calculated by averaging the corresponding perturbation, $$A_{i,1\alpha} = \left \langle F_{i\alpha} \left(\bm{E},t\right) \right \rangle,$$ and coupling terms between different perturbations arise at higher orders. Once $A_{i,1}$ is obtained through \[eq:avg\_A1\], it is substituted in \[eq:mean\_EoMs\], which is integrated with a suitable numerical scheme. At each step, the osculating elements can be recovered from the mean ones by \[eq:osc\_mean\_NI\]. The short-periodic terms $\eta_{i,1}$ are computed by integrating \[eq:avg\_first\] over the mean longitude $\lambda$, while keeping the rest of the mean elements constant. Up to now, we have implicitly assumed that all perturbing forces are quickly-varying (i.e., the $F_i$ are of the same order as the mean motion) through their dependence on the mean longitude $\lambda$, which is removed through the application of the single-averaging operator. Gravitational perturbations are functions of a *resonant angle* $\psi$ that is a linear combination of angles that includes $\lambda$, and which is usually quickly-varying. However, if the orbiter is in a *resonance condition*, $\psi$ changes slowly because the rates of change of $\lambda$ and of another of the angles contained in the linear combination are commensurate.
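As a purely numerical illustration of the single-averaging operator \[eq:singleavg\] (a sketch, not the implementation used in any of the codes discussed later), the integral can be approximated by an equally spaced quadrature in the mean longitude:

```python
import math

def single_average(f, n_nodes=256):
    """Approximate <f> = (1/(2*pi)) * integral of f(lam) over [-pi, pi].

    Equally spaced nodes over one full period yield a quadrature that is
    exact for trigonometric polynomials of degree < n_nodes; all other
    mean elements are held fixed during the average, as in the text.
    """
    step = 2.0 * math.pi / n_nodes
    total = sum(f(-math.pi + j * step) for j in range(n_nodes))
    return total * step / (2.0 * math.pi)
```

Short-periodic contributions such as $\cos\lambda$ average to zero, while secular ones survive. Under a resonance, however, the resonant angle $\psi$ varies slowly, and an average carried out in $\lambda$ alone, with the other angles held fixed, discards its long-periodic contribution.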
When a resonance occurs, applying the single-averaging operator as in \[eq:singleavg\] leads to significant errors, since the long-periodic behaviors that arise from the slow variation of $\psi$ are neglected. The issue can be solved through the application of the *double-averaging* operator, which is described in the following section. ### Resonances The perturbing functions ${{\mathcal{R}}}$ and ${{\mathcal{R}'}}$ depend on the resonant angles $\psi_{lmpq}$ and $\psi'_{lmpqp'q'}$, respectively (\[eq:psi\_Earth,eq:psi\_pert\]), which are linear combinations of both fast and slow variables. Letting $\psi$ denote either of the angles, we consider the case in which $\psi$ can be written as $$\psi = j{\lambda} - r{\vartheta},$$ with $j$ and $r$ mutually prime integers. The angle $\vartheta$ is a fast variable corresponding to either $\theta$ or $\lambda'$, depending on whether we are considering the perturbation from the tesseral harmonics of the Earth’s potential or from a perturbing body, respectively. In the latter case, we neglect the slowly-varying orbital elements of the perturbing body in the expression for $\psi$. The orbiter is said to be *in resonance* if the inequality $$\left| \frac{\mathrm{d}\psi}{\mathrm{d}t} \right| < \frac{2\pi}{\tau} \label{eq:mmres_cond}$$ is satisfied [@Danielson1995 p. 22], where $\tau$ is the minimum period of the perturbations that must be accounted for in the averaged equations in order not to lose significant propagation accuracy. The meaning of $\tau$ is that of an empirical “resonance width”, which measures the maximum distance from the condition $\mathrm{d}\psi/\mathrm{d}t = 0$ for which the satellite is considered in a resonance. The parameter $\tau$ should be chosen to be several times larger than the integration step, the orbital period, and the period of the fast angle $\tau_\vartheta$, otherwise the mean elements would include short-periodic variations.
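The resonance test \[eq:mmres\_cond\] is straightforward to evaluate. The sketch below treats the tesseral case, approximating $\dot{\lambda}$ by the mean motion and $\dot{\theta}$ by the Earth's rotation rate; the numerical constants are assumed, textbook-level values, not those of the models used in this paper.

```python
import math

MU_EARTH = 398600.4418         # km^3/s^2 (assumed value)
OMEGA_EARTH = 7.2921158553e-5  # rad/s, Earth rotation rate (assumed value)

def in_tesseral_resonance(a, j, r, tau):
    """Check the resonance condition |d(psi)/dt| < 2*pi/tau for
    psi = j*lambda - r*theta, approximating d(lambda)/dt by the mean
    motion n and d(theta)/dt by the Earth rotation rate.
    Semi-major axis a in km, resonance width tau in seconds."""
    n = math.sqrt(MU_EARTH / a**3)   # mean motion, rad/s
    psi_dot = j * n - r * OMEGA_EARTH
    return abs(psi_dot) < 2.0 * math.pi / tau
```

For example, a satellite near the 2:1 commensurability with the Earth's rotation (a 12-hour orbit, $a \approx 26562$ km, $j = 1$, $r = 2$) satisfies the condition with $\tau = 10$ days, whereas a generic LEO does not.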
On the other hand, $\tau$ must not be so large that long-periodic effects are inadvertently included in the short-periodic terms [@Morand2013]. In the presence of a resonance, long-periodic and secular effects arise due to the commensurability of the two fast frequencies $\dot{\lambda}$ and $\dot{\vartheta}$, which pertain to the orbiter and to one of the massive bodies respectively. By keeping $\vartheta$ fixed during the application of the single-averaging operator in \[eq:singleavg\], these effects are missed. Therefore, we also integrate over $\vartheta$ by using the *double-averaging operator* $\langle\langle \bullet \rangle\rangle$, $$\begin{split} \langle\langle f \left(a,h,k,P,Q,\lambda,\vartheta,t\right) \rangle\rangle \triangleq \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} f \left(a,h,k,P,Q,\lambda,\vartheta,t\right) \mathrm{d}\lambda \, \mathrm{d} \vartheta + \\ + \frac{1}{2\pi^2} \sum_{(j,r) \in \mathcal{B}} \left[ \cos\psi \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} f \left(a,h,k,P,Q,\lambda^*,\vartheta^*,t\right) \cos \left( j\lambda^* - r\vartheta^* \right) \mathrm{d}\lambda^* \, \mathrm{d}\vartheta^* + \right. \\ \left. + \sin\psi \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} f \left(a,h,k,P,Q,\lambda^*,\vartheta^*,t\right) \sin \left( j\lambda^* - r\vartheta^* \right) \mathrm{d}\lambda^* \, \mathrm{d}\vartheta^* \right], \end{split} \label{eq:doubleavg}$$ where $\mathcal{B}$ is the set of all pairs $(j,r)$ for which inequality \[eq:mmres\_cond\] is satisfied. Furthermore, terms depending on $\lambda$ which are responsible for the resonances have to be retained in the perturbing functions ${\mathcal{R}_\Earth}$ and ${\mathcal{R}'}$. For the case of tesseral resonances with the Earth’s gravitational potential, the terms to be retained in ${\mathcal{R}_\Earth}$ are those satisfying the identities $$\begin{aligned} 2p - l &= q - sj, \\ m &= sr,\end{aligned}$$ with $s$ an integer.
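The retention rule for tesseral resonances just given can be turned into a simple enumeration of the index combinations to keep. In this sketch the truncation bounds `l_max`, `q_range`, and `s_max` are hypothetical parameters, introduced only to make the enumeration finite:

```python
def resonant_tesseral_indices(j, r, l_max, q_range=2, s_max=3):
    """Enumerate the (l, m, p, q) index combinations of the geopotential
    expansion satisfying 2p - l = q - s*j and m = s*r for integer s,
    i.e. the terms to be retained for the resonance psi = j*lambda - r*theta.
    The bounds l_max, q_range, s_max are illustrative truncations."""
    kept = set()
    for s in range(1, s_max + 1):
        m = s * r
        for l in range(max(2, m), l_max + 1):
            for p in range(l + 1):
                q = 2 * p - l + s * j
                if abs(q) <= q_range:
                    kept.add((l, m, p, q))
    return kept
```

Every retained term has $(l - 2p + q,\, m) = (sj,\, sr)$, so its angle $\psi_{lmpq}$ is the $s$-th harmonic of the resonant argument; for instance, for $j = 1$, $r = 2$ the classical 12-hour resonant term $(l,m,p,q) = (2,2,1,1)$ is recovered.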
On the other hand, for *mean-motion resonances* with the Moon, the terms to be retained in ${\mathcal{R}'}$ obey $$j(l - 2p + q) = r(l - 2p' + q').$$ Mean-motion resonances with the Sun do not arise in practical situations due to the extremely large values of the semi-major axis that they would require. Averaging perturbing accelerations ---------------------------------- Dissipative perturbations such as atmospheric drag cannot be expressed as the gradient of a disturbing function. In this case, the rates of change of the osculating elements can be written in Gauss form and the perturbing acceleration can be *numerically averaged* over one orbital period [@Uphoff1973; @Ely2014]. Average of the short-periodic terms ----------------------------------- In obtaining \[eq:avg\_A1\], we assumed that the short-periodic terms $\eta_{i,1}$ average to zero. This is equivalent to requiring that the mean elements are “centered”, i.e. that the short-periodic terms do not contain any long-periodic or secular offsets from the mean elements. If one is only interested in the history of the mean elements, this hypothesis is superfluous and can be disregarded. However, in general it is necessary to impose that $\langle \eta_{i,j} \rangle = 0$ to avoid divergence of the osculating and mean trajectories in the long term [@Lara2013]. Error analysis for averaged methods {#sec:avg_err_analysis} =================================== The method of averaging involves approximations needed to simplify the analytical developments, which would be intractable otherwise. These approximations introduce errors with respect to the real trajectory that add up to the numerical error accumulated during the integration of the mean equations of motion.
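Coming back to the numerical averaging of dissipative accelerations described above: the sketch below averages the Gauss-form rate $\mathrm{d}a/\mathrm{d}t = -(2a^2/\mu)\,v\,a_D$ for a drag-like deceleration $a_D$ opposing the velocity, by sampling uniformly in mean anomaly (i.e., uniformly in time) over one orbit. The exponential density profile and all numerical constants are illustrative only; they are not the atmosphere models or the quadrature used by the propagators discussed later.

```python
import math

MU = 3.986004418e14  # m^3/s^2, Earth's gravitational parameter (assumed value)
RE = 6378136.0       # m, Earth's equatorial radius (assumed value)

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton's method."""
    E = M
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def mean_dadt_drag(a, e, bc, n_nodes=720):
    """Numerically average da/dt = -(2 a^2 / mu) * v * a_D over one orbit,
    sampling uniformly in mean anomaly. The drag deceleration is
    a_D = 0.5 * rho * v^2 * bc, with ballistic coefficient bc = Cd*A/m;
    the exponential density profile below is purely illustrative."""
    rho0, h0, hs = 1.0e-11, 400.0e3, 60.0e3  # kg/m^3, m, m (illustrative)
    total = 0.0
    for jj in range(n_nodes):
        M = 2.0 * math.pi * jj / n_nodes
        E = kepler_E(M, e)
        r = a * (1.0 - e * math.cos(E))               # orbital distance
        v = math.sqrt(MU * (2.0 / r - 1.0 / a))       # vis-viva
        rho = rho0 * math.exp(-((r - RE) - h0) / hs)  # toy density model
        total += -(2.0 * a**2 / MU) * v * (0.5 * rho * v * v * bc)
    return total / n_nodes
```

For a circular orbit the integrand is constant and the average reduces to the pointwise rate; for an eccentric orbit, the quadrature automatically weights the dense perigee region where most of the decay occurs.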
While the propagation of numerical error in the integration of ordinary differential equations has been studied extensively, we are not aware of any quantitative study of the impact that the approximations involved in averaging methods have on the integration error. In the following, we analyse the contributions to the integration error in averaged methods, and we separate them into components with distinct sources. Besides providing the theoretical framework for interpreting the numerical results of this study, the analysis in this section can also be used as a starting point for the improvement of existing semi-analytical methods. Let ${\delta}{\hat{E}}_i$ be the *osculating* integration error with respect to the $i$-th equinoctial element, that is, the difference between the computed osculating equinoctial element ${\hat{E}}_i$ and its true (or *reference*) value. Denoting reference values of the osculating and mean elements with a superscript $\mathrm{R}$, we have $${\delta}\hat{E}_i = {\hat{E}}_i - {\hat{E}^\mathrm{R}}_i. \label{eq:oscerr_def}$$ We assume that the reference values ${\hat{E}^\mathrm{R}}_i$ and $\bm{E}^\mathrm{R}$ satisfy \[eq:osc\_mean\_NI\] exactly, while the computed values ${\hat{E}}_i$ are truncated to order $M$, $${\hat{E}}_i = E_i + \sum_{j=1}^{M} \epsilon^j {\eta_{i,j}} ( \bm{E}, t )\label{eq:osc_mean_NI_comp}.$$ Writing \[eq:osc\_mean\_NI\] for the reference values, $${\hat{E}}^\mathrm{R}_i = E_i^\mathrm{R} + \sum_{j=1}^{\infty} \epsilon^j {\eta_{i,j}} \left( \bm{E}^\mathrm{R}, t \right), \label{eq:osc_mean_NI_ref}$$ and subtracting the two preceding equations gives $${\delta}\hat{E}_i = \left( E_i - E^\mathrm{R}_i \right) + \left(\sum_{j=1}^{M} \epsilon^j \left( \eta_{i,j} - \eta^\mathrm{R}_{i,j} \right) - \sum_{j=M+1}^{\infty} \epsilon^j \eta^\mathrm{R}_{i,j}\right) = {\delta}E_i + {\delta}\eta_i, \label{eq:tot_err}$$ where ${\eta}^\mathrm{R}_{i,j} = {\eta}_{i,j}(\bm{E}^\mathrm{R},t)$.
The osculating integration error ${\delta}\hat{E}_i$ is the sum of the *mean integration error* ${\delta}{E}_i = E_i - E^\mathrm{R}_i$ and of the error on the short-periodic terms ${\delta}{\eta}_i$. The mean integration error can be further decomposed by considering $${\delta}{E_i} = {\delta}{E}_{i, \mathrm{dyn}} + {\delta}{E}_{i, \mathrm{mod}} + {\delta}{E}_{i, \mathrm{num}}, \label{eq:avg_err_breakdown}$$ where the first term is the *dynamical error*, the second is the *model truncation error*, and the last is the *numerical integration error*. Dynamical error $\delta E_{\mathrm{dyn}}$ {#sec:dyn_err} --------------------- For a first-order theory, the second term in \[eq:mean\_EoMs\] is truncated at $j=1$ yielding $${\frac{\mathrm{d}E_i}{\mathrm{d}t}} = n(a) \delta_{i,6} + \epsilon A_{i,1}. \label{eq:mean_EoM_trunc}$$ The mean rate $A_{i,1}$ is obtained by \[eq:avg\_oper\_A1\], in which the mean elements are kept constant during the averaging operation. We define the reference mean rates $A^\mathrm{R}_{i,1}$ as those that would be obtained by computing exactly the definite integrals in \[eq:singleavg,eq:doubleavg\], i.e. by taking into account the time dependence of the mean orbital elements over the period of integration. Plugging into \[eq:mean\_EoMs\] the exact mean rates yields $$\frac{\mathrm{d}{{E}^\mathrm{R}_i}}{\mathrm{d}t} = n\left(a \right) \delta_{i,6} + \sum_{j=1}^{\infty} \epsilon^j A^\mathrm{R}_{i,j}, \label{eq:mean_EoMs_exact}$$ which, when subtracted from \[eq:mean\_EoM\_trunc\], gives the error on the total mean rates of change $$\delta{\dot{E}_{i,\text{dyn}}} = \epsilon \left( A_{i,1} - A^\mathrm{R}_{i,1} \right) - \sum_{j=2}^{\infty} \epsilon^j A^\mathrm{R}_{i,j} = \epsilon {\delta} A_{i,1} + \mathcal{O}\left(\epsilon^2 \right). \label{eq:dyn_err}$$ This error is the sum of a term of order $\epsilon$, due to the inexact result from the application of the averaging operator, and higher-order terms that are neglected.
At each integration step, the *dynamical error* can be quantified as $$\delta{E}_{i,\text{dyn}} = \delta{\dot{E}_{i,\text{dyn}}} \Delta t.$$ From the standpoint of numerical integration, $\delta{E}_{i,\text{dyn}}$ propagates during the integration in a way similar to the local truncation error. However, the dynamical error cannot be reduced by choosing a smaller time step. Situations giving rise to a large $\delta{E}_{i,\text{dyn}}$ take place if the mean elements change significantly during one orbital period, or if higher orders are non-negligible in the equations of averaging (\[eq:averaging\]). This would also be the case if the single-averaging operator were applied in the presence of a resonance, without taking into account that the slow rate of change of the resonant angle $\psi$ generates long-periodic or secular trends. In \[eq:dyn\_err\], we have only considered first-order mean rates of a single perturbation. The development carried out here can be extended to higher orders and applied to several perturbations, which will give rise to further terms in the definition of the dynamical error. Model truncation error $\delta E_{\mathrm{mod}}$ ---------------------------- To perform the analytical averaging of conservative perturbations, the expressions of the perturbing functions in \[eq:R,eq:R1\] and the eccentricity functions $K_{lpq}$, $L_{lpq}$ and $G_{lp'q'}$ have to be truncated at a given order $l_\mathrm{M}$. Also, the only terms retained in the sums on $q$ and $q'$ are those that give rise to long-periodic effects, whose index values we collect in the finite sets $\mathcal{Q}$ and $\mathcal{Q'}$. In this way, only the lowest frequencies governing the motion are taken into account, reducing computational complexity and cost.
By defining ${\mathcal{R}_\Earth}^\mathrm{T}$ and $\mathcal{R'}^\mathrm{T}$ as the truncated perturbing functions, $$\begin{aligned} {\mathcal{R}_\Earth}^\mathrm{T} &= \sum_{l=2}^{l_\mathrm{M}} \sum_{m=0}^{l} \sum_{p=0}^{l} \sum_{q \in \mathcal{Q}} \mathcal{R}_{lmpq} \label{eq:Rtrunc} \\ \mathcal{R'}^\mathrm{T} &= \sum_{l=2}^{l_\mathrm{M}} \sum_{m=0}^{l} \sum_{p=0}^{l} \sum_{p'=0}^{l} \sum_{q \in \mathcal{Q}} \sum_{q' \in \mathcal{Q'}} {\mathcal{R}'}_{lmpqp'q'} \label{eq:R1trunc},\end{aligned}$$ we find the truncation error on the perturbing function $\delta\mathcal{R}$ as $$\delta\mathcal{R} = ( {\mathcal{R}_\Earth}^\mathrm{T} - {\mathcal{R}_\Earth}) + ( \mathcal{R'}^\mathrm{T} - {\mathcal{R}'}).$$ Since the mean rate ${A}_{i,1}$ is obtained by averaging $\mathcal{R}^\mathrm{T}$, the error $\delta\mathcal{R}$ will give rise to a *model truncation error* $\delta{E_{i, \text{mod}}}$ on the mean orbital elements, which will propagate during the integration of the equations of motion, analogously to $\delta{E_{i,\text{dyn}}}$. The model truncation error is caused by neglecting higher-order terms in the perturbing functions. In fact, one may build high-order theories (in the mean rates $A_{i,j}$) while keeping only a limited number of the most relevant terms in the perturbing functions. Note that the gravitational perturbing function of the geoid is always truncated even for non-averaged methods, since a closed form does not exist. However, the Sun and the Moon are usually considered as point masses, an approximation that holds well for Earth satellite orbits. Thus, their perturbing function can be expressed in a closed form that can be evaluated efficiently by non-averaged methods [@Battin1999 section 8.4]. Numerical error $\delta E_{\mathrm{num}}$ --------------------- The mean elements at each step of the numerical integration of \[eq:mean\_EoMs\] are affected by truncation and round-off errors.
Dropping the subscript $i$ for ease of notation, we denote them with ${\delta}{E}_\mathrm{num, T}$ and ${\delta}{E}_\mathrm{num, R}$ respectively, $${\delta}{E}_\mathrm{num} = {\delta}{E}_\mathrm{num, T} + {\delta}{E}_\mathrm{num, R}.$$ If the numerical scheme is of $p$-th order, the local truncation error accumulated with a step size ${\Delta}t$ is proportional to the $(p+1)$-th derivative of the mean element, $${\delta}E_{\mathrm{num, T}} \sim \frac{\mathrm{d}^{p+1} E}{\mathrm{d} t^{p+1}} {\Delta}t^{p+1}.$$ The round-off error ${\delta}{E}_\mathrm{num, R}$ grows with the number of floating-point operations performed during the integration. Thus, it is proportional to some power of the total number of integration steps. The accumulated round-off error can be estimated through statistical laws [@Brouwer1937; @Milani1987]. Note that in non-averaged methods the osculating integration error (\[eq:tot\_err\]) is only due to the numerical error. Error on the short-periodic terms $\delta\eta$ {#sec:error_sp} ------------------------------------------ Following \[eq:tot\_err\], the error on the short-periodic terms can be written as $$\delta{\eta_{i}} = \sum_{j=1}^{M} \epsilon^j ( \eta_{i,j} - \eta^\mathrm{R}_{i,j} ) - \sum_{j=M+1}^{\infty} \epsilon^j \eta^\mathrm{R}_{i,j}. \label{eq:err_sp}$$ We expand $( \eta_{i,j} - \eta^\mathrm{R}_{i,j} )$ as $$\eta_{i,j}(\bm{E},t ) - {\eta}_{i,j}(\bm{E}^\mathrm{R},t ) \approx \frac{\partial \eta_{i,j}}{\partial \bm{E}}(\bm{E}^\mathrm{R},t) \cdot \delta\bm{E},$$ where $\delta\bm{E}=\bm{E}-\bm{E}^\mathrm{R}$. Substituting in \[eq:err\_sp\] we obtain $$\delta{\eta}_i \approx \sum_{j=1}^{M} \epsilon^j \frac{\partial {\eta}_{i,j}}{\partial \bm{E}} \cdot \delta\bm{E} - \sum_{j=M+1}^{\infty} \epsilon^j \eta^\mathrm{R}_{i,j} = \delta{\eta}_{i,\mathrm{M}} + \delta{\eta}_{i,\mathrm{HO}}. \label{eq:err_sp_break}$$ The error on the short-periodic terms is given by two contributions.
The first, $\delta{\eta}_{i,\mathrm{M}}$, is due to the mean integration error $\delta {E}_i$ as defined above. In fact, the short-periodic terms are computed from values of the mean elements that are affected by $\delta {E}_i$, generating an error on the short-periodic terms up to the $M$-th order. The second contribution, $\delta{\eta}_{i,\mathrm{HO}}$, is due to the truncation to the $M$-th order that is performed in \[eq:osc\_mean\_NI\_comp\]. The first term of $\delta{\eta}_i$ is proportional to $\epsilon \delta{E}_i$, and will be negligible compared to the other sources of error. The impact of the error on the short-periodic terms is particularly critical for the initial conditions of a propagation, which are often assigned in osculating elements. The *mean* initial conditions for a semi-analytical propagation must be derived by subtracting the short-periodic terms, which are affected by the error $\delta{\eta}_i$. The error on the mean initial conditions will lead to a divergence of the computed trajectory from the reference which might become critical in some cases, for example when accuracy requirements are stringent or the dynamics is chaotic. Error budget ------------ It is desirable that the dynamical error ${\delta}E_{i,\mathrm{dyn}}$ dominates over the remaining terms. This is because both the model truncation error ${\delta}{E_{i,\mathrm{mod}}}$ and the numerical error ${\delta}{E_{i,\mathrm{num}}}$ can be driven down, within certain limits. Assuming that the analytical expressions for the expansions are available, the error ${\delta}{E_{i,\mathrm{mod}}}$ for the lunar and solar perturbing functions can be reduced by choosing higher truncation indices in \[eq:Rtrunc,eq:R1trunc\]. Moreover, it is possible to reduce ${\delta}{E_{i,\mathrm{num}}}$ by using an efficient numerical scheme, and by applying an appropriate value for the tolerance or the step-size of the solver. 
As we will show in the numerical tests, it is generally possible to decrease ${\delta}{E_{i,\mathrm{num}}}$ so that it is negligible with respect to the other error contributions. Situations in which the numerical error dominates might only arise in extremely long propagations, for which round-off error could play a significant role. Orbit propagation software {#sec:software} ========================== In the following, we outline the characteristics of the software used to perform the numerical tests. To ensure the validity of the analysis of the results, it is necessary to carefully *align* the codes, i.e., to implement the same physical model in order to exclude any difference in numerical results due to different modeling of the physical phenomena. [`STELA`]{} ----------- The [`STELA`]{} (Semi-analytical Tool for End of Life Analysis) orbit propagator has been developed by the French space agency CNES to assess compliance with spacecraft re-entry requirements imposed by the French Space Act [@LeFevre2014], and is similar in function to the DSST. The software is publicly available as a Java executable,[^6] and in this work we use version 3.2. [`STELA`]{} integrates single-averaged equations of motion for the equinoctial element set $\bm{E} = (a,h,k,P,Q,\lambda)$ with a Runge-Kutta solver with fixed step size $\Delta{t}$. Two integration orders are available; we use the higher of the two for all of our simulations. We assign the initial conditions for the propagations in osculating orbital elements. [`STELA`]{} automatically removes the short-periodic terms from the initial osculating elements in order to obtain the mean initial conditions for the propagation. ### Physical model {#sec:STELA_mathmod} [`STELA`]{} allows the user to choose which perturbations to include among several options.
For the conservative perturbations, we take into account the geopotential corresponding to the <span style="font-variant:small-caps;">GRIM5-S1</span> model [@Biancale2000], and a simplified version of the analytical solar and lunar ephemerides by @Meeus1998. The geopotential harmonics can be computed up to any degree and order through a recurrence formulation. [`STELA`]{} truncates the expansion \[eq:R\] of the Earth’s non-spherical perturbing function at the second degree in the presence of tesseral resonances [@Morand2013]. The expansion \[eq:R1\] of the solar and lunar perturbing functions is truncated at an order $l_\mathrm{M}$ that can be varied between 2 and 8 by the user. The short-periodic terms (\[eq:osc\_mean\_NI\_comp\]) are computed at first order for lunisolar perturbations and at second order for $J_2$. Resonant tesseral harmonics are retained in the perturbing function according to their period, that is $2\pi / \dot{\psi}_{lmpq}$ with $\psi_{lmpq}$ defined according to \[eq:psi\_Earth\]. If the period of a tesseral harmonic is greater than a customizable multiple $N_\mathrm{tess}$ of the integration step, it is retained in the averaged perturbing function [@Morand2013]; that is, the tesseral harmonics retained according to this criterion have periods greater than $\tau_\mathrm{tess} = N_\mathrm{tess} \Delta{t}$. While [`STELA`]{} considers precessional and nutational movements of the rotational axis of the Earth by integrating the equations of motion in the Celestial Intermediate Reference Frame (CIRF) [@STELA_Man2016], we disabled these effects in our tests as they are not relevant for the evaluation of the numerical performance. [`STELA`]{} can also consider drag arising from an atmosphere co-rotating with the Earth as an additional perturbation.
To obtain the corresponding contribution to the rate of change of the mean elements (\[eq:mean\_EoMs\]), the osculating elements are first recovered at a prescribed number of points $M_\mathrm{quad}$ along the orbit. The drag acceleration is computed from the osculating elements at each of these points, and its average over one period is performed through a numerical quadrature. This algorithm is executed every $N_\mathrm{drag}$ integration steps; both $N_\mathrm{drag}$ and $M_\mathrm{quad}$ can be chosen by the user depending on the required balance between speed and accuracy. The atmospheric density is computed using either the US 1976 Standard Atmosphere (US76), the Jacchia 1977, or the NRLMSISE-00 models [@US76; @Jacchia1977; @Picone2002]. To better align the codes, we assume the solar flux at the 10.7 cm wavelength and the planetary geomagnetic amplitude $A_\mathrm{p}$ to be equal to 140 SFU and 15, respectively. We neglect the Earth oblateness in the computation of the geodetic height by setting the ellipticity to zero. [`STELA`]{} only computes the drag acceleration when the altitude is lower than an assigned threshold value. [`STELA`]{} also includes perturbations stemming from Earth solid tides and solar radiation pressure; however, we turn them off for this study. [`THALASSA`]{} -------------- [`THALASSA`]{} is an orbit propagation code that numerically integrates the non-averaged equations of motion of the perturbed two-body problem written for different formulations; the code was specifically developed for the present study. [`THALASSA`]{} includes the classical Cowell’s method [@Battin1999 p. 447], in which the Cartesian coordinates of the position and velocity relative to the primary body of attraction are employed as state variables, and three regularized formulations. The Kustaanheimo-Stiefel (KS) regularization has been implemented following @Stiefel1971 [pp. 33-35].
The independent variable $s$ is defined by the time transformation (known also as the Sundman transformation) $$\frac{d t}{d s}=\frac{r}{\beta}, \label{eq:Sund}$$ where $\beta=1$, $t$ is the physical time, and $r$ is the orbital distance. For negative values of the total energy $h$, the two-body problem is transformed into four uncoupled harmonic oscillators of equal frequency $\sqrt{-h/2}$. The state vector is composed by the four KS parameters along with their derivatives with respect to $s$, the total energy, and time. In our implementation, we also consider a linear time element instead of time as a dependent variable, because a better performance can be reached for more eccentric orbits. By analytically integrating \[eq:Sund\] when the motion is unperturbed a time element can be introduced as either a constant quantity or a linear function of $s$ [see @Bau2014]. We also implemented in [[`THALASSA`]{}]{} two formulations based on regular elements (see the Introduction). The first is a VOP method related to the KS scheme which was developed by @Broucke1966. Eight elements are obtained from the solution of the KS harmonic oscillators for $h<0$, and the total energy and a linear time element complete the state vector. These elements are non-singular for any inclination and eccentricity smaller than 1 and are well-defined at collision. We refer to this method as SS because we implemented the same equations derived in @Stiefel1971 [section 19]. [The second one, named EDromo, has been recently proposed by @Bau2014a [@Bau2015]. This formulation was inspired by the ideal elements first proposed by @Deprit1975 and later on by @Pelaez2007, where their connection to the regularization devised by @Burdet1969 and @Ferrandiz1988 becomes clear. For a comprehensive overview of the ideal elements we refer to the Introduction of @Bau2015 and to @Lara2017a. 
The basic idea is to decompose the motion into the rotation of the radial unit vector and the displacement along the radial direction. In EDromo, four elements are the Euler parameters that define the orientation of an intermediate reference frame in space.]{} This frame is fixed when there are no perturbations and has one axis pointing towards the angular momentum vector. Three other elements determine the shape of the ellipse and, together with the fictitious time, allow us to locate the particle along the orbit. Either a linear or a constant time element can be used. EDromo elements are non-singular like the SS ones, but they do not work at collision. For both SS and EDromo the independent variable is related to time by \[eq:Sund\] with $\beta=\sqrt{-h/2}$. \[tab:forms\] summarizes some relevant features of the formulations contained in [[`THALASSA`]{}]{}.

  Formulation   Variables     Time element   Dimension
  ------------- ------------- -------------- -----------
  Cowell        coordinates   no             6
  KS            coordinates   yes            10
  SS            elements      yes            10
  EDromo        elements      yes            8

  : For the formulations implemented in [`THALASSA`]{} we specify whether the spatial variables consist of orbital elements, i.e. quantities that are constant along the Keplerian solution, whether a time element is employed, and the dimension of the state vector. \[tab:forms\]

The equations of motion are integrated with the [[`LSODAR`]{}]{} numerical solver[[^7]]{} [@Radhakrishnan1993] in double precision. [[`LSODAR`]{}]{} chooses the step size and order along the integration according to relative and absolute tolerances on the local truncation error, which are set equal to a single value assigned by the user. In all the test cases, we use very strict values ( or smaller) of the [[`LSODAR`]{}]{} local truncation error tolerances to generate reference trajectories in quadruple precision.
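As an illustrative aside (a sketch, not part of either code), the two basic ingredients of the KS scheme discussed above, namely the quadratic map from the KS parameters to the position vector and the Sundman transformation \[eq:Sund\], can be checked in a few lines. The gravitational parameter and the orbit radius below are assumed, illustrative values:

```python
import math

MU = 398600.4418  # Earth gravitational parameter, km^3/s^2 (assumed value)

def ks_map(u):
    """KS transformation: parameters u in R^4 -> position r in R^3."""
    u1, u2, u3, u4 = u
    return (u1*u1 - u2*u2 - u3*u3 + u4*u4,
            2.0 * (u1*u2 - u3*u4),
            2.0 * (u1*u3 + u2*u4))

def norm(v):
    return math.sqrt(sum(c*c for c in v))

# 1) Fundamental norm identity |r| = |u|^2: with it, the Sundman
#    transformation dt/ds = r becomes simply dt/ds = |u|^2.
u = (0.3, -1.2, 0.7, 2.1)
assert math.isclose(norm(ks_map(u)), sum(c*c for c in u))

# 2) For unperturbed elliptic motion (h < 0) the KS parameters obey
#    u'' = (h/2) u: four oscillators of frequency w = sqrt(-h/2).
#    A circular orbit of radius a corresponds to
#    u(s) = sqrt(a) * (cos(w s), sin(w s), 0, 0), and one revolution
#    of r is covered in a fictitious-time span of pi/w.
a = 7000.0                # km (illustrative value)
h = -MU / (2.0 * a)       # total orbital energy per unit mass
w = math.sqrt(-h / 2.0)   # oscillator frequency
sqa = math.sqrt(a)

def r_of_s(s):
    return norm(ks_map((sqa*math.cos(w*s), sqa*math.sin(w*s), 0.0, 0.0)))

# Recover the physical period by trapezoidal quadrature of dt/ds = |u|^2.
n_steps = 2000
ds = (math.pi / w) / n_steps
t = sum(0.5 * (r_of_s(k*ds) + r_of_s((k + 1)*ds)) * ds for k in range(n_steps))
assert math.isclose(t, 2.0 * math.pi * math.sqrt(a**3 / MU), rel_tol=1e-9)
```

The recovered time span matches the Keplerian period $2\pi\sqrt{a^3/\mu}$, as expected from the half-angle nature of the KS parametrization.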
### Physical model {#physical-model}

[`THALASSA`]{} has been aligned to [`STELA`]{} by implementing the same set of perturbations, and by using the same sources for the physical constants, ephemerides, and atmospheric models.[^8] All the constants involved in the computations of the geopotential and lunisolar perturbations are identical, so as to exclude any difference in the computed trajectories due to different perturbation models. The code for the US76 model is ported from the one used in [`STELA`]{}, while the NRLMSISE-00 model is implemented through the official subroutines that are publicly available.[[^9]]{} The value of the air density is replicated to double precision machine zero with the US76 model. We could not ensure a similar level of alignment in the computation of the density with the Jacchia 77[[^10]]{} and NRLMSISE-00 models. This is because these models only prescribe the fundamental equations, constants, and analytical procedures used to compute the air density, while several choices regarding their implementation are left to the user. Since the source code of [`STELA`]{} is not available to the public, it is not possible to replicate these choices on a line-by-line basis, which prevents reproduction of the results at machine precision. A particularly important section of the [[`THALASSA`]{}]{} code involves the calculation of the non-spherical gravity perturbation. In a preliminary test on a LEO orbit, we found that the evaluation of this perturbation can absorb more than 50% of the computational time for a moderately complex (e.g., $5 \times 5$) model. In order to reduce this burden, we implement the algorithm for the calculation of the geopotential harmonics by @Pines1973. The algorithm is quite efficient in that only two trigonometric function evaluations are required for each evaluation of the perturbing function.
Moreover, no singularities at the poles are present, in contrast to the explicit evaluation of the associated Legendre functions in the classical perturbing function expansion. [Lunisolar gravitational perturbations are implemented in [`THALASSA`]{} according to two different mathematical approaches.]{} The first approach consists of the classical third-body perturbing acceleration $\bm{P}'$ expressed as the derivative of the perturbing function $\mathcal{R}'$ written in Cartesian coordinates [@Battin1999 section 8.4], $$\begin{aligned} \mathcal{R}' &= \mu' \left( \frac{1}{d} - \frac{\bm{r} \cdot \bm{r}'}{\left(r'\right)^3} \right) \label{eq:R1Cart}, \\ \bm{P}' &= \frac{\partial \mathcal{R'}}{\partial \bm{r}} = -\mu' \left( \frac{\bm{d}}{d^3} + \frac{\bm{r}'}{\left(r'\right)^3} \right), \label{eq:P1Cart}\end{aligned}$$ where $\bm{r}'$ is the position vector of the perturbing body with respect to the Earth, and $\bm{d} = \bm{r} - \bm{r}'$. In order to isolate the effect of the model truncation error ${\delta}E_{i, \mathrm{mod}}$ from that of the dynamical error $\delta{E}_{i,\mathrm{dyn}}$, in our second approach we consider the perturbing acceleration derived from the *truncated* perturbing function $\mathcal{R'}^\mathrm{T}$. Expanding $\mathcal{R}'$ in Legendre polynomials gives $$\mathcal{R'} = \frac{\mu'}{r'} \left[ 1 + \sum_{l=2}^{\infty} \left( \frac{r}{r'} \right)^l P_l \left( \frac{\bm{r} \cdot \bm{r}'}{rr'} \right) \right], \label{eq:R1Legendre}$$ where $P_l \left( \nu \right)$ is the Legendre polynomial of degree $l$, $$P_l \left( \nu \right) \triangleq \sum_{s=0}^{\left \lfloor \frac{1}{2} l \right \rfloor} \frac{\left( -1 \right)^s \left( 2l - 2s \right)!}{2^l s! \left( l - s \right)! \left( l - 2s \right)!} \nu^{l - 2s} = \sum_{s=0}^{\left \lfloor \frac{1}{2} l \right \rfloor} G(l,s) \nu^{l - 2s}.$$ This expression for $\mathcal{R'}$ is formally equivalent to that in \[eq:R1\], and in fact it is the first step in the derivation of $\mathcal{R}'$ as a function of the classical orbital elements [@Murray1999 section 6.3]. By making $P_l$ explicit in \[eq:R1Legendre\] and manipulating the resulting expression we rewrite $\mathcal{R'}$ as $$\mathcal{R'} = \frac{\mu'}{r'} \left[ 1 + \sum_{l=2}^{\infty} \sum_{s=0}^{\left \lfloor \frac{1}{2} l \right \rfloor} \frac{G(l,s)}{(r')^l} r^{2s} \left( \bm{r} \cdot \bm{\hat{r}'} \right)^{l-2s} \right], \label{eq:R1Gls}$$ where $\bm{\hat{r}'} = \bm{r}'/r'$ is a unit vector. By taking the gradient of \[eq:R1Gls\], truncating the expansion at order $l_\mathrm{M}$, and rearranging the terms, we obtain a compact expression for ${\bm{P}'}^\mathrm{T}$, the lunisolar perturbing acceleration resulting from the truncated perturbing function $\mathcal{R'}^\mathrm{T}$, $${\bm{P}'}^\mathrm{T} = \frac{\partial \mathcal{R'}^\mathrm{T}}{\partial \bm{r}} = \frac{\mu'}{r'} \sum_{l=2}^{l_\mathrm{M}} \sum_{s=0}^{\left \lfloor \frac{1}{2} l \right \rfloor} \frac{G(l,s)}{(r')^l} r^{2s} \left( \bm{r} \cdot \bm{\hat{r}'} \right)^{l-2s} \left[ 2s \frac{\bm{r}}{r^2} + \frac{\left( l - 2s\right)}{ \left( \bm{r} \cdot \bm{\hat{r}'} \right)} \bm{\hat{r}'} \right]. \label{eq:P1Trunc}$$ Using a low-order expansion for ${\bm{P}'}^\mathrm{T}$, rather than $\bm{P}'$, can increase numerical accuracy when the orbit is very close to the primary body [@Battin1999 p. 388]. However, by expressing lunisolar perturbations through ${\bm{P}'}^\mathrm{T}$ we aim to introduce an “artificial” model truncation error.
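As a self-contained numerical check (a sketch, not the actual [`THALASSA`]{} implementation), the convergence of \[eq:P1Trunc\] to \[eq:P1Cart\] for increasing $l_\mathrm{M}$ can be verified with a short script; the spacecraft and third-body positions and the lunar gravitational parameter below are illustrative values:

```python
import math

def G(l, s):
    """Coefficient G(l,s) of the Legendre polynomial expansion above."""
    return ((-1)**s * math.factorial(2*l - 2*s)
            / (2**l * math.factorial(s) * math.factorial(l - s)
               * math.factorial(l - 2*s)))

def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def accel_full(mu_p, r, rp):
    """Full third-body acceleration P', Cartesian form (eq. P1Cart)."""
    d = [ri - rpi for ri, rpi in zip(r, rp)]
    dn, rpn = norm(d), norm(rp)
    return [-mu_p * (di/dn**3 + rpi/rpn**3) for di, rpi in zip(d, rp)]

def accel_trunc(mu_p, r, rp, lM):
    """Acceleration from the truncated perturbing function (eq. P1Trunc)."""
    rpn = norm(rp)
    rhat = [c / rpn for c in rp]
    rn, rd = norm(r), dot(r, rhat)
    acc = [0.0, 0.0, 0.0]
    for l in range(2, lM + 1):
        for s in range(l // 2 + 1):
            c = G(l, s) / rpn**l
            # the two bracketed terms of eq. P1Trunc; the guards avoid
            # spurious 0**(-1)-type operations at s = 0 and l = 2s
            w1 = c * 2*s * rn**(2*s - 2) * rd**(l - 2*s) if s > 0 else 0.0
            w2 = c * (l - 2*s) * rn**(2*s) * rd**(l - 2*s - 1) if l > 2*s else 0.0
            for i in range(3):
                acc[i] += w1 * r[i] + w2 * rhat[i]
    return [mu_p / rpn * ai for ai in acc]

mu_moon = 4902.8                    # lunar gravitational parameter, km^3/s^2 (assumed)
r  = [7000.0, 1000.0, 500.0]        # spacecraft position, km (illustrative)
rp = [340000.0, 150000.0, 80000.0]  # third-body position, km (illustrative)
full = accel_full(mu_moon, r, rp)
errs = [norm([a - b for a, b in zip(accel_trunc(mu_moon, r, rp, lM), full)])
        for lM in (2, 4, 8, 12)]
assert errs[0] > errs[1] > errs[2] > errs[3]  # monotone convergence to P'
```

Since the error of the truncated series decays roughly as $(r/r')^{l_\mathrm{M}-1}$, a few terms already suffice at this geometry; the sketch merely confirms that \[eq:P1Trunc\] is a consistent truncation of \[eq:P1Cart\].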
For equal $l_\mathrm{M}$, [`THALASSA`]{} propagations performed with ${\bm{P}'}^\mathrm{T}$ will be affected by the same model truncation error as [`STELA`]{}, thus allowing us to isolate its effect on the total integration error separately from the other contributions. The numerical test cases in \[sec:GTO\_test,sec:HEO\_test\] highlight these considerations. In the LEO test case, we always consider the full expression $\bm{P}'$.

### [[`THALASSA`]{}]{} implementation and comparison with [`STELA`]{} {#sec:thalassa_impl}

Any mathematical procedure implemented in a computer may exhibit very different computational costs according to implementation-dependent features such as the memory access method, the number of disk I/O operations, and the source code language, factoring, and optimization. It is important to choose these features carefully in order to attain the maximum performance. In the following, we provide a broad overview of the implementation of the [[`THALASSA`]{}]{} code. The main [[`THALASSA`]{}]{} executable is implemented in Fortran 2008, and is available through a public Gitlab repository.[^11] The Fortran code is written for sequential execution, but the repository also includes Python scripts that execute propagations in parallel over several cores for large-scale simulations. We use [`THALASSA`]{} version 1.2 for the tests. For the purpose of comparing computational times with [`STELA`]{}, we emphasize that, depending on multiple factors such as the machine architecture and the Java Development Kit and Fortran compiler versions, programs written in Fortran may exhibit lower computational times than their equivalents in Java. Benchmarks for scientific computing [@Bull2001] and applications to astrodynamics [@Eichhorn2017] show that Java codes are usually slower than Fortran ones, although the computational time strongly depends on the particular benchmark and on the characteristics of the implementation.
The factor by which Java algorithms are slowed down with respect to their Fortran versions can be conservatively estimated as 5 for our applications. Since [`STELA`]{} is heavily optimized for performance and its internal algorithms have not been released publicly, in the rest of this article we report the actual CPU times required to run the different codes, without assigning any “penalty” to [`THALASSA`]{}. The [`THALASSA`]{} code is compiled with `gfortran 6.3.0` using the optimization flag `-O`; the tests are performed on an iMac Pro equipped with 18 Intel Xeon W cores with Turbo Boost.

LEO numerical test case {#sec:LEO_test}
=======================

$$\begin{array}{cS[tight-spacing=true,table-format=5.6e0]s}
\toprule
\text{MJD} & 58171.738177 & \\
\hat{a} & 6892.14 & \kilo\metre \\
\hat{e} & 0 & \\
\hat{i} & 97.46 & \degree \\
\hat{\Omega} & 281 & \degree \\
\hat{u} & 0.0 & \degree \\
\bottomrule
\end{array}$$

We consider the propagation of a LEO orbit corresponding to that of the *Tintin A* spacecraft.[[^12]]{} The initial osculating orbital elements in \[tab:ICs\_LEO\] correspond to its parking, Sun-synchronous orbit about minutes after launch on February 22^nd^, 2018. We assume a constant drag coefficient $C_\mathrm{D} = 2.2$, a spacecraft mass of and a cross-sectional area of . The physical model includes a $5 \times 5$ geopotential, lunisolar perturbations, and atmospheric drag. The solar flux and the geomagnetic planetary index and amplitude are kept constant ($K_p = \num{3.0}$, $A_p = \num{15}$, $F_{10.7} = \SI{140}{\SFU}$). ![Osculating perigee altitude $\hat{h}_\mathrm{p}$ and remaining orbital elements as a function of time for the propagation of the *Tintin A* initial conditions in \[tab:ICs\_LEO\] until re-entry due to atmospheric drag (the mean argument of latitude is $\hat{u} = \hat{M} + \hat{\omega}$). Green, orange, and purple curves are obtained with the US76, Jacchia 77, and NRLMSISE-00 atmospheric models, respectively.
The trajectories are computed with both [`THALASSA`]{} and [`STELA`]{}; the latter are shown with circular markers. \[fig:LEO\_COE\]](Tintin_5x5_traj){width="0.8\columnwidth"}

$$\begin{array}{c*5{S[tight-spacing=true]}}
\toprule
{\Delta {t}} & {l_\mathrm{M}} & {N_\mathrm{tess}} & N_\mathrm{drag} & {M_\mathrm{quad}} \\
\midrule
\SI{24}{\hour} & 3 & 20 & 1 & 33 \\
\bottomrule
\end{array}$$

LEO orbital dynamics
--------------------

\[fig:LEO\_COE\] shows the history of the osculating perigee altitude and of the remaining osculating orbital elements for the propagation of the initial conditions in \[tab:ICs\_LEO\] until a re-entry at the height of is detected. We display the trajectories obtained with both [`THALASSA`]{} and [`STELA`]{} for all the atmospheric models. The parameters affecting the numerical propagation in [`STELA`]{} are chosen through a trial-and-error calibration procedure. Their nominal values, shown in \[tab:STELA\_LEO\_params\], are chosen as those for which the final re-entry date converges within an acceptable computational time. The [`THALASSA`]{} trajectories are propagated by integrating the KS equations with a solver tolerance of and no time element. Examination of \[fig:LEO\_COE\] shows that the choice of atmospheric model heavily impacts the lifetime estimate of $\num{16.6} \pm \num{3.6}$ years. In fact, the modelling of atmospheric drag constitutes the largest physical source of uncertainty for LEO orbits. The curves for [`STELA`]{} and [`THALASSA`]{} almost overlap when using the US76 atmospheric model, as the values of atmospheric density are consistent in both codes to within a few units of double precision machine zero. With this model, the remaining discrepancies are entirely attributable to the dynamical and short-periodic errors in [`STELA`]{}, since the numerical error in the [`THALASSA`]{} solution is substantially mitigated by the very small tolerance value.
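The per-period quadrature averaging that [`STELA`]{} applies to the drag acceleration can be illustrated with a minimal sketch (not the actual [`STELA`]{} algorithm): the drag acceleration is replaced here by a simple integrand with a known time-average, and the orbit is sampled at $M_\mathrm{quad}$ equally spaced values of the eccentric anomaly:

```python
import math

def orbit_average(f, a, e, M_quad=33, mu=398600.4418):
    """Time-average of a function of the orbital radius over one
    Keplerian orbit, by quadrature at M_quad points.

    The orbit is sampled in eccentric anomaly E, and each sample is
    weighted by dt = (1 - e*cos E)/n dE from Kepler's equation.
    """
    n = math.sqrt(mu / a**3)   # mean motion
    T = 2.0 * math.pi / n      # orbital period
    h = 2.0 * math.pi / M_quad # step in eccentric anomaly
    total = 0.0
    for k in range(M_quad):    # rectangle rule: spectrally accurate
        E = k * h              # for smooth periodic integrands
        r = a * (1.0 - e * math.cos(E))
        total += f(r) * (1.0 - e * math.cos(E)) / n * h
    return total / T

# Check against the classical result <r>_t = a (1 + e^2 / 2).
a, e = 24326.18, 0.73          # GTO-like values, cf. tab:ICs_GTO
avg_r = orbit_average(lambda r: r, a, e)
assert math.isclose(avg_r, a * (1.0 + 0.5 * e * e), rel_tol=1e-12)
```

In the real propagator the integrand is the drag acceleration evaluated from the osculating elements, but the weighting and sampling logic is the same.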
Performance of the semi-analytical method
-----------------------------------------

\[sec:LEO\_SA\_perf\]

\[fig:LEO\_batch\_COE\] shows the CPU time as a function of the errors in osculating orbital elements with respect to a reference trajectory computed by running [`THALASSA`]{} in quadruple precision and with a solver tolerance of . The error values refer to trajectories obtained by varying the [`THALASSA`]{} solver tolerance between and , and the [`STELA`]{} integration step ${\Delta}t$ between and . The CPU times are averaged over 3 runs of each propagation. We evaluate the errors at a common epoch of years from the start, rather than at the re-entry epoch, since the latter changes across the different propagations. In both codes, we consider the US76 atmospheric model to eliminate any sources of error other than those described in \[sec:avg\_err\_analysis\]. [`STELA`]{} attains accuracy in the radius of perigee with CPU times between and seconds, depending on the time step. Excluding very small time step values, the average [`STELA`]{} computational time is in the order of seconds. [[`THALASSA`]{}]{} requires larger CPU times for the same accuracy, between and seconds according to the chosen formulation. In this respect, regularized element methods such as EDromo and the one due to Stiefel and Scheifele achieve the smallest computational times, roughly twice that obtained with [`STELA`]{}. Note that previous works estimate non-averaged methods to be a hundred times slower than semi-analytical ones [@Lara2012]. However, the maximum accuracy attainable by [`STELA`]{} is limited by the approximations intrinsic to the averaging process. While the expansions of the gravitational perturbing functions converge very quickly due to the small eccentricity, the mean integration error ${\delta}{E}_i$ (\[eq:avg\_err\_breakdown\]) still impacts the total integration error considerably.
It can be shown that the latter is one order of magnitude larger than the short-periodic terms in the eccentricity and in the angular variables. Further [`STELA`]{} propagations performed with $l_\mathrm{M} > 3$ do not result in significant improvements, suggesting that the lunisolar model truncation error is negligible, as can be expected from the small value of the semi-major axis. As mentioned in \[sec:avg\_err\_analysis\], the dynamical error, the model truncation error, and the error on the short-periodic terms are entirely independent of the numerical integration process, and it is not possible to mitigate them by choosing a smaller step size. According to \[eq:avg\_err\_breakdown\], the time step ${\Delta}t$ should only be small enough to ensure that $\left|{\delta}E_\mathrm{num}\right| \ll |{\delta}E_\mathrm{dyn}| + |{\delta}E_\mathrm{mod}|$. Any smaller time step will result in an increase of CPU time without any corresponding accuracy improvement. This fine-tuning of the time step should be performed for each orbital regime and for different spacecraft characteristics to ensure that a semi-analytical method gives the best performance. Similarly, the four additional parameters listed in \[tab:STELA\_LEO\_params\] should also be tuned, which can be time-consuming. In contrast, the accuracy of a non-averaged method is completely determined by a single parameter, the integration tolerance. The optimization of only one parameter in [`THALASSA`]{} (rather than five as in [`STELA`]{}) results in a much simpler propagator configuration.

Performance of the non-averaged methods
---------------------------------------

The maximum accuracy reached by [`THALASSA`]{} is limited by the accumulation of round-off error (leftmost points in the panels of \[fig:LEO\_batch\_COE\]). The EDromo and SS element methods are more efficient than Cowell and KS, which are based on coordinates.
For the same computational time, element methods improve the accuracy with respect to [`STELA`]{} by up to three orders of magnitude in each of the orbital elements. Equivalently, they afford a reduction in CPU time with respect to Cowell by a factor of 3 for the same accuracy. There is no strong speed advantage in choosing the Kustaanheimo-Stiefel formulation over Cowell in the propagation of a quasi-circular orbit, since the rate of change of the physical time is equal to that of the fictitious time up to a multiplicative constant ($\mathrm{d}t = r\, \mathrm{d}s$, with $r$ nearly constant). Nevertheless, KS does achieve a higher accuracy, especially when used in conjunction with a linear time element. Computing the position along the orbit with an accuracy better than (equivalent to about in the along-track direction) is only possible if regularized formulations are employed. This is remarkable given the long time span of the integration, which covers about orbital periods.

Re-entry date prediction accuracy
---------------------------------

We use a procedure analogous to that of the previous section to compute the error in the re-entry date with respect to the reference solution for [`THALASSA`]{} and [`STELA`]{}. The US76 atmospheric model is used in both codes. \[fig:LEO\_batch\_JD\] shows that the dynamical and model truncation errors in [`STELA`]{} limit its re-entry date prediction accuracy to day after 20 years of propagation. [`STELA`]{} is three times faster than [`THALASSA`]{} for the same accuracy in re-entry time. The difference in performance between the methods is similar to the case in which we take the error on the orbital elements as the accuracy metric (compare \[fig:LEO\_batch\_JD\] against \[fig:LEO\_batch\_COE\]). Among the formulations implemented in [`THALASSA`]{}, EDromo provides the most accurate re-entry predictions.
We also repeated the propagation of the initial conditions in \[tab:ICs\_LEO\], but changing the initial inclination to the “critical” value of , for which $\dot{\omega} \approx 0$. All the perturbations considered until now were kept active in this test. The qualitative behavior of the trajectories and the re-entry dates obtained with the two codes were in good agreement. We highlight that [`STELA`]{} was able to provide reliable estimates of the re-entry epoch even close to the condition $\dot{\omega} \approx 0$, which is an intrinsic singularity of the main problem [@Coffey1986].

$$\begin{array}{cS[tight-spacing=true,table-format=5.7e0]s}
\toprule
\text{MJD} & 57249.958333 & \\
\hat{a} & 24326.18 & \kilo\metre \\
\hat{e} & 0.73 & \\
\hat{i} & 10 & \degree \\
\hat{\Omega} & 310 & \degree \\
\hat{\omega} & 0 & \degree \\
\hat{M} & 180 & \degree \\
\bottomrule
\end{array}$$

GTO numerical test case {#sec:GTO_test}
=======================

The same numerical experiments as in the previous section are repeated for a set of initial conditions representing a GTO, which is displayed in \[tab:ICs\_GTO\]. The physical model is unchanged. The mass, area, and drag coefficient of the spacecraft are set to $M = \SI{1000}{\kilo\gram}$, $A = \SI{10}{\square\metre}$, and $C_D = 2.2$, respectively.

GTO dynamics and impact on the integration error
------------------------------------------------

![Osculating orbital elements as a function of time for the propagation of the GTO initial conditions in \[tab:ICs\_GTO\] until re-entry.
Refer to \[fig:LEO\_COE\] for the interpretation of the curves. \[fig:GTO\_COE\]](GTO_traj){width="0.8\columnwidth"}

$$\begin{array}{c*5{S[tight-spacing=true]}}
\toprule
{\Delta {t}} & {l_\mathrm{M}} & {N_\mathrm{tess}} & N_\mathrm{drag} & {M_\mathrm{quad}} \\
\midrule
\SI{6}{\hour} & 8 & 5 & 1 & 67 \\
\bottomrule
\end{array}$$

The complex dynamics of GTOs is dictated by the interplay between lunisolar perturbations and atmospheric drag, and the evolution of any single orbit can only be predicted in statistical terms. Nevertheless, it is still possible to predict the evolution of a single orbit over a certain time span if the integration errors are reduced to within a few orders of magnitude of machine zero. Therefore, we build reference solutions for all atmospheric models by propagating the initial conditions in \[tab:ICs\_GTO\] with [`THALASSA`]{} and a very strict solver tolerance of . For the US76 model, we also propagate in quadruple precision with a tolerance of . The quadruple and double precision solutions are in excellent agreement, indicating that double precision is adequate to integrate this orbit accurately. Regarding [`STELA`]{}, its parameters were chosen by trial and error so as to make the re-entry date converge within an acceptable CPU time, analogously to \[sec:LEO\_test\]. Note that the integration step ${\Delta {t}}$ of [`STELA`]{} needs to be four times smaller than in the LEO case, notwithstanding the longer GTO orbital period. \[fig:GTO\_COE\] shows the time histories of the orbital elements until re-entry for both codes and all atmospheric models. The differences between the two codes are more pronounced than in the LEO case. The visible discrepancy in the last 3 years of propagation with the US76 model is due to the accumulation of dynamical error, which may be aggravated in the presence of drag.
The mean rate of change of the elements due to atmospheric drag is obtained in [`STELA`]{} by averaging the related acceleration at $M_\mathrm{quad}$ points on the *osculating* orbit by numerical quadrature. However, the dynamical and short-periodic errors affecting the osculating elements ultimately lead to an error on the computed altitude, and thus on the density and the drag acceleration. This effect can be relevant in the latter part of the propagation, when the dynamical error has accumulated significantly and the orbit crosses the densest layers of the atmosphere.

![Osculating [`STELA`]{} trajectories for the order of truncation of the lunisolar expansions $l_\mathrm{M}$ varying from (gray) to (dark green). Only $(\hat{a},\hat{e},\hat{i})$ are shown.\[fig:GTO\_COE\_lM\]](GTO_traj_lM)

Due to the high sensitivity of this orbit, a higher truncation order $l_\mathrm{M}$ of the lunisolar perturbing function expansion is needed to achieve a good convergence of the re-entry date. The impact of $l_\mathrm{M}$ is apparent from \[fig:GTO\_COE\_lM\], which highlights that the evolution of the trajectory in the last years of propagation (the most relevant for understanding the evolution of the re-entry process) changes substantially with $l_\mathrm{M}$. Choosing an adequate $l_\mathrm{M}$ value increases the effort needed to fine-tune the semi-analytical propagator, as noted in \[sec:LEO\_SA\_perf\].

Performance of the methods
--------------------------

\[fig:GTO\_batch\_COE\] shows the measured CPU time as a function of the errors on the orbital elements for all methods. The errors are taken with respect to a reference trajectory computed in quadruple precision with [`THALASSA`]{} (with a solver tolerance of ) for the US76 atmospheric model, after 20 years of propagation. In order to investigate the impact of the model truncation error, we report results obtained with both $l_\mathrm{M} = 5$ and $l_\mathrm{M} = 8$ for [`STELA`]{}.
We vary the [`THALASSA`]{} solver tolerance between and , and the [`STELA`]{} time step between and days. As in the previous case, the regularized formulations implemented in [`THALASSA`]{} reach significantly higher accuracy than the semi-analytical approach in [`STELA`]{} for all orbital elements except $\hat{M}$. Regularization is highly beneficial for the integration of this test case: the Cowell formulation exhibits high CPU times at relatively poor accuracy, and the best-performing formulations are the EDromo and SS element methods. The performance of [`THALASSA`]{} is superior to that of [`STELA`]{} since, by using regularized formulations, it is possible to obtain a solution that is more accurate than the semi-analytical one with comparable CPU time. Increasing $l_\mathrm{M}$ beyond a certain threshold does not improve the accuracy of the [`STELA`]{} solutions. Examination of \[fig:GTO\_batch\_COE\] suggests that the model truncation error does not have a significant impact for $l_\mathrm{M} > 5$. Since decreasing the time step does not result in improvements either, the large integration errors produced by [`STELA`]{} are ascribable to the dynamical error $\delta{E}_{i,\mathrm{dyn}}$, as explained in \[sec:avg\_err\_analysis\]. We verified that using [`THALASSA`]{} with the truncated expansion of the lunisolar perturbations (\[eq:P1Trunc\]) shows negligible qualitative improvements in the trajectory for $l_\mathrm{M} > 6$. The accuracy of the trajectories computed with [`STELA`]{} is strongly sensitive to the value of the time step, as can be inferred from the noticeable scatter in the plots. Regularized formulations show a smoother convergence with decreasing tolerance, and the variations in the integration error are smaller. Errors in the mean anomaly are large for all the methods, preventing the accurate computation of the position vector.
In practice this is not an issue, since the uncertainty embedded in the physical model and in the orbit determination prevents the accurate recovery of the position over such a long time span. All of the above considerations also affect the errors on the re-entry date, which are displayed in \[fig:GTO\_batch\_JD\] for all the methods. In particular, it is possible to constrain the error on the re-entry date to under 10 days (with CPU times between and seconds) only by using regularized element methods. Both [`STELA`]{} and the Cowell formulation achieve errors on the re-entry date on the order of 100 days with CPU times of about 20 to 30 seconds.

HEO numerical test case {#sec:HEO_test}
=======================

$$\begin{array}{cS[tight-spacing=true,table-format=6.8e0]s}
\toprule
\text{MJD} & 56664.86336805 & \\
\hat{a} & 106247.136454 & \kilo\metre \\
\hat{e} & 0.75173 & \\
\hat{i} & 5.2789 & \degree \\
\hat{\Omega} & 49.351 & \degree \\
\hat{\omega} & 180 & \degree \\
\hat{M} & 0 & \degree \\
\bottomrule
\end{array}$$

We consider the orbit of the proposed Simbol-X mission as a test case representative of a high-altitude HEO. The initial conditions in \[tab:ICs\_HEO\] were used to benchmark the performance of the semi-analytical propagator by @Lara2017, and for the study performed on [`THALASSA`]{} in @Amato2018. The large values of eccentricity and semi-major axis make this orbit a challenging test case for both semi-analytical and numerical methods. With a period of approximately $\num{4}$ days, the orbit is also close to a $\num{7}:\num{1}$ mean motion resonance with the Moon. We consider the same physical model as in the previous sections, and a spacecraft mass $M = \SI{1470}{\kilo\gram}$, cross-sectional area $A = \SI{15}{\square\meter}$, and drag coefficient $C_D = \num{2.2}$. Even though the semi-major axis is very large, the high eccentricity of this orbit causes the spacecraft to cross the atmosphere in some parts of the trajectory.
Thus we leave the atmospheric drag perturbation active, with the air density computed through the US76 model only.

Impact of dynamical and model truncation errors
-----------------------------------------------

$$\begin{array}{c*5{S[tight-spacing=true]}}
\toprule
{\Delta {t}} & {l_\mathrm{M}} & {N_\mathrm{tess}} & N_\mathrm{drag} & {M_\mathrm{quad}} \\
\midrule
\SI{48}{\hour} & \text{5 and 8} & 2.5 & 1 & 33 \\
\bottomrule
\end{array}$$

Trajectories resulting from the propagation of the initial conditions in \[tab:ICs\_HEO\] are presented in \[fig:HEO\_COE\]. To investigate the impact of the model truncation error, we show trajectories obtained with [`THALASSA`]{} expressing third-body perturbations through either \[eq:P1Cart\] (i.e. the “full” expression) or \[eq:P1Trunc\] with $l_\mathrm{M} = 5$ for both the Sun and the Moon. Both trajectories are computed with strict solver tolerances in order for the numerical error to have a negligible impact, and we take the one with the “full” expression as the reference trajectory. We use the [`STELA`]{} parameters in \[tab:STELA\_HEO\_params\], which have been found through the same trial-and-error procedure described in the previous sections. All the propagations are stopped at re-entry, which takes place due to the increase in eccentricity caused by lunisolar perturbations. The re-entry epoch changes considerably according to the different solutions. The impact of $l_\mathrm{M}$ on the evolution of the orbital elements is significant, implying that the model truncation error is important for this case. The [`STELA`]{} and [`THALASSA`]{} solutions for $l_\mathrm{M} = 5$ are qualitatively different from the [`THALASSA`]{} reference, as is particularly evident from the plots of the eccentricity and inclination. The frequency of the oscillations of these elements is affected by substantial errors for $l_\mathrm{M} = 5$.
Since re-entry can only take place when the eccentricity reaches a peak in the long-periodic oscillations, small errors in their amplitude caused by the model truncation lead to an overestimation of the re-entry epoch. Still for $l_\mathrm{M} = 5$, [`STELA`]{} reports a re-entry at about years from the initial epoch, while in the [`THALASSA`]{} solution with $l_\mathrm{M} = 5$ re-entry takes place years later. [`STELA`]{} considers re-entry to take place when the osculating radius of perigee is less than , while [`THALASSA`]{} checks this condition on the osculating radius $\hat{r}$. This leads to an error in the lifetime estimation of years, since the condition on the instantaneous radius of perigee does not correspond to a physical re-entry. For $l_\mathrm{M} < 5$, the discrepancies with respect to the reference solutions are even larger; we omit the corresponding trajectories here, as the interested reader can find these results in @Amato2018. The dynamical error is of minor importance, but it can explain the remaining discrepancies between the [`STELA`]{} and [`THALASSA`]{} solutions. Considering the orbital elements of the Moon as constant during the averaging operation generates a non-negligible dynamical error, since the Moon moves along an arc of approximately during a single orbital period of the spacecraft. Moreover, the proximity to the $7:1$ mean motion resonance may cause long-periodic effects to be neglected within the single-averaging approach. In fact, this is a possible explanation for the sudden divergence of the [`STELA`]{} and [`THALASSA`]{} solutions for $l_\mathrm{M} = 5$ at 75 years from the starting epoch. Ultimately, the accumulation of model truncation and dynamical errors is ascribable to the same cause, namely the large value of the ratio between the mean semi-major axes of the orbiter and of the Moon, $(a/a_\leftmoon) \approx 0.3$.
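These orbital-geometry figures are easy to verify: a few lines of arithmetic, under assumed values of the Earth gravitational parameter and of the lunar orbit, recover the period of about 4 days, the proximity to the 7:1 commensurability, and the ratio $(a/a_\leftmoon) \approx 0.3$:

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2 (assumed value)
A_MOON   = 384400.0     # mean Earth-Moon distance, km (assumed value)
T_MOON   = 27.3217      # sidereal month, days (assumed value)

a = 106247.136454       # Simbol-X semi-major axis, km (tab:ICs_HEO)
T = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 86400.0  # period, days

assert abs(T - 4.0) < 0.1            # period of approximately 4 days
assert abs(T_MOON / T - 7.0) < 0.2   # close to a 7:1 resonance with the Moon
assert abs(a / A_MOON - 0.3) < 0.03  # (a / a_moon) is approximately 0.3
```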
This implies that the frequencies pertaining to the orbital motion of the spacecraft and to that of the perturbing body are not well separated, which reduces the efficiency of the semi-analytical approach.

Performance of the methods
--------------------------

The CPU time as a function of the errors on the osculating orbital elements is shown in \[fig:HEO\_batch\_COE,fig:HEO\_batch\_COE\_Zoom\], where all the propagations are stopped at 75 years after the initial epoch. For [`THALASSA`]{}, we consider the same range of solver tolerances as in the previous sections, while the [`STELA`]{} time step is between and days. We also generate a [`THALASSA`]{} solution in quadruple precision with a solver tolerance of , using the expansion ${\bm{P}'}^\mathrm{T}$ truncated at $l_\mathrm{M} = 8$, which is the same order considered for the [`STELA`]{} propagations. Since [`THALASSA`]{} uses non-averaged equations, this solution is free from both the dynamical error and the error on short-periodic terms; moreover, thanks to the use of quadruple precision and a very strict tolerance, it can be considered free from both round-off and numerical truncation error. Thus, the total integration error for this solution, which is represented in the panels of \[fig:HEO\_batch\_COE\] by dashed blue lines, is only due to the model truncation error contribution, $${\delta}\hat{E}_i = {\delta}E_{i,\mathrm{mod}}.$$ [`STELA`]{} exhibits very coarse accuracy due to the aforementioned accumulation of mean integration error, with [`THALASSA`]{} being six or more orders of magnitude more accurate on all the orbital elements. The total integration error is dominated by the contribution of the model truncation for all the elements except the semi-major axis: its mean value is invariant in the single-averaged restricted three-body problem, which is a good approximation of the physical model for this test if the Moon is considered as the secondary mass.
Decreasing the value of the time step makes the [`STELA`]{} error values converge to the total integration error of the truncated [`THALASSA`]{} solution, confirming the importance of the model truncation error. Note that carrying the third-body expansion to such a high order substantially increases the computational time. As a consequence, [`STELA`]{} is ten times slower than [`THALASSA`]{} to achieve the same accuracy. \[fig:HEO\_batch\_COE\_Zoom\] is a zoom of the lower region of the panels in \[fig:HEO\_batch\_COE\], which allows a more detailed examination of the performance of the [`THALASSA`]{} formulations. Since the trajectory is highly perturbed by the Sun and the Moon, there is no particular advantage in using element-based methods. However, regularization is highly beneficial to the integration, as all the regularized formulations display CPU times of less than half [that of the unregularized]{} Cowell for the same accuracy.

Conclusions {#sec:conclu}
===========

This paper presents a study of semi-analytical and non-averaged orbit propagation methods for Earth satellites. By [analyzing]{} the approximations involved in the method of averaging, we break down the total integration error of a semi-analytical method with respect to an osculating reference (or “true”) solution into components with distinct mathematical causes. These are the *dynamical error*, originating in the approximations involved in performing the averaging integrals, the *model truncation error*, due to the truncation of the expansions of the disturbing functions, and the error involved in the calculation of *short-periodic terms*. None of these contributions to the total integration error depends on the numerical integration method; they can only be abated by resorting to more refined analytical developments. In contrast, the *numerical error*, which is the last component of the total integration error, can be significantly mitigated with appropriately configured numerical methods.
In this sense, the accuracy of semi-analytical methods is intrinsically limited with respect to that of non-averaged methods. Moreover, semi-analytical methods involve several parameters that have to be tuned (often by trial and error) to increase performance, while non-averaged methods only require choosing the solver step size or local truncation error tolerance. We implemented a collection of non-averaged methods in the [`THALASSA`]{} orbit propagation code, consisting of regularized formulations of the equations of motion and of the Cowell formulation (i.e., the unregularized equations in rectangular coordinates). Their performance is compared to that of the semi-analytical method implemented in the [`STELA`]{} software. The physical model implemented in [`THALASSA`]{} was aligned to machine precision with that implemented in [`STELA`]{}. In this way, any discrepancy between [`THALASSA`]{} and [`STELA`]{} solutions can be ascribed to the error components mentioned above. [`THALASSA`]{} also makes use of a highly efficient and non-singular algorithm for the calculation of the perturbing part of the geopotential. We presented results from numerical test cases involving the propagation over several decades of initial conditions corresponding to a LEO, a GTO, and a high-altitude HEO. In the LEO case, the formulations implemented in [`THALASSA`]{} require twice the computational time of [`STELA`]{} to attain comparable accuracy. The orbit is quasi-circular and has a small semi-major axis, hence the expansions of the perturbing functions and of the averaging integrals converge rapidly and the semi-analytical method is very efficient. Regularized formulations achieve errors smaller by three orders of magnitude at the expense of increased computational cost; among these, element-based methods are particularly efficient.
On the other hand, regularization is highly beneficial for the GTO orbit, as regularized formulations deliver an increase in accuracy of up to two orders of magnitude with respect to [`STELA`]{}. Dynamical and short-periodic error contributions are particularly important for the GTO, making semi-analytical methods less advantageous than non-averaged methods based on regularized formulations. For the HEO orbit, the large values of semi-major axis and eccentricity imply that the model truncation error is very significant, and the lunar perturbing function expansion in [`STELA`]{} must be carried out to the highest order (the eighth) to retain qualitative similarity between its solution and the reference. We offer a quantitative proof by building an additional solution with [`THALASSA`]{} in which the third-body perturbing acceleration is explicitly written as the gradient of the *truncated* perturbing function through a novel expression. For the HEO, [`THALASSA`]{} is considerably more [efficient: by]{} using regularized element methods the computational time is abated tenfold with respect to [`STELA`]{} for the same accuracy. When choosing small values of the solver tolerance, the accuracy reached by [`THALASSA`]{} is significantly higher than that of [`STELA`]{}. We propose to expand the present work by investigating the performance of the codes in terms of function evaluations, rather than CPU time, and by comparing results of Monte Carlo runs for the estimation of GTO lifetimes. Additionally, the osculating trajectories produced by [`THALASSA`]{} could be numerically averaged after the propagation in order to compare them to the mean trajectories produced by [`STELA`]{}.

D. A. gratefully acknowledges the advice of Juan-Félix San Juan, Martin Lara, Denis Hautesserress, and Florent Deleflie on numerous occasions, and in particular during the KePASSA conferences and the CNES conference on HEO orbits. D.A.
recognizes the extremely helpful assistance of Hodei Urrutxua in the implementation of the non-singular geopotential code.

Conflict of interest {#conflict-of-interest .unnumbered}
====================

The authors declare that they have no conflict of interest.

[^1]: This work is partially funded by the European Commission’s Framework Programme 7, through the Stardust Marie Curie Initial Training Network, FP7-PEOPLE-2012-ITN, Grant Agreement 317185.

[^2]: Regardless of the orbit propagation method, uncertainties in the orbit determination, in the predictions of solar activity, and in the modeling of the atmosphere-spacecraft interaction make GTOs unpredictable in the long term.

[^3]: The US space object catalog is a public database of orbital data for more than objects (at the time of writing). Access to the catalog is available at <https://www.space-track.org/>, last visited: July 6^th^, 2018.

[^4]: <span style="font-variant:small-caps;">URL:</span> <http://gmatcentral.org/>, last visited: October 8^th^, 2018.

[^5]: Exceptions are given by translunar orbits, impulsive maneuvers, and the terminal phase of re-entry trajectories.

[^6]: <span style="font-variant:small-caps;">URL:</span> <https://logiciels.cnes.fr/content/stela?language=en>, last visited: May 31^st^, 2018.

[^7]: <span style="font-variant:small-caps;">URL:</span> <http://www.netlib.org/odepack/>, last visited: October 15^th^, 2017.

[^8]: The subroutine for the computation of the ephemerides was kindly provided by Florent Deleflie.

[^9]: <span style="font-variant:small-caps;">URL:</span> <https://ccmc.gsfc.nasa.gov/pub/modelweb/atmospheric/msis/nrlmsise00/>, last visited: October 15^th^, 2017.

[^10]: <span style="font-variant:small-caps;">URL:</span> <http://www.dem.inpe.br/~val/atmod/default.html>, last visited: October 15^th^, 2017.

[^11]: <span style="font-variant:small-caps;">URL:</span> <https://gitlab.com/souvlaki/thalassa>, last visited May 31^st^, 2018.
[^12]: <span style="font-variant:small-caps;">URL:</span> <http://space.skyrocket.de/doc_sdat/microsat-2.htm>, last visited: July 25^th^, 2018.
---
abstract: 'We present a model for a random walk with memory, phenomenologically inspired by a biological system. The walker has the capacity to remember the time of the last visit to each site and the step taken from there. This memory affects the behavior of the walker each time it reaches an already visited site, modulating the probability of repeating previous moves. This probability increases with the time elapsed since the last visit. A biological analog of the walker is a frugivore, with the lattice sites representing plants. The memory effect can be associated with the time needed by plants to recover their fruit load. We propose two different strategies, conservative and explorative, as well as intermediate cases, leading to non-intuitive and interesting results, such as the emergence of cycles.'
author:
- 'Laila D. Kazimierski'
- Guillermo Abramson
- 'Marcelo N. Kuperman'
title: 'A random walk model to study the cycles emerging from the exploration-exploitation trade-off'
---

Introduction
============

The movement of animals in search for food, refugia or other resources is nowadays the subject of active research, trying to unveil the mechanisms that give rise to a wide family of related complex patterns. In particular, physicists find in these a fruitful field to explore reaction-diffusion mechanisms [@kuperman; @abramson2013], to apply the formalism of stochastic differential equations [@okubo02; @mikhailov06; @schat96] and to perform simulations based on random walks [@viswanathan11; @viswanathan96; @giuggioli09; @borger08]. One of the key aspects of this phenomenon is the feedback interaction between the individual and the environment [@turc98]. These interactions may involve intra- and inter-specific competition that, together with previous experience [@nath08; @mor10] and the search for resources, drive the displacement of the individuals.
In particular, when animals move around in order to collect food from patches of renewable resources, their trajectories depend strongly on the spatial arrangement of such patches [@ohashi07]. This observation has motivated a large collection of studies focused on finding optimal search strategies under different assumptions of animal perception and memory [@bartumeus02; @fronhofer13]. A related open question is that of the origin of home ranges, a concept introduced in [@burt] to characterize the spatial extent of the displacements of an animal during its daily activities. Many species perform bounded explorations around their refugia, even though the available space and resources extend far beyond. There are several hypotheses that try to explain this phenomenon, which could be just an emergent behavior associated with very simple causes [@abramson2014]. The review by Börger et al. [@borger08] is an exhaustive compilation of the state of the art. There, the authors point out that movement models do not always lead to the formation of stationary home ranges. Still, home ranges arise, for example, in biased diffusion [@okubo02], in self-attracting walks [@tan] and in models with memory [@schu]. Nevertheless, the quest to unveil and characterize the underlying weave of causes and effects behind the emergent patterns is not over. How do these emerge as the result of the interaction between the behavior of an organism and the spatial structure of the environment? In this context, the venerable symmetrical random walk has been the subject of many studies, with a large collection of applications and characterizations that include aspects beyond the simple walker capable of only uncorrelated short-range steps. Just to focus on what we want to present here, let us restrict the examples to random walks on discrete lattices where the walker can gather information to build up a history.
One such case is the self-avoiding walk (SAW), where the walker builds up its trajectory by avoiding stepping onto an already visited site [@flor; @dege]. A characteristic result corresponds to the walker running into a site with all its neighboring sites already visited, and being blocked. The converse case occurs when the walker prefers sites visited earlier. Previous works have shown that introducing long-range correlations into a random walk may lead to nontrivial effects, translated into drastic changes in the asymptotic behavior. The usual diffusive dynamics can evolve into sub-diffusive, super-diffusive or persistent behavior. Such random walks with long-range memory have been extensively studied in recent years [@hod; @schu; @trimp99; @trimp01; @kesh; @para; @silva; @cres]. In [@sapo; @orde1; @orde2] a behavior that can be interpreted as memory has been explored. These works analyze a self-attracting walk where the walker jumps to the nearest neighbor according to a probability that increases when the site has already been visited. A generalization that includes an enhancement of this memory with the frequency of visits, but also a degradation with time, was proposed in [@tan]. In this work we propose a random walk with a specific memory that induces local correlations at long times. The rationale for this model is to mimic the movement of a foraging animal, e.g. a frugivore, going from one plant to another in order to feed. We show that the emergence of looped walks, which can be associated with home ranges, can be promoted by very rudimentary capacities of the individual together with a natural dynamics of the environment.

The model
=========

For a forager, the proximity of a plant is not enough to make it attractive for a future visit: the plant must also have a visible and interesting load of fruit. Moreover, when visiting a plant the animal usually takes only part of the available fruit and moves on.
After this, the plant needs some time to recover its fruit load. Such a model was analyzed in [@abramson2014]. We attempt here a further simplification, coding the complex interaction of memory, consumption and relaxation in the probabilities defining the random walk from each site of the lattice. As a first simplification, consider that the animal eats all the available ripe fruit in the visited plant and leaves. Let us say that a walker moving in such a substrate has a memory, allowing it to remember the time of visit to every site and the step taken from there. When revisiting a fruitful plant the animal will consider it a success and repeat the step taken from there, “remembering” its previous visit. When returning to a plant before its recovery the walker takes a random step. This unlimited memory is not necessarily associated with an extraordinary skill of the forager. It could be stored in the environment as the state of each plant, whose proximity and fruit load can trigger in the forager the inclination to choose a specific direction. Thus, the memory of having visited a site once need not be stored in the animal, but can be recorded in the topology of the environment (as is the case in [@abramson2014]). Also, we can anticipate here that when a home range emerges the walker effectively uses a bounded amount of memory. Besides this, imagine two possible strategies for the *update* of the memory, the details of which will be given below. A *conservative* walker will keep in memory the time at which the visit to that site was successful and the step taken on that occasion. An *exploring* walker, instead, will update the memory of the visit to the current time and the step to the randomly chosen one. Between these two strategies there might be intermediate ones, all of which will be explored below.
Now, with the motivation just exposed, let us define a random walk that modifies the probabilities of steps from each site according to the time since the last visit and a parameter defining the strategy. The rules of the walk can be summarized as follows:

- When visiting a new site, take a random step in either of the four directions. Store in memory the time of visit $t_v$ and the step.

- When returning at time $t$ to a site previously visited at time $t_v$:

    - With probability $p_r(t-t_v)$, repeat the step stored in memory and update the visit time stored in memory.

    - With probability $1-p_r(t-t_v)$, take a random step and:

        - With probability $\rho$, update in memory the time of visit and the step taken.

        - With probability $1-\rho$, keep the memory unmodified.

The probability $p_r$ of repeating the step taken in the previous visit models the replenishment of the fruit mentioned above. It can be simply a Heaviside step function $p_r(t-t_v)=\theta(t-t_v-\tau)$, where $\tau$ is a parameter representing the recovery time of the plants. It is equivalent to the memory of the *elephant walk* [@schu], but used in a different way. Contrary to the usual memory that makes the probability of revisiting a site fade out with time, here we are considering a probability of revisiting a site that increases with time. In such a case the walker will always repeat its step when returning after $\tau$ steps, and always take a random step when returning earlier. This strict condition can be relaxed by modeling $p_r$ with a smooth step function. In the results shown below only the Heaviside step distribution will be used since, as we will show later, no significant differences were found when using a smooth distribution. In such a case, the walks are characterized by two parameters, $\tau$ and $\rho$. Our results show the emergence of closed circuits in nontrivial ways.
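The rules above can be sketched directly in code. This is a minimal sketch assuming the Heaviside form of $p_r$; the function and variable names (`frugivore_walk`, `memory`, `path`) are ours, not from the paper:

```python
import random

def frugivore_walk(steps, tau, rho, seed=0):
    """Frugivore walk on the square lattice with Heaviside p_r.

    memory[site] = (t_v, step): time of the last stored visit to the site
    and the step taken from it on that occasion.
    """
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    pos = (0, 0)
    memory = {}
    path = [pos]
    for t in range(steps):
        if pos not in memory:                 # new site: random step, store it
            step = rng.choice(moves)
            memory[pos] = (t, step)
        else:
            t_v, step = memory[pos]
            if t - t_v > tau:                 # plant recovered: repeat the step
                memory[pos] = (t, step)       # update only the visit time
            else:                             # not recovered: random step
                step = rng.choice(moves)
                if rng.random() < rho:        # exploring update of the memory
                    memory[pos] = (t, step)
        pos = (pos[0] + step[0], pos[1] + step[1])
        path.append(pos)
    return path
```

With $\rho=0$ only the `t - t_v > tau` branch ever touches the memory (the conservative walker); with $\rho=1$ every visit rewrites it (the exploring walker).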
To characterize the behavior of these circuits we analyze both the duration of the transient elapsed until the walker enters the closed circuit, and the length of such cycles. The emergence of such circuits is reflected in the fact that during the initial stages the mean square displacement exhibits a diffusive behavior, whereas for longer times it reaches a plateau. Such a behavior has already been reported in previous works [@trimp99; @trimp01] where, due to a feedback coupling between the particle and its environment, the walker experiences modified surroundings, resulting in a bounded walk.

Results
=======

The results presented below correspond to mean values taken over $10^3$-$10^4$ realizations, on a two-dimensional lattice large enough to prevent the walker from reaching the borders. The simulations were done for $10^5$ and $10^6$ time steps, showing no significant differences between them. One of the most revealing features of any sort of walk, be it random, self-avoiding, self-attracting, etc., is its mean square displacement (MSD). The behavior of the MSD in the present model shows rather interesting features. Figure \[figure:MSD\_tau20\] displays the MSD as a function of time for a range of values of $\rho$, from 0 to 1, and for $\tau=20$. Recalling that $\rho$ is the probability that the walker updates the information stored in its memory, regarding the time of visit to a site and the step taken from there, we associate $\rho=1$ with the *exploring* behavior and $\rho=0$ with the *conservative* one. We observe that for $\rho=0$ the behavior is clearly diffusive, while for $\rho=1$ the MSD reaches a plateau, indicating that the walker remains trapped in a bounded region. Contrary to the intuitive guess, this shows that it is the exploring behavior that allows the walker to find closed circuits more easily, while the conservative behavior leads to a diffusive walk. Intermediate values of $\rho$ generate intermediate behaviors.
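The MSD diagnostic itself can be sketched independently of the model. As a self-contained illustration (names are ours), a plain memoryless lattice walk gives the linear, diffusive curve; trapping in a cycle would instead produce a plateau:

```python
import random

def random_walk(steps, rng):
    """Plain symmetric walk on the square lattice, starting at the origin."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x = y = 0
    path = [(0, 0)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def msd(paths):
    """Mean square displacement at each time, averaged over realizations
    (all walks are assumed to start at the origin)."""
    n = len(paths)
    return [sum(x * x + y * y for x, y in col) / n for col in zip(*paths)]

rng = random.Random(1)
curve = msd([random_walk(200, rng) for _ in range(500)])
# For the memoryless walk, curve[t] grows roughly linearly in t.
```

The same `msd` routine applied to trajectories of the model reproduces the two regimes described above: linear growth at early times, a plateau once cycles set in.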
We have analyzed the model for values of $\tau$ ranging from 5 to 150, finding analogous results for all of them.

![Mean square displacement vs. time for probability $\rho=0$ (black), $\rho=1$ (orange) and intermediate values, corresponding to a recovery time of the plants $\tau=20$. Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^3$ realizations. (Color on line.)[]{data-label="figure:MSD_tau20"}](MSD_tau20r){width="\columnwidth"}

These results raise several questions about the dependence of the emergence of cycles on each parameter. Even though all two-dimensional walks (including the case $\rho=0$) eventually return to a site in a condition that allows the settling of a cycle, the time necessary to fulfill this condition can vary greatly. As a result, after a fixed number of steps only a fraction of the walkers are able to do so. In the following we proceed to characterize the statistical behavior of these walkers by measuring several relevant quantities. Figure \[figure:cant\_ciclos\] shows a contour diagram representing the fraction of realizations that end in a cycle, as a function of the parameters $\rho$ and $\tau$. We observe that this fraction increases both for decreasing $\tau$ and for increasing $\rho$. Consistently, mapping this situation to the biological scenario, when plants take too long to recover (large $\tau$), or when the foragers do not explore enough (too small $\rho$), there is no formation of home ranges. Another informative aspect of the walks that needs characterization is the length of the cycles. The concept of a home range is always associated with the measurement of the amount of space utilized. Sometimes it is measured through the utilization distribution [@ford79], which represents the probability of finding an animal in a defined area within its home range.
In this case, once the cycle is established, the animal will visit each site within the cycle only once at each turn, so the utilization distribution will be uniform among the sites of the cycle. Still, we can obtain an estimation of the amount of space used by measuring the length of the cycle. A priori we know that $\tau$ is the greatest lower bound (infimum) for the average cycle length. This average is shown in Fig. \[figure:prom\_ciclos\]. We can conclude that the mean length of the cycles is very close to this bound for all parameter sets, showing a very weak dependence on $\rho$ for the largest values of $\tau$, undoubtedly due to the undersampling arising from the finite simulation runs. Observe, nevertheless, the wedge-shaped region of very conservative walkers that never find a cycle, which grows with the recovery parameter $\tau$.

![Contour plot of the fraction of realizations that end in a cycle, as a function of parameters $\rho$ and $\tau$. Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^4$ realizations. The gray region corresponds to realizations that do not end in cycles due to finite observation time.[]{data-label="figure:cant_ciclos"}](cant_ciclos){width="\columnwidth"}

![Contour plot of the mean cycle length as a function of $\rho$ and $\tau$. Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^4$ realizations.[]{data-label="figure:prom_ciclos"}](prom_ciclos){width="\columnwidth"}

![Contour plot of the mean transient length as a function of parameters $\rho$ and $\tau$. The color scale is logarithmic. Simulations performed in a square lattice of $10^4\times10^4$ sites, $10^6$ time steps.[]{data-label="figure:prom_transitorio"}](prom_transit){width="\columnwidth"}

Let us now focus on the extreme cases of $\rho=0$ and $\rho=1$.
When $\rho=0$ we found that the behavior is diffusive for all values of $\tau$, so that $\langle x^2\rangle =D(\tau)\,t$. As shown in Fig. \[figure:d\_vs\_tau\], $D(\tau)$ depends on $\tau$, approaching 1 from below as $\tau$ increases. On the other hand, perfect explorers—those with $\rho=1$—always find a cycle. We have found that the average length of the transient depends quadratically on $\tau$. The transient regime is longer for larger values of $\tau$, i.e., for short recovery times the walker finds a cycle more easily (and faster). If $\tau$ is very large, it may happen that the walker returns successive times to the same site earlier than $\tau$, and randomly chooses the next step, losing the possibility of repeating the last steps and thus entering a cycle.

![Diffusion coefficient of the $\rho=0$ case (slope of the average MSD curves for each value of $\tau$). Forty uniformly distributed values of $\tau$ were considered between 5 and 200.[]{data-label="figure:d_vs_tau"}](d_vs_tau){width="\columnwidth"}

Observe that the exploring walker is the one that continuously updates the stored information. An intuitive guess of the resulting dynamics, analyzed in terms of the intensity of the exploring activity of the individual, may lead us to think that such a walker would have greater difficulty in establishing a walking pattern and finding a closed circuit. Also, for those who maintain the stored information (the conservative walkers), finding an optimal closed circuit would appear to be a relatively simple task. However, our results show that this intuition is wrong. Relevant insight on the mechanisms that give rise to the observed behavior of the forager walk can be obtained from well-known results of conventional random walks. A random walk in one and two dimensions is recurrent, i.e. the probability that the walker eventually returns to the starting site is 1. (In higher dimensions, the random walk is transient, the former probability being less than 1 [@green].)
So, in principle, for any value of $\tau$ and $\rho=0$ the forager walk eventually ends up in a cycle. However, this asymptotic behavior of the system may not be the most relevant one in many contexts. In the biological scenario, for example, one would be interested in the possibility of finding cycles in relatively short times. Our results can be explained by considering the so-called *Pólya problem*, or first return time. The probability that a simple random walk in one dimension returns for the first time to a given site after $2n$ steps is $$\binom{2n}{n}\frac{1}{(2n-1)2^{2n}}. \label{first}$$ In two dimensions the probability that a simple random walk is back at a given site after $2n$ steps is the square of the analogous one-dimensional return probability, $\binom{2n}{n}2^{-2n}$ [@green], since a simple random walk in two dimensions can be projected onto two independent one-dimensional walks along the diagonal axes. The probability given by Eq. (\[first\]) asymptotically decays as $n^{-3/2}$, indicating that returning to the initial site is increasingly improbable with the elapsed time. The forager walk can be interpreted in the following way. Until the moment that the walker gets trapped in a cycle, it performs a random walk. Afterwards, the behavior is deterministic. That very moment corresponds to the first time a cycle is completed, so it is a return to the initial step of the cycle after $\tau_c\ge \tau$ time steps, where $\tau_c$ is the period of the cycle of an individual realization for a given choice of $\tau$. Let us assume that the transient walk executed up to this first return can be used to estimate a probability analogous to Eq. (\[first\]). We can do this from the length of the transient and the fraction of realizations that successfully ended in a cycle. The transient can be thought of as consisting of successive realizations of walks of length $\tau_c$ that were *not* successful in returning to the starting point. We have verified this algebraic dependence.
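Eq. (\[first\]) and its $n^{-3/2}$ decay can be checked numerically; this is a small sketch with names of our choosing:

```python
from math import comb

def first_return_1d(n):
    """P(first return to the origin of a 1D simple walk at step 2n),
    Eq. (first): C(2n, n) / ((2n - 1) * 2**(2n))."""
    return comb(2 * n, n) / ((2 * n - 1) * 2 ** (2 * n))

def return_2d(n):
    """P(2D simple walk is back at the origin at step 2n): the square
    of the 1D return probability C(2n, n) / 2**(2n)."""
    p1 = comb(2 * n, n) / 2 ** (2 * n)
    return p1 * p1

# First-return probabilities start at 1/2 (step 2) and 1/8 (step 4);
# the n^{-3/2} decay means doubling n multiplies the probability by
# approximately 2**(-3/2).
ratio = first_return_1d(2000) / first_return_1d(1000)
```

The summability of these probabilities to 1 in one and two dimensions is exactly the recurrence property invoked above.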
The immediate question about the validity of the present results in higher dimensions can be answered by invoking the recurrence theorem presented by G. Pólya in 1921 [@poly21], where he shows that a random walk is recurrent in 1- and 2-dimensional lattices, and that it is transient for lattices with more than 2 dimensions. The emergence of home ranges as presented in this work is strongly dependent on the probability of eventual returns to already visited places. Thus, for dimensions higher than 2 the expected cycle lengths will be longer and their very existence less probable, as can be deduced from the calculated probabilities of return to the origin in these cases [@mont56]. Besides, the fact that increasing $\rho$ produces an increase in the probability of finding a cycle can be understood in the following way. The probability of returning to a given site decreases as the walker moves away. When $\rho$ is small the walker can move increasingly farther away from the stored site, making it rather difficult to return to it and enter a cycle. When $\rho$ is high the foraging walker constantly updates its memory, so that it is always relatively close to the most recently stored site. This increases the probability of returning to it and triggering a cycle. For completeness, we include a plot showing results based on the use of a smoother distribution. The smooth step depends on two parameters, $\tau$ and $\omega$. The limit $\omega\to\infty$ tends to a Heaviside step function at $t=\tau$. Figure \[figure:nostep\] displays the behavior of the walk for three values of $\omega$ ($10$, $2$ and $0.5$), exemplifying the typical behaviors for a fixed value of $\tau=20$. The MSD’s are averages over 1000 realizations. The black curves correspond to $\omega=10$, which is very similar to a step, and give an MSD almost identical to the one shown in Fig. 1, with $\rho=1$ (orange curve).
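The exact smooth form of $p_r$ appears only in the inset of Fig. \[figure:nostep\]; a logistic step is one natural choice with the required $\omega\to\infty$ limit (this particular functional form is our assumption, not the paper's):

```python
from math import exp

def p_r_smooth(dt, tau, omega):
    """Smooth step in the elapsed time dt = t - t_v: rises from ~0 to ~1
    around dt = tau, with sharpness omega; it tends to the Heaviside step
    theta(dt - tau) as omega -> infinity."""
    return 1.0 / (1.0 + exp(-omega * (dt - tau)))
```

For $\omega=10$ and $\tau=20$ the function is already nearly a step, consistent with the black curves of the figure being almost identical to the Heaviside case.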
While smoother curves tend to plateaus at higher values, no qualitative differences are observed in the behavior.

![Mean square displacement vs. time considering a smooth function for the probability to repeat the step taken in the previous visit. $\omega=10$ (black), $\omega=2$ (red), $\omega=0.5$ (blue) and $\tau=20$ (color on line). Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^3$ realizations. The inset shows the functional expression and shape of the probability distribution.[]{data-label="figure:nostep"}](MSD_wr){width="\columnwidth"}

Conclusions
===========

One of the important aspects related to animal movement is the effect that spatial heterogeneities have on the observed patterns. When the spatial heterogeneity is manifested through the distribution of resources, the link between resource dynamics and random walk models might be the key to answering many open questions about the emergence of home ranges. Another route to explore this problem is by accounting for learning abilities and spatial memory [@stamp99]. The formation of a home range has previously been investigated with models in which a single individual displays both an avoidance response to recently visited sites and an attractive response toward places that have been visited sometime in the past [@fronhofer13; @moor09]. An animal searching for food would choose its movements based not only on its internal state and the instantaneous perception of the environment, but also on acquired knowledge and experience. Animals use their memory to infer the current state of areas not previously visited. This memory is built up by collecting information remembered from previous visits to neighboring locations [@faw14]. Although the emergence of home ranges is crucial in understanding the patterns arising from animal movement, there are few mechanistic models that reproduce this phenomenon.
Traditional random walks, widely used to describe animal movement, show a diffusive behavior, far from displaying a bounded home range. However, the addition of memory capacity has been proven to predict bounded walks [@schu; @trimp99; @trimp01]. Home ranges also arise in biased diffusion [@okubo02] and in self-attracting walks [@tan]. The interesting aspect of the results presented here is that they not only reveal the nontrivial behavior of the so-called *frugivore walk*, but also contribute to a deeper understanding of the causes underlying the constitution of home ranges as an emergent phenomenon, among which we highlight the foraging strategy. By considering a minimal model we have shown that a walker with rudimentary learning abilities, together with the feedback from a dynamic substrate, gives rise to an optimal foraging activity in terms of the usage of the spatial resource. Indeed, neither a foraging strategy based just on diffusion (a random walk without memory), nor a walk strongly determined by memory (like that of our conservative walker), is optimal. A better strategy is one that combines the use of memory with an exploratory behavior, such as that of our *explorative* walker. There is evidence supporting the idea that precisely this combined strategy may be the one favored by evolutionary mechanisms [@gaf81; @eli07]. Foraging activity must balance exploration and exploitation: on the one hand, exploring the environment is crucial to find and learn about the distributed resources; on the other hand, exploitation of known resources is energetically optimal. Indeed, this trade-off is a central thesis in current studies of foraging ecology, as is apparent in the thorough work by W. Bell [@bell1991], in Lévy flight models [@viswanathan11] and others. The simple mechanism analyzed here contributes theoretical support to these ideas. We have shown that the balance between exploration and exploitation not only provides an optimal use of resources.
It may also be responsible for the emergence of a home range. The balance between exploration and exploitation appears as the road to successful foraging. This work was supported by grants from ANPCyT (PICT-2011-0790), U. N. de Cuyo (06/C410) and CONICET (PIP 112-20110100310). [99]{}
M. Fuentes, M. N. Kuperman and V. M. Kenkre, Phys. Rev. Lett. **91**, 158104 (2003).
G. Abramson, L. Giuggioli, R. R. Parmenter and V. M. Kenkre, J. Theor. Biol. **319**, 96 (2013).
A. Okubo and S. A. Levin, *Diffusion and Ecological Problems* (Springer, 2002).
A. Mikhailov and V. Calenbuhr, *From Cells to Societies* (Springer, 2006).
C. L. Schat, M. N. Kuperman and H. S. Wio, Math. Biosc. **131**, 205 (1996).
G. M. Viswanathan, M. G. E. da Luz, E. P. Raposo and H. E. Stanley, *The Physics of Foraging: An Introduction to Random Searches and Biological Encounters* (Cambridge University Press, 2011).
G. M. Viswanathan, V. Afanasyev, S. V. Buldyrev, E. J. Murphy, P. A. Prince and H. E. Stanley, Nature **381**, 413 (1996).
L. Giuggioli, F. J. Sevilla and V. M. Kenkre, J. Phys. A: Math. Theor. **42**, 434004 (2009).
L. Börger, B. D. Dalziel and J. M. Fryxell, Ecol. Lett. **11**, 637 (2008).
P. Turchin, *Quantitative Analysis of Movement: Measuring and Modeling Population Redistribution in Animals and Plants* (Sinauer Associates, Sunderland, Massachusetts, USA, 1998).
R. Nathan, Proc. Natl. Acad. Sci. USA **105**, 19050 (2008).
J. M. Morales, P. Moorcroft, J. Matthiopoulos, J. Frair, J. Kie, R. Powell, E. Merrill and D. Haydon, Philos. Trans. R. Soc. Lond. B Biol. Sci. **365**, 2289 (2010).
K. Ohashi, J. D. Thomson and D. D’Souza, Behavioral Ecology **18**, 1 (2007).
F. Bartumeus, J. Catalan, U. L. Fulco, M. L. Lyra and G. M. Viswanathan, Phys. Rev. Lett. **88**, 097901 (2002).
E. A. Fronhofer, T. Hovestadt and H.-J. Poethke, Oikos **122**, 857 (2013).
W. H. Burt, J. Mamm. **24**, 346 (1943).
G. Abramson, M. N. Kuperman, J. M. Morales and J. C. Miller, Eur. Phys. J. B **87**, 100 (2014).
Z. J. Tan, X. W. Zou, S. Y. Huang, W. Zhang and Z. Z. Jin, Phys. Rev. E **65**, 041101 (2002).
G. M. Schütz and S. Trimper, Phys. Rev. E **70**, 045101 (2004).
P. Flory, *Principles of Polymer Chemistry* (Cornell University Press, Ithaca, 1953).
P. G. de Gennes, *Scaling Concepts in Polymer Physics* (Cornell University Press, Ithaca, 1979).
S. Hod and U. Keshet, Phys. Rev. E **70**, 015104(R) (2004).
B. Schulz and S. Trimper, Phys. Lett. A **256**, 266 (1999).
M. Schulz and S. Trimper, Phys. Rev. B **64**, 233101 (2001).
U. Keshet and S. Hod, Phys. Rev. E **72**, 046144 (2005).
F. N. C. Paraan and J. P. Esguerra, Phys. Rev. E **74**, 032101 (2006).
M. A. A. da Silva, J. C. Cressoni and G. M. Viswanathan, Physica A **364**, 70 (2006).
J. C. Cressoni, M. A. A. da Silva and G. M. Viswanathan, Phys. Rev. Lett. **98**, 070603 (2007).
V. B. Sapozhnikov, J. Phys. A **27**, L151 (1994).
A. Ordemann, G. Berkolaiko, S. Havlin and A. Bunde, Phys. Rev. E **61**, R1005 (2000).
A. Ordemann, E. Tomer, G. Berkolaiko, S. Havlin and A. Bunde, Phys. Rev. E **64**, 046117 (2001).
R. G. Ford and D. W. Krumme, J. Theor. Biol. **76**, 125 (1979).
C. M. Grinstead and J. L. Snell, *Introduction to Probability* (American Mathematical Society, 1997).
G. Pólya, Math. Ann. **84**, 149 (1921).
E. W. Montroll, J. SIAM **4**, 241 (1956).
L. Giuggioli and V. Kenkre, Movement Ecology **2**, 20 (2014).
J. A. Stamps and V. V. Krishnan, Q. Rev. Biol. **74**, 291 (1999).
B. van Moorter, D. Visscher, S. Benhamou, L. Börger, M. S. Boyce and J. M. Gaillard, Oikos **118**, 641 (2009).
T. W. Fawcett, B. Fallenstein, A. D. Higginson, A. I. Houston, D. E. W. Mallpress, P. C. Trimmer and J. M. McNamara, Trends Cogn. Sci. **18**, 153 (2014).
E. A. Gaffan and J. Davies, Learning and Motivation **12**, 282 (1981).
S. Eliassen, C. Jørgensen, M. Mangel and J. Giske, Oikos **116**, 513 (2007).
W. J. Bell, *Searching Behaviour: The behavioural ecology of finding resources* (Chapman & Hall, 1991).
--- abstract: 'In order to show the in-principle viability of a recently proposed relativistic positioning method based on the use of pulsed signals from sources at infinity, we present an application example reconstructing the world-line of an idealized Earth in the reference frame of distant pulsars. The method considers the null four-vectors built from the period of the pulses and the direction cosines of the propagation from each source. Starting from a simplified problem (a receiver at rest) we have been able to calibrate our procedure, evidencing the influence of the uncertainty on the arrival times of the pulses as measured by the receiver, and of the numerical treatment of the data. The most relevant parameter turns out to be the accuracy of the clock used by the receiver; actually, the uncertainty used in the simulations combines both the accuracy of the clock and the fluctuations in the sources. As an evocative example, the method has then been applied to the case of an ideal observer moving as a point on the surface of the Earth. The input has been the simulated arrival times of the signals from four pulsars at the location of the Parkes radiotelescope in Australia. Some substantial simplifications have been made, excluding both the visibility problems due to the actual size of the planet and the detailed behaviour of the sources. A rough application of the method to a three-day run gives a correct result with poor accuracy. The accuracy is then enhanced to the order of a few hundred meters if a continuous set of data is assumed. The method could actually be used for navigation across the Solar System, based on artificial sources rather than pulsars. The viability of the method, whose additional value lies in its self-sufficiency, i.e. independence from control by any other operator, has been confirmed.'
author: - Matteo Luca Ruggiero - Emiliano Capolongo - Angelo Tartaglia title: Pulsars as celestial beacons to detect the motion of the Earth --- Introduction {#sec:intro} ============ Soon after the discovery of pulsars, it was suggested that they could be used as stellar beacons for spacecraft navigation in the Solar System and beyond [@oldiepuls]. Today there are proposals focusing on the use of X-ray pulsars for navigation [@xrays1; @xrays2]; they are based on the accurate measurement of the times of arrival of pulses or of phase differences, in order to determine the position of the spacecraft. In previous papers [@corea; @pulsararc10] we operationally described how a relativistic positioning system can be built using electromagnetic signals emitted by pulsating sources, such as pulsars, thanks to the use of emission coordinates [@coll3]. The simplest way of understanding how emission coordinates work is to consider four emitting clocks, in motion through space while broadcasting their proper times. The intersections of the past lightcone of an event with the world-lines of the emitting clocks can be labeled with the proper times of emission along the world-lines of the emitters: these proper times are the emission coordinates of the given event. We showed that, by receiving pulses from a set of different sources whose positions in the sky and periods are assumed to be known, it is possible to determine the user’s coordinates and spacetime trajectory in the reference frame where the sources are at rest. In doing so, the phases of the received pulses play the role of emission coordinates. In particular, we developed a procedure that can be used to determine the user’s trajectory by assuming that its world-line is a straight line during a proper time interval corresponding to the reception of a limited number of pulses, which means that the effects of the acceleration are negligibly small.
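For the idealized case of static emitters broadcasting their own proper time, the emission coordinates described above reduce to light-cone delays. The following Python sketch (ours, not part of the original papers; the emitter positions and units are illustrative, with $c=1$) labels an event by the four proper times at which its past light cone crosses the emitters' world-lines:

```python
import numpy as np

def emission_coordinates(event, emitter_positions, c=1.0):
    """Emission coordinates of the event (t, x, y, z) with respect to
    static emitters that broadcast their proper time (equal, here, to
    coordinate time): the past light cone of the event crosses the
    world-line of an emitter sitting at x_E at tau_E = t - |x - x_E|/c."""
    t, x = event[0], np.asarray(event[1:], dtype=float)
    return [t - np.linalg.norm(x - np.asarray(xe, dtype=float)) / c
            for xe in emitter_positions]

# Four emitters at unit distance from the spatial origin:
emitters = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, 0, 1)]
taus = emission_coordinates((10.0, 0.0, 0.0, 0.0), emitters)
```

For this particular event every light-travel delay is one unit, so all four emission coordinates equal $10-1=9$; four such numbers identify the event uniquely as long as the emitters are not coplanar.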
Our approach is based on the use of null frames in flat Minkowski spacetime, but we discussed its possible application to actual physical events provided that suitable approximations hold true. In this paper we want to develop a practical application of our method. In order to do so we imagine that our sources are four millisecond pulsars; then, simulating the arrival times of their signals, we show how the world-line of the receiver is reconstructed. First, for a sort of calibration, we imagine an observer at rest with respect to “fixed” stars and reconstruct his (in this case trivial) spacetime trajectory, with the uncertainties implied by the procedure. Then, making use of the TEMPO2 software [@tempo2], a pulsar-timing package that simulates the times of arrival of pulses at a given location on the Earth, we determine the trajectory of that location in spacetime, due to the combined motion of the Earth around the Sun and to its daily rotation. Eventually, we compare the reconstructed world-line with the one obtained from the ephemerides, and discuss the achieved accuracy. Both examples are meant as feasibility tests of the whole process; the problems with a real positioning system are discussed in the conclusion. The paper is organized as follows: in Section \[sec:method\] we outline the theoretical framework, then the results of our simulations are given in Section \[sec:numsim\]. Conclusions are given in Section \[sec:conc\]. The Method {#sec:method} ========== In this Section, we give an outline of the method that we will apply to reconstruct the spacetime trajectory. To begin with, let us introduce the basic null frame which allows one to determine a world-line. In order to define a four-dimensional spacetime frame based on the use of null coordinates, at least four sources of electromagnetic signals are needed.
To fix ideas, we suppose to pick up a set of four pulsars, provided their periods and angular positions in the sky are known: namely, each of these sources is characterized by the frequency of its periodic signals and by their propagation directions in space; the emitters are supposed to be at rest, in a given reference frame, at spatial infinity (hence, their signals can be thought of as plane waves). In the reference frame where the sources are at rest we associate to each of them a null four-vector[^1] $\bm f $ whose Cartesian contravariant components are given by $$f^{\mu } \doteq \frac{1}{c T}(1,\vec{\mathbf{n}}), \label{eq:deff}$$ where $T$ is the (proper) signal period, and $\vec{\mathbf{n}}$ is the unit vector describing the direction of propagation in the given frame. A receiver counts the periodic electromagnetic signals coming from the sources and measures the proper time intervals between successive arrivals. To each spacetime event defined by the position four-vector $$\bm{r}\doteq (ct,\vec{\mathbf{x}}), \label{eq:defr}$$ it is possible to associate the scalar function $X(\bm r)$ $$X(\bm r)\doteq \bm f\cdot \bm r, \label{eq:defX}$$ where the dot stands for the Minkowski scalar product. The scalar $X$ might be thought of as the phase difference of the wave described by $\bm f$ with respect to its value at the origin of the coordinates, where $\bm r= \bm 0$. Given four emitters, the four wave four-vectors $\{\bm f_{(a)},\bm f_{(b)},\bm f_{(c)},\bm f_{(d)}\}$ in the form (\[eq:deff\]) constitute the null frame or null tetrad (where $a,b,c,d$ label the sources). In particular, at any event $\bm r$ the phases $$X_{(N)}\doteq \bm f_{(N)}\cdot \bm r,\quad N=a,b,c,d\ \label{eq:defXN}$$ are defined.
Then, starting from the definition of the symmetric matrix $$\eta _{(M)(N)}=\bm f_{(M)}\cdot \bm f_{(N)}, \label{eq:defgab}$$ and its inverse $\eta ^{(P)(N)}$ (such that $\eta _{(M)(P)}\eta ^{(P)(N)}=\delta _{(M)}^{(N)}$), we define the vectors $$\bm f^{(N)}=\eta ^{(N)(M)}\bm f_{(M)}. \label{eq:fcontraN}$$ Eventually, as we showed in [@pulsararc10], the position four-vector is expressed in the form $$\bm r = X_{(a)}\bm f^{(a)}+X_{(b)}\bm f^{(b)}+X_{(c)}\bm f^{(c)}+X_{(d)}\bm f^{(d)}, \label{eq:defXzero3}$$ where the phases $X_{(N)}$ play the role of coordinates with respect to the null frame; in other words, eq. (\[eq:defXzero3\]) shows that it is possible to obtain the coordinates of the event $\bm r$ in terms of the measured phases, and thus to reconstruct the world-line of the receiver. However, in actual situations, the received signals consist of a series of pulses so that, in general, the values of $X_{(N)}(\bm r)$ are not directly observable. In what follows we address this issue and describe how the phases can be empirically determined, provided that some assumptions hold true.\ To begin with, we call reception the event corresponding to the arrival of a pulse from one of the sources. As a consequence, an arbitrary reception event can be written in the form (\[eq:defXzero3\]) with $$\begin{aligned} X_{(a)} &=&n_{(a)}+p, \label{eq:defp} \\ X_{(b)} &=&n_{(b)}+q, \label{eq:defq} \\ X_{(c)} &=&n_{(c)}+s, \label{eq:defs} \\ X_{(d)} &=&n_{(d)}+w, \label{eq:defw}\end{aligned}$$ where we have expressed the phases $X_{(N)}$ in terms of an integer $n_{(N)}$, describing the sequence of cycles of signals, and a fractional value: e.g. $p$ is the fractional value of the cycle in $X_{(a)}$, and the equivalent holds for $q,s,w$, with $0\leq p,q,s,w<1$. This amounts to saying that, at any reception event, in eqs. (\[eq:defp\])-(\[eq:defw\]), only one of the $p,q,s,w$ is in general null.
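As a concreteness check, the algebra of eqs. (\[eq:deff\])-(\[eq:defXzero3\]) can be sketched numerically. The following Python fragment is our illustration (the directions, periods and units are arbitrary, with $c=1$): it builds the null tetrad, the matrix $\eta_{(M)(N)}$ and its inverse, and recovers a position four-vector from its four phases:

```python
import numpy as np

# Minkowski metric, signature (+, -, -, -)
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def null_vector(T, n_hat, c=1.0):
    """f^mu = (1, n)/(cT) for a source of period T whose signal
    propagates along the unit 3-vector n_hat."""
    return np.concatenate(([1.0], np.asarray(n_hat, dtype=float))) / (c * T)

def dual_frame(f_list):
    """Dual vectors f^(N) = eta^(N)(M) f_(M), where eta_(M)(N) is the
    Gram matrix of Minkowski products of the four null vectors."""
    F = np.array(f_list)            # rows are the f_(N)
    gram = F @ ETA @ F.T            # eta_(M)(N)
    return np.linalg.inv(gram) @ F  # rows are the f^(N)

def reconstruct(phases, f_dual):
    """Position four-vector r = sum_N X_(N) f^(N)."""
    return np.asarray(phases) @ f_dual

# Tetrahedral directions, arbitrary periods, an arbitrary event r:
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
fs = [null_vector(T, d) for T, d in zip([1.0, 0.8, 1.3, 0.5], dirs)]
r = np.array([2.0, 0.3, -1.2, 0.7])
phases = [f @ ETA @ r for f in fs]          # X_(N) = f_(N) . r
r_back = reconstruct(phases, dual_frame(fs))
```

Here `r_back` agrees with `r` to machine precision, confirming that four independent null directions suffice to invert the phase map.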
Once we choose an arbitrary origin, we may count the pulses in order to measure the $n_{(N)}$, but we have no direct means to measure the fractional values $p,q,s,w$. However, simple geometric considerations allow us to introduce a procedure to determine these values: first, we suppose that the acceleration of the receiver is small during a limited series of reception events, so that we may identify the user’s world-line with a straight line; then, we suppose that, by means of its own clock, the receiver can measure the proper time interval $\tau _{ij}$ between the i-th and j-th arrivals. With these assumptions it is possible to determine the fractional values $p,q,s,w$. To this end, let us consider two sequences[^2] of arrival times from the sources: they consist of eight reception events, each of them in the form $$\bm r_{j}=X_{(a)j}\bm f^{(a)}+X_{(b)j}\bm f^{(b)}+X_{(c)j}\bm f^{(c)}+X_{(d)j}\bm f^{(d)},\quad j=1,..,8, \label{eq:arrivalj}$$ where the $X_{(N)j}$’s are expressions like (\[eq:defp\]–\[eq:defw\]). We arrange the events as follows: $\bm r_{1}$ is the arrival of a generic signal from pulsar (or any other equivalent source) a, $\bm r_{2}$ is the arrival of the first signal of pulsar b after $\bm r_{1}$, $\bm r_{3}$ is the arrival of the first signal of pulsar c after $\bm r_{1}$, and $\bm r_{4}$ is the arrival of the first signal of pulsar d after $\bm r_{1}$ (the sources are ordered from longest (a) to shortest (d) period); $\bm r_{5}$ is the arrival of the first a-signal of the second sequence, and so on. The flatness hypothesis allows us to write the displacement four-vector between two reception events in the form $$\bm r_{ij}\doteq \bm r_{i}-\bm r_{j}=\left( X_{(N)i}-X_{(N)j}\right) \bm f^{(N)}\doteq \Delta X_{(N)ij}\bm f^{(N)}. \label{eq:deltarij}$$ In fact, the assumption that the world-line of the receiver is straight during a limited number of paces of the signals can also be used to provide further information.
Let us consider three successive reception events i, j, k; we have $$\bm r_{ji}=\Delta X_{(N)ji}\bm f^{(N)},\quad \quad \bm r_{kj}=\Delta X_{(N)kj}\bm f^{(N)}. \label{eq:deltarjirkj}$$ The straight-line hypothesis allows us to write $$\frac{\tau _{ji}}{\tau _{kj}}=\frac{\Delta X_{(a)ji}}{\Delta X_{(a)kj}}=\frac{\Delta X_{(b)ji}}{\Delta X_{(b)kj}}=\frac{\Delta X_{(c)ji}}{\Delta X_{(c)kj}}=\frac{\Delta X_{(d)ji}}{\Delta X_{(d)kj}}, \label{eq:rapporti1}$$ where $\tau _{ji}$, $\tau _{kj}$ are the proper times elapsed between the i-th and j-th, and j-th and k-th reception events, respectively, taken with the signed convention $\tau_{ij}=\tau_{i}-\tau_{j}$. These relations enable us to obtain the values we are interested in: in fact, we may write eq. (\[eq:defXzero3\]) in the form $$\bm r_{j}=X_{(a)j}\bm f^{(a)}+X_{(b)j}\bm f^{(b)}+X_{(c)j}\bm f^{(c)}+X_{(d)j}\bm f^{(d)},\quad j=1,..,8, \label{eq:arrivalj1}$$ where $X_{(N)j}$ is given by $$X_{(N)i}=\left(\begin{array}{c|c|c|c}
n^{(a)}_{1} & n^{(b)}_{1}+q_{1} & n^{(c)}_{1}+s_{1} & n^{(d)}_{1}+w_{1}\\
n^{(a)}_{2}+p_{2} & n^{(b)}_{2} & n^{(c)}_{2}+s_{2} & n^{(d)}_{2}+w_{2}\\
n^{(a)}_{3}+p_{3} & n^{(b)}_{3}+q_{3} & n^{(c)}_{3} & n^{(d)}_{3}+w_{3}\\
n^{(a)}_{4}+p_{4} & n^{(b)}_{4}+q_{4} & n^{(c)}_{4}+s_{4} & n^{(d)}_{4}\\
n^{(a)}_{5} & n^{(b)}_{5}+q_{5} & n^{(c)}_{5}+s_{5} & n^{(d)}_{5}+w_{5}\\
n^{(a)}_{6}+p_{6} & n^{(b)}_{6} & n^{(c)}_{6}+s_{6} & n^{(d)}_{6}+w_{6}\\
n^{(a)}_{7}+p_{7} & n^{(b)}_{7}+q_{7} & n^{(c)}_{7} & n^{(d)}_{7}+w_{7}\\
n^{(a)}_{8}+p_{8} & n^{(b)}_{8}+q_{8} & n^{(c)}_{8}+s_{8} & n^{(d)}_{8}
\end{array}\right) \label{eq:tableXNi}$$ and the fractional values are expressed in terms of observed quantities by: $$p_{1}=0,\qquad q_{1}=n^{(b)}_{2}-n^{(b)}_{1}-\left(n^{(b)}_{6}-n^{(b)}_{2}\right)\frac{\tau_{21}}{\tau_{62}},\qquad s_{1}=n^{(c)}_{3}-n^{(c)}_{1}-\left(n^{(c)}_{7}-n^{(c)}_{3}\right)\frac{\tau_{31}}{\tau_{73}},\qquad w_{1}=n^{(d)}_{4}-n^{(d)}_{1}-\left(n^{(d)}_{8}-n^{(d)}_{4}\right)\frac{\tau_{41}}{\tau_{84}}, \label{eq:ext11}$$ $$p_{2}=n^{(a)}_{1}-n^{(a)}_{2}+\left(n^{(a)}_{5}-n^{(a)}_{1}\right)\frac{\tau_{21}}{\tau_{51}},\qquad q_{2}=0,\qquad s_{2}=n^{(c)}_{3}-n^{(c)}_{2}+\left(n^{(c)}_{7}-n^{(c)}_{3}\right)\frac{\tau_{23}}{\tau_{73}},\qquad w_{2}=n^{(d)}_{4}-n^{(d)}_{2}+\left(n^{(d)}_{8}-n^{(d)}_{4}\right)\frac{\tau_{24}}{\tau_{84}}, \label{eq:ext12}$$ $$p_{3}=n^{(a)}_{1}-n^{(a)}_{3}+\left(n^{(a)}_{5}-n^{(a)}_{1}\right)\frac{\tau_{31}}{\tau_{51}},\qquad q_{3}=n^{(b)}_{2}-n^{(b)}_{3}-\left(n^{(b)}_{6}-n^{(b)}_{2}\right)\frac{\tau_{23}}{\tau_{62}},\qquad s_{3}=0,\qquad w_{3}=n^{(d)}_{4}-n^{(d)}_{3}+\left(n^{(d)}_{8}-n^{(d)}_{4}\right)\frac{\tau_{34}}{\tau_{84}}, \label{eq:ext13}$$ $$p_{4}=n^{(a)}_{1}-n^{(a)}_{4}+\left(n^{(a)}_{5}-n^{(a)}_{1}\right)\frac{\tau_{41}}{\tau_{51}},\qquad q_{4}=n^{(b)}_{2}-n^{(b)}_{4}-\left(n^{(b)}_{6}-n^{(b)}_{2}\right)\frac{\tau_{24}}{\tau_{62}},\qquad s_{4}=n^{(c)}_{3}-n^{(c)}_{4}-\left(n^{(c)}_{7}-n^{(c)}_{3}\right)\frac{\tau_{34}}{\tau_{73}},\qquad w_{4}=0, \label{eq:ext14}$$ and so on. The procedure that we have just described allows us to give an operational meaning to the phases (\[eq:defXN\]) and, hence, to the determination of the position four-vector through (\[eq:defXzero3\]). By simply measuring proper times and considering sequences of octets, it is possible to reconstruct the whole receiver’s world-line. In what follows, we test the reliability of this procedure; in particular, we consider different situations where four emitters send pulsed signals to the receiver. We are going to show how, by measuring the proper time intervals between reception events, it is possible to reconstruct the receiver’s trajectory in spacetime, once the emission directions and frequencies of the pulsating signals are known.
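The determination of a fractional phase from pulse counts and clock readings can be illustrated with a short Python sketch of ours (the variable names are illustrative). For a given source, let $j$ and $k$ be the two reception events at which that source's fractional part vanishes, and $i$ the event whose fraction we want; linear interpolation of the phase along the straight world-line yields the fraction from the integer counts and the signed proper times $\tau_{ij}=\tau_{i}-\tau_{j}$:

```python
def fractional_phase(n_i, n_j, n_k, tau_ji, tau_kj):
    """Fraction of a cycle at event i for one source, assuming the
    source's phase grows linearly with proper time (straight world-line)
    and takes the exact integer values n_j and n_k at events j and k.
    tau_ji = tau_j - tau_i and tau_kj = tau_k - tau_j are signed
    proper-time intervals read on the receiver's clock."""
    return n_j - n_i - (n_k - n_j) * tau_ji / tau_kj

# Toy check: phase X(tau) = 0.5 + 2*tau for one source; its pulses are
# received at tau = 0.25 (count n = 1) and tau = 1.25 (count n = 3);
# at tau = 0 the count is n = 0 and the unobserved fraction is 0.5:
q = fractional_phase(0, 1, 3, 0.25 - 0.0, 1.25 - 0.25)
```

Here `q` evaluates to 0.5, the fractional part the receiver cannot read off directly; with signed intervals the same expression also covers events that occur after the bracketing pulse.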
Numerical Simulations {#sec:numsim} ===================== In order to test the procedure described above, it is necessary to define the null frame, that is to say the basis four-vectors in the form (\[eq:deff\]) for each source. In other words, we need to know the positions of the sources and their periods. Then, in order to apply the procedure, we need the arrival times of the pulses, as measured by the receiver. Our purpose is to demonstrate how the system works in practice, but we have no actual device at hand, so we may follow two strategies: a) simulate the sources, giving them an arbitrary position in the sky and an arbitrary periodicity, then somehow mimicking the uncertainties associated with real sources; b) choose, as an example, four real millisecond pulsars with the data we find in the literature. In practice the difference between the two approaches is not really important, since the second choice is only nominally different from the first, so we decided to use the parameters of four real pulsars as they are listed in Table \[tab:table1\]. Next we proceed in two steps: - a\) we numerically simulate the reception events of the pulses, starting from the knowledge of the receiver’s trajectory and the definition of the null frame; this is in particular applied to the case of an observer at rest with respect to the “fixed” stars. - b\) we use software which simulates the arrival times of the pulses received at a given terrestrial location, emitted by our set of pulsars; in this way we try to reconstruct the world-line of the receiver at the chosen position on Earth. In both cases a previous knowledge is used to generate a series of arrival times as they would be sensed by the receiver’s antenna. In the first case, the output of our algorithm is compared with the receiver’s world-line, which we know is a straight line, and we can test how any uncertainty present in the input arrival times is transferred to the outcome.
In the second case, the simulator generates sequences of arrival times as they would be obtained at an antenna and we use them, applying our algorithm in order to rebuild the motion of the Earth or, more correctly, the trajectory of a terrestrial location where the pulses would be received, which moves because of the daily rotation and the motion of the Earth around the Sun: this trajectory is then compared to the one obtained by the ephemerides. More precisely we reconstruct the motion of the receiver with respect to the “fixed” stars, assuming as the origin the event where the reception has started: in a sense we produce a self-positioning. The position of the initial event with respect to any given reference frame must be known by other means.

  ------------ ---------- -------------------- -------------------
  Pulsar       T (ms)     Elong $(^{\circ})$   Elat $(^{\circ})$
  J1730-2304   $8.123$    $263.19$             $0.19$
  J2322+2057   $4.808$    $0.14$               $22.88$
  B0021-72N    $3.054$    $311.27$             $-62.35$
  B1937+21     $1.558$    $301.97$             $42.30$
  ------------ ---------- -------------------- -------------------

  : The parameters of the four pulsars we chose are listed as they were taken from the ATNF Pulsar Catalogue. The basis four-vectors are obtained after computing the direction cosines from the ecliptic coordinates; then use is made of the formula $\bm f _{(N)} =\frac{1}{c T_{(N)}}(1,\vec{\mathbf{n}}_{(N)})$, for $N=a,\ b,\ c,\ d$. Both the periods and the direction cosines are assumed to be known with an accuracy limited by the numerical precision only.[]{data-label="tab:table1"}

User at rest {#ssec:rest} ------------ As written above, the first step is to consider the simple case of a receiver at rest in the reference frame of the sources: this can be thought of as a sort of calibration of our procedure. We use the parameters of the four real pulsars listed in Table \[tab:table1\], chosen from the ATNF Pulsar Catalogue [@catalogo].
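The conversion from tabulated ecliptic coordinates to the four-vectors $\bm f_{(N)}$ can be sketched as follows. This is a Python illustration of ours; in particular, the convention that the signal propagates from the source toward the observer, i.e. opposite to the unit vector pointing at the source on the sky, is our assumption:

```python
import math

C = 299792458.0  # speed of light, m/s

def propagation_direction(elong_deg, elat_deg):
    """Unit vector n along which the plane wave travels, taken here as
    the opposite of the unit vector toward the source at the given
    ecliptic longitude/latitude (degrees)."""
    lam, beta = math.radians(elong_deg), math.radians(elat_deg)
    to_source = (math.cos(beta) * math.cos(lam),
                 math.cos(beta) * math.sin(lam),
                 math.sin(beta))
    return tuple(-x for x in to_source)

def null_four_vector(T, elong_deg, elat_deg):
    """f = (1, n)/(cT), with the period T in seconds."""
    n = propagation_direction(elong_deg, elat_deg)
    return tuple(x / (C * T) for x in (1.0,) + n)

# B1937+21 from the table: T = 1.558 ms, Elong = 301.97, Elat = 42.30
f_b1937 = null_four_vector(1.558e-3, 301.97, 42.30)
```

By construction the spatial part has unit norm, so the four-vector is null with respect to the Minkowski metric, as required by eq. (\[eq:deff\]).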
As for the positions and periods, in the simulation they have an accuracy limited by the numerical precision only; actually the real uncertainty would produce here a systematic error which is irrelevant for our purposes. In any case, we introduced a Gaussian error with a 1 ns $\sigma$ in the determination of the times of arrival in order to account both for the accuracy of the receiver’s clock and for the uncertainty in the behaviour of the sources, due to the fluctuation of their periods. If we want to be realistic we should recognize that the chosen figure is rather optimistic, or at least that it would require a rather long integration time. Given the example we are working out here (observer at rest) the length of the integration time is indeed no problem; however, we must also recall that our aim is simply to see how the assumed uncertainty is reflected in the final result, without worrying about the spectral composition of the noise or other details. For the same reasons we did not take into account either the proper motions of the sources or the natural decay of their periods, considering that these factors are irrelevant on such short times as the ones relevant for our simulation; if needed they could easily be included. The arrival times of the pulses are obtained numerically, by considering the intersections of the receiver’s world line $$\rho \equiv \left\{\begin{array}{ccc}x(\lambda) &=& 0 \\y(\lambda) &=& 0 \\z(\lambda) &=& 0 \\t(\lambda) & = & \lambda\end{array}\right. \label{eq:rholambda1}$$ with the constant-phase hyperplanes of the sources $$\bm f_{(N)}\cdot \bm r = n_{(N)},\quad N=a,\ b,\ c,\ d, \label{eq:elicapiani1}$$ whose zeros have been arbitrarily chosen. The arrival times have been ordered according to the procedure described in Section \[sec:method\] and, then, they have been used to compute the phases at the reception events. Eventually, the reconstructed receiver trajectory $\bar \rho $ has been obtained in terms of these phases, according to eq.
(\[eq:defXzero3\]): $$\bm r [ {i}] = X[ {i}]_{(a)}\bm f^{(a)}+X[ {i}]_{(b)}\bm f^{(b)}+X[ {i}]_{(c)}\bm f^{(c)}+X[ {i}]_{(d)}\bm f^{(d)}, \label{eq:defXzero3elica}$$ where $[ i]$ is an index labeling the i-th reception event. In particular, from (\[eq:defXzero3elica\]) we obtain the Cartesian components $\bar t, \bar x, \bar y, \bar z$ for each reception event. The comparison between the analytic spacetime trajectory $ \rho$ and the reconstructed one $\bar \rho$ is given in Figure \[fig:resta\], in a two-dimensional-plus-time view; as can be seen, the dispersion of the reconstructed positions is contained within one meter or less. In Figure \[fig:restb\] the components of the reconstructed trajectory are given, as functions of the sequence of the reception events. It is possible to see that the $x$-component has a larger dispersion than the other two; this is due to the fact that the sources are not isotropically located in the sky (the ideal configuration would be tetrahedron-like, which cannot be attained in the real world). The RMS deviation of the reconstructed positions with respect to the ideal ones (vertical bar in the origin) is less than 0.40 m; this is consistent with the 1 ns uncertainty that we assumed for the arrival times. Reconstructing the motion of the Earth {#ssec:terra} -------------------------------------- The next step, as announced, is to use our method for a less trivial and a bit more realistic situation. It would be interesting to use real data, i.e. a sequence of arrival times from known pulsars, as measured at a radiotelescope. In doing so, it would be possible to rebuild the motion of the radiotelescope, determined by the combined effect of the terrestrial rotation and the motion of the Earth around the Sun. Actually, strictly speaking, our approach to the use of pulsating sources for positioning is defined in Minkowski spacetime and the sources are supposed to be ideal, i.e. at rest and emitting at a constant frequency.
So, one could expect that it is not possible to use it to deal with a real situation where the gravitational field is present (as is the case for the Earth in the gravitational field of the Sun) and the sources are real pulsars. However, as discussed in [@pulsararc10], the effects of the gravitational field, the proper motion of the sources and the variability of their emission rate are small enough to be neglected in many actual physical situations, among which the one in which we are interested. Hence, in principle, it could be possible to define a null basis by taking four known millisecond pulsars (which have a highly regular emission rate, see e.g. [@kramer]), suitably distributed in the sky (in a configuration as close to tetrahedral as possible). The receiver could be one of the terrestrial radiotelescopes, now spread worldwide, which are able to point, hook up and follow the faint signals emitted by the chosen pulsars. If it were possible for a given radiotelescope to simultaneously receive four signals from four sources, then the application of our procedure could determine the spacetime trajectory of the radiotelescope. Unfortunately, for the moment, it is technically impossible for a given radiotelescope to simultaneously track four distinct pulsars located at very different positions in the sky.[^3] Given the purpose of our exercise we may keep the exterior dressing of a real measurement, while using simulated arrival times. To this aim we shall use TEMPO2 [@tempo2], a specific software environment, widely used nowadays by astronomers and astrophysicists studying pulsars, which enables one to simulate pulsar timing. In particular, the TEMPO2 plug-in “*fake*” enables one to simulate the time residuals expected from a given pulsar observation session.
In fact, this code automatically generates a set of times of arrival for a specific pulsar at a predefined location on the Earth’s surface (corresponding, for instance, to a radiotelescope site), in a time window defined by the user, and starting from the transit time of the given pulsar through the local meridian (superior culmination point). It takes into account the contribution to the timing of the gravitational field in the Solar System due to the Sun and the other bodies, and other kinematical effects (see e.g. [@straumann04]). It also allows various types of error to be added to the times of arrival, in particular Gaussian noise or red noise (a timing noise that is actually negligible for most millisecond pulsars). Hence, we chose the four pulsars we already used for the previous exercise, introducing here a Gaussian 1 $\mu$s uncertainty. This is a conservative estimate of the error in the timing procedure, due both to the detection process and to the fluctuations of the sources. TEMPO2 has then been for us the equivalent of an antenna where the sequences of pulses from our quartet of sources are received. We of course know that TEMPO2 presupposes a complete knowledge of the position of the sources, which in turn is obtained assuming that the motion of the observatory with respect to the fixed stars is already given by other means. In practice we have a logical loop here; however, in a real situation, instead of TEMPO2 we would have a receiving antenna, and the downstream processing of the data is insensitive to the real nature of the input. Furthermore, as already mentioned, ours is, strictly speaking, a self-positioning rather than an absolute positioning. Given three independent space directions in the sky (directions of the axes of the reference frame), the origin is assumed to coincide with the starting point of the positioning process.
The location of that origin with respect to some global reference frame, such as the International Celestial Reference System (ICRS), must be independently defined. The arrival times have been simulated during a time interval of about three days, at a given position on the surface of the Earth, namely that of the Parkes observatory in Australia. In particular, we considered for each pulsar a set of about 28000 pulses, sampled out of the continuous sequence every 10 seconds. The duration of the simulation allows us to evidence the actual motion of the observatory, due to the combined motion of the Earth around the Sun and its daily rotation. The chosen pulsars define the null frame, and they are supposed to be at rest in the ICRS (where, in turn, the barycenter of the Solar System is at rest). By applying the procedure described above, we have rebuilt the trajectory of the observatory. To make a comparison, we consider $\zeta$, the trajectory of the observatory as determined by the ICRS ephemerides, having components $t,x,y,z$, while, as before, the reconstructed trajectory $\bar \zeta$ has been obtained according to eq. (\[eq:defXzero3\]): $$\bm r [ i] = X[ i]_{(a)}\bm f^{(a)}+X[ i]_{(b)}\bm f^{(b)}+X[ i]_{(c)}\bm f^{(c)}+X[ i]_{(d)}\bm f^{(d)}, \label{eq:defXzero3earth}$$ where $[ i]$ is an index labeling the i-th reception event. In particular, from (\[eq:defXzero3earth\]) we obtain the Cartesian components $\bar t, \bar x, \bar y, \bar z$. The results are shown in Figure \[fig:eartha\], where the reconstructed spatial trajectory is compared with the one determined by the ICRS ephemerides of the chosen observatory. The scale of the figure does not permit one to appreciate the differences between the two trajectories. Actually this application of the method is purely indicative.
TEMPO2 has of course not been designed for our purposes, so sampling the data every 10 seconds may introduce some additional uncertainty; moreover, as we stressed in [@pulsararc10] referring to the Geometric Dilution Of Precision (GDOP), a crucial role in minimizing the uncertainty is played by the geometry of the sources: the uncertainty is minimized when the volume spanned by the source directions is maximized. We stress once more the demonstrative purpose of our work, which has led us to disregard a series of aspects that should be taken into account in a real positioning system. We have, for instance, assumed that all four pulsars of our example are simultaneously visible in the sky of Parkes at any moment, which is not the case. We could have chosen a set of circumpolar pulsars in order to achieve the continuous visibility condition; in that case, however, the sources, being located in the same region of the sky, would have spoiled the quality of the positioning for purely geometric reasons. In a real system we would need redundancy, considering more than four sources (as is the case for the terrestrial GPS) so that at least four are always above the horizon. The system would use several quartets and would have to pass smoothly from one to another when one of the stars sets or rises. In any case, the simultaneous use of more than one set of four sources would also allow the position to be obtained by an averaging process, even in the case of a practically pointlike receiver, such as a spacecraft. Another problem arises from the fact that determining the sequence of arrival times, free of perturbations and noise, requires an integration time that could be too long for the approximation of a linear worldline to hold. 
However, if we consider the case of the motion of the Earth, we see that the deviation of the worldline from the rectilinear trend in 10 seconds is of the order of 1 part in 10$^{10}$, which means that there is reasonable room for the integration. With all this in mind, we consider that the 1 microsecond $\sigma$ noise we introduced accounts reasonably well for most short-time disturbances in the arrival times. Conclusions {#sec:conc} =========== We studied the possibility of using sources which emit pulsating electromagnetic signals for positioning purposes and, in particular, we focused on a fully relativistic approach, introduced in a previous work, which allows positioning with respect to an arbitrary event in flat spacetime. This approach is based on the definition of a null frame by means of the four-vectors associated with the signals in the inertial reference frame where the sources are at rest; these, in turn, are determined by the emission directions and the frequencies of the pulsating signals. The procedure for position determination rests upon the hypothesis that the receiver’s world-line is a straight line during a proper time interval corresponding to the reception of a limited number of pulses, which holds true if the effects of its acceleration are negligibly small during that time. This is indeed true for any solid system when the time span is only a fraction of a second, of the order of, say, one hundredth or less. Of course the space-time curvature also has to be conveniently small for the hypothesis to hold, but again this is the case in the Solar System and under the conditions of the simulation. We tested the reliability of our procedure by means of numerical simulations, on the basis of the definition of null frames where the sources can be thought of as pulsars. 
As a simple example, we showed that our algorithms can be used to reproduce the spacetime trajectory of a user at rest, with an accuracy of the same order as the accuracy with which the data are known. After this initial calibration, we considered a more interesting case, namely the motion of the Earth. In particular, using a simulation plug-in of the TEMPO2 software, we sampled the times of arrival of the signals from a given set of pulsars, as expected from an observation session at a specific location on the Earth, in our case the Parkes observatory in Australia. By collecting data simulating an observation of about three days, we determined the trajectory of the observatory due to the combined rotation and revolution of the Earth. Then, we compared the reconstructed world-line with the one obtained from the ephemerides. The comparison was made for qualitative purposes only, since the intermittent use of TEMPO2 over such a long time (from the viewpoint of our method) may introduce some additional uncertainty; furthermore, our choice and use of pulsars corresponds to an idealized situation. Our method would in general not be used for precision astrometry, but rather for navigation, and most likely the emitters would be artificial sources. Artificial pulsating sources would provide much higher frequencies and much higher signal intensities than pulsars. This would mean smaller antennas and receivers, and hence more portable devices. On the other hand, artificial “pulsars” could not in any way be “fixed” in the sky, and we would have to know their worldlines very well: the direction cosines would be functions of time, as would the frequencies received by an observer at rest. Even so, the method and the algorithm would remain essentially the same. In any case, our preliminary results show the feasibility of the use of pulsating sources for positioning purposes in a fully relativistic framework. 
Of course, in order to deal with true pulsars as well as with artificial signals, further steps are necessary to study and define many technological aspects, if we wish to include an increasing number of practical and technical details of the acquisition of the signals and the subsequent data processing. It is important to repeat and stress that our approach can be applied to artificial sources, such as pulsating sources on board spacecraft or celestial bodies in the Solar System: to this end, our procedure should be generalized to take into account the fact that the sources are not fixed, but follow closed space orbits, and that the signals propagate in a gravitational field. In any case, both the use of pulsar signals and that of artificial sources require further investigation, which seems worthwhile since, as we have shown here, the self-sufficiency of the method proves to be its main advantage, while giving quite acceptable results. Acknowledgements {#sec:ack .unnumbered} ================ We are grateful to Dr. Andrea Possenti for helpful discussions about pulsar timing and the use of the TEMPO2 software. Our research has been supported by the Piemonte local government within the MAESS-2006 project “Development of a standardized modular platform for low-cost nano- and micro-satellites and applications to low-cost space missions and to Galileo” and by ASI. [99]{} Downs G.S., *NASA Technical Report* 32-1594 (1974) Sheikh S.I., Pines D.J., Ray P.S., Wood K.S., Lovellette M.N., Wolff M.T., *Journal of Guidance, Control, and Dynamics* **29**, 49 (2006) Sheikh S.I., Hellings R.W., Matzner R.A., in *Proceedings of ION 63rd Annual Meeting, Cambridge, Massachusetts*, 432 (2007) Tartaglia A., *Acta Astronautica* **67**, 539 (2010), \[`arXiv:0910.2758`\] Tartaglia A., Ruggiero M.L., Capolongo E., *Advances in Space Research* **47**, 645 (2011), \[`arXiv:1001.1068`\] Coll B., in *Proc. 
18th Spanish Relativity Meeting ERE-2005 on A Century of Relativity Physics (AIP Conf. Proc.)*, AIP, New York (2006), \[`arXiv:gr-qc/0601110`\] Hobbs G.B., Edwards R.T., Manchester R.N., *Monthly Notices of the Royal Astronomical Society* **369**, 655 (2006), see also the web site `http://www.atnf.csiro.au/research/pulsar/tempo2` Manchester R.N., Hobbs G.B., Teoh A., Hobbs M., *The Astronomical Journal* **129**, 1993 (2005), see also the web site `http://www.atnf.csiro.au/research/pulsar/psrcat/` Lorimer D.R., Kramer M., *Handbook of Pulsar Astronomy*, Cambridge University Press, Cambridge (2004) Smits R., Kramer M., Stappers B., Lorimer D.R., Cordes J., Faulkner A., *Astronomy and Astrophysics* **493**, 1161 (2009) Straumann N., *General Relativity*, Springer, Berlin (2004) ![Reconstructed world-line of a user at rest. The straight vertical line is the expected result; the scattered blue points are the reconstructed positions obtained by applying our method with a nanosecond Gaussian noise in the input. The RMS dispersion is less than about 40 cm. The greater dispersion of the $x$ coordinates with respect to $y$ is due to the anisotropic distribution of the sources in the sky.[]{data-label="fig:resta"}](fermo_xyt_new.eps) ![The same as Figure \[fig:resta\], but for each coordinate separately; $\Delta t \doteq \bar t-t$ is the difference between the reconstructed time component and the expected one. As can be seen, the accuracy is of the same order as that of the arrival times. The greater dispersion of the $x$ coordinates with respect to the other two is due to the anisotropic distribution of the sources in the sky.[]{data-label="fig:restb"}](fermo_comp_new.eps) ![Space trajectory of the Earth with respect to the pulsars during three days. At this scale the ideal and the reconstructed curves are indistinguishable. 
[]{data-label="fig:eartha"}](3days_alt1.eps) [^1]: Arrowed boldface letters like $\vec{\mathbf{x}}$ refer to spatial vectors, while boldface letters like $\bm f$ refer to four-vectors; Greek indices refer to spacetime components, while Latin letters label the sources. [^2]: They may be subsequent or not, provided the total time span does not spoil the hypothesis of linearity of the world-line. [^3]: However, the new radio telescopes, such as the planned Square Kilometre Array [@ska] (SKA), will be able to do this.
--- abstract: 'This paper analyzes a situation which is common for magnetized technical plasmas such as dc magnetron discharges and HiPIMS systems, where secondary electrons enter the plasma after being accelerated in the cathode fall and encounter a nearly uniform bulk. An analytic calculation of the distribution function of hot electrons is presented; these are described as an initially monoenergetic beam that slows down by Coulomb collisions with a Maxwellian distribution of bulk (cold) electrons, and by inelastic collisions with neutrals. Although this analytical solution is based on a steady-state assumption, a comparison of the characteristic time-scales suggests that it may be applicable to a variety of practical time-dependent discharges, and it may be used to introduce kinetic effects into models based on the hypothesis of Maxwellian electrons. The results are verified for parameters appropriate to HiPIMS discharges, by means of time-dependent and fully-kinetic numerical calculations.' author: - Sara Gallian - Jan Trieschmann - Thomas Mussenbrock - Ralf Peter Brinkmann - 'William N. G. Hitchon' bibliography: - 'main\_Fh.bib' title: Analytic model of the energy distribution function for highly energetic electrons in magnetron plasmas --- Introduction ============ Magnetron discharges are mainly sustained by highly energetic electrons that are emitted from the target and accelerated through the sheath, acquiring a kinetic energy of about $q_e V{_{\rm b}}$. A negative bias $V{_{\rm b}}$ is applied to the metallic target (or cathode), which emits secondary electrons due to the intense ion bombardment. These energetic electrons reach the magnetized plasma bulk and cool down both by inelastic collisions with neutral species and by interaction with the colder electrons. 
All secondary electrons are accelerated by roughly the same potential difference, and this acceleration occurs before there is any significant opportunity for inelastic collisions, so the electrons can be treated as a monoenergetic beam. The low pressure at which the discharge operates and the presence of this energetic electron population cause the electron distribution function to be non-Maxwellian: a kinetic approach to the electron description is needed. The dense plasma in the negative glow region, i.e. the magnetized region close to the target surface, is generated and heated via the interaction with the secondary electron beam. In this work, the majority of the electrons are treated as a Maxwellian population at low temperature (of the order of a few eV). This assumption is supported by the results reported in [@1991-GuimaraesAlmeidaBretagne] for dc magnetron discharges and by the kinetic global model results reported and discussed in section \[AnVsNum\]. Moreover, experimental measurements performed on dc magnetrons [@1991-Sheridan; @2004-Seo] show that the eedf is usually well fitted by a Maxwellian in the vicinity of the cathode (the ‘magnetic trap’ region), and by a bi-Maxwellian further away from it. In fact, the theory developed in this paper is applicable to the magnetic trap region; here the electrons are trapped in the magnetic field and describe spiral-like orbits around the field lines, until they experience a collision. Even though the discharge can be operated at very low pressures (as low as 0.1 Pa), the strong magnetic field effectively confines the electrons and allows them to experience a considerable number of collisions with neutrals, both elastic and inelastic. Therefore a kinetic description of the electrons cannot neglect neutral interactions. The conditions described above hold both in conventional dc magnetron (dcMS) and in high power impulse magnetron sputtering (HiPIMS) [@1999-KouznetsovMacak] systems. 
For a review of HiPIMS discharges see e.g. [@2012-GudmundssonBrenning; @2011-Anders; @2010-SarakinosAlami]. HiPIMS discharges are driven to very high power densities (of the order of kW/cm$^2$) by driving large currents in short pulses: the secondary electrons emitted by the target are accelerated to higher energies than in dc magnetron discharges, making the assumption of distinct Maxwellian and energetic electron populations more accurate. Moreover, since the pulse duration is a few hundred microseconds, the electron distribution functions have enough time to reach a steady state configuration. In fact, according to [@1964-MontgomeryTidman], the Maxwellization time is of the order of a few tens of nanoseconds (for a plasma density of $5 \cdot 10^{18}~\text{m}^{-3}$ and a temperature of 3 eV, it is about 20 ns). The fact that the Maxwellization time is one of the fastest time scales further supports the description of the cold population by a Maxwellian distribution. The other characteristic time scales of the system will be discussed in the following paragraphs. The key ingredients in developing a description of the electron distribution function are: the electron acceleration in the cathode fall region, the interaction of the beam electrons with the Maxwellian background by Coulomb collisions, and the inelastic collisions with the neutral species. Recently, in [@2013-HuoLundin-OhmHeats], it was claimed that heating of the electrons takes place in the presheath region as well, and that this contribution is significant in a large variety of cases. The model given here neglects heating of the beam by electric fields which might occur after the beam crosses the sheath, and instead assumes that the beam acquires an energy of $q_e V{_{\rm b}}$ effectively instantaneously. 
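The Maxwellization time quoted above can be reproduced, up to a factor of order unity, with a Spitzer-type electron self-collision estimate; the prefactor below follows the common NRL-formulary convention, and the value of the Coulomb logarithm is an assumption.

```python
def maxwellization_time(n_e_m3, T_eV, lnLambda=10.0):
    """Electron self-collision time from the Spitzer-type rate
    nu_ee ~ 2.91e-6 * n_e[cm^-3] * lnLambda * T_e[eV]^(-3/2)  [1/s]."""
    nu_ee = 2.91e-6 * (n_e_m3 * 1e-6) * lnLambda * T_eV**-1.5
    return 1.0 / nu_ee

tau = maxwellization_time(5e18, 3.0)
print(f"tau_ee ~ {tau*1e9:.0f} ns")   # a few tens of ns
```

For $n{_{\rm e}} = 5 \cdot 10^{18}~\text{m}^{-3}$ and 3 eV this gives a few tens of nanoseconds, consistent with the order of magnitude quoted from [@1964-MontgomeryTidman].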
In fact, while slowing down, the beam electrons could overall diffuse far enough across the field lines to gain only a fraction of the presheath voltage, which is about $10$-$20\%$ of $V{_{\rm b}}$ [@2012-Rauch-PotentialMap]. Therefore any heating mechanism for the energetic electrons taking place outside of the sheath can be neglected. In this work, a Boltzmann equation for the distribution function of the energetic electrons is given and solved analytically in a simplified closed form. The influence of the different cooling mechanisms is investigated, and finally the energetic electron distribution is compared with the result of a kinetic global model. Simplified Boltzmann equation for the energetic electrons ========================================================= It is assumed that the electron population is divided into two well separated species: cold (Maxwellian) electrons and energetic (hot) electrons. The Maxwellian population constitutes the majority of the electrons, thereby determining the electron density, and possesses a low temperature in the eV range. The energetic (or hot) electrons deposit their energy in the negative glow region and are confined by the magnetic field, until they are slowed down or leave the magnetized region by scattering with both the neutral species and the ions. 
The plasma is considered to be homogeneous in this region, and both electron species are taken to have isotropic distribution functions, since the electron/neutral elastic collisions make the distribution spherically symmetric in velocity space.\ In the paragraphs that follow, the mechanisms responsible for the hot electron cooling are taken into account separately; their cumulative effect gives the simplified Boltzmann equation for the hot electron energy distribution function (eedf), with \[$F^{(h)}$\] = m$^{-3}$eV$^{-1}$, $$\dfrac{{\partial}F^{(h)}}{{\partial}t} = \dfrac{{\partial}J{_{\rm e-e}}}{{\partial}{\varepsilon}} + K{_{\rm ion}} + K{_{\rm exc}} + S - L,$$ where the terms on the right hand side describe, respectively, the electron flux in energy due to Coulomb interactions ${\partial}J{_{\rm e-e}}/{\partial}{\varepsilon}$, and inelastic ionization $K{_{\rm ion}}$ and excitation $K{_{\rm exc}}$ collisions. The terms $L$ and $S$ represent the loss and source terms of mass and energy for the domain. Coulomb collisions ------------------ In this paragraph the Coulomb interaction of the hot population eedf with the Maxwellian electrons is studied. The hot electron distribution function is allowed to interact only with the cold electron distribution, which is isotropic and Maxwellian with temperature $T{_{\rm eM}}$, $$f_M(v_M) = \dfrac{n{_{\rm e}}}{\pi^{3/2} v{_{\rm th}}^3} e^{-v_M^2/v{_{\rm th}}^2},$$ where $n{_{\rm e}}$ is the background electron density and $v{_{\rm th}}=\sqrt{2 k_b T{_{\rm eM}}/m_e}$ is their thermal velocity, with $k_b$ the Boltzmann constant. In fact, the hot electrons are fast enough and have a low enough density so as not to interact among themselves, but only with the Maxwellian background, for which $v{_{\rm th}} \ll v$. 
Starting from the Fokker-Planck equation for isotropic distributions, considering the interaction of $f^{(h)}(v)$ with $f_M$ only, one can write [@1964-MontgomeryTidman] $$\dfrac{{\partial}f^{(h)}}{{\partial}t} = \dfrac{q_e^4 \ln \Lambda}{4 \pi {\varepsilon}_0^2 m_e^2} n{_{\rm e}} \dfrac{1}{v^2} \dfrac{{\partial}}{{\partial}v} \left( f^{(h)} +\dfrac{v{_{\rm th}}^2}{2v} \dfrac{{\partial}f^{(h)}}{{\partial}v} \right) \label{CC_cgi},$$ where $q_e$ and $m_e$ are the electron charge and mass, $\ln \Lambda$ is the Coulomb logarithm and $f^{(h)}$ is normalized so that the hot electron density is $n^{(h)} = \int f^{(h)}(v) 4 \pi v^2 dv$. Here the plasma parameter is defined as $\Lambda = \lambda{_{\rm D}}/b_\perp$, where $\lambda{_{\rm D}}$ is the Debye length due to the Maxwellian electrons, and $b_{\perp} = q{_{\rm e}}^2/(2\pi\epsilon_0 m{_{\rm e}} v^2{_{\rm th}})$ is the impact parameter. The first term in brackets in equation represents a drift in velocity space, while the second is a diffusion. This expression can be rewritten in terms of the slowing down frequency $\nu_s^{hM}$ and parallel velocity diffusion $\nu_\parallel^{hM}$ [@2005-Helander] as $$\dfrac{{\partial}f^{(h)}}{{\partial}t} = \dfrac{1}{v^2} \dfrac{{\partial}}{{\partial}v} \left[ \dfrac{v^3}{2} \left(\nu_s^{hM} f^{(h)} + \nu_\parallel^{hM} v \dfrac{{\partial}f^{(h)}}{{\partial}v} \right) \right], \label{CC_SI}$$ where the superscript $'hM'$ refers to the interaction of electrons of species $h$, i.e. hot electrons, with Maxwellian electrons $M$. The frequency $\nu_s^{hM}$ represents the rate at which the hot electrons are decelerated by collisions with the cold ones; in the case of energetic electrons interacting with Maxwellian ones it can be written as $$\nu_s^{hM}({\varepsilon}) = \dfrac{q_e^4 \ln \Lambda}{4 \pi {\varepsilon}_0^2 m_e^2} n{_{\rm e}} \dfrac{2}{v^3}. \label{nushM}$$ Since $v{_{\rm th}} \ll v$, one can neglect the diffusion term, i.e. the parallel velocity diffusion term. 
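For orientation, the slowing-down frequency above can be evaluated numerically; the beam energy, background density, and Coulomb logarithm used below are representative assumed values, not fitted parameters.

```python
import numpy as np

q_e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants

def nu_s_hM(eps_eV, n_e, lnLambda=10.0):
    """Slowing-down frequency nu_s^{hM} of a hot electron of kinetic
    energy eps_eV [eV] on a Maxwellian background of density n_e [m^-3]."""
    v = np.sqrt(2.0 * q_e * eps_eV / m_e)
    return q_e**4 * lnLambda / (4.0 * np.pi * eps0**2 * m_e**2) * n_e * 2.0 / v**3

nu = nu_s_hM(360.0, 4e18)          # beam at 360 eV, n_e = 4e18 m^-3
print(f"nu_s ~ {nu:.1e} 1/s")      # of the order of 1e4-1e5 1/s
```

The corresponding Coulomb energy-relaxation time is tens of microseconds, i.e. short compared with a HiPIMS pulse of a few hundred microseconds.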
It is convenient to express equation with $f^{(h)}(v)$ in terms of the kinetic energy ${\varepsilon}= \frac{1}{2} m_e v^2/q_e$ in eV, and to normalize the distribution function accordingly as $n^{(h)} = \int F^{(h)}({\varepsilon}) d{\varepsilon}$. Therefore the eedf $F^{(h)}$ is related to $f^{(h)}$ by [@2005-LiebermanLichtenberg-pg191] $$F^{(h)}({\varepsilon}) = 2\pi \left(\dfrac{2 q_e}{m_e}\right)^{3/2} \sqrt{{\varepsilon}} \ f^{(h)}(v).$$ Then the evolution of $F^{(h)}({\varepsilon})$ due to the interaction with a Maxwellian electron background is given by $$\dfrac{{\partial}F^{(h)}}{{\partial}t} = \dfrac{{\partial}}{{\partial}{\varepsilon}} \left( {\varepsilon}\nu_s^{hM} F^{(h)} \right). \label{CC_eps}$$ It will be shown below that the solution has a very weak dependence on $v$, further helping to justify the neglect of the diffusive term. Inelastic collisions -------------------- In this paragraph inelastic collision processes are taken into account. First the ionization term is investigated, then that for excitation. Following [@1981-BretagneDelouya], the ionizing collision term for an isotropic distribution function has the integral form $$\begin{aligned} K{_{\rm ion}}({\varepsilon}, t) =&& \, n_n(t) \int_{{\varepsilon}{_{\rm ion}}+{\varepsilon}}^{\infty} v({\varepsilon}') \ \sigma{_{\rm ion}}({\varepsilon}',{\varepsilon}) f({\varepsilon}',t) \ d{\varepsilon}' \nonumber \\ - &&\, n_n(t) H({\varepsilon}-{\varepsilon}{_{\rm ion}}) \sigma_{\rm ion}({\varepsilon}) v({\varepsilon}) f({\varepsilon},t), \label{Kion}\end{aligned}$$ where the positive term gives the contribution of incident electrons with energy ${\varepsilon}'$ larger than ${\varepsilon}{_{\rm ion}}+{\varepsilon}$ that had an ionizing collision and produced an electron with energy ${\varepsilon}$. 
The probability of an ionizing collision that produces an electron at ${\varepsilon}$ when the incident energy was ${\varepsilon}'$ is given by the differential cross section $\sigma{_{\rm ion}}({\varepsilon}',{\varepsilon})$. The negative term refers to the incident electron having energy ${\varepsilon}$. Here $n_n(t)$ is the gas density, $H(x)$ the Heaviside step function and ${\varepsilon}{_{\rm ion}}$ the ionization threshold. By definition, the dimension of the differential cross section is $[\sigma{_{\rm ion}}({\varepsilon}',{\varepsilon})] =$ m$^2$/eV whereas the dimension of the total cross section is $[\sigma{_{\rm ion}}({\varepsilon})] =$ m$^2$. The two quantities are related by $$\sigma{_{\rm ion}}({\varepsilon}) = \int_0^{({\varepsilon}-{\varepsilon}{_{\rm ion}})/2} \sigma{_{\rm ion}}({\varepsilon},{\varepsilon}_s) \ d{\varepsilon}_s, \label{TotCS}$$ where ${\varepsilon}_s$ is the scattered electron energy, and the energy ${\varepsilon}-{\varepsilon}{_{\rm ion}}$ is the residual energy of the incident electron after an ionizing collision. Alternatively $\sigma{_{\rm ion}}({\varepsilon},{\varepsilon}_s)$ can be defined by introducing a function $g_{h}({\varepsilon}', {\varepsilon})$ $$\sigma{_{\rm ion}}({\varepsilon}') g_{h}({\varepsilon}', {\varepsilon}) = \sigma{_{\rm ion}}({\varepsilon}', {\varepsilon}), \ \int_0^{{\varepsilon}'-{\varepsilon}{_{\rm ion}}} g_{h}({\varepsilon}', {\varepsilon}) d{\varepsilon}= 2.$$ Given the considerable separation on the energy scale of cold and hot electrons, it is assumed that a hot electron with incident energy ${\varepsilon}'\gg{\varepsilon}{_{\rm ion}}$ experiencing an ionizing collision will produce a cold electron at zero energy and a hot electron at the residual energy ${\varepsilon}{_{\rm r}} = {\varepsilon}'-{\varepsilon}{_{\rm ion}}$. 
Expressing this mathematically $$g_{h}({\varepsilon}', {\varepsilon}) = \delta({\varepsilon}) + \delta({\varepsilon}-{\varepsilon}{_{\rm r}}).$$ Substituting into equation , gives $$\begin{aligned} \left. K{_{\rm ion}} \right|_h = && \, n_n(t) \int_{{\varepsilon}{_{\rm ion}}+{\varepsilon}}^{\infty} F^{(h)}({\varepsilon}',t) \sigma_{\rm ion}({\varepsilon}') \delta({\varepsilon}-{\varepsilon}{_{\rm r}}) v({\varepsilon}') \ d{\varepsilon}' \nonumber \\ - && \, n_n(t) F^{(h)}({\varepsilon},t) \sigma_{\rm ion}({\varepsilon}) v({\varepsilon}) \nonumber \\ = && \, F^{(h)}({\varepsilon}+{\varepsilon}{_{\rm ion}},t) \nu{_{\rm ion}}({\varepsilon}+{\varepsilon}{_{\rm ion}},t) \nonumber \\ - &&\, F^{(h)}({\varepsilon},t) \nu{_{\rm ion}}({\varepsilon},t).\end{aligned}$$ where the ionization frequency $\nu{_{\rm ion}}({\varepsilon},t) = n_n(t) \sigma_{\rm ion}({\varepsilon}) v({\varepsilon})$ was introduced. Since hot electrons possess large energies ${\varepsilon}\gg {\varepsilon}{_{\rm ion}}$, one can approximate $$\left. K{_{\rm ion}} \right|_h \approx {\varepsilon}{_{\rm ion}} \dfrac{{\partial}}{{\partial}{\varepsilon}}( \nu{_{\rm ion}} F^{(h)} ).$$ Repeating the same steps, the excitation term can be written as $$\left. K{_{\rm exc}} \right|_h \approx \sum_p {\varepsilon}^{(p)}{_{\rm exc}} \dfrac{{\partial}}{{\partial}{\varepsilon}} \left( \nu^{(p)}{_{\rm exc}} F^{(h)} \right),$$ under the assumption ${\varepsilon}\gg {\varepsilon}^{(p)}{_{\rm exc}}$, where the superscript $(p)$ represents the $p$-th excitation level. Loss term and boundary condition {#Bcs} -------------------------------- So far, the derivation of the Coulomb and inelastic collision terms is rather general and can be applied to all systems showing an energetic low density electron population interacting with a high density Maxwellian one. The loss term written in terms of scattering of the electrons out of the volume under consideration is peculiar to planar magnetron systems. 
Moreover, the boundary condition which introduces hot electrons with energy $q_e V_b$ is specific to magnetron systems (both in dc and HiPIMS mode). The HiPIMS case is addressed for the following reasons: the existence of a low temperature dense Maxwellian electron population is ensured by the high density of the plasma; the secondary electrons emitted by the target are highly energetic, since the bias voltage (and therefore the sheath potential) can be larger than in the typical dc case. The hot electron loss term in a magnetron is written, following [@1991-GuimaraesAlmeidaBretagne], as $$L({\varepsilon}) = \frac{1}{4} \frac{A_{\rm L}\ r_{\rm L}}{V} \left( \nu^{\rm tot}{_{\rm e-n}} + \nu{_{\rm ei}} \right) F^{(h)}, \label{Loss}$$ where $A_{\rm L}$ is the loss area, $r{_{\rm L}}$ is the Larmor radius, $\nu^{\rm tot}{_{\rm e-n}}$ is the total collision frequency of the electrons with the neutral species, and $\nu_{\rm ei}$ is the Lorentz scattering collision frequency for electron-ion Coulomb collisions [@2005-Helander]: $$\nu{_{\rm ei}}({\varepsilon}) = \frac{q_e^4 \ln \Lambda}{4 \pi m_{\rm e}^2 {\varepsilon}_0^2} n{_{\rm i}} \frac{1}{v^3}.$$ The introduction of electrons with the correct mass flow and energy is described by a term in the kinetic equation given by [@1991-GuimaraesAlmeidaBretagne] $$S({\varepsilon}) = \gamma{_{\rm sec}}\frac{I{_{\rm D}}}{V\ q_e} \delta({\varepsilon}-q_e V_b).$$ Here $\gamma{_{\rm sec}}$ is the secondary electron emission coefficient, $I{_{\rm D}}$ the discharge current, $V$ the region volume, and $V_b$ the bias applied at the target. This term has strictly the form of a boundary condition on the energy axis at ${\varepsilon}= q_e V_b$, and it represents the number density of electrons per unit time and energy that enters the volume after the acceleration in the sheath up to an energy of $q_e V_b$. 
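To gauge the geometric factor $A_{\rm L}\, r_{\rm L}/(4V)$ in the loss term, the Larmor radius of a beam electron can be estimated; the field strength (80 mT) and the beam energy (corresponding to a 360 V bias) are the values used for the discharge considered later in this paper.

```python
import numpy as np

q_e, m_e = 1.602e-19, 9.109e-31      # SI constants

def larmor_radius(eps_eV, B):
    """Larmor radius [m] of an electron of kinetic energy eps_eV in field B [T]."""
    v = np.sqrt(2.0 * q_e * eps_eV / m_e)
    return m_e * v / (q_e * B)

r_L = larmor_radius(360.0, 0.08)
print(f"r_L ~ {r_L*1e3:.1f} mm")     # sub-millimetre
```

The sub-millimetre Larmor radius, small compared with the centimetre-scale device, is what makes the loss frequency $\nu^*{_{\rm L}}$ a modest correction rather than the dominant term.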
Kinetic equation for the hot electrons {#Fh_sol} ====================================== Bringing the Coulomb and the inelastic collision terms together, the equation for the evolution of the hot electrons’ eedf reads $$\begin{split} \dfrac{{\partial}F^{(h)}}{{\partial}t} =& \, \dfrac{{\partial}}{{\partial}{\varepsilon}} \left( {\varepsilon}\nu_s^{hM} F^{(h)} \right) + \sum_p \dfrac{{\partial}}{{\partial}{\varepsilon}} \left( {\varepsilon}^{(p)}{_{\rm exc}} \nu^{(p)}{_{\rm exc}} F^{(h)} \right)\\ + &\, \dfrac{{\partial}}{{\partial}{\varepsilon}} \left( {\varepsilon}{_{\rm ion}} \nu{_{\rm ion}} F^{(h)} \right) + \gamma{_{\rm sec}}\frac{I{_{\rm D}}}{V q_e} \ \delta({\varepsilon}-q_e V_b) \\ - &\, \frac{1}{4} \frac{A_{\rm L}\ r_{\rm L}}{V} \left( \nu^{\rm tot}{_{\rm e-n}} + \nu{_{\rm ei}} \right) F^{(h)}. \label{Fhsimpl2} \end{split}$$ In steady state, equation can be written as $$\dfrac{d}{d {\varepsilon}} \left(v_{\varepsilon}({\varepsilon}) F^{(h)}\right) - \nu^*{_{\rm L}}({\varepsilon}) F^{(h)} + S \delta({\varepsilon}-q_e V_b) = 0, \label{Fhss}$$ where $S=\gamma{_{\rm sec}}{I{_{\rm D}}}/{(V q_e)}$ is a constant, $v_{\varepsilon}({\varepsilon})$ represents a drift velocity in energy space (a measure of the cooling speed of the hot electrons), and $\nu^*{_{\rm L}}$ a geometry-weighted equivalent loss frequency. $v_{\varepsilon}({\varepsilon})$, strictly the energy loss velocity, is defined as $$v_{\varepsilon}({\varepsilon}) = \sum_p {\varepsilon}^{(p)}{_{\rm exc}} \nu^{(p)}{_{\rm exc}} +{\varepsilon}{_{\rm ion}} \nu{_{\rm ion}} + {\varepsilon}\nu{_{\rm s}}^{hM}, \label{veps}$$ and $\nu^*{_{\rm L}}({\varepsilon})$ is $$\nu^*{_{\rm L}}({\varepsilon}) = \frac{1}{4} \frac{A_{\rm L}\ r_{\rm L}}{V} \left( \nu^{\rm tot}{_{\rm e-n}} + \nu{_{\rm ei}} \right).$$ The sensitivity of $v_{\varepsilon}({\varepsilon})$ to the electron and neutral densities is addressed in section \[SensitivityAnalysis\]. 
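As an order-of-magnitude sketch of the relative weight of the cooling channels in $v_{\varepsilon}$, the excitation channels can be lumped into the ionization term and a constant Ar ionization cross section assumed; all numerical values below are illustrative, not fitted to a specific discharge.

```python
import numpy as np

q_e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12      # SI constants

def speed(eps_eV):
    return np.sqrt(2.0 * q_e * eps_eV / m_e)

def nu_s_hM(eps_eV, n_e, lnLambda=10.0):
    """Coulomb slowing-down frequency on the Maxwellian background."""
    return q_e**4 * lnLambda / (4*np.pi * eps0**2 * m_e**2) * n_e * 2.0 / speed(eps_eV)**3

def v_eps(eps_eV, n_e, n_n, sigma_ion=2.5e-20, eps_ion=15.8):
    """Energy drift velocity v_eps [eV/s]: inelastic plus Coulomb cooling,
    with nu_ion = n_n * sigma_ion * v and an assumed constant cross section."""
    nu_ion = n_n * sigma_ion * speed(eps_eV)
    return eps_ion * nu_ion + eps_eV * nu_s_hM(eps_eV, n_e)

# A 360 eV electron, n_e = 4e18 m^-3, n_Ar = 1e19 m^-3: at this energy the
# inelastic channel dominates over the Coulomb one.
print(f"v_eps ~ {v_eps(360.0, 4e18, 1e19):.1e} eV/s")
```

With these assumed numbers the cooling rate is of the order of $10^7$-$10^8$ eV/s, so a beam electron deposits its energy within a few microseconds.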
Equation can be solved using the integrating factor technique, giving $$F^{(h)}({\varepsilon}) =\dfrac{S}{v_{\varepsilon}(q_e V_b)} \dfrac{M(q_e V_b)}{M({\varepsilon})} , \label{Fh}$$ where $M$ is the integrating factor, $$M({\varepsilon}) = \exp \left( \int_{q_e V_b}^{\varepsilon}\dfrac{v'_{\varepsilon}- \nu^*{_{\rm L}}}{v_{\varepsilon}} d{\varepsilon}' \right), \label{Meps}$$ with $v_{\varepsilon}' = dv_{\varepsilon}/d{\varepsilon}$. The assumption of steady state signifies that the hot electrons do not have direct memory of the discharge evolution, but respond instantaneously to the input quantities, e.g. the species densities. Approximate kinetic equations for perfectly confined electrons -------------------------------------------------------------- It is interesting to simplify equation further, under the assumption that the electrons are perfectly confined by the magnetic field, and leave the magnetized region only after having deposited their energy in the negative glow region. The steady state equation reduces to $$\dfrac{d}{d {\varepsilon}} \left(v_{\varepsilon}({\varepsilon}) F^{(h)}\right) = S \delta({\varepsilon}-q_e V_b), \label{Fhss_veps}$$ with solution $$F_{LL}^{(h)} = \dfrac{S}{v_{\varepsilon}({\varepsilon})}. \label{Fh_noLoss}$$ This approximation may be considered appropriate if the electron losses are negligible, i.e. if the magnetic field strength is very large. Analytic solutions versus numerical results {#AnVsNum} =========================================== The expressions for $F_{LL}^{(h)}$ and $F^{(h)}$ are specified for the HiPIMS system described in [@2012-EhiasarianHecimovic] by relying on calculated physical parameters (i.e. the species densities). These values are calculated by employing a zero dimensional global (or volume averaged) model, which will be described in detail in a follow-up publication. 
This global model evolves the eedf together with the densities of the other species, namely: argon and aluminum neutral atoms, argon metastable species, singly charged argon and aluminum ions, and doubly charged aluminum ions. The electrons are heated in the same fashion described in section \[Bcs\], by the insertion of secondary electrons at the energy corresponding to the bias voltage applied to the cathode. This bias voltage is given as input and is kept constant during the on time of the pulse (400 $\mu$s). The discharge current is calculated self-consistently from the ion fluxes to the cathode, as well as from the metal (Al) production by sputtering and self-sputtering. The discharge conditions are: bias voltage 360 V, Ar pressure 0.5 Pa, average magnetic field in the region 80 mT, and magnetron diameter 50 mm. Following [@2011-RaaduAxnaes], the secondary electron emission coefficient is set to about 0.08 [@1983-Yamauchi], while the self-sputtering and sputtering coefficients are estimated from [@1969-Hayward; @2007-Anders-VICharact; @1990-Ruzic]. ![Ar (blue), Al (green) and electron (dashed black) density evolution in time. The dashed red line shows the discharge current. As the discharge current reaches a plateau, so do the species densities. The Ar is strongly depleted; the Al is initially created by sputtering and subsequently depleted by ionization. The species densities stabilize to the values reported in table \[table:table\_1\], sampled at the instant marked with the dark red line.[]{data-label="figure_1"}](figure_1.pdf){width="8cm"} Figure \[figure\_1\] shows the time evolution of some of the species densities in the control volume, together with the discharge current (red dashed line). The discharge current shows a peak as the discharge ignites and then settles to a plateau region about 200 $\mu$s into the pulse. 
The Ar neutral density $n{_{\rm Ar}}$ (blue solid line) shows a fast depletion in the initial phases of the pulse, followed by a roughly constant value later in the pulse. The Al density $n{_{\rm Al}}$ (green solid line) increases quickly as the Ar ions begin to produce sputtered material, is then depleted by ionization collisions, and finally settles to a constant value. The electron density (dashed black line) shows a similar time evolution. ![Ar and Al ionization degree (left axis) and discharge current (right axis) against time. The discharge current (dashed red curve) exhibits a plateau region after about 200 $\mu$s. After the same time the degrees of ionization settle to $50\%$ and $20\%$ for Al and Ar respectively. The dark red vertical line represents the instant at which the quantities in table \[table:table\_1\] are sampled.[]{data-label="figure_2"}](figure_2.pdf){width="8cm"} Figure \[figure\_2\] shows, plotted against time, the ionization degrees for Ar $i{_{\rm Ar}} = n^+{_{\rm Ar}}/(n{_{\rm Ar}}+n^+{_{\rm Ar}})$ (blue solid line) and Al $i{_{\rm Al}} = n^+{_{\rm Al}}/(n{_{\rm Al}}+n^+{_{\rm Al}})$ (green solid line), together with the discharge current (red dashed line). The Al degree of ionization stabilizes at about $50\%$, with the Ar ionization degree at about $20\%$. As soon as the Al is produced by sputtering, it is quickly ionized because of its low ionization threshold (${\varepsilon}_i \approx 6$ eV); therefore $i_{\rm Al}$ is high even at rather low power densities. One can notice that the Ar density shown in figure \[figure\_1\] (solid blue line) drops by an order of magnitude, while $i{_{\rm Ar}} \approx 20\%$: in the numerical simulation the Ar is rarefied by the sputtering wind as well as by the ionization process. The Al sputtered from the target, in fact, exchanges momentum with the Ar and effectively pushes the gas out of the ionization region (see e.g. [@2011-RaaduAxnaes]).
  ---------------------------------------------------------------------------
  $V{_{\rm b}} = 360$ V             $n{_{\rm Ar}} = 1 \cdot 10^{19}$ m$^{-3}$
  $j{_{\rm D}} = 0.41$ A/cm$^2$     $n{_{\rm Al}} = 1 \cdot 10^{18}$ m$^{-3}$
                                    $n{_{\rm e}} = 4 \cdot 10^{18}$ m$^{-3}$
  ---------------------------------------------------------------------------

  : \[table:table\_1\] Physical parameters used in the calculation of the analytical hot electron eedf. The values are extracted from the results obtained by the volume global model at the instant t = 300 $\mu$s, marked with ‘sample time’ in the figures \[figure\_1\] and \[figure\_2\].

The physical quantities during the plateau phase, reported in table \[table:table\_1\], are used to calculate the eedf reported in and . The shapes of $F_{LL}^{(h)}$ (red dashed line) and $F^{(h)}$ (black dashed line) are reported in figure \[figure\_3\]. In the same figure, the blue solid line is the numerical solution obtained by the global simulation. The numerical result shows two well-separated populations of electrons: the cold Maxwellian one, represented with a straight dashed blue line, and the energetic electrons. The numerical result shows that the eedf starts to deviate from a Maxwellian at energies of about 30 eV, but that most of the electrons fall into the Maxwellian category: the eedf for the hot electrons is about 4 orders of magnitude smaller. This result supports the claim that the density of the cold or Maxwellian electrons is much larger than that of the hot population: the hot electron density is less than $5\%$ of the total density. As for the analytic solutions, the agreement of $F^{(h)}$ and $F{_{\rm Num}}$ is excellent for electrons with energies larger than $\approx 60$ eV, i.e.
within the region of validity of the energetic electrons assumption, as shown in the zoomed-in plot in figure \[figure\_4\]. In figure \[figure\_3\] it is interesting to notice the appearance of a plateau in energy which extends from about 75 eV to 350 eV. This feature should occur in general in systems where an initially monoenergetic beam interacts both with neutral species and Maxwellian electrons, provided that the energetic electrons have a large energy with respect to the thresholds of the inelastic collision processes and that the confinement time is large compared with the slowing-down time. In the case under study, the average confinement time is $\langle 1/\nu^*{_{\rm L}}\rangle \approx 20 \ \mu$s, while the slowing-down time for the most energetic electrons is $\langle q{_{\rm e}} V{_{\rm b}}/v_{\varepsilon}({\varepsilon}) \rangle \approx 5\ \mu$s. The lossless solution $F_{LL}^{(h)}$ approximates the more accurate $F^{(h)}$ to within a $20\%$ error, with an error that increases at lower electron energies, as the exponential factor in equation grows with decreasing energy. Despite the simpler chemistry model employed in the analytic calculation compared with the numerical global model, the former is able to capture the most relevant processes, and allows for a partial verification of the numerical model. Table \[table:table\_2\] reports a list of reactions, with the appropriate reference, and indicates with a checkmark (✓) whether the reaction is included in the analytic or the numerical model. All calculations and the results reported in figures \[figure\_5\]-\[figure\_7\] employ the cross sections from the sources referenced in table \[table:table\_2\]. Figure \[figure\_5\] shows a plot of the terms of the type $(\nu_j {\varepsilon}_j)$ using the data reported in table \[table:table\_1\]; these terms determine the drift velocity in energy space and therefore the analytic eedf.
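The relative weight of these $\nu_j{\varepsilon}_j$ terms can be sketched with rough stand-in coefficients. The snippet below is purely illustrative, with an assumed lumped Coulomb constant and a flat cross section rather than the data of table \[table:table\_2\], but it captures the scalings used in the discussion: the Coulomb term is proportional to $n{_{\rm e}}$ and falls with energy, while the ionization term is proportional to $n{_{\rm Ar}}$.

```python
import numpy as np

# Rough scaling sketch of two competing terms nu_j*eps_j.  The prefactors
# are illustrative assumptions only, not the referenced cross sections.
q_e, m_e = 1.602e-19, 9.109e-31
eps_iz = 15.76                      # Ar ionization threshold [eV]

def v_of(eps_eV):                   # electron speed from energy [m/s]
    return np.sqrt(2.0 * eps_eV * q_e / m_e)

def coulomb_term(eps, n_e, A=1e-11):
    # Coulomb slowing-down on the Maxwellian bulk:
    # nu_s * eps ~ A * n_e / sqrt(eps), with A an assumed lumped constant
    return A * n_e / np.sqrt(eps)

def ionization_term(eps, n_Ar, sigma0=2e-20):
    # crude constant cross section above threshold
    return np.where(eps > eps_iz, n_Ar * sigma0 * v_of(eps) * eps_iz, 0.0)

# both terms scale linearly with the respective density:
assert np.isclose(coulomb_term(100.0, 2e18), 2 * coulomb_term(100.0, 1e18))
assert np.isclose(ionization_term(100.0, 2e19), 2 * ionization_term(100.0, 1e19))
```

This linear density scaling is what makes the drift velocity in energy space, and hence the analytic eedf, so sensitive to the instantaneous plasma composition.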
The Coulomb collisions with the Maxwellian cold background and the Ar ionization terms are here the dominant processes.

  --------------------------------------------------------------------------------------------------
  Reaction                                               Analytical model   Numerical model   Reference
  Ar+e $\longrightarrow$ Ar$^+$ + 2e                                                          [@2003-Hayashi-Ar]
  Ar$^*{_{\rm 4s}}$+e $\longrightarrow$ Ar$^+$ + 2e                                           [@1989-Vlcek]
  Ar$^*{_{\rm 4s'}}$+e $\longrightarrow$ Ar$^+$ + 2e                                          [@1989-Vlcek]
  Ar+e $\longrightarrow$ Ar$^*$ + e                                                           [@2003-Hayashi-Ar]
  Al+e $\longrightarrow$ Al$^+$ + 2e                                                          [@1975-Shimon-AlIon]
  Al+e $\longrightarrow$ Al$^{++}$ + 2e                                                       [@1982-Crandall-Al2p]
  Al+e $\longrightarrow$ Al$^*$ + e                                                           [@1982-McGuire]
  Ar$^+$+Al $\longrightarrow$ Ar + Al$^+$                                                     [@2000-Lu-PennChExc]
  Ar$^*{_{\rm 4s}}$+Al $\longrightarrow$ Ar + Al$^+$                                          [@2000-Lu-PennChExc]
  Ar$^*{_{\rm 4s'}}$+Al $\longrightarrow$ Ar + Al$^+$                                         [@2000-Lu-PennChExc]
  Ar$^*{_{\rm 4s}}$+e $\longrightarrow$ Ar + e                                                [@1995-Ashida]
  Ar$^*{_{\rm 4s'}}$+e $\longrightarrow$ Ar + e                                               [@1995-Ashida]
  --------------------------------------------------------------------------------------------------

  : \[table:table\_2\] List of reactions taken into account in the analytic and numerical models. The last column lists the references from which the cross sections or rate constants were obtained.

![Shape of the electron energy distributions against energy from the numerical calculation $F_{\rm Num}$ (blue) at the instant t = 300 $\mu$s, and the analytic solutions, lossless $F{_{\rm LL}}^{(h)}$ (dashed red) and $F^{(h)}$ (dashed black). On the vertical axis the distribution functions are normalized as $F/\sqrt{{\varepsilon}}/n{_{\rm e}}$, with \[$F$\] = m$^{-3}$eV$^{-1}$.
Note that the quantity eedf/$\sqrt{{\varepsilon}}$ is sometimes referred to as the electron energy probability function (eepf).[@2005-LiebermanLichtenberg-pg191][]{data-label="figure_3"}](figure_3.pdf){width="8cm"} ![Zoomed-in linear plot of the normalized electron energy distributions against energy from the numerical calculation $F_{\rm Num}$ (blue) at the instant t = 300 $\mu$s, and the analytic solutions, lossless $F{_{\rm LL}}^{(h)}$ (dashed red) and $F^{(h)}$ (dashed black). The agreement of $F^{(h)}$ and $F_{\rm Num}$ in the high energy region is evident. On the vertical axis the distribution functions are normalized as $F/\sqrt{{\varepsilon}}/n{_{\rm e}}$, with \[$F$\] = m$^{-3}$eV$^{-1}$.[]{data-label="figure_4"}](figure_4.pdf){width="8cm"} ![Terms of the type $\nu_j {\varepsilon}_j$ in equation against energy of the hot electrons, identified by the collisional process. The densities employed are the ones reported in table \[table:table\_1\] and obtained at t = 300 $\mu$s. The solid line identifies Ar, the dashed line Al and the point-dash line Maxwellian electrons. The contributions due to Coulomb and Ar ionization collisions dominate.[]{data-label="figure_5"}](figure_5.pdf){width="8cm"}

Applicability of analytic solutions {#Applicability}
===================================

As already mentioned, the analytic result for $F^{(h)}$ has been derived with the aim of describing the hot electrons in a HiPIMS system. HiPIMS discharges are generally dynamical systems, but the analytic result for $F^{(h)}$, obtained under the assumption of a stationary regime, can still be employed: the time scale of relaxation of the electron distribution function is of the order of ten microseconds, and most of the current pulse characteristics remain constant for longer intervals, as shown in [@2007-Anders-VICharact].
In particular, in [@2007-Anders-VICharact], it is shown that for an aluminum target the discharge current remains constant for time intervals larger than a hundred $\mu$s, for a wide range of applied voltages. One can assume that if the discharge current is stationary, the species densities relevant in the calculation of $v_{\varepsilon}({\varepsilon})$ also remain constant. The mechanism of heating the plasma via energetic electrons is common to both HiPIMS and dcMS discharges. In conventional dcMS systems, the dominant species is the discharge gas, whereas in the HiPIMS case the sputtered metallic species plays an important role, as the self-sputtering contribution to the discharge is strong. Therefore, provided that the bias voltage is sufficiently high (roughly above 300 V), the analytic solution for $F^{(h)}$ is applicable to both conventional dcMS and HiPIMS discharges.

Magnetron systems are a particular type of glow discharge, characterized by a low temperature plasma that is sustained by energetic electrons [@1977-Thornton-MSreview]. Therefore it is expected that the analysis presented here could be applied to glow discharges in general, provided that they can be described by a non-local approach. For instance, the negative glow of a dc He discharge addressed in [@1991-Lawler] could be a candidate for the application of this analysis: the electrons are electrostatically confined in the positive column, where the homogeneous assumption can be made. In this case, only the Coulomb collisions and the excitation of metastable levels should be considered in equation , and the boundary condition discussed in \[Bcs\] should also be written accordingly, as it determines the introduction of hot electrons at a given energy. Moreover, the assumption of a monoenergetic beam of electrons entering the volume after the acceleration in the cathode drop to the full sheath potential should be carefully considered, as the sheath thickness is in this case larger.
Study of the beam-bulk interaction terms {#SensitivityAnalysis}
========================================

The velocity $v_{\varepsilon}({\varepsilon})$ given in equation represents the interaction between the hot electrons and the other species, and both analytic solutions and depend heavily on its value. On the other hand, $v_{\varepsilon}({\varepsilon})$ has a large range of variability depending on the neutral species and Maxwellian electron densities. Therefore, it is relevant to visualize the relative magnitude of the terms in for different values of the densities. The purely illustrative figure \[figure\_6\] shows a plot of the terms of the kind ($\nu_j {\varepsilon}_j$) for $n{_{\rm Ar}} = n{_{\rm Al}} = n{_{\rm e}} = 10^{19}$ m$^{-3}$. When all species have the same density, the Coulomb interaction of the hot electrons with the Maxwellian ones dominates all other processes, followed by Ar and Al ionization. On the other hand, in most realistic cases, the densities will differ quite significantly: at lower voltages (or during the first phases of the discharge) the Ar species dominates the Al, while in the limit of extreme self-sputtering regimes the Al species dominates. As for the electron species, in the high power regime the ionization degree is expected to approach unity, and therefore the Coulomb collision contribution dominates. Figure \[figure\_7\] shows the sensitivity of $\nu{_{\rm s}}^{hM}({\varepsilon})$ to the Maxwellian electron density: $n{_{\rm e}}$ is varied from $10^{17}$ to $10^{19}$ m$^{-3}$, while $n{_{\rm Al}}$ is set to zero. For comparison, the term $ \nu{_{\rm ion}} {\varepsilon}{_{\rm ion}}$ is also plotted as a solid line: it is interesting to notice that the two collisional processes have equal influence when $n{_{\rm Ar}} = 10^{19}~\text{m}^{-3}$ and $n{_{\rm e}} = 5 \cdot 10^{18}~\text{m}^{-3}$. ![Terms of the type $\nu_j {\varepsilon}_j$ in against energy of the hot electrons, identified by the collisional process.
The densities employed are $n{_{\rm Ar}} = n{_{\rm Al}} = n{_{\rm e}} = 10^{19}~\text{m}^{-3}$. The solid line identifies Ar, the dashed line Al and the point-dash line Maxwellian electrons. The contributions due to Coulomb collisions with the Maxwellian electrons dominate.[]{data-label="figure_6"}](figure_6.pdf){width="8cm"} ![Terms of the type $\nu_j {\varepsilon}_j$ in against energy of the hot electrons, identified by the collisional process. The solid blue line identifies the Ar ionization term with $n{_{\rm Ar}}~=~10^{19}$ m$^{-3}$, while the point-dash lines show the Coulomb collision term for Maxwellian electrons. The Maxwellian electron density $n{_{\rm e}}$ is varied from $10^{17}$ to $10^{19}$ m$^{-3}$ and the Al density is set to zero.[]{data-label="figure_7"}](figure_7.pdf){width="8cm"}

Conclusions and outlook
=======================

The analytic solutions $F^{(h)}$ and $F{_{\rm LL}}^{(h)}$ describe systems that are characterized by a cold Maxwellian bulk and a hot electron population originating from secondary electron emission. The cooling of the secondary electrons is carried out by inelastic ionization and excitation collisions with neutral species, and by elastic Coulomb interactions with the Maxwellian electron population. This analysis is applicable, among other systems, to dc magnetron sputtering (dcMS) and high power impulse magnetron sputtering (HiPIMS): the secondary electrons emitted by the target reach the ionization region after being accelerated by the sheath potential. In this region close to the target, electrons are magnetically confined and exhibit a spiral-like motion around the magnetic field lines while bouncing back and forth in the magnetic mirror. This motion allows the energetic electrons to experience a large number of collisions even at low pressure (usually below 1 Pa).
HiPIMS systems differ from dcMS because of the higher peak power density they achieve (several kW/cm$^2$): the bias voltage applied to the target is larger, and it is applied in pulses of up to a few hundred microseconds, resulting in a higher discharge current.

Both analytic solutions have been used to describe a HiPIMS discharge. The species densities used in the expressions and are obtained from the numerical solution of a kinetic global model, to be addressed in a follow-up publication. The simulated discharge shows a plateau region where both the discharge current and the species densities are constant: the stationary eedf is calculated in this region. The analytic result is then compared to the numerical one obtained by the model and shows excellent agreement despite the simplifications made. The applicability of the analytic solution together with its sensitivity to the main parameters is addressed in sections \[Applicability\] and \[SensitivityAnalysis\]. The analytic solution can be approximated by its lossless version , provided that the neutral species density is low, the magnetic field is strong and the bias voltage is larger than about 300 V. For ionization degrees larger than about $50\%$, the Coulomb collision term in the drift velocity in energy space $v_{\varepsilon}$ dominates the Ar and Al ionization terms.

The analytic solution of the energetic electron edf can be used to calculate correction terms to be used in a fluid description of the electron species. In fact, while the Maxwellian electrons determine the electron density and the ambipolar field, the energetic ones are responsible for the power coupling to the discharge and the majority of the ionization in the negative glow region. Knowledge of the hot electrons’ edf allows one to devise a fluid model with corrections for kinetic effects, i.e. a hybrid model.
Acknowledgment ============== This work has been supported by the German Research Foundation (DFG) within the frame of the Collaborative Research Centre TRR 87 ’Pulsed high power plasmas for the synthesis of nanostructured functional layers’ (SFB-TR 87). The authors gratefully acknowledge fruitful discussions with Dr Güçlü, Prof A Smolyakov, Dr Teresa de Los Arcos and Dr D Eremin.\
---
abstract: 'Ward identities for extended objects are discussed. In the limit of dc transport it is rigorously proved that charge-density and spin-density fluctuations do not couple to the electromagnetic field.'
author:
- 'O. Narikiyo [^1]'
date: ' (Dec. 28, 2012) '
title: Ward identities for extended objects
---

Introduction
============

In most cases of condensed matter physics Ward identities have been discussed for single-particle vertices [@Sch; @Ono]. Recently I have reported discussions of Ward identities for particle-particle or particle-hole pairs [@W1; @W2; @W3]. In this note I will summarize the essential point for extended pairs. The following description is based on refs. [@W1; @W2; @W3].

Ward identity for electric current vertex
=========================================

The central quantity in the discussions of the Ward identity for the electric current vertex is the equal-time commutation relation between the electric charge $$j_{\vec k}^e = e \sum_{\vec p'} \Bigl( a_{{\vec p'}-{\vec k}}^\dag a_{\vec p'} + b_{{\vec p'}-{\vec k}}^\dag b_{\vec p'} \Bigr),$$ and the object that comes into or goes out of the vertex. If the object is an electron, then the equal-time commutation relation is evaluated as $$[ j_{\vec k}^e, a_{\vec p}^\dag ] = e a_{{\vec p}-{\vec k}}^\dag,$$ where $a_{\vec p}^\dag$ is the creation operator of an electron. If the object is a local Cooper pair, $$[ j_{\vec k}^e, P_{\vec q}^\dag ] = 2 e P_{{\vec q}-{\vec k}}^\dag,$$ where $P_{\vec q}^\dag$ is the creation operator of a pair. On the right-hand side of these equations the charge carried by the object appears; $e$ in the case of the electron and $2e$ in the case of the Cooper pair. Next we consider the case of extended Cooper pairs.
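Before turning to extended pairs, these charge-counting relations can be checked explicitly at $k=0$ on a small Fock space. The sketch below is an illustration added here, not part of the original note: it builds fermion operators for a few momentum modes via a Jordan–Wigner construction and verifies that the total charge $Q = j_{k=0}^e$ obeys $[Q, a_{\vec p}^\dag] = e\, a_{\vec p}^\dag$ and $[Q, P_{\vec q}^\dag] = 2e\, P_{\vec q}^\dag$ for an arbitrary pair form factor; it also checks the particle-hole combination considered later in the note, which is neutral.

```python
import numpy as np
from functools import reduce

# Jordan-Wigner matrices for 6 fermion modes: a_p (spin up) and b_p
# (spin down), with momenta p in {-1, 0, +1}.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
c = np.array([[0.0, 1.0], [0.0, 0.0]])          # single-mode annihilator

def ann(i, n=6):
    """Annihilation operator for mode i (Jordan-Wigner string of Z's)."""
    return reduce(np.kron, [Z] * i + [c] + [I2] * (n - i - 1))

ps = [-1, 0, 1]
a = {p: ann(i) for i, p in enumerate(ps)}       # spin-up electrons
b = {p: ann(i + 3) for i, p in enumerate(ps)}   # spin-down electrons

e = 1.0                                         # unit of charge
Q = e * sum(a[p].T @ a[p] + b[p].T @ b[p] for p in ps)   # j_{k=0}^e

chi = {-1: 0.3, 0: 0.9, 1: 0.3}                 # arbitrary pair form factor

# Cooper pair (q = 0): P^dag = sum_p chi(p) a^dag_p b^dag_{-p}
Pdag = sum(chi[p] * (a[p].T @ b[-p].T) for p in ps)
# Particle-hole pair (q = 0): A^dag = sum_p chi(p) a^dag_p b_p
Adag = sum(chi[p] * (a[p].T @ b[p]) for p in ps)

def comm(X, Y):
    return X @ Y - Y @ X

assert np.allclose(comm(Q, a[0].T), e * a[0].T)      # electron: charge e
assert np.allclose(comm(Q, Pdag), 2 * e * Pdag)      # Cooper pair: charge 2e
assert np.allclose(comm(Q, Adag), np.zeros_like(Q))  # particle-hole: neutral
```

Since all matrices are real, the transpose plays the role of the Hermitian conjugate; the checks confirm that at vanishing external momentum the commutator picks up the integrated charge of the object, independently of the form factor.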
An extended Cooper pair is represented as $$\Psi({\vec R}) = \int d {\vec r} \chi_l({\vec r}) \psi_\downarrow({\vec r_1}) \psi_\uparrow({\vec r_2}),$$ where $$\chi_l({\vec r}) = \sum_{\vec p} e^{i{\vec p}\cdot{\vec r}} \chi_l({\vec p}),\ \ \ \psi_\downarrow({\vec r_1}) = \sum_{\vec p_1} e^{i{\vec p_1}\cdot{\vec r_1}} b_{\vec p_1},\ \ \ \psi_\uparrow({\vec r_2}) = \sum_{\vec p_2} e^{i{\vec p_2}\cdot{\vec r_2}} a_{\vec p_2}.$$ Here ${\vec R}$ is the center-of-mass coordinate of the pair and ${\vec r}$ is the relative coordinate. Namely, ${\vec r_1}={\vec R}+{\vec r}/2$ and ${\vec r_2}={\vec R}-{\vec r}/2$. Performing the integral in terms of the relative coordinate ${\vec r}$ we obtain[^2] $$\Psi({\vec R}) = \sum_{\vec q} e^{i{\vec q}\cdot{\vec R}} P_{\vec q},$$ with $$P_{\vec q} = \sum_{\vec p} \chi_l({\vec p}) b_{{{\vec q} \over 2}-{\vec p}} a_{{{\vec q} \over 2}+{\vec p}}, \ \ \ \ \ P_{\vec q}^\dag = \sum_{\vec p} \chi_l({\vec p}) a_{{{\vec q} \over 2}+{\vec p}}^\dag b_{{{\vec q} \over 2}-{\vec p}}^\dag.$$ The equal time commutation relation is calculated as $$\begin{aligned} [ j_{\vec k}^e, P_{\vec q}^\dag ] = & e \sum_{\vec p'} \sum_{\vec p} \chi_l({\vec p}) \Bigl( a_{{\vec p'}-{\vec k}}^\dag \Big\{ a_{\vec p'} a_{{{\vec q} \over 2}+{\vec p}}^\dag + a_{{{\vec q} \over 2}+{\vec p}}^\dag a_{\vec p'} \Big\} b_{{{\vec q} \over 2}-{\vec p}}^\dag \nonumber \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - b_{{\vec p'}-{\vec k}}^\dag \Big\{ b_{\vec p'} b_{{{\vec q} \over 2}-{\vec p}}^\dag + b_{{{\vec q} \over 2}-{\vec p}}^\dag b_{\vec p'} \Big\} a_{{{\vec q} \over 2}+{\vec p}}^\dag \nonumber \\ = & e \sum_{\vec p} \chi_l({\vec p}) \Bigl( a_{{{\vec q} \over 2}+{\vec p}-{\vec k}}^\dag b_{{{\vec q} \over 2}-{\vec p}}^\dag + a_{{{\vec q} \over 2}+{\vec p}}^\dag b_{{{\vec q} \over 2}-{\vec p}-{\vec k}}^\dag \Bigr) \nonumber \\ = & e \sum_{\vec p} \chi_l({\vec p}) \Bigl( a_{{{\vec q}-{\vec k} \over 2}+({\vec p}-{{\vec k} \over 2})}^\dag b_{{{\vec q}-{\vec k} \over 2}-({\vec 
p}-{{\vec k} \over 2})}^\dag + a_{{{\vec q}-{\vec k} \over 2}+({\vec p}+{{\vec k} \over 2})}^\dag b_{{{\vec q}-{\vec k} \over 2}-({\vec p}+{{\vec k} \over 2})}^\dag \Bigr). \end{aligned}$$ By shifting the variable of the summation we obtain $$[ j_{\vec k}^e, P_{\vec q}^\dag ] \rightarrow 2 e \sum_{\vec p'} \chi_l({\vec p'}) a_{{{\vec q}-{\vec k} \over 2}+{\vec p'}}^\dag b_{{{\vec q}-{\vec k} \over 2}-{\vec p'}}^\dag = 2 e P_{{\vec q}-{\vec k}}^\dag,$$ in the limit of vanishing external momentum ${\vec k}\rightarrow 0$, where the shift in the argument of the form factor $\chi_l({\vec p'}\pm{{\vec k}\over 2})$ is negligible. Thus even in the case of extended Cooper pair the commutation relation in the limit of vanishing external momentum picks up the integrated charge of the object so that we obtain the same Ward identity as in the case of local pair. Next we consider the case of extended particle-hole pairs. An extended particle-hole pair is represented as $$A^\dag({\vec R}) = \int d {\vec r} \chi({\vec r}) \psi_\uparrow^\dag({\vec r_1}) \psi_\downarrow({\vec r_2}),$$ where $$\chi({\vec r}) = \sum_{\vec p} e^{i{\vec p}\cdot{\vec r}} \chi({\vec p}),\ \ \ \psi_\uparrow^\dag({\vec r_1}) = \sum_{\vec p_1} e^{-i{\vec p_1}\cdot{\vec r_1}} a_{\vec p_1}^\dag,\ \ \ \psi_\downarrow({\vec r_2}) = \sum_{\vec p_2} e^{i{\vec p_2}\cdot{\vec r_2}} b_{\vec p_2}.$$ Performing the integral in terms of the relative coordinate ${\vec r}$ we obtain $$A^\dag({\vec R}) = \sum_{\vec q} e^{i{\vec q}\cdot{\vec R}} A_{\vec q}^\dag,$$ with $$A_{\vec q}^\dag = \sum_{\vec p} \chi({\vec p}) a_{{\vec p}-{{\vec q} \over 2}}^\dag b_{{\vec p}+{{\vec q} \over 2}}.$$ The equal time commutation relation is calculated as $$\begin{aligned} [ j_{\vec k}^e, A_{\vec q}^\dag ] = & e \sum_{\vec p'} \sum_{\vec p} \chi({\vec p}) \Bigl( a_{{\vec p'}-{\vec k}}^\dag \Big\{ a_{\vec p'} a_{{\vec p}-{{\vec q} \over 2}}^\dag + a_{{\vec p}-{{\vec q} \over 2}}^\dag a_{\vec p'} \Big\} b_{{\vec p}+{{\vec q} \over 2}} 
\nonumber \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - a_{{\vec p}-{{\vec q} \over 2}}^\dag \Big\{ b_{{\vec p'}-{\vec k}}^\dag b_{{\vec p}+{{\vec q} \over 2}} + b_{{\vec p}+{{\vec q} \over 2}} b_{{\vec p'}-{\vec k}}^\dag \Big\} b_{{\vec p'}} \nonumber \\ = & e \sum_{\vec p} \chi({\vec p}) \Bigl( a_{{\vec p}-{{\vec q} \over 2}-{\vec k}}^\dag b_{{\vec p}+{{\vec q} \over 2}} - a_{{\vec p}-{{\vec q} \over 2}}^\dag b_{{\vec p}+{{\vec q} \over 2}+{\vec k}} \Bigr) \nonumber \\ = & e \sum_{\vec p} \chi({\vec p}) \Bigl( a_{({\vec p}-{{\vec k} \over 2})-{{\vec q}+{\vec k} \over 2}}^\dag b_{({\vec p}-{{\vec k} \over 2})+{{\vec q}+{\vec k} \over 2}} - a_{({\vec p}+{{\vec k} \over 2})-{{\vec q}+{\vec k} \over 2}}^\dag b_{({\vec p}+{{\vec k} \over 2})+{{\vec q}+{\vec k} \over 2}} \Bigr). \end{aligned}$$ By the same procedure as in the case of the Cooper pair we obtain $$[ j_{\vec k}^e, A_{\vec q}^\dag ] \rightarrow e \sum_{\vec p'} \chi({\vec p'}) \Bigl( a_{{\vec p'}-{{\vec q}+{\vec k} \over 2}}^\dag b_{{\vec p'}+{{\vec q}+{\vec k} \over 2}} - a_{{\vec p'}-{{\vec q}+{\vec k} \over 2}}^\dag b_{{\vec p'}+{{\vec q}+{\vec k} \over 2}} \Bigr) = 0.$$ Since the integrated charge of the particle-hole pair vanishes, the commutation relation vanishes. The current vertex for dc conductivity is obtained from the Ward identity in the limit of vanishing external momentum $ k \rightarrow 0 $. Thus the Ward identity derived from this commutation relation tells us that particle-hole pairs have no contribution to dc conductivity. Such a conclusion is natural, since particle-hole pairs are charge-neutral.

Ward identity for heat current vertex
=====================================

The central quantity in the discussions of the Ward identity for the heat current vertex in the limit of vanishing external momentum $ k \rightarrow 0 $ is the equal-time commutation relation between the Hamiltonian and the object that comes into or goes out of the vertex.
By the same discussion as in the previous section, we obtain the same Ward identity as in the case of a local pair in this limit.

Conclusion
==========

In the discussion of the Ward identity an equal-time commutation relation plays the central role. In the case of the electric current vertex it picks up the integrated charge of the object in the limit of vanishing external momentum. In this limit the wavelength of the electromagnetic field exceeds the size of the object, so that the object can be treated as a point with its integrated charge in the discussion of the electromagnetic response. Thus it is concluded that charge-neutral pairs do not couple to the electromagnetic field. Namely, charge- and spin-density fluctuations do not carry charge. On the other hand, Cooper pairs carrying charge $2e$ couple to the electromagnetic field. In the case of the heat current vertex it picks up the energy of the object.

[99]{}

J. R. Schrieffer: [*Theory of Superconductivity*]{} (Benjamin-Cummings, Massachusetts, 1964).

Y. Ono: Prog. Theor. Phys. [**46**]{}, 757 (1971).

O. Narikiyo: arXiv:1108.0815v2.

O. Narikiyo: arXiv:1108.5272v1.

O. Narikiyo: arXiv:1109.1404v2.

[^1]: Department of Physics, Kyushu University, Fukuoka 812-8581, Japan

[^2]: Eq. (18) in ref. [@W2] should be replaced by (6) in this note.
---
author:
- 'Zhaopeng Wang$^{1}$, Manfred Cuntz$^{1}$'
title: |
    Fitting Formulas for Determining the Existence of S-type and P-type\
    Habitable Zones in Binary Systems: First Results
---

Introduction
============

Since the first confirmed detection of exoplanets in 1992, more than 3500 exoplanets have been found, with the 1000th detection by the $Kepler$ mission announced[^1] by NASA on January 6th, 2015. With the number of discoveries skyrocketing, it is appropriate to continue studying exoplanets, as well as searching for exomoons and identifying habitable zones (HZs). It is also well-known that binary (as well as higher-order) systems occur frequently in the local Galactic neighborhood (e.g., Duquennoy & Mayor 1991, and subsequent studies). Observations show that exoplanets can also exist in binary systems, and might also be orbitally stable for millions or billions of years. There are two types of possible orbits (e.g., Dvorak 1982): planets orbiting one of the binary components are said to be in S-type orbits, while planets orbiting both binary components are said to be in P-type orbits. For example, Kepler-413 b (Kostov et al. 2014) is in a P-type orbit, indicating that the planet is orbiting both stellar components of the binary system. Kepler-453 b (Welsh et al. 2015) also constitutes a transiting circumbinary planet. Planetary S-type orbits have more confirmed detections, such as Kepler-432 b (Ortiz et al. 2015). Some of these planets are located within the stellar HZs, as, for example, Kepler-62 f (Borucki et al. 2013); those cases typically receive significant attention due to their potential for hosting alien life. In previous studies of habitable zones in stellar binary systems, presented by Cuntz (2014, 2015), henceforth denoted as Paper I and II, respectively, a joint constraint of radiative habitable zones (RHZs, based on stellar radiation) and orbital stability was considered.
Moreover, Paper II also takes into account the eccentricity of the binary components. RHZs, including conservative, general and extended habitable zones (henceforth referred to as CHZ, GHZ, and EHZ, respectively), are defined in the same way as for the Solar System (see Section 2.1).

Our paper is structured as follows. In Section 2, we briefly describe the theoretical approach for the calculation of HZs adopted from Paper I and II; however, our work also takes into account revised HZ limits for the Solar System from updated climate models. In Section 3, we present some case studies with fitting equations for identifying the existence of HZs. Our summary will be given in Section 4.

Theoretical Approach
====================

Habitability limits
-------------------

In Paper I and II, habitable zones have been defined based on habitability limits for the Solar System from previous studies by Kasting et al. (1993) \[Kas93\] and Mischna et al. (2000). Thereafter, Kopparapu et al. (2013, 2014) \[Kop1314\] presented updated results on habitability limits by introducing new climate models. Table 1 conveys the meaning of each habitability limit. In this paper, we consider results for two types of RHZs: GHZ and RVEM. The GHZ is the region between the habitability limits of the runaway greenhouse effect and the maximum greenhouse effect (without clouds). The RVEM, explicitly, has the recent Venus position as inner limit and the early Mars position as outer limit.

  Description                             Index $l$   Kas93      Kas93      This work (Kop1314)   Comment
                                                      5700 K     5780 K     5780 K
                                                      (AU)       (AU)       (AU)
  --------------------------------------- ----------- ---------- ---------- --------------------- ------------------
  Recent Venus                            1           0.75       0.77       0.750                 RVEM Inner Limit
  Runaway greenhouse effect               2           0.84       0.86       0.950                 GHZ Inner Limit
  Moist greenhouse effect                 3           0.95       0.97       0.993                 ...
  Earth-equivalent position               0           0.993      $\equiv$1  $\equiv$1             ...
  First CO$_2$ condensation               4           1.37       1.40       ...                   ...
  Maximum greenhouse effect, no clouds    5           1.67       1.71       1.676                 GHZ Outer Limit
  Early Mars                              6           1.77       1.81       1.768                 RVEM Outer Limit

Calculation of the HZs
----------------------

In order to achieve the same radiative conditions as in the Solar System, the planet should receive the same total amount of radiative energy flux from the two binary stellar components. Thus the equation for calculating RHZs (see Paper I) yields $$\setlength{\abovedisplayskip}{5pt} \setlength{\belowdisplayskip}{5pt} \frac{L_{1}}{S_{{\rm rel},1l}d^{2}_{1}} + \frac{L_{2}}{S_{{\rm rel},2l}d^{2}_{2}}= \frac{L_{\odot}}{s^{2}_{l}}$$ with $L_{1}$ and $L_{2}$ denoting the stellar luminosities, $d_{1}$ and $d_{2}$ denoting the distances to the binary components (see Figure 1), and $s_{l}$ being one of the solar habitability limits (see Table 1). $S_{\rm rel}$, a function of the effective temperatures of the binary stars, represents the stellar flux in units of the solar constant. Because $d_{1}$ and $d_{2}$ can be expressed as functions of $z$, the distance from the center of the binary system, a quartic equation for $z$ can be obtained after algebraic transformations (see Paper I and II for details).

The RHZ, as an annulus around each star (S-type) or both stars (P-type), is therefore given as $$\setlength{\abovedisplayskip}{5pt} \setlength{\belowdisplayskip}{5pt} \textrm{RHZ(z)} = \textrm{Min}({\cal R}(z,\varphi))|s_{l,{\rm out}} - \textrm{Max}({\cal R}(z,\varphi))|s_{l,{\rm in}}$$ If a planet is assumed to stay in the HZ for a long period of time, it must also have a stable orbit. For an S-type orbit, the planetary orbital stability limit is an upper limit on the distance from the host star; for a P-type orbit, it becomes a lower limit, measured from the center of the binary system. The orbital stability limits are obtained from the fitting equations developed by Holman & Wiegert (1999); see Paper II for details.
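The flux-balance condition can be illustrated numerically along the line through both stars. The sketch below is a simplified example, not the full quartic solution of Paper I and II: $S_{\rm rel}$ is set to 1 (no spectral weighting), and the binary parameters ($L_1 = 1.0\,L_\odot$, $L_2 = 0.5\,L_\odot$, semi-separation $a = 0.1$ AU) are assumed for illustration.

```python
import numpy as np

# Sketch of the flux-balance condition on the axis through the stars,
# with S_rel = 1.  Star 1 sits at x = -a, star 2 at x = +a; z > a is
# the distance from the center of the binary system.
L1, L2 = 1.0, 0.5            # luminosities [L_sun] (assumed)
a = 0.1                      # semi-separation [AU] (assumed)
s_l = 1.676                  # solar GHZ outer limit [AU] (Kop1314)

def flux(z):                 # combined flux in [L_sun / AU^2]
    return L1 / (z + a) ** 2 + L2 / (z - a) ** 2

target = 1.0 / s_l ** 2      # solar flux at the habitability limit

# flux(z) decreases monotonically for z > a, so a simple bisection
# locates the outer P-type GHZ boundary on this axis:
lo, hi = a + 1e-3, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if flux(mid) > target else (lo, mid)
z_out = 0.5 * (lo + hi)
print(f"outer GHZ boundary on the stellar axis: z = {z_out:.3f} AU")
```

Repeating this for all angles $\varphi$ and taking the minimum over the outer limit and the maximum over the inner limit, as in the RHZ definition above, yields the annulus-shaped RHZ.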
Moreover, according to the terminology of Paper I, ST-type and PT-type HZs denote the cases where the HZs are smaller than the corresponding RHZs owing to truncation by the orbital stability limits, while S-type and P-type HZs refer to the cases where the HZs coincide with the RHZs.

![Flow diagram of the calculation (akin to Paper II). See also Cuntz & Bruntz (2014) for information on the webpage BinHab, hosted by the University of Texas at Arlington, which allows the calculation of stellar HZs.[]{data-label="fig:fig_narrow"}](RHZ.png){width="0.85\linewidth"}

Case Study
==========

Existence of HZs
----------------

Following the method discussed above, various sets of systems, including systems of equal and non-equal masses, have been studied to examine their HZs. The stellar parameters can be found in Table 2.

Table 2:

  $M_{*}$ ($M_{\odot}$)   Spectral Type   $L_{*}$ ($L_{\odot}$)
  1.25                    F7V             2.1534
  1.00                    G2V             1.0000
  0.75                    K2V             0.35569
  0.50                    M0V             0.043478

Figure 3 shows the requirements for the GHZ and RVEM to exist for selected binary systems, i.e., systems with masses $M_{1} = M_{2} = 0.50~M_{\odot}$; $M_{1} = 1.00~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$; $M_{1} = M_{2} = 1.00~M_{\odot}$; and $M_{1} = 1.25~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$. For systems with masses $M_{1} = M_{2} = 0.50~M_{\odot}$ and $e_{b} = 0$, $2a$ is required to be smaller than 0.44 AU for the P/PT-type GHZ to exist and smaller than 0.48 AU for the P/PT-type RVEM to exist. Regarding S/ST-type HZs, $2a$ needs to be larger than 1.59 AU and larger than 1.25 AU for the GHZ and RVEM to exist, respectively. Keeping $e_{b} = 0$, the other equal-mass system considered in our study, $M_{1} = M_{2} = 1.00~M_{\odot}$, requires $2a$ to be smaller than 2.03 AU and smaller than 2.14 AU for the P/PT-type GHZ and RVEM to exist, respectively.
For S/ST-type cases, $2a$ needs to be larger than 7.76 AU and larger than 6.12 AU, respectively, for the GHZ and RVEM to exist. Non-equal mass cases, which are generally of greater significance, have also been investigated in this study. In systems with $M_{1} = 1.00~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$ and $e_{b} = 0$, $2a$ is required to be smaller than 0.95 AU and smaller than 1.29 AU for the P/PT-type GHZ and RVEM, respectively. Furthermore, $2a$ needs to be larger than 6.45 AU and larger than 5.09 AU for the S/ST-type GHZ and RVEM, respectively. Moreover, we also considered systems with $M_{1} = 1.25~M_{\odot}$ and $M_{2} = 0.50~M_{\odot}$. The case of $e_{b} = 0$ requires $2a$ to be smaller than 1.12 AU and smaller than 1.56 AU for the GHZ and RVEM, respectively, regarding P/PT-type HZs. Regarding S/ST-type HZs, $2a$ is required to be larger than 7.96 AU for the GHZ and 6.24 AU for the RVEM. If larger eccentricities are considered, the sizes of the P/PT HZs are barely affected, whereas the sizes of the S/ST HZs are significantly reduced. This behavior is found for all four stellar mass combinations and for both types of HZs.

Initial work on fitting equations for determining the existence of HZs
----------------------------------------------------------------------

In the previous section, we presented the requirements for HZs to exist (see Figure 3). The curves consist of pairs of critical values of $2a$ and $e_{b}$ and, in each figure, delimit the region where the corresponding HZ exists. In this section, we present fitting equations for these $2a$ versus $e_{b}$ curves, allowing the efficient determination of the existence of each HZ. A linear least-squares method has been used for the development of each fitting equation. Moreover, the coefficient of determination ($R^{2}$) is used to check the goodness of fit.
For P/PT-type cases, the fitting equation reads $$2a = \alpha_{1} + \alpha_{2}e_{b} + \alpha_{3}e_{b}^{2}$$ The equation for S/ST-type cases reads $$2a = e^{\beta_{1} + \beta_{2}e_{b} + \beta_{3}e_{b}^{2}}$$ The coefficients for the systems discussed in the previous section, together with the coefficients of determination, are given in Tables 3–6. The fitting results are plotted and compared with the data in Figure 4.

Table 3 ($M_{1} = M_{2} = 0.50~M_{\odot}$):

  HZ       $\alpha_{1}$   $\alpha_{2}$   $\alpha_{3}$   $R^{2}$
  P-GHZ    0.44           -0.44          0.31           0.9975
  P-RVEM   0.47           -0.47          0.31           0.9994

  HZ       $\beta_{1}$    $\beta_{2}$    $\beta_{3}$    $R^{2}$
  S-GHZ    0.61           -0.14          2.97           0.9949
  S-RVEM   0.40           -0.23          3.06           0.9953

Table 4 ($M_{1} = M_{2} = 1.00~M_{\odot}$):

  HZ       $\alpha_{1}$   $\alpha_{2}$   $\alpha_{3}$   $R^{2}$
  P-GHZ    2.00           -1.94          1.19           0.9984
  P-RVEM   2.11           -2.00          1.18           0.9982

  HZ       $\beta_{1}$    $\beta_{2}$    $\beta_{3}$    $R^{2}$
  S-GHZ    2.06           1.02           1.28           0.9998
  S-RVEM   1.84           0.87           1.54           0.9995

Table 5 ($M_{1} = 1.00~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$):

  HZ       $\alpha_{1}$   $\alpha_{2}$   $\alpha_{3}$   $R^{2}$
  P-GHZ    0.95           -0.84          0.41           0.9992
  P-RVEM   1.27           -1.34          0.88           0.9974

  HZ       $\beta_{1}$    $\beta_{2}$    $\beta_{3}$    $R^{2}$
  S-GHZ    1.88           0.98           1.46           0.9995
  S-RVEM   1.66           0.82           1.74           0.9992

Table 6 ($M_{1} = 1.25~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$):

  HZ       $\alpha_{1}$   $\alpha_{2}$   $\alpha_{3}$   $R^{2}$
  P-GHZ    1.11           -0.98          0.47           0.9987
  P-RVEM   1.56           -1.68          1.09           0.9971

  HZ       $\beta_{1}$    $\beta_{2}$    $\beta_{3}$    $R^{2}$
  S-GHZ    2.08           1.11           1.23           0.9997
  S-RVEM   1.85           0.99           1.48           0.9996

As all coefficients of determination are close to 1, the fitting results are very close to the data. In Figure 4, the fitting results are plotted together with the data for comparison; some of them are virtually indistinguishable from the data.
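Applying the two fitting equations is straightforward. The sketch below evaluates them with the quoted coefficients of the equal-mass $M_{1} = M_{2} = 1.00~M_{\odot}$ system (identified by its $e_{b} = 0$ critical separations); the function names are ours.

```python
import math

def p_type_hz_exists(two_a, e_b, a=(2.00, -1.94, 1.19)):
    """P/PT-type HZ exists when 2a is below the critical polynomial."""
    a1, a2, a3 = a
    return two_a < a1 + a2 * e_b + a3 * e_b**2    # upper limit on separation

def s_type_hz_exists(two_a, e_b, b=(2.06, 1.02, 1.28)):
    """S/ST-type HZ exists when 2a is above the critical exponential."""
    b1, b2, b3 = b
    return two_a > math.exp(b1 + b2 * e_b + b3 * e_b**2)  # lower limit
```

At $e_{b} = 0$ the critical values are 2.00 AU (P-type) and $e^{2.06} \approx 7.85$ AU (S-type), within about 1.5% of the exact 2.03 AU and 7.76 AU quoted above (cf. the percent errors in Table 7).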
Table 7 shows the percent errors of a few selected data points for the $M_{1} = M_{2} = 1.00~M_{\odot}$ and $M_{1} = 1.00~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$ cases. $$\textrm{Percentage Error} = \left| \frac{\textrm{data} - \textrm{fit}}{\textrm{data}} \right|$$

Table 7:

  $e_{b}$   $M_{1} = M_{2} = 1.00~M_{\odot}$       $M_{1} = 1.00~M_{\odot}$, $M_{2} = 0.50~M_{\odot}$
            P-GHZ   P-RVEM   S-GHZ   S-RVEM       P-GHZ   P-RVEM   S-GHZ   S-RVEM
  0.0       1.35%   1.33%    1.11%   2.96%        0.21%   1.51%    1.60%   3.34%
  0.1       0.37%   0.25%    0.27%   0.09%        0.56%   0.59%    0.49%   0.14%
  0.2       0.58%   0.67%    0.55%   0.87%        0.64%   0.99%    1.15%   1.58%
  0.3       0.09%   0.28%    0.16%   0.64%        0.40%   0.40%    0.90%   1.51%
  0.4       0.60%   0.46%    0.38%   0.08%        0.01%   0.50%    0.01%   0.30%
  0.5       1.01%   0.99%    0.04%   0.68%        0.31%   1.08%    0.23%   0.77%
  0.6       0.83%   1.03%    ...     0.94%        0.19%   0.91%    ...     0.27%
  0.7       0.42%   0.21%    ...     ...          0.57%   0.61%    ...     ...
  0.8       2.89%   1.79%    ...     ...          2.21%   4.16%    ...     ...

Summary
=======

In this study, we explore the requirements for HZs to exist for selected examples of binary systems, based on the method given in Papers I and II together with updated results for terrestrial climate models obtained by Kopparapu et al. (2013, 2014). Furthermore, we developed fitting equations to efficiently determine the existence of HZs. Utilizing the fitting equations allows us to identify whether the respective HZ can exist without the need for cumbersome calculations. Future work will deal with improving the fitting equations for enhanced accuracy. We also plan to include $M_{1}$ and $M_{2}$ as parameters in the fitting equations instead of keeping them at fixed values, as done here. This will make the fitting equations more broadly useful and applicable.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work has been supported by the Department of Physics, University of Texas at Arlington (UTA).

Borucki, W. J., Agol, E., Fressin, F., et al.
2013, Science, 340, 587
Cuntz, M. 2014, , 780, 14 \[Paper I\]
Cuntz, M. 2015, , 798, 101 \[Paper II\]
Cuntz, M., & Bruntz, R. 2014, in Cool Stars, Stellar Systems, and the Sun: 18th Cambridge Workshop, ed. G. van Belle & H. Harris (Flagstaff: Lowell Observatory), p. 831 (arXiv:1409.3449)
Duquennoy, A., & Mayor, M. 1991, , 248, 485
Dvorak, R. 1982, OAWMN, 191, 423
Holman, M. J., & Wiegert, P. A. 1999, , 117, 621
Kasting, J. F., Whitmire, D. P., & Reynolds, R. T. 1993, Icarus, 101, 108
Kopparapu, R. K., Ramirez, R., Kasting, J. F., et al. 2013, , 765, 131; Erratum 770, 82
Kopparapu, R. K., Ramirez, R. M., SchottelKotte, J., et al. 2014, , 787, L29
Kostov, V. B., McCullough, P. R., Carter, J. A., et al. 2014, , 784, 14; Erratum 787, 93
Mischna, M. A., Kasting, J. F., Pavlov, A., & Freedman, R. 2000, Icarus, 145, 546
Ortiz, M., Gandolfi, D., Reffert, S., et al. 2015, , 573, L6
Welsh, W. F., Orosz, J. A., Short, D. R., et al. 2015, , 809, 26

[^1]: [http://science.nasa.gov/science-news/science-at-nasa/2015/06jan\_kepler1000]{}.
--- abstract: | Prediction intervals in supervised Machine Learning bound the region where the true outputs of new samples may fall. They are necessary in the task of separating reliable predictions of a trained model from near-random guesses, minimizing the rate of False Positives, and other problem-specific tasks in applied Machine Learning. Many real problems have heteroscedastic stochastic outputs, which explains the need for input-dependent prediction intervals. This paper proposes to estimate the input-dependent prediction intervals by a separate Extreme Learning Machine model, using the variance of its predictions as a correction term accounting for the model uncertainty. The variance is estimated from the model’s linear output layer with a weighted Jackknife method. The methodology is very fast, robust to heteroscedastic outputs, and handles both extremely large datasets and an insufficient amount of training data. author: - 'Anton Akusok$^1$, Yoan Miche$^2$, Kaj-Mikael Björk$^3$' - Amaury Lendasse$^4$ bibliography: - 'pi.bib' date: | $^1$Arcada University of Applied Sciences, Helsinki, Finland\ $^2$Nokia Solutions and Networks Group, Espoo, Finland\ $^3$Risklab at Arcada UAS, Helsinki, Finland\ $^4$Department of Mechanical and Industrial Engineering and\ The Iowa Informatics Initiative, The University of Iowa, Iowa City, USA title: 'Per-sample Prediction Intervals for Extreme Learning Machines' ---

Introduction
============

Practical applications of machine learning can be problematic in the sense that developers and practitioners often do not fully trust their own predictions. A fundamental reason for this mistrust is that Mean Squared Error (MSE) and other error measures averaged over a dataset are commonly used to evaluate the performance of a method or to compare different methods.
Averaged error measures are unfit for business processes where each particular sample is important, as it represents a customer or another existing entity [@akusok_twostage_2014]. On the other hand, applied Machine Learning models might skip some data samples, because they are only a part of a bigger process structure, and uncertain data might be handed to human experts [@j._hegedus_methodology_2011]. The trust problem can be solved by computing a sample-specific confidence value [@pevec_input_2014]. Predictions with high confidence (and enough trust in them) are then used, while data samples with uncertain predictions are passed to the next analytical stage. The Machine Learning model works as a filter, solving “easy cases” automatically with confident predictions, and reducing the amount of data remaining to be analyzed [@akusok_arbitrary_2015]. Let $\{{\ensuremath{\mathbf{x}}}_i, y_i\}, \ i \in [1,N]$ be a dataset where the outputs $y$ are independently drawn from a normal distribution conditioned on the inputs ${\ensuremath{\mathbf{x}}}$: $$y = \mathcal{N}(f({\ensuremath{\mathbf{x}}}), \ \sigma^2({\ensuremath{\mathbf{x}}})) = f({\ensuremath{\mathbf{x}}}) + \mathcal{N}(0, \ \sigma^2({\ensuremath{\mathbf{x}}})) \label{eq:1}$$ This dataset has heteroscedastic noise because the variance is not constant. A common homoscedasticity assumption simplifies equation \[eq:1\] to $y = f({\ensuremath{\mathbf{x}}}) + \mathcal{N}(0, \ \sigma^2)$, but removes the ability to separate confident predictions from uncertain ones. The heteroscedasticity of outputs is a reasonable assumption because applied Machine Learning problems often have stochastic outputs. Such outputs do not have a single correct value for a given input. The variance of the random noise in the outputs may be assumed equal because the noise is independent of the inputs, but the same assumption cannot be made about the variance of the stochastic outputs, because they certainly depend on the inputs.
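As a concrete illustration of this noise model, the sketch below generates such a heteroscedastic dataset; the particular choices of $f$ and $\sigma$ are ours, purely illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Unknown projection function (illustrative choice)."""
    return np.sin(3 * x)

def sigma(x):
    """Input-dependent (heteroscedastic) noise level (illustrative choice)."""
    return 0.05 + 0.3 * x**2

N = 1000
x = rng.uniform(-1, 1, size=N)
y = f(x) + rng.normal(0.0, sigma(x))    # y ~ N(f(x), sigma^2(x))
```

The residuals $y - f(x)$ visibly spread out toward the edges of the input range, which is exactly what a constant-variance assumption would hide.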
This work focuses on prediction intervals specifically for Extreme Learning Machines (ELM) [@huang_extreme_2004; @lendasse_advances_2016]. ELM is a fast non-linear model with a universal approximation ability [@huang_universal_2006]. It has a feed-forward neural network structure, but with randomly fixed hidden layer weights, so only the linear output layer needs to be trained. With a large hidden layer and L2 regularization [@tikhonov_solution_1963], ELM exhibits stable predictions [@miche_tropelm_2011] that are not affected by a particular initialization of the random hidden layer weights. It is an excellent Machine Learning tool for solving applied problems [@akusok_mdelm_2015; @termenon_brain_2016], with a simple formulation, little to no hyper-parameters, performance at the state-of-the-art level [@huang_local_2015; @sovilj_extreme_2016; @z._huang_efficient_2017], and scalability to Big Data [@akusok_highperformance_2015; @swaney_efficient_2015]. The idea of the method is to use an ELM to predict an output $f({\ensuremath{\mathbf{x}}})$, and a second ELM to estimate its conditional variance $\sigma^2({\ensuremath{\mathbf{x}}}) = (y - f({\ensuremath{\mathbf{x}}}))^2$. Furthermore, a variance analysis is done on the predictions of the second ELM. It provides upper and lower boundaries for the predicted variance. These boundaries describe the model uncertainty for samples with little similar training data available, and make the methodology uniformly applicable to different problems. The rest of the paper is organized as follows. The next section describes the state of the art in prediction interval estimation, and how the proposed solution differs from it. Section 2 describes Extreme Learning Machines and the proposed methodology. Section 3 analyses the method’s performance on small artificial and real-world datasets. Section 4 presents the results on a huge real-world dataset, and describes the runtime requirements compared to the original ELM.
Section 5 summarizes the findings.

State-of-the-Art
----------------

Prediction with uncertainty is a well-known task, and probabilistic methods can naturally formulate a solution. Prediction intervals are available in the Bayesian formulation of ELM [@e._soria-olivas_belm:_2011; @chen_variational_2016], including per-sample PI [@shang_confidence-weighted_2015], though the applicability is limited due to the quadratic computational cost in the number of data samples. A fuzzy nonlinear regression approach [@he_fuzzy_2016] exists for problems having fuzzy inputs or outputs. It applies random-weight neural networks with non-iterative training similar to ELM, but formulates the solution in terms of fuzzy set theory [@asai_linear_1982]. Such a native fuzzy approach allows for a detailed investigation of the effects of uncertainty on the learning of a method [@wang_study_2015; @wang_noniterative_2017], and has important practical applications [@ashfaq_fuzziness_2017] for fuzzy data problems. Without runtime limitations, good results are achieved with model-independent methods [@pevec_prediction_2015] based on clustering of the input data and re-sampling. Clustering of inputs and repetitive model re-training during the re-sampling both scale poorly with data size, and would limit the performance of an ELM otherwise capable of processing billions of data samples [@akusok_highperformance_2015]. A specific case [@Lin2017] of the model-independent approach, limited to linear models (with arbitrary solution algorithm and hyper-parameters), provides good results for heteroscedastic datasets ([@Lin2017], *supplementary materials*), and suits the ELM output layer solution as well. The method applies to any amount of training data, and benefits from huge datasets by producing more independent models in its ensemble part. Unfortunately, it does not output prediction intervals directly.
The scope of this paper is constrained to fast ways of computing prediction intervals of outputs, tailored specifically to the Extreme Learning Machine. The proposed solution works especially well in conjunction with ELM, re-using some heavy computational parts as shown in the next section. A fast runtime is one of the key features of ELM, making it valuable for practical applications and Big Data processing. Another key feature of ELM is the approximation of complex unknown functions, and the proposed method approximates prediction intervals of the model outputs in a similar fashion, without probabilistic or fuzzy set notations.

Methodology {#sec:method}
===========

This section starts by introducing the Extreme Learning Machine. It continues with the prediction intervals idea, and its implementation suitable for ELM. The section concludes with a formal description of the algorithm.

Extreme Learning Machine
------------------------

The Extreme Learning Machine [@huang_extreme_2006] model is formulated as a feed-forward neural network with a single hidden layer. It has $d$ input and $L$ hidden neurons. The solution is given for one output neuron; in the case of many output neurons, each one has an independent solution. The hidden layer weights $\mathbf{W}_{d \times L}$ are initialized with random noise and fixed. Often an extra input neuron with the constant $+1$ value is added to function as a bias. Hidden layer neurons apply a non-linear transformation function $\phi(\cdot)$ to their output. Typical functions are the sigmoid or the hyperbolic tangent, but this function may be omitted to add linear neurons. For $N$ input data samples gathered in a matrix $\mathbf X_{N \times d}$, the hidden layer output matrix $\mathbf H_{N \times L}$ is: $$\mathbf H_{i,j} = \phi\Big(\sum_{k=1}^d \mathbf X_{i,k} \mathbf W_{k,j}\Big), \ i \in [1, N], \ j \in [1,L]$$ where the function $\phi(\cdot)$ is applied element-wise. In matrix notation, the formula simplifies to $\mathbf H = \phi(\mathbf X \mathbf W)$.
The output layer of ELM is a linear regression problem $\mathbf H {\ensuremath{\boldsymbol{\beta}}}= \mathbf y$, which is over-determined in real cases with more data samples than hidden neurons ($N > L$). The output weights ${\ensuremath{\boldsymbol{\beta}}}$ are given by an ordinary least squares solution ${\ensuremath{\boldsymbol{\beta}}}= {\ensuremath{\mathbf{H}}}^\dagger {\ensuremath{\mathbf{y}}}$ computed with the Moore-Penrose pseudoinverse [@rao_generalized_1972] ${\ensuremath{\mathbf{H}}}^\dagger$ of the matrix ${\ensuremath{\mathbf{H}}}$. Random initialization may decrease the performance of a naive ELM. This problem is completely solved by including L2 regularization in the output layer solution. The linear regression problem becomes: $$(\mathbf H^T \mathbf H + \gamma \mathbf I){\ensuremath{\boldsymbol{\beta}}}= \mathbf H^T \mathbf y \label{eq:ELM1}$$ where $\gamma$ is the L2-regularization parameter, optimized by validation. With L2 regularization and a large number of hidden neurons, ELM performance becomes stable and unaffected by a particular random initialization of $\mathbf W$ [@huang_extreme_2012].

Prediction Intervals
--------------------

Assume a stochastic output $y$ with an i.i.d. distribution conditioned on the inputs $\mathbf x$, as in equation \[eq:1\]. The model prediction $\hat{y} = \hat{f}(\mathbf x)$ estimates only the mean value of an output, and ignores its stochastic nature. Prediction intervals (PI) offer a simple way of describing the uncertainty of the output $y$ by estimating boundaries on its value, such that the true output $y$ lies between those boundaries with a given probability $\alpha$. For normally distributed outputs, the prediction intervals at the confidence level $\alpha$ can be modelled by $$\text{PI}({\ensuremath{\mathbf{x}}}) = \hat{f}({\ensuremath{\mathbf{x}}}) \pm \Phi^{-1}(\alpha)\sigma({\ensuremath{\mathbf{x}}}), \label{eq:4}$$ where $\Phi^{-1}(\cdot)$ is an inverse cumulative distribution function, i.e. $\Phi^{-1}(95\%) \approx 1.96$.
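The ELM training step above reduces to one linear solve. A minimal NumPy sketch, solving the regularized normal equations for the output weights (layer size, $\gamma$, the test function, and all names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, y, L=30, gamma=1e-3):
    """Fix random hidden weights W, then solve (H^T H + gamma I) beta = H^T y."""
    d = X.shape[1]
    W = rng.normal(size=(d + 1, L))                   # +1 for the bias input
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # append constant +1 column
    H = np.tanh(Xb @ W)                               # hidden layer output
    beta = np.linalg.solve(H.T @ H + gamma * np.eye(L), H.T @ y)
    return W, beta

def elm_predict(X, W, beta):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.tanh(Xb @ W) @ beta

# Fit a noisy 1-D function as a quick sanity check.
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.05, size=500)
W, beta = elm_train(X, y)
y_hat = elm_predict(X, W, beta)
```

Only `beta` is learned from data; `W` stays at its random initialization, which is what makes the training a single linear-algebra operation.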
The maximum likelihood estimator for the variance $\sigma^2$ of a homoscedastic output $y$ is given by the Mean Squared Error [@bishop_pattern_2006]. However, it provides uniform prediction intervals that fit poorly to practical applications of Machine Learning. The estimation of variance in linear regression is a well-researched topic, with a plethora of theoretical [@shao_heteroscedasticity-robustness_1987] and experimental [@pevec_prediction_2015] results available. The variance of heteroscedastic model predictions $\hat{y}$ can be computed with the Bienaymé formula [@loeve_probability_1955; @johnson_applied_2002] from the variance of the model weights ${\ensuremath{\boldsymbol{\beta}}}$. However, the variance of the predicted outputs corresponds to confidence intervals and does not describe the range of possible true outputs $y$. The relation between the heteroscedastic prediction intervals and the other methods is illustrated in Figure \[fig:pi\_vs\_ci\].

![image](myPI_basic.pdf){width="0.49\textwidth"} ![image](myPI_MSE.pdf){width="0.49\textwidth"}\
![image](myPI_CI.pdf){width="0.49\textwidth"} ![image](myPI_PI.pdf){width="0.49\textwidth"}

Prediction Intervals for Extreme Learning Machines
--------------------------------------------------

The idea of this paper is to estimate the variance of the heteroscedastic outputs $\sigma^2(\mathbf x)$ using a second ELM model. The model predictions $\hat{y}$ are computed by the first ELM, then the squared residuals $r^2 = (\hat{y} - y)^2$ are used as training outputs for the second ELM, which learns to predict the conditional variance of the outputs. However, ELM predictions can be inaccurate, and their quality must be taken into account.
For that reason, the variances of the predictions of the first ELM, $\sigma^2_y({\ensuremath{\mathbf{x}}})$, and of the second ELM, $\sigma^2_r({\ensuremath{\mathbf{x}}})$, are added to the predicted squared residuals $\hat{r}^2({\ensuremath{\mathbf{x}}})$ to bound the true variance of the outputs $\sigma^2({\ensuremath{\mathbf{x}}})$: $$\sigma^2({\ensuremath{\mathbf{x}}}) \le \hat{r}^2(\mathbf x) + \sigma^2_r({\ensuremath{\mathbf{x}}}) + \sigma^2_y({\ensuremath{\mathbf{x}}})$$ In addition to directly estimating the input-dependent variance $\sigma^2({\ensuremath{\mathbf{x}}})$, this expression has the desired property of giving a larger variance for models with an insufficient amount of training data. With an excessive amount of training data $\{{\ensuremath{\mathbf{x}}}_i, y_i\}, \ i \in [1,N]$, the variances of the predicted residuals $\sigma^2_r({\ensuremath{\mathbf{x}}})$ and the predicted outputs $\sigma^2_y({\ensuremath{\mathbf{x}}})$ decrease to zero, and the variance of the true outputs is given by its ELM estimation: $\lim_{N \rightarrow \infty} \big( \hat{r}^2 \big) = \sigma^2({\ensuremath{\mathbf{x}}})$. A similar approach to prediction intervals exists in feed-forward neural networks [@nix_learning_1995]; however, it is valid only in the case $N \rightarrow \infty$. The output layer of ELM is a linear regression. The Bienaymé formula [@loeve_probability_1955; @johnson_applied_2002] provides the variance of outputs in linear regression, and thus in ELM: $$\sigma^2_y(\mathbf x_i) = {\ensuremath{\mathbf{h}}}_i \Sigma_{{\ensuremath{\boldsymbol{\beta}}}} {\ensuremath{\mathbf{h}}}_i^T, \ i \in [1, \ N],$$ where ${\ensuremath{\mathbf{h}}}_i$ is the hidden layer output of an ELM for an input sample ${\ensuremath{\mathbf{x}}}_i$. There is a plethora of methods for estimating the covariance $\Sigma_{{\ensuremath{\boldsymbol{\beta}}}}$ of normally distributed linear system weights ${\ensuremath{\boldsymbol{\beta}}}\sim \mathcal{N}(\hat{{\ensuremath{\boldsymbol{\beta}}}}, \Sigma_{{\ensuremath{\boldsymbol{\beta}}}})$.
The method of choice is the weighted Jackknife estimator [@wu_jackknife_1986]. It is unbiased, robust against heteroscedastic noise [@shao_heteroscedasticity-robustness_1987; @horn_robust_1998; @flachaire_bootstrapping_2005; @davidson_wild_2008], as fast as an ELM, and scales well with the data size. Another good method for variance estimation is the Wild Bootstrap [@davidson_wild_2008], with nice theoretical properties, but it is slower as its bootstrap part requires several repetitions to converge.

Weighted Jackknife for Big Data
-------------------------------

A summary of the weighted Jackknife method is presented below. Its inputs are the ELM hidden layer outputs ${\ensuremath{\mathbf{H}}}$ and the residuals $\mathbf{r} = {\ensuremath{\mathbf{y}}}- \hat{{\ensuremath{\mathbf{y}}}}$. $$\begin{aligned} \mathbf{P} &=& ({\ensuremath{\mathbf{H}}}^T{\ensuremath{\mathbf{H}}}+ \gamma \mathbf{I})^{-1} \label{eq:P} \\ \mathbf{S} &=& {\ensuremath{\mathbf{H}}}\mathbf{P}\\ \mathbf{H}'_i &=& \frac{\mathbf{r}_i^2}{1 - \mathbf{S}_i \cdot {\ensuremath{\mathbf{H}}}_i} {\ensuremath{\mathbf{H}}}_i, \ i \in [1,N] \label{eq:w}\\ \mathbf{A} &=& {\ensuremath{\mathbf{H}}}'^T\mathbf{S}\\ \Sigma &=& \mathbf{P}\mathbf{A}\end{aligned}$$ The method uses three auxiliary matrices: $\mathbf{P}$, $\mathbf{S}$ and $\mathbf{A}$. Equation \[eq:w\] creates a weighted data matrix ${\ensuremath{\mathbf{H}}}'$ by scaling every row of the original data ${\ensuremath{\mathbf{H}}}$; its denominator includes a dot product between two vectors, $\mathbf{S}_i \cdot {\ensuremath{\mathbf{H}}}_i$. The weighted Jackknife works well together with ELM and Big Data. First, the auxiliary matrix $\mathbf{P}$ in equation \[eq:P\] is the inverse of a matrix already computed in the ELM solution \[eq:ELM1\]. Second, Big Data applications with a huge number of samples are often limited by memory size, especially if the matrix computations are run on GPUs with a very limited memory pool. The weighted Jackknife avoids this limitation by batch computations.
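The five formulas above can be sketched directly in NumPy, with $\mathbf{A}$ accumulated batch-by-batch as in the batch computations mentioned above, so only one slice of ${\ensuremath{\mathbf{H}}}$ is processed at a time. Function and variable names are ours, and the data below is synthetic.

```python
import numpy as np

def jackknife_covariance(H, r, gamma, k=4):
    """Weighted Jackknife estimate of Sigma_beta, accumulating A over k
    batches of H; only one batch is needed in memory at a time."""
    L = H.shape[1]
    P = np.linalg.inv(H.T @ H + gamma * np.eye(L))   # re-usable from the ELM solve
    A = np.zeros((L, L))
    for Hj, rj in zip(np.array_split(H, k), np.array_split(r, k)):
        Sj = Hj @ P                                  # batch of S = H P
        w = rj**2 / (1.0 - np.einsum('ij,ij->i', Sj, Hj))  # row weights for H'
        A += (w[:, None] * Hj).T @ Sj                # accumulate A = H'^T S
    return P @ A                                     # Sigma_beta

# Per-sample output variance via the Bienayme formula: sigma_i^2 = h_i Sigma h_i^T
rng = np.random.default_rng(0)
H = np.tanh(rng.normal(size=(300, 10)))
r = rng.normal(size=300) * (1 + np.abs(H[:, 0]))     # synthetic heteroscedastic residuals
Sigma = jackknife_covariance(H, r, gamma=1e-2)
var_y = np.einsum('ij,jk,ik->i', H, Sigma, H)
```

Since the weights are non-negative, $\Sigma = \mathbf{P}(\mathbf{H}^T \mathbf{D} \mathbf{H})\mathbf{P}$ is symmetric positive semi-definite, so the per-sample variances come out non-negative, and the batched accumulation is exactly equal to the single-pass computation.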
Let the data matrix ${\ensuremath{\mathbf{H}}}$ be split into $k$ equal parts with $N/k$ samples each: $${\ensuremath{\mathbf{H}}}= \begin{pmatrix} {\ensuremath{\mathbf{H}}}^1 \\ {\ensuremath{\mathbf{H}}}^2 \\ \cdots \\ {\ensuremath{\mathbf{H}}}^k \end{pmatrix}$$ Then the auxiliary matrix $\mathbf{S}$ can be computed in the corresponding parts $\mathbf{S}^j = {\ensuremath{\mathbf{H}}}^j\mathbf{P}, \ j \in [1,k]$, and the auxiliary matrix $\mathbf{A}$ becomes a summation over all the parts, $\mathbf{A} = \sum_{j=1}^k ({\ensuremath{\mathbf{H}}}'^{j})^T\mathbf{S}^j$. The sizes of the matrices $\mathbf{A}$ and $\mathbf{P}$ do not depend on the number of samples $N$, and the weighting in equation \[eq:w\] may be done in-place without consuming additional memory. Having only one data part ${\ensuremath{\mathbf{H}}}^j$ in memory at a time reduces the total memory requirements by a factor of $k$. A large enough $k$ allows a single workstation to process billions of samples with the weighted Jackknife, the same way as presented for ELM in [@akusok_highperformance_2015]. The practical value of $k$ is limited by the minimum size $N/k$ of a single batch, as small data batches of $N/k < 1000$ cannot fully utilize the CPU/GPU computational potential [@akusok_highperformance_2015].

ELM Prediction Intervals Algorithm {#sec:summary}
----------------------------------

Prediction intervals are computed in two stages. The first stage uses the training data to learn the two necessary ELM models $m_\text{data}, \ m_\text{var}$, and to estimate the covariances of the output weights $\Sigma_y, \ \Sigma_r$ in these models: 1. Train an ELM model $m_\text{data}$ on the training data ${\ensuremath{\mathbf{X}}}, {\ensuremath{\mathbf{y}}}$ 2. Predict outputs $\hat{{\ensuremath{\mathbf{y}}}}$ for the training data 3. Use the weighted Jackknife to estimate the covariance $\Sigma_y$ of the output weights ${\ensuremath{\boldsymbol{\beta}}}_\text{data}$ 4.
Compute the residuals ${\ensuremath{\mathbf{r}}}= {\ensuremath{\mathbf{y}}}- \hat{{\ensuremath{\mathbf{y}}}}$ for the training data 5. Train another ELM model $m_\text{var}$ to predict the squared residuals ${\ensuremath{\mathbf{X}}}, {\ensuremath{\mathbf{r}}}^2$ 6. Use the weighted Jackknife to estimate the covariance $\Sigma_r$ of the output weights ${\ensuremath{\boldsymbol{\beta}}}_\text{var}$ The training data ${\ensuremath{\mathbf{X}}}, {\ensuremath{\mathbf{y}}}$ and the auxiliary vectors $\hat{{\ensuremath{\mathbf{y}}}}, {\ensuremath{\mathbf{r}}}, \hat{{\ensuremath{\mathbf{r}}}}^2$ can be discarded at this point. The second stage uses the previously trained models to predict the test outputs, their squared residuals, and all variances. The prediction intervals are then estimated with equation \[eq:8\]. 1. Compute the hidden layer outputs ${\ensuremath{\mathbf{H}}}_{\text{data}}, {\ensuremath{\mathbf{H}}}_{\text{var}}$ for the test inputs ${\ensuremath{\mathbf{X}}}_\text{test}$\ using the two ELM models $m_\text{data}, \ m_\text{var}$ 2. Predict the test outputs $\hat{{\ensuremath{\mathbf{y}}}}_\text{test} = {\ensuremath{\mathbf{H}}}_{\text{data}} {\ensuremath{\boldsymbol{\beta}}}_\text{data}$ 3. Compute the variance of the predicted outputs ${\ensuremath{\boldsymbol{\sigma}}}^2_y = \text{diag} ({\ensuremath{\mathbf{H}}}_{\text{data}} \Sigma_\text{data} {\ensuremath{\mathbf{H}}}_{\text{data}}^T)$ 4. Predict the squared residuals $\hat{{\ensuremath{\mathbf{r}}}}^2 = {\ensuremath{\mathbf{H}}}_{\text{var}} {\ensuremath{\boldsymbol{\beta}}}_\text{var}$ 5. Compute the variance of the predicted squared residuals ${\ensuremath{\boldsymbol{\sigma}}}^2_r = \text{diag} ({\ensuremath{\mathbf{H}}}_{\text{var}} \Sigma_\text{var} {\ensuremath{\mathbf{H}}}_{\text{var}}^T)$ 6.
Compute the prediction intervals for a desired confidence level $\alpha$: $$\text{PI} = \hat{{\ensuremath{\mathbf{y}}}}_\text{test} \pm \Phi^{-1}(\alpha) \sqrt{ \hat{{\ensuremath{\mathbf{r}}}}^2 + {\ensuremath{\boldsymbol{\sigma}}}^2_r + {\ensuremath{\boldsymbol{\sigma}}}^2_y } \label{eq:8}$$ The models $m_\text{data}, \ m_\text{var}$ can have different optimal numbers of neurons, which should be validated. Using L2 regularization prevents numerical instabilities. Note that the predicted squared residuals $\hat{{\ensuremath{\mathbf{r}}}}^2$ might have negative values, which are replaced by zero.

Experimental Results {#sec:experimental}
====================

Artificial Dataset
------------------

An artificial dataset with heteroscedastic noise is shown in Figure \[fig:mydata\]. Additional tests are done on homoscedastic versions of the same dataset, with the same projection function $f(\cdot)$ and an input-independent normally distributed noise. All experiments used an ELM with one linear and 10 hyperbolic tangent hidden neurons, in both $m_\text{data}$ and $m_\text{var}$.

![Artificial dataset with the true 95% intervals for noise.[]{data-label="fig:mydata"}](myPI_orig.pdf){width="60.00000%"}

Figure \[fig:myresults\] shows the computed PI on the heteroscedastic artificial dataset at the 95% confidence level. The figure also presents the standard deviation of the predicted residuals $1.96\sigma^2_r$ at 95% confidence, to show how it is affected by the amount of training data. As the amount of training data increases, the PI are given more precisely by $\hat{{\ensuremath{\mathbf{r}}}}^2$ and depend less on $\sigma^2_r$ (Figure \[fig:myresults\], right). ![Estimated PI for heteroscedastic stochastic outputs. Variance of the predicted residuals $\hat{{\ensuremath{\mathbf{r}}}}^2$ (shaded area) captures model uncertainty with less training data.
Thin dash lines are actual PI, solid line is the projection function, thick dash line is an estimated output, and black dots are training data samples.[]{data-label="fig:myresults"}](myPI_200.pdf "fig:"){width="49.00000%"} ![Estimated PI for heteroscedastic stochastic outputs. Variance of the predicted residuals $\hat{{\ensuremath{\mathbf{r}}}}^2$ (shaded area) captures model uncertainty with less training data. Thin dash lines are actual PI, solid line is the projection function, thick dash line is an estimated output, and black dots are training data samples.[]{data-label="fig:myresults"}](myPI_1000.pdf "fig:"){width="49.00000%"} Similar results obtained for the datasets with homoscedastic noise, presented on Figure \[fig:myresults\_homo\]. Larger variance of outputs makes the prediction task harder, leading to larger errors in $\hat{y}$ (Figure \[fig:myresults\_homo\], upper left). At the same time the variance of $\hat{{\ensuremath{\mathbf{r}}}}^2$ increases (Figure \[fig:myresults\_homo\], shaded area), and the true PI rarely go beyond their estimated boundaries. Smaller variance of noise leads to more more precise PI, that still cover the true PI most of the time. ![Estimated PI and their variance (shaded area) for homoscedastic stochastic outputs with difference variance; more data leads to more precise PI. Thin dash lines are actual PI, solid line is the projection function, thick dash line is estimated projection function, and black dots are training data samples.[]{data-label="fig:myresults_homo"}](myPI_large_200.pdf "fig:"){width="49.00000%"} ![Estimated PI and their variance (shaded area) for homoscedastic stochastic outputs with difference variance; more data leads to more precise PI. 
Thin dashed lines are the true PI, the solid line is the projection function, the thick dashed line is the estimated projection function, and black dots are training data samples.[]{data-label="fig:myresults_homo"}](myPI_large_1000.pdf "fig:"){width="49.00000%"}\ ![Estimated PI and their variance (shaded area) for homoscedastic stochastic outputs with different variance; more data leads to more precise PI. Thin dashed lines are the true PI, the solid line is the projection function, the thick dashed line is the estimated projection function, and black dots are training data samples.[]{data-label="fig:myresults_homo"}](myPI_small_200.pdf "fig:"){width="49.00000%"} ![Estimated PI and their variance (shaded area) for homoscedastic stochastic outputs with different variance; more data leads to more precise PI. Thin dashed lines are the true PI, the solid line is the projection function, the thick dashed line is the estimated projection function, and black dots are training data samples.[]{data-label="fig:myresults_homo"}](myPI_small_1000.pdf "fig:"){width="49.00000%"} In the extreme case of a training set with only 30 samples (which is not enough to learn the correct shape of the true projection function), the predicted squared residuals $\hat{{\ensuremath{\mathbf{r}}}}^2$ become unreliable. However, including their variance in the predictions compensates for the model uncertainty (see Figure \[fig:myresults\_extreme\]). This sometimes leads to an over-estimation of the true PI, but that is a desired property which prevents an uncertain model from producing falsely confident outputs $\hat{y}$. ![Estimated PI and their variance (shaded area) with an insufficient amount of training data; PI are over-estimated in poorly predicted areas.
Thin dashed lines are the true PI, the solid line is the projection function, the thick dashed line is the estimated projection function, and black dots are training data samples.[]{data-label="fig:myresults_extreme"}](myPI_30.pdf "fig:"){width="49.00000%"} ![Estimated PI and their variance (shaded area) with an insufficient amount of training data; PI are over-estimated in poorly predicted areas. Thin dashed lines are the true PI, the solid line is the projection function, the thick dashed line is the estimated projection function, and black dots are training data samples.[]{data-label="fig:myresults_extreme"}](myPI_large_30.pdf "fig:"){width="49.00000%"}

Comparison on Real World Datasets
---------------------------------

ELM Prediction Intervals are compared on four real datasets with four other methods presented in [@khosravi_lower_2011]. Details of the datasets are given in Table \[tab:data\]. The paper uses two common metrics: the Prediction Interval Coverage Probability (PICP), which is the percentage of test samples whose outputs lie within the PI, and the Normalized Mean Prediction Interval Width (NMPIW), which is the average width of the PI on a test set divided by the range of the test targets. PICP shows what percentage of targets actually lie within the PI, and it should match the target coverage. NMPIW shows how tight the PI are for the given task, compared to the naive approach of simply taking the full range of targets as the interval. Ideal PI have a small NMPIW with a PICP equal to the target coverage $\alpha$.
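The per-sample interval construction of Eq. \[eq:8\], whose coverage and width these two metrics assess, can be sketched in a few lines. This is a minimal illustration with our own function names, assuming NumPy and SciPy; note that Eq. \[eq:8\] writes $\Phi^{-1}(\alpha)$, while the sketch uses the two-sided quantile $\Phi^{-1}((1+\alpha)/2)$, which reproduces the factor $1.96$ quoted in the text for 95% intervals.

```python
import numpy as np
from scipy.stats import norm

def prediction_intervals(y_hat, r2_hat, var_r, var_y, alpha=0.95):
    """Per-sample prediction intervals in the spirit of Eq. (8).

    y_hat  -- point predictions of m_data
    r2_hat -- squared residuals predicted by m_var
    var_r  -- variance of the predicted squared residuals
    var_y  -- variance of the point predictions
    """
    r2_hat = np.maximum(r2_hat, 0.0)      # negative predictions are replaced by zero
    z = norm.ppf((1.0 + alpha) / 2.0)     # two-sided quantile, 1.96 for alpha = 0.95
    half_width = z * np.sqrt(r2_hat + var_r + var_y)
    return y_hat - half_width, y_hat + half_width
```

The clipping of $\hat{{\ensuremath{\mathbf{r}}}}^2$ at zero mirrors the replacement rule stated above.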
  Dataset                               Samples   Features   Reference
  ------------------------------------- --------- ---------- ---------------------------------
  Concrete compressive strength         1030      8          [@yeh_modeling_1998]
  Plasma beta-carotene                  315       12         [@nierenberg_determinants_1989]
  Powerplant - Steam pressure           200       5          [@guidorzi_identification_1974]
  Powerplant - Main steam temperature   200       5          [@guidorzi_identification_1974]

  : Real-world datasets used for comparison.[]{data-label="tab:data"}

The two measures PICP and NMPIW are inter-dependent, as increasing the PI width also increases the coverage. The comparison work [@khosravi_lower_2011] proposed a combined measure to replace PICP and NMPIW, but it is subjective due to two arbitrary hyper-parameters. This paper instead presents PICP and NMPIW on the same plot. The ELM PI method proposed in this paper is compared to four other methods of computing PI for neural networks. The Delta method [@chryssolouris_confidence_1996] linearizes a neural network model around a set of parameters and then applies asymptotic theory to construct the PI. An extension of the Delta method to heteroscedastic noise exists [@ding_backpropagation_2003], although it is still limited by the linearization. Bayesian learning of neural network weights allows for a direct derivation of the variance of particular predicted values [@mackay_evidence_1992], but at a very high computational cost. The Bootstrap method applies directly to any machine learning method, including neural networks, although care must be taken in selecting the bootstrap parameters to make the method resilient to heteroscedastic noise [@davidson_wild_2008]. Finally, the Lower Upper Bound Estimation (LUBE) method proposed by [@khosravi_lower_2011] adds two outputs to a neural network to predict the lower and upper PI bounds, training the network with a custom cost function that includes both PICP and NMPIW.
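The two metrics themselves are straightforward to compute. A short sketch, with our own function names and NumPy assumed:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction Interval Coverage Probability:
    fraction of targets that fall inside their interval."""
    return np.mean((y_true >= lower) & (y_true <= upper))

def nmpiw(lower, upper, y_true):
    """Normalized Mean Prediction Interval Width:
    mean interval width divided by the range of the targets."""
    return np.mean(upper - lower) / (y_true.max() - y_true.min())
```

A PICP at the target coverage together with a small NMPIW indicates tight, well-calibrated intervals.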
The experimental setup uses the L1-regularized ELM model [@miche_opelm_2010] for automatic model structure selection on relatively small datasets, implemented in the HP-ELM toolbox [@akusok_highperformance_2015]. The datasets are randomly split into 70% training and 30% test samples; median results over 30 initializations are reported. Numerical experimental results are given in Table \[tab:results\]; comparison numbers for the other methods are available in the corresponding paper [@khosravi_lower_2011]. Runtime is reported for a 1.4 GHz dual-core laptop.

  Dataset                               PICP(%)   NMPIW(%)   Runtime(ms)
  ------------------------------------- --------- ---------- -------------
  Concrete compressive strength         91.59     34.01      92
  Plasma beta-carotene                  92.63     40.66      36
  Powerplant - Steam pressure           93.33     39.29      27
  Powerplant - Main steam temperature   88.33     18.38      35

  : Experimental results of ELM Prediction Intervals.[]{data-label="tab:results"}

Performance of the methods is shown as points in NMPIW/PICP coordinates in Figure \[fig:real\_comparison\]. An ideal method would lie at the left edge of the dashed line (low NMPIW with precise PICP). As the figure shows, the ELM PI method performs better on the Steam pressure dataset, slightly worse on the Plasma beta-carotene dataset, and about average on the other two. ![Comparison of the ELM PI method (*filled star*) with four other methods from [@khosravi_lower_2011]. Best-performing methods have a low NMPIW and match the target coverage (points close to the upper left corner).[]{data-label="fig:real_comparison"}](concrete.pdf "fig:"){width="49.00000%"} ![Comparison of the ELM PI method (*filled star*) with four other methods from [@khosravi_lower_2011]. Best-performing methods have a low NMPIW and match the target coverage (points close to the upper left corner).[]{data-label="fig:real_comparison"}](carotene.pdf "fig:"){width="49.00000%"}\ ![Comparison of the ELM PI method (*filled star*) with four other methods from [@khosravi_lower_2011].
Best-performing methods have a low NMPIW and match the target coverage (points close to the upper left corner).[]{data-label="fig:real_comparison"}](steam1.pdf "fig:"){width="49.00000%"} ![Comparison of the ELM PI method (*filled star*) with four other methods from [@khosravi_lower_2011]. Best-performing methods have a low NMPIW and match the target coverage (points close to the upper left corner).[]{data-label="fig:real_comparison"}](steam2.pdf "fig:"){width="49.00000%"} A further analysis shows possible reasons for the good performance on Steam pressure and the poor performance on Plasma beta-carotene. The analysis compares against uniform PI built from the same ELM predictions for a dataset. Such PI estimate homoscedastic noise correctly, but cannot capture heteroscedastic noise. Let a uniform PI grow from zero width; as it grows, both the coverage and the interval width increase, generating many pairs of {NMPIW, PICP} points. Connecting these points yields a line that represents the homoscedastic PI performance boundary. This boundary and the ELM PI for the two datasets in question are shown in Figure \[fig:real\_analysis\]. ![Comparison of ELM PI (*black marker*) with uniform PI of varying width (*solid line*). Heteroscedastic ELM PI perform better on the Steam pressure dataset, while uniform PI are sufficient for the Plasma beta-carotene dataset.[]{data-label="fig:real_analysis"}](steam1_plot.pdf "fig:"){width="49.00000%"} ![Comparison of ELM PI (*black marker*) with uniform PI of varying width (*solid line*). Heteroscedastic ELM PI perform better on the Steam pressure dataset, while uniform PI are sufficient for the Plasma beta-carotene dataset.[]{data-label="fig:real_analysis"}](carotene_plot.pdf "fig:"){width="49.00000%"} Obviously, useful heteroscedastic PI must lie above this boundary; in practice, however, they may end up below it due to poorer parameter estimation.
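The construction of this boundary can be sketched as follows (our own helper, not code from the paper; NumPy assumed): grow a uniform half-width from zero and record one (NMPIW, PICP) pair per width.

```python
import numpy as np

def uniform_pi_boundary(y_true, y_hat, n_steps=100):
    """Trace the homoscedastic PI performance boundary:
    for a uniform interval y_hat +/- w of growing half-width w,
    record the resulting (NMPIW, PICP) pairs."""
    abs_err = np.abs(y_true - y_hat)
    target_range = y_true.max() - y_true.min()
    curve = []
    for w in np.linspace(0.0, abs_err.max(), n_steps):
        coverage = np.mean(abs_err <= w)       # PICP of the uniform interval
        width = 2.0 * w / target_range         # NMPIW of the uniform interval
        curve.append((width, coverage))
    return curve
```

The widest interval covers every test sample by construction, so the curve always ends at a PICP of one.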
Indeed, heteroscedastic PI need an interval width per sample, while homoscedastic PI need only one interval width per dataset, which is easier to estimate precisely. As seen in Figure \[fig:real\_analysis\], this is the situation for ELM PI on the Plasma beta-carotene dataset, where uniform PI perform better. On Steam pressure, however, heteroscedastic PI perform better than uniform ones, as they achieve higher coverage with the same average width. Another possible reason for the difference in performance is that the Plasma beta-carotene dataset has homoscedastic noise, while the Steam pressure dataset has genuinely heteroscedastic noise (or heteroscedastic stochastic outputs), so heteroscedastic PI provide the most benefit on the latter dataset.

Minimizing False Positives on a Large Real Dataset
==================================================

This experiment uses PI to minimize the number of false positive predictions on a large classification task. Note that the proposed PI methodology applies equally well to regression, and monotonic classification tasks are handled even better using purposely developed ELM implementations [@ZHU2017205] as $m_\text{data}$. A 4,000,000-sample dataset of pixel colors for skin/non-skin classification is created from the FaceSkin Images dataset [@phung_skin_2005]. The inputs are the colors of the target pixel and its $7 \times 7$ neighborhood, with $7 \times 7 \times 3 \ \text{(RGB)} = 147$ input features in total, and the outputs are +1 for skin pixels and -1 for non-skin ones. The dataset uses photos of various people under different lighting conditions, without any pre-processing. True skin masks are created manually and are highly accurate. Half of the dataset is used for training, and the other half for testing. The applied ELM model uses 147 linear + 200 sigmoid neurons. ELM predictions are real values, which are turned into classes by taking their sign.
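The sign-based classification with confidence filtering used in this experiment can be sketched as follows. This is our own illustration (NumPy assumed); for simplicity the threshold is obtained directly from a quantile of the confidence scores, rather than by the scalar optimization used in the experiment.

```python
import numpy as np

def threshold_for_coverage(scores, coverage):
    """Threshold such that a `coverage` fraction of predictions is kept.
    scores: raw ELM outputs, optionally rescaled by per-sample PI widths."""
    return np.quantile(np.abs(scores), 1.0 - coverage)

def confident_predictions(scores, theta):
    """Keep predictions with |score| >= theta and classify by sign."""
    keep = np.abs(scores) >= theta
    return keep, np.sign(scores[keep])
```

Dividing `scores` by the per-sample PI widths before thresholding corresponds to the sample-specific variant described below.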
Due to the simple model and input features (which are not tailored for image processing), the performance is average, at about 87% accuracy. The goal of the experiment is to check whether per-sample PI can significantly improve the accuracy at a cost in coverage, compared to per-dataset PI computed from the MSE. To trade coverage for precision, a threshold $\theta$ is introduced: ELM predictions with an absolute value less than $\theta$ are ignored. The value of $\theta$ corresponding to a desired coverage percentage is found by scalar optimization methods. For per-sample PI, the threshold $\theta$ is multiplied by the value of the corresponding $\text{PI}_i$ for each prediction $y_i$. The results are shown in Figure \[fig:skin\_large\]. Here, an ELM model with a total of 347 hidden neurons is trained on a dataset with two million samples. The per-sample PI improve the true positive rate slightly. More importantly, they reach almost zero false positives at 3% coverage, and exactly zero at 1%. In contrast to the proposed method, uniform PI computed from the MSE cannot achieve zero false positives. Although one percent coverage may seem very little, it represents 20,000 test samples for this dataset, and it is a remarkable achievement for a simple ELM model that is not optimized for False Positive reduction as in custom applications [@akusok_twostage_2014]. A specifically designed model, or an ensemble of multiple models, could achieve zero False Positives with a larger coverage, a significant result for the practical use of ELM and machine learning algorithms in general. ![True Positive versus False Positive rate for the most confident part of the predictions (given as a percentage) for an MSE-based threshold (dashed line) and a sample-specific threshold based on PI (solid line). Per-sample PI give almost zero False Positives for the 3% most confident predictions, and exactly zero for the 1% most confident.
The True Positive rate is also consistently higher than with an MSE-based threshold.[]{data-label="fig:skin_large"}](ROC_skin_2000000.pdf){width="\textwidth"}

Runtime Analysis
----------------

The runtime of per-sample PI is examined on the pixel classification dataset described above. The experiments are run on a desktop machine with a 4-core Intel Skylake CPU, using the efficient ELM toolbox from [@akusok_highperformance_2015]. With 2,000,000 training samples and 347 hidden neurons, training an ELM takes 12 seconds (for either $m_{\text{data}}$ or $m_{\text{var}}$). Computing the covariance matrices $\Sigma_y$ and $\Sigma_r$ with the weighted Jackknife method takes 25 seconds each, only about twice as long as training an ELM itself. Test predictions take 8 seconds to compute, and test per-sample PI take 32 seconds. In total, prediction intervals increase the ELM runtime by a constant factor of about 5. Runtimes on the real-world datasets are not directly comparable with the other methods, as they were obtained on different machines, but they are of the same order of magnitude as Bootstrap, an order of magnitude faster than the Delta or Bayesian methods, and an order of magnitude slower than the LUBE method. Replacing the L1-regularized ELM with a standard ELM reduces the runtime to the level of the LUBE method, but degrades the results on small datasets with a few hundred samples. Extremely large datasets that do not need regularization benefit from the faster run speed.

Conclusion {#sec:conclusion}
==========

The paper proposed a method for computing per-sample prediction intervals for Extreme Learning Machines. It successfully evaluates the variance of heteroscedastic stochastic outputs, using only ELM models and the weighted Jackknife method. The proposed framework also works well for homoscedastic outputs, making the method generally applicable.
ELM PI are comparable to other methods of computing PI for neural networks on small datasets, while retaining very fast runtimes and scalability to Big Data. On a real dataset, the method was shown to allow for better precision and a lower False Positive rate. Heteroscedastic PI perform similarly to uniform PI derived from the Mean Squared Error on 50%-70% of the dataset samples, but they make a huge difference on the most confidently predicted 1%-10% of samples. For these samples, the proposed PI achieved a zero False Positive rate even with a basic ELM model, which is an extremely useful feature in many practical applications. The runtime is comparable to that of the ELM itself, which makes the method feasible for large datasets in Big Data problems. ELM PI can easily be extended to non-symmetric PI by using two ELM models in the second stage to predict the upper and lower boundaries separately. An ensemble of ELMs may increase the coverage for zero-False-Positive predictions. These extensions will be examined and evaluated in future work on this topic.
---
abstract: |
  We compute curvature-dependent graviton correlation functions and couplings, as well as the full curvature potential $f(R)$, in asymptotically safe quantum gravity coupled to scalars. The setup is based on a systematic vertex expansion about metric backgrounds with constant curvature, initiated in [@Christiansen:2017bsy] for positive curvatures. We extend these results to negative curvature and investigate the influence of minimally coupled scalar fields. The quantum equation of motion has two solutions for all accessible numbers of scalar fields. We observe that the solution at negative curvature is a minimum, while the solution at positive curvature is a maximum. We find indications that the solution to the equations of motion for scalar-gravity systems lies at large positive curvature, for which the system might be stable for all scalar flavours.\
  \[.3cm\] [*Preprint: CP$^3$-Origins-2019-44 DNRF90*]{}
author:
- Benjamin Bürger
- 'Jan M. Pawlowski'
- Manuel Reichert
- 'Bernd-Jochen Schaefer'
bibliography:
- 'flatgravity.bib'
title: Curvature dependence of quantum gravity with scalars
---

Introduction
============

Asymptotically safe gravity [@Weinberg:1980gg] is a highly interesting candidate theory for quantum gravity. It relies on an interacting ultraviolet (UV) fixed point of the renormalisation group (RG) flow, the Reuter fixed point [@Reuter:1996cp], which renders the high-energy behaviour of gravity finite. For reviews see [@Niedermaier:2006wt; @Litim:2011cp; @Reuter:2012id; @Bonanno:2017pkg; @Eichhorn:2017egq; @Percacci:2017fkn; @Wetterich:2019qzx]. The physics of asymptotically safe gravity is encoded in the full, diffeomorphism invariant quantum effective action of the theory, $\Gamma[g]$. Its $n$-point functions, evaluated on the quantum equations of motion (EoM), describe graviton correlation functions, from which general observables can be constructed. Accordingly, knowledge of the quantum EoM is chiefly important for computing observables.
However, the quantum EoM is also very interesting in itself, as it provides the non-trivial dynamical background metric, pivotal for the physics of the early universe, inflation, and a resolution of the cosmological constant problem. In most approaches to quantum gravity, the diffeomorphism invariant action $\Gamma[g]$ can only be computed within a split of the full metric, typically the linear split $g_{\mu\nu}= \bar g_{\mu\nu}+ h_{\mu\nu}$. Here, $\bar g_{\mu\nu}$ is a generic background and $h_{\mu\nu}$ are the quantum fluctuations about this background. This leads to an effective action $\Gamma[\bar g, h]$ with the diffeomorphism invariant action $\Gamma[g]=\Gamma[g,0]$. In most approaches, the split is inevitable and allows one to compute the propagation and scattering of the fluctuations $h$, and hence the quantum gravity effects, in terms of background covariant momentum modes. Accordingly, this choice is part of the gauge fixing, and the gauge and background independence of the diffeomorphism invariant effective action encode physical diffeomorphism invariance; they are hence chiefly important for any quantum gravity approach. Moreover, the diffeomorphism invariant effective action can only be computed from the gauge-fixed correlation functions of the fluctuation field. This is simple to see, as the latter carries the quantum fluctuations. While the correlation functions of the fluctuation field also carry the full quantum dynamics of the theory, it is rather difficult to construct observables from them directly. This leaves us with the task of computing $\Gamma[g]$ from the background-dependent correlation functions of $h$. In the present work, this is done with the functional renormalisation group approach [@Wetterich:1992yh], leading to exact and closed one-loop relations between full correlation functions. While the background metric $\bar g_{\mu\nu}$ can be kept generic in these relations, this is technically challenging and limits the size of the truncation.
In almost all computations that disentangle background and fluctuation field, a flat Euclidean background was chosen, see, e.g., [@Christiansen:2012rx; @Christiansen:2014raa; @Christiansen:2015rva; @Christiansen:2016sjn; @Denz:2016qks; @Eichhorn:2018ydy; @Eichhorn:2018nda]. This choice is technically highly advantageous, and the computations resemble standard non-perturbative ones. We emphasise that the correlation functions in the flat background can be used to construct the diffeomorphism invariant action. However, from the viewpoint of the convergence of an expansion in powers of $h$, an expansion about or close to the minimal solution of the EoM is highly desirable, see [@Christiansen:2017bsy; @Knorr:2017fus; @Knorr:2017mhu]. Hence, getting access to $\Gamma[g]$ and the solutions of the EoM serves a twofold purpose: it allows us to directly address the interesting physics questions mentioned above, as well as to answer the question of how close the flat metric is to the full dynamical metric that solves the EoM. The computation of full, background-dependent correlation functions of the fluctuation field, and the computation of the diffeomorphism invariant action from these correlation functions, was initiated in [@Christiansen:2017bsy]. There, the $f(R)$-potential for backgrounds with positive constant curvature was computed; for respective results within the background field approximation see, e.g., [@Codello:2007bd; @Machado:2007ea; @Codello:2008vh; @Benedetti:2012dx; @Dietz:2012ic; @Falls:2013bv; @Benedetti:2013jk; @Dietz:2013sba; @Demmel:2014sga; @Falls:2014tra; @Demmel:2015oqa; @Ohta:2015efa; @Ohta:2015fcu; @Falls:2016wsa; @Falls:2016msz; @Gonzalez-Martin:2017gza; @deBrito:2018jxt; @Alkofer:2018baq; @Falls:2018ylp]. In the present work we extend the computation to negatively curved backgrounds, which allows us to study global minima. Moreover, we study the impact of matter fields on the system by adding minimally coupled scalar fields.
Scalar-gravity systems have attracted a lot of attention [@Dona:2013qba; @Meibohm:2015twa; @Dona:2015tnf; @Eichhorn:2018akn; @Alkofer:2018fxj; @Henz:2013oxa; @Percacci:2015wwa; @Labus:2015ska; @Oda:2015sma; @Henz:2016aoh; @Wetterich:2016uxm; @Biemans:2017zca; @Hamada:2017rvn; @Becker:2017tcx; @Eichhorn:2017sok; @Eichhorn:2017als; @Pawlowski:2018ixd; @Knorr:2019atm; @Wetterich:2019rsn], in particular since these systems seemingly have a divergence in the Newton coupling [@Dona:2013qba; @Meibohm:2015twa; @Dona:2015tnf; @Eichhorn:2018akn]. As discussed in [@Meibohm:2015twa; @Christiansen:2017cxa], this is expected to be an artefact of the truncation. We discuss this issue for the first time for curvature-dependent couplings. Indeed, we find indications that the best expansion point for scalar-gravity systems is at large positive curvature, which might solve the stability issues encountered in previous works. This paper is structured as follows. In Sec. \[sec:gen-framework\] we briefly explain our ansatz for the graviton $n$-point functions and the use of the functional renormalisation group (FRG). In Sec. \[sec:curved-background\] we recap the approximation of momentum space on curved backgrounds, initiated in [@Christiansen:2017bsy], and describe the evaluation of traces of the Laplacian on positive and negative background curvatures. The fixed-point solutions and their dependence on the number of scalars are displayed in Sec. \[sec:AS\]. In the subsequent section we discuss the background and the quantum EoM, their solutions and their asymptotic behaviour. We summarise our findings in the final section.
General Framework {#sec:gen-framework}
=================

We start from the gauge-fixed Einstein-Hilbert action with $N_s$ minimally coupled scalar fields, $$\begin{aligned} \label{eq:EHaction} S_{\text{EH}}={}&\frac{1}{16\pi G_\text{N}}\int \text{d}^4 x \sqrt{g}\left(2 \Lambda-R\right)+S_{\text{gf}}+S_\text{gh} \notag \\[1ex] &+\sum_{i=1}^{N_\text{s}}\frac{1}{2}\int \text{d}^4x\sqrt{g}\,\nabla_\mu\varphi_i\nabla^\mu\varphi_i \,.\end{aligned}$$ The action is expanded about a maximally symmetric background metric with background curvature $\bar R$. The gauge fixing is given by $$\begin{aligned} \nonumber S_\text{gf} =\frac{1}{2\alpha}\int\mathrm d^4x\, \sqrt{\bar g}\,\bar g^{\mu\nu}F_\mu F_\nu\,,\\[1ex] S_\text{gh}=\int\mathrm d^4 x \,\sqrt{\bar{g}}\, \bar g^{\mu \mu'} \bar g^{\nu \nu'}\bar{c}_{\mu'} {\cal M}_{\mu\nu} c_{\nu'} \,.\end{aligned}$$ with the Faddeev-Popov operator ${\cal M}_{\mu\nu}(\bar g, h)$ of the gauge fixing $F_\mu(\bar g, h)$. We employ a linear, de Donder-type gauge fixing, $$\begin{aligned} \nonumber F_\mu ={}& \bar \nabla^\nu h_{\mu\nu} - \frac{1+\beta}{4} \bar \nabla_\mu {h^\nu}_\nu \,, \\[1ex] {\cal M}_{\mu\nu} ={}& \bar\nabla^\rho\left(g_{\mu\nu}\nabla_\rho+g_{\rho\nu}\nabla_\mu \right)-\bar\nabla_\mu\nabla_\nu\,. \label{eq:gravitygaugefixing} \end{aligned}$$ We work with $\beta=0$ and the Landau-deWitt gauge limit $\alpha\to0$, which is a fixed point of the RG flow [@Litim:1998qi]; for more details see, e.g., [@Christiansen:2017cxa; @Christiansen:2017bsy]. We use a linear split of the full metric $g_{\mu\nu}$ about a background $\bar g_{\mu\nu}$ with constant curvature, with the fluctuation $h_{\mu\nu}$, $$\begin{aligned} g_{\mu\nu}=\bar{g}_{\mu\nu}+\sqrt{Z_h G_\text{N}}\,h_{\mu\nu}\,.\end{aligned}$$ The fluctuation field is rescaled with the wave-function renormalisation $Z_h$ and the Newton coupling $G_\text{N}$; the latter gives the field mass dimension one. Note that our setup also allows for expansions about more general backgrounds.
In the present work we restrict ourselves to backgrounds with constant curvature for simplicity. The main objective of this work is the computation of the gauge-fixed correlation functions of the fluctuation fields, $$\begin{aligned} \frac{\delta^n \Gamma_k[\bar g,\Phi]}{\delta \Phi_1\cdots \delta\Phi_n}\equiv\Gamma_k^{(\Phi_1...\Phi_n)}\,,\end{aligned}$$ where $\Phi=\{h_{\mu\nu},c_{\mu},\bar c_\mu, \varphi\}$. These correlation functions can be expanded in respective general tensor bases, which can be significantly reduced by Slavnov-Taylor or Ward identities for the tensors. This reduces the general tensor basis to one that can be mapped to the tensor basis derived from diffeomorphism invariant terms. In the current work we consider only the leading classical tensor structure for the $n$-point functions, namely the Einstein-Hilbert tensor structures obtained from $n$ metric derivatives of the action. At each order we replace Newton’s coupling and the cosmological constant by their corresponding level-$n$ couplings $G_n$ and $\Lambda_n$ [@Christiansen:2014raa; @Christiansen:2015rva; @Christiansen:2017bsy]. Their respective flow equations are obtained by differentiating the Wetterich equation [@Wetterich:1992yh; @Ellwanger:1993mw; @Morris:1993qb] $n$ times with respect to the fluctuation field. For the given field content, the Wetterich equation reads $$\begin{aligned} \label{eq:Wetterich} \partial_t\Gamma_k=&\, \frac12 \text{Tr}\, G_{k,hh}\,\partial_t R_{k,h} - \text{Tr}\, G_{k,\bar{c}c}\,\partial_t R_{k,c} \notag \\[1ex] &\,+ \frac12 \text{Tr}\,G_{k,\varphi\varphi}\,\partial_t R_{k,\varphi} \,,\end{aligned}$$ with the regularised fluctuation propagators $$\begin{aligned} G_{k,\Phi_1\Phi_2}=\left[\frac{1}{\Gamma_k^{(\Phi\Phi)}+R_{k}}\right]_{\Phi_1\Phi_2}\,.\end{aligned}$$ Here, $\partial_t$ is the derivative with respect to the RG time $t = \ln k/k_0$, where $k_0$ is a reference scale.
We truncate and close the flow equations for the higher vertices by setting $G \equiv G_{n \geq 3}$ and $\Lambda_4 = \Lambda_3$ with $\Lambda_{n >4} =0$. From the corresponding $n$-point functions we concentrate in the following on the curvature-dependent mass parameter $\mu(r)$ (from the graviton two-point function), the gravitational coupling $g(r)$ (from the three-point function), and the momentum-independent coupling $\lambda_3(r)$, which are dimensionless and defined by $$\begin{aligned} r & = \bar{R} /k^2 \,, & g & = G\,k^2 \,, \notag \\[1ex] \mu & = -2 \Lambda_2 /k^2 \,, & \lambda_3 & = \Lambda_3 /k^2 \,.\end{aligned}$$ Here, $\bar R$ denotes the background curvature. The beta functions for these couplings are obtained by an appropriate projection procedure, where we concentrate on the transverse-traceless part of the flow, see [@Christiansen:2017bsy]. The anomalous dimensions of all fields are set to zero. We choose the regulator $R_k$ proportional to the two-point function at vanishing cosmological constant and background curvature, $$\begin{aligned} R_k = \Gamma_k^{(2)}(\mu=0,r=0) \cdot r_k(\bar \nabla_\mu^2/k^2)\,.\end{aligned}$$ The shape function $r_k$ is chosen to be exponential, $$\begin{aligned} r_k(x) = \frac{\mathrm e ^{-x^2}}{x}\,.\end{aligned}$$ The gravity parts of the flows for $g(r)$, $\mu(r)$ and $\lambda_3(r)$ are the same as in [@Christiansen:2017bsy], see App. B therein. In the present matter-gravity system with minimally coupled scalars, the flows of the graviton $n$-point functions receive contributions from the scalars; thus all flow equations depend on the number of scalar fields $N_s$. For $r=0$, this dependence was already investigated in [@Meibohm:2015twa; @Eichhorn:2018akn]. Here, we extend the analysis and consider the curvature dependence of the couplings.
Vertices and trace evaluation on curved backgrounds {#sec:curved-background}
===================================================

Propagators on a non-trivial background metric $\bar{g}$ depend on the Laplacian $\Delta_{\bar{g}} \equiv -\bar{\nabla}^2$ and on curvature invariants. For the hyperbolic or spherical backgrounds considered here, the dependence on curvature invariants reduces to a dependence on the background Ricci scalar $\bar R$, i.e., $G \equiv G(\Delta_{\bar g},\bar R)$. All Laplacians can be expressed through the scalar Laplacian and the Ricci scalar. The higher-order vertices $\Gamma^{(n \geq 3)}$ also depend on covariant derivatives $\bar{\nabla}_{\mu}$. On curved backgrounds the Laplacian and the covariant derivative do not commute, and the lack of a common eigenbasis implies the absence of a standard momentum space. As in [@Christiansen:2017bsy], we construct an approximate momentum space. All covariant derivatives are symmetrised, which produces further background curvature terms, $$\begin{aligned} \bar{\nabla}^{\mu}\bar{\nabla}^{\nu}&= \frac{1}{2}\lbrace \bar{\nabla}^{\mu},\bar{\nabla}^{\nu}\rbrace +\bar{R}\text{-terms}\,.\end{aligned}$$ The symmetrised covariant derivatives are expressed through the spectral values of the Laplacian $p^2$ and a spectral angle $x$ between them, $$\begin{aligned} \bar{\nabla}_1 \cdot \bar{\nabla}_2 = x \sqrt{p_1^2} \sqrt{p_2^2} \,.\end{aligned}$$ The spectral angle depends non-trivially on the spectral values of the covariant derivatives and the background curvature. Up to this point we have not performed any approximation; we have simply stored the background curvature dependence in the spectral angle. We now approximate the spectral angle by the corresponding flat spacetime angle, $x\approx\cos \theta_\text{flat}$. The flat spacetime angles are integrated using the usual volume element of the spacetime, such that the approximation becomes exact in the limit $\bar{R}\rightarrow 0$.
In [@Christiansen:2017bsy] it was estimated that this approximation gives reasonable results for $|\bar{R}/k^2| \lesssim 2$. The external spectral values of the three-point function are evaluated at the symmetric point $$\begin{aligned} p:=|p_1|=|p_2|=|p_3| \,,\end{aligned}$$ with the flat angle $\theta_{\text{flat}}=\frac{2\pi}{3}$ between them. This procedure leaves us with functions that depend on the Laplacian and the background curvature, but no longer on the covariant derivative. The spectrum of the Laplacian on hyperbolic or spherical backgrounds is known, and the traces can be evaluated using spectral sums (integrals) for positive (negative) background curvature [@Rubin:1984tc; @Camporesi:1994ga; @Percacci:2017fkn].

Positive curvature
------------------

For positive background curvature the spectrum of the Laplacian is discrete. The dimensionless eigenvalues of the scalar Laplacian are given by [@Rubin:1984tc] $$\begin{aligned} \omega(\ell)=\frac{\ell(\ell+3)}{12}r \,,\end{aligned}$$ with multiplicities $$\begin{aligned} m(\ell)=\frac{(2\ell+3)(\ell+2)!}{6\,\ell!} \,,\end{aligned}$$ for non-negative integers $\ell$. A trace of a function $F$ that depends on the Laplacian and the background curvature is evaluated as $$\begin{aligned} \frac1{V}\, \text{Tr}F(\Delta,r)\rightarrow \frac{r^2}{384\pi^2} \sum_{\ell=2}^{\ell_\text{max}}m(\ell)F(\omega(\ell),r) \,.\end{aligned}$$ The factor $V=\frac{384\pi^2}{r^2}$ is the dimensionless four-sphere volume. We cut the sum off at a finite $\ell_\text{max}$ at which the sum is sufficiently converged. We furthermore start the sum at $\ell=2$, excluding the modes $\ell=0,1$. This is correct for the transverse-traceless mode of the graviton but an approximation for the trace mode; our procedure does not allow us to distinguish between these modes. The approximation is well justified, as the low modes contribute only at large background curvature and we are interested in the small-curvature regime.
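As a numerical illustration (our own sketch, not code from the original computation; NumPy assumed), the spectral sum can be implemented directly, using the simplified multiplicity $m(\ell)=(2\ell+3)(\ell+1)(\ell+2)/6$. For small $r$ it should reproduce the flat-background heat-kernel result $(1/16\pi^2)\,Q_2[F]$.

```python
import numpy as np

def sphere_trace(F, r, l_max=400):
    """(1/V) Tr F(Delta, r) on the four-sphere as a spectral sum over l >= 2."""
    total = 0.0
    for l in range(2, l_max + 1):
        omega = l * (l + 3) / 12.0 * r             # dimensionless eigenvalue
        m = (2 * l + 3) * (l + 1) * (l + 2) / 6.0  # multiplicity (2l+3)(l+2)!/(6 l!)
        total += m * F(omega, r)
    return r**2 / (384.0 * np.pi**2) * total
```

For $F(\lambda,r)=\mathrm e^{-\lambda}$ one has $Q_2[F]=1$, so the sum approaches $1/(16\pi^2)$ as $r\to 0$, with the leading deviation of order $r/6$ from the next heat-kernel coefficient.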
Negative curvature ------------------ For negative curvature, the spacetime is unbounded and the spectrum of eigenvalues is continuous. For the scalar Laplacian, the spectrum is given by [@Camporesi:1994ga], $$\begin{aligned} \lambda(\sigma)=\frac{|r|}{12}\left(\sigma^2+\frac{9}{4}\right) \,,\end{aligned}$$ with the spectral density $$\begin{aligned} \rho(\sigma)=\left[\sigma^2+\frac{1}{4}\right]\sigma\tanh(\pi\sigma)\,.\end{aligned}$$ In this case, the trace of a function $F$ yields the spectral integral $$\begin{aligned} \frac1{V}\, \text{Tr} F(\Delta,r)\rightarrow\frac{1}{3}\frac{r^2}{384\pi^2} \int \text{d}\sigma \rho(\sigma) F(\lambda(\sigma),r) \,, \end{aligned}$$ where again the volume prefactor gives the correct $r\rightarrow 0$ limit. Heat-kernel expansion --------------------- The beta functions of the coupling functions are partial differential equations in the RG scale $k$ and the background curvature $r$. The search for fixed-point functions reduces the beta functions to ordinary linear differential equations. We use a heat-kernel expansion about the flat background to provide initial conditions for these differential equations. The heat-kernel expansion for positive and negative $r$ is, in the case of a scalar Laplacian, given by $$\begin{aligned} \frac1{V} \, \text{Tr}F(\Delta,r)&\rightarrow \frac{1}{V}\int_0^\infty \!\! \text{d}s \, \text{Tr}\!\left[\mathrm e^{-s \Delta}\right]\tilde{F}(s,r) \notag \\[1ex]  & =\frac{1}{(4\pi)^2}\left(Q_2[F]+Q_1[F]\frac{r}{6}+...\right) \,,\end{aligned}$$ where $\tilde{F}$ is the inverse Laplace transform of $F$ and the $Q_n$ functionals for $n>0$ are given by $$\begin{aligned} Q_n[F]=\frac{1}{\Gamma(n)}\int_0^\infty \!\text{d}\lambda\,\lambda^{n-1}F(\lambda,r) \,.\end{aligned}$$ We compute the zeroth and first order in the background curvature in order to have a smooth continuation to the spectral sums and integrals at finite but small $r$. 
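The statement that the volume prefactor gives the correct $r\to 0$ limit can be verified in the same way for the spectral integral. Again a sketch with the assumed test function $F(\Delta)=e^{-\Delta}$; the simple trapezoidal integration is our own choice, not the method used in this work.

```python
import math

def spectral_integral(F, r, sigma_max=400.0, n=20000):
    # (1/V) Tr F(Delta, r) on the hyperbolic background:
    # prefactor (1/3) r^2 / (384 pi^2), spectral density
    # rho(sigma) = (sigma^2 + 1/4) sigma tanh(pi sigma),
    # eigenvalues lambda(sigma) = |r| (sigma^2 + 9/4) / 12
    pref = r**2 / (3.0 * 384.0 * math.pi**2)
    h = sigma_max / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        rho = (s * s + 0.25) * s * math.tanh(math.pi * s)
        lam = abs(r) * (s * s + 2.25) / 12.0
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal weights
        total += w * rho * F(lam)
    return pref * total * h

if __name__ == "__main__":
    # flat-space check: (1/V) Tr exp(-Delta) -> 1/(16 pi^2) for r -> 0
    flat = 1.0 / (16.0 * math.pi**2)
    print(spectral_integral(lambda lam: math.exp(-lam), r=-0.01) / flat)
```

For $|r|=0.01$ the ratio again approaches one, confirming that the discrete sum and the continuous integral share the same flat limit.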
![ Fixed-point functions $g^*(r)$, $\mu^{*}_{\text{eff}}(r)= \mu(r)+\frac{2}{3}r $ and $\lambda^{*}_{3,\text{eff}}(r)=\lambda_3(r)+\frac{1}{12}r$ for $N_s=0$. []{data-label="fig:fullFPsolution"}](allfixedpoints){width="\linewidth"} Asymptotic safety {#sec:AS} ================= Pure gravity ------------ We begin the discussion with the fixed-point functions $g^*(r)$, $\mu^*(r)$ and $\lambda^*_3(r)$ in the curvature regime $|r|\leq 2$ without additional scalars. The fixed-point functions are defined by the roots of the corresponding beta functions $\beta_{g_i}(r)=\partial_t g_i(r)$ as functions of $r$. The initial values for the beta functions are extracted from a heat-kernel expansion up to linear order in $r$ $$\begin{aligned} \begin{pmatrix} g^{*}(r) \\[1ex] \mu^{*}(r) \\[1ex] \lambda^{*}_{3}(r) \end{pmatrix} = \begin{pmatrix} \phantom{-}0.60 \\[1ex] -0.38 \\[1ex] -0.12 \end{pmatrix}+r \begin{pmatrix} -0.43 \\[1ex] -0.71\\[1ex] -0.13 \end{pmatrix} +\mathcal{O}(r^2)\,.\end{aligned}$$ In the following we work with the effective couplings $$\begin{aligned} \mu_\text{eff}(r) & = \mu(r)+\frac{2}{3}r \,, & \lambda_{3,\text{eff}}(r) & = \lambda_3(r)+\frac{1}{12}r \,,\end{aligned}$$ which are defined such that they include the explicit $r$ dependence in the corresponding graviton two-/three-point function. In Fig. [fig:fullFPsolution] we display the fixed-point functions in terms of these effective couplings. Interestingly, the effective couplings are almost curvature independent, except for the Newton coupling. This means that the explicit curvature dependence of the $n$-point functions is almost exactly counterbalanced by the implicit curvature dependence of the couplings. This was already shown in [@Christiansen:2017bsy] for positive curvature. Here we extend this non-trivial result to negative curvature. We emphasise that this non-trivial approximate curvature-independence supports the expansion scheme about flat backgrounds in pure gravity. 
This implies the self-consistency of the flat-space angular approximation used here. Both properties are highly welcome and provide non-trivial reliability checks for existing fluctuation results. ![image](FPs-Ns-crop){width="\linewidth"} Matter dependence ----------------- Let us now discuss the influence of $N_\text{s}$ minimally coupled scalar fields. This allows us to study whether the independence of the fluctuation correlation functions from the background carries over to gravity-matter systems. Moreover, gravity-scalar systems are known to lose the asymptotically safe fixed point in the given approximation for a sufficiently large number of scalars $N_s\approx 10 - 10^2$ in an expansion about the flat background [@Dona:2013qba; @Meibohm:2015twa; @Dona:2015tnf; @Eichhorn:2018akn; @Alkofer:2018fxj]. Note in this context that a heat-kernel expansion in powers of the curvature is precisely the expansion about the flat background. However, this breakdown happens for values of $N_s$ that are already beyond hard reliability bounds of the approximations used in these works, see [@Meibohm:2015twa; @Christiansen:2017cxa]. Moreover, in [@Christiansen:2017cxa] it has been shown that in the current approximation one should be able to map the theory to a pure gravity setup, hinting at further deficiencies of the FRG-treatment in the given approximation. Accordingly, it is also highly interesting to see whether the non-trivial background at least partially takes care of these deficiencies. To begin with, our results for the flat background $r=0$ are compatible with the results from [@Meibohm:2015twa]. There are small differences due to the neglected anomalous dimensions as well as the different gauge and regulator shape function. In the present approximation, we find that the fixed point at $r=0$ disappears at $N_\text{s}\approx 38$, corroborating earlier findings in [@Dona:2013qba; @Meibohm:2015twa; @Dona:2015tnf; @Eichhorn:2018akn; @Alkofer:2018fxj]. 
The full curvature dependent fixed-point functions are displayed in the figure above. Naively, we find an even stricter bound on the number of scalar fields, since $g^*(r)$ diverges at finite negative curvature for $N_\text{s}>16$. This smaller bound is easily explained by the fact that the Newton coupling rises towards negative curvature, and technically the disappearance of the asymptotically safe fixed point in gravity-scalar systems is related to the divergence of the Newton coupling. However, the latter also leads to the breakdown of the current approximation, as does the limit of sufficiently large curvature $|r|$. The divergent behaviour of $g^*(r)$ for $r<0$ is possibly triggered by neglecting the anomalous dimension in our approximation. We have chosen a regulator $R_k$ proportional to the two-point function and hence proportional to the wave-function renormalisation. In the UV the regulator scales as $$\begin{aligned} \lim\limits_{k\rightarrow \infty}R_{k,h}\sim Z_{k,h} k^2 \,,\end{aligned}$$ which should tend to infinity. However, if the anomalous dimension $\eta_h=-\partial_t \ln Z_{k,h}$ exceeds two, this is no longer the case, as the wave-function renormalisation scales as $Z_{k,h}\sim k^{-\eta_h}$, which leads to a vanishing regulator for $k\rightarrow \infty$. Consequently, we interpret the divergence in $g^*(r)$ as a breakdown of the truncation and not as a physical bound on the compatibility of asymptotically safe gravity with scalar fields, following the argument in [@Meibohm:2015twa]. In turn, the effects of scalar fields on $g^*(r)$ in the positive curvature regime are small. Accordingly, as in the pure gravity case, the effective graviton mass parameter $\mu^*_\text{eff}(r)$ remains almost curvature independent and is only shifted by a constant when scalars are included. 
For $\lambda^*_{3,\text{eff}}(r)$, the fixed-point value at $r=0$ is almost constant under the inclusion of additional scalar fields, while the fixed-point function decreases for positive curvature and increases for negative curvature. Background effective action and quantum equation of motion {#sec:EoMs} ========================================================== With the curvature-dependent fluctuation correlation functions, we can compute the fixed-point diffeomorphism-invariant background effective action $\Gamma^*[g]$. For constant curvatures the background effective action $\Gamma_k[g]=\Gamma_k[g,h=0]$ is given by $$\begin{aligned} \label{eq:Gamma-f} \Gamma[{g}] = \int \mathrm d^4x \sqrt{{g}}\, k^4 \tilde f(r) = V \tilde f(r) \,,\end{aligned}$$ where $V$ is the spacetime volume and $f(R)= k^4 \tilde f(r)$. From now on, we only discuss the dimensionless function $\tilde f(r)$. We drop the tilde to be consistent with the notation in the literature. At vanishing cutoff scale, this effective action directly comprises the physics information of asymptotically safe quantum gravity in terms of diffeomorphism-covariant correlation functions $\Gamma^{(n)}_{k=0}$. As has been discussed in detail in [@Christiansen:2017bsy; @Lippoldt:2018wvi], for $k \neq 0$ there are two EoMs to be considered: that of the background metric $$\begin{aligned} \frac{\delta \Gamma_k}{\delta \bar g_{\mu\nu}}\bigg|_{\bar{g}=\bar{g}_{\overline{\text{EoM}}}, h=0}=&\,0\,, \label{eq:EoMbarg}\end{aligned}$$ and that of the fluctuation field $$\begin{aligned} \frac{\delta \Gamma_k}{\delta h_{\mu\nu}}\bigg|_{\bar{g}=\bar{g}_\text{EoM}, h=0}=&\,0\,. \label{eq:EoMh}\end{aligned}$$ Their respective solutions $\bar{g}_{\overline{\text{EoM}}}$ and $\bar{g}_{{\text{EoM}}}$ agree for $R_k=0$ due to the underlying diffeomorphism invariance, but they differ for $R_k\neq 0$. 
The latter property reflects the fact that the regularisation procedure in the functional RG breaks diffeomorphism invariance despite the persistence of diffeomorphism invariance of the background effective action: only the Slavnov-Taylor identities without the cutoff modification elevate the auxiliary background diffeomorphism invariance to physical diffeomorphism invariance carried by the symmetry properties of the fluctuation field. This leads to the counter-intuitive fact that for $k\to\infty$ it is arguably the EoM for the fluctuation field that carries the physics information. In turn, the background EoM is regulator-dependent due to the background-metric dependence of the regulator, while this dependence is sub-dominant for the fluctuation EoM. Background equation of motion ----------------------------- For constant curvatures, with the parameterisation of the background effective action above, the background EoM is given by $$\begin{aligned} \label{eq:explicit-beom} \Gamma_k^{(\bar g_\text{tr})}[{g},0] \sim r f'(r) - 2 f(r) = 0 \,.\end{aligned}$$ The flow equation for $f(r)$ is given by $$\begin{aligned} \partial_t f(r) + 4 f(r) - 2 r f'(r) =&\, \mathcal{F}\left(r, \mu(r),N_s\right)\,, \label{eq:Flowf}\end{aligned}$$ where we have denoted the right-hand side of the Wetterich equation with $\mathcal{F}$, including the volume factor from the left-hand side. Intriguingly, at the fixed point, $\partial_t f =0$, the left-hand side is proportional to the background EoM. Thus a root in $\mathcal{F}$ corresponds to a solution of the background EoM. The function $\mathcal{F}$ depends on $\mu(r)$ and $N_s$ but not on the fixed-point functions of the three-point vertex, $g(r)$ and $\lambda_3(r)$. In the computation of $\mathcal{F}$ we neglect the zero mode of the trace. This mode develops an unphysical pole in our approximation, which would not happen if we disentangled $\mu_\text{tt}$ and $\mu_\text{tr}$ or used a more extended ansatz for the bare action. 
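The proportionality between the fixed-point flow and the background EoM can be made explicit in one line, using only the flow equation as written above: setting $\partial_t f = 0$ gives

```latex
\begin{aligned}
0 = \partial_t f(r)
  &= -4 f(r) + 2 r f'(r) + \mathcal{F}\left(r,\mu(r),N_s\right) \\[1ex]
\Rightarrow \quad
\mathcal{F}\left(r,\mu(r),N_s\right)
  &= -2 \left[\, r f'(r) - 2 f(r) \,\right] ,
\end{aligned}
```

so a root of $\mathcal{F}$ at the fixed point is equivalent to a solution of the background EoM $r f'(r) - 2 f(r) = 0$.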
The zero mode only contributes significantly for large $r$, so this is a good approximation in the small curvature regime. ![Background EoM for different numbers of scalar fields. []{data-label="fig:bEoM"}](backgroundEOMexact){width="\linewidth"} We display the function $ r f'(r) - 2 f(r)$ at the fixed point in Fig. [fig:bEoM]. A zero of this function corresponds to a solution of the background EoM. There are no solutions to the background EoM for any background curvature and any number of scalar flavours. This extends the results from [@Christiansen:2017bsy], where the same result was found for positive curvature and without scalar fields. We can compare this to results in the background field approximation. There, different results have been obtained, depending on the choice of regulator and parameterisation. For example in [@Falls:2014tra] with the linear split, only a solution at large negative curvature was found, compatible with our results. However, in [@Falls:2017lst; @Falls:2018ylp], two further solutions at positive curvature were found due to a different choice of regulator. A solution at positive curvature was also found in [@Demmel:2015oqa] and with the exponential parameterisation in [@Ohta:2015fcu]. Importantly, we see that the existence of a solution to the EoM might crucially depend on the matter content of the theory. This was already partially discussed in [@Christiansen:2017bsy; @Alkofer:2018baq]. Quantum equation of motion -------------------------- We now turn to the quantum EoM, which is the more important EoM for the reasons discussed above. The fluctuation one-point function can be parameterised analogously to the background effective action. Only the trace part is non-vanishing for constant curvature metrics. We arrive at $$\begin{aligned} \label{eq:Gamma-f1} \Gamma_k^{(h_\text{tr})}[{g},0]=\int \text{d}^4 x\sqrt{g}\, k^3 f_1(r)=\frac{V}{k} f_1(r) \,,\end{aligned}$$ where the spatial integration can be performed because the curvature scalar is constant. 
The flow of $\Gamma_k^{(h_{\text{tr}})}$ is obtained in the same way as for the two- and three-point functions, and at the fixed point we are left with an ordinary differential equation for $f^*_1(r)$. The initial conditions for $f_1(r)$ are again obtained by a heat-kernel expansion. The quantum EoM reduces to $$\begin{aligned} \label{eq:explicit-qeom} \Gamma_k^{(h_\text{tr})}[{g},0] \sim f_1(r) = 0 \,.\end{aligned}$$ The flow equation for $f_1(r)$ is given by $$\begin{aligned} \partial_t f_1(r)+ 3 f_1(r) - 2 r {f_1}'(r) =&\, \mathcal{F}_1(r, \mu(r),N_s)\,, \label{eq:Flowf1}\end{aligned}$$ where we have denoted the right-hand side of the Wetterich equation by $\mathcal{F}_1$, again including the volume factor from the left-hand side. Note the difference to the flow equation for $f(r)$, which is due to the mass dimension of the fluctuation field. ![Quantum EoM for different numbers of scalar fields. []{data-label="fig:qEoM"}](quantumEOMexact){width="\linewidth"} We display the fixed-point functions $f_1^*$ in Fig. [fig:qEoM]. For all $N_s$, we find two solutions to the quantum EoM, one at negative curvature and one at positive curvature. It turns out that the solution at negative curvature is a minimum, while the one at positive curvature is a maximum. For increasing $N_s$ these solutions move towards each other; however, just before they merge, we lose the solution of the fixed-point functions. The stability of a solution to the quantum EoM is described by the second fluctuation field derivative, in this case with respect to the trace mode. The function $f_2$, defined in analogy to $f$ and $f_1$, is precisely the graviton mass parameter of the trace mode $\mu_\text{tr}(r)$. In our approximation, we have set $\mu_\text{tr} (r) = \mu_\text{tt}(r)$. In Fig. [fig:stab-eom], we display $\mu_\text{tr} (r)$. We have a negative $\mu_\text{tr}$ for the solution at positive curvature, which is thus a maximum. The solution at negative curvature starts negative but then becomes positive in this naive approximation. 
This is an unphysical feature of our approximation and we expect this to be cured once the fully coupled system with $\mu_\text{tr}(r)$ is computed. We test this by evaluating $\mu_\text{tr}(r=0)$ on the solution of the given fixed-point functions. This allows us to improve our approximation, evaluating the trace mode by $$\begin{aligned} \mu_\text{tr}(r) \approx \mu_\text{tr}(0) + \left[ \mu_\text{tt}(r) - \mu_\text{tt}(0) \right] \,. \label{eq:shift-mutr}\end{aligned}$$ For $N_s=0$, we find that $\mu_\text{tr} (r) \approx \mu_\text{tt}(r)$, while for $N_s=16$ they are shifted by $\mu_\text{tr} (r) \approx \mu_\text{tt}(r)-0.2$. We included this shift in Fig. [fig:stab-eom], where now the solution at negative curvature is a minimum for all $N_s$. It is actually remarkable that the two graviton mass parameters, $\mu_\text{tt}$ and $\mu_\text{tr}$, agree so well, in particular for small $N_s$, despite being related by non-trivial modified Slavnov-Taylor identities. This is yet another sign of effective universality as introduced and discussed in [@Eichhorn:2018akn; @Eichhorn:2018ydy]. Asymptotic behaviour and stability of scalar-gravity systems ------------------------------------------------------------ The functions $f$ and $f_1$ are not independent but are related by modified Slavnov-Taylor and Nielsen (or split Ward) identities [@Litim:2002ce; @Litim:2002hj; @Pawlowski:2005xe; @Folkerts:2011jz; @Donkin:2012ud; @Bridle:2013sra; @Dietz:2015owa; @Safari:2015dva; @Labus:2016lkh; @Eichhorn:2018akn]. These identities carry $R_k$-dependent terms that reflect the additional $\bar g$-terms in $f(r)$ originating in the background dependence of the regulator. For $R_k\neq 0$ these identities are non-trivial. However, the right-hand sides of the flow equations for $f(r)$ and $f_1(r)$ vanish for $| r| \to \infty$, i.e., for curvatures far larger than the square of the cutoff scale, $k^2$, with the exception of possible zero modes. 
In this limit, the regulator vanishes, $\lim_{r\to \infty} R_k(\bar g) \to 0$, and we are in the unregularised regime without cutoff effects. Within the present approximation the effective action is diffeomorphism invariant for large curvatures, $\lim_{r\to \infty}\Gamma^*[\bar g, h]-S_\textrm{gf}= \Gamma^*[\bar g+h,0]-S_\textrm{gf}$, with vanishing ghosts. ![Stability of the solutions to the EoM. We included a shift in the $\mu_\text{tr}$ mode as described in the text.[]{data-label="fig:stab-eom"}](FP-mutr){width="\linewidth"} The right-hand sides of the flow equations for $f(r)$ and $f_1(r)$ are vanishing for large curvatures; consequently, both the background and the quantum EoM asymptotically approach a solution. Moreover, these solutions to the EoMs have to agree, $\bar{g}_{\overline{\text{EoM}}}=\bar{g}^{\ }_{{\text{EoM}}}$, due to the diffeomorphism invariance of the effective action for large curvatures. This also entails that for $|r|\to\infty $ the fixed-point solutions $f_1^*$ and $f^*$ are related by a metric derivative, which leads to $$\begin{aligned} \label{eq:f-f1} f^*_1(r) = r {f^*}'(r) - 2 f^* (r) \quad \text{for} \quad |r|\to \infty \,. \end{aligned}$$ Now we use that the fixed-point solution of $f(r)$ is asymptotically either vanishing, $f(|r|\to\infty )= 0$, or $f(|r|\to\infty)\propto r^2$. In both cases it follows that $f_1(|r|\to \infty)=0$: for $f^*\propto r^2$ the combination $r {f^*}'(r) - 2 f^*(r)$ vanishes identically, and it vanishes trivially for $f^*\to 0$. The differential equation also allows for the solution $f_1(|r|\to \infty)\propto |r|^{3/2}$, which is not compatible with the Nielsen identities. Whether the fixed-point functions displayed above have the correct $f_1(|r|\to \infty)=0$ behaviour depends on $\mu_\text{eff}(r)$. In turn, $\mu_\text{eff}(r)=f_2(r)$ is also constrained by a Nielsen identity, leading to $f^*_2(r) = r {f^*_1}'(r) - 2 f_1^* (r)$ for $|r|\to\infty$. The differential equation for $\mu_\text{eff}$ allows again for two asymptotic solutions, $\mu_\text{eff}(|r|\to\infty )= 0$ or $\mu_\text{eff}(|r|\to\infty)\propto |r|$. 
Only $\mu_\text{eff}(|r|\to\infty )= 0$ is compatible with the Nielsen identity. In conclusion, we can determine the asymptotic behaviour of the full tower of differential equations with the use of Nielsen identities. With the latter properties, the fixed-point fluctuation Newton coupling $g^*(r)$ equals the background Newton coupling for $r\to\infty$. They are positive and decay proportionally to $1/r$ for positive curvature. This entails a positive $r^2$ contribution in the background potential for large positive background curvature, $r/g^*(r) \propto r^2$. We determined the large curvature behaviour to be $$\begin{aligned} \label{eq:glarger} \lim_{r\to \infty}g^*(r) &= \frac{c_+}{r} \,,\end{aligned}$$ with $$\begin{aligned} c_+& = 0.92 - 8.6\cdot 10^{-3} N_s - 3.4 \cdot 10^{-4} N_s^2\,.\end{aligned}$$ This behaviour was determined from the available fixed-point solutions with $N_s=0,\ldots,16$. For $N_s>41$ this coefficient turns negative, indicating the breakdown of the extrapolation. The actual coefficient can never become negative, since the flow equation for the Newton coupling has a Gaußian fixed point. In summary, this analysis leads us to the following global picture. For $|r|\leq 2$ we find two solutions of the quantum EoM, a minimum at negative curvature and a maximum at positive curvature. For $N_s\approx17$ these solutions merge and disappear. For reasons of stability and due to the above considerations of the asymptotic behaviour, we argue that at least one further solution to the EoM (a minimum) has to be present for $r> 2$. While it might not be the absolute minimum in pure quantum gravity and small $N_s$, it is for large enough $N_s$ as the other minimum disappears. Importantly, for large positive curvature, the fixed point has a nice stable behaviour and the Newton coupling remains small for large $N_s$. 
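The sign change of the extrapolated coefficient can be located directly from the quoted fit. The snippet below is a trivial evaluation of that formula, bearing in mind that the fit itself is only anchored at $N_s\leq 16$:

```python
def c_plus(n_scalars):
    # Fitted coefficient of the asymptotic behaviour g*(r) -> c_+ / r,
    # extracted from the fixed-point solutions with N_s = 0, ..., 16;
    # beyond that range this is a pure extrapolation.
    return 0.92 - 8.6e-3 * n_scalars - 3.4e-4 * n_scalars**2

if __name__ == "__main__":
    for ns in (0, 16, 40, 41):
        print(ns, round(c_plus(ns), 4))
```

The fitted $c_+$ stays positive throughout the range where fixed-point solutions exist and only changes sign far outside it, consistent with interpreting the sign change as a breakdown of the extrapolation rather than of the coupling itself.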
This leads to the important and exciting conclusion that the dynamics of gravity stabilises the fixed point and effectively dominates over the matter fluctuations. This was first seen and discussed in [@Meibohm:2015twa] and extended in [@Christiansen:2017cxa]: *one force to rule them all*. In [@Meibohm:2015twa] the respective behaviour for fermions with a decreasing Newton coupling was already seen at $r=0$. In turn, for scalars, this behaviour was not seen at $r=0$ and the approximation breaks down for large $N_s$ due to the increasing fixed-point value of the Newton coupling. The present scenario hints at a *geometrical* first-order phase transition in the curvature with the number of scalars. This scenario will be investigated in more detail elsewhere. Summary and Outlook {#sec:summary} =================== In the present work, we have discussed fixed-point solutions of quantum gravity including an $f(r)$-term for constant background curvatures $r\in\mathbb{R}$ with minimally coupled scalar fields. This extends the work initiated in [@Christiansen:2017bsy] to negative curvatures, $r< 0$. In [@Christiansen:2017bsy], a novel expansion scheme was put forward that allows for the computation of (background) curvature-dependent correlation functions of the fluctuation field. This gives access to the diffeomorphism-invariant background effective action and, in particular, the $f(r)$ potential without resorting to the background-field approximation. We found fixed-point functions for the couplings of the graviton two- and three-point functions, $g(r)$, $\mu(r)$ and $\lambda_3(r)$, up to $N_s=16$. For larger $N_s$, there is a divergence in the Newton coupling at negative background curvature. This divergence is not a physical feature but signals the breakdown of the truncation. Indeed, we found that the system is remarkably stable for positive background curvature. 
We emphasise that this can be seen as an indication for the persistence of the asymptotically safe UV fixed point for $N_s\to\infty$. We have discussed the solution to the background and quantum EoM for curvatures $|r| \lesssim 2$ as well as their asymptotic behaviour. The background EoM has no solution in this curvature regime for any $N_s$. The quantum EoM has two solutions for $N_s\leq 16$ in this regime, a minimum at negative background curvature and a maximum at positive background curvature. We have argued that the global structure of the fixed-point solution should admit a second minimum at large positive curvatures also for large $N_s$. Since the fixed-point solutions are well behaved in this curvature regime and the Newton coupling remains small, this stabilises asymptotically safe gravity for a large number of scalars in line with the mechanism introduced in [@Meibohm:2015twa; @Christiansen:2017cxa]: *one force to rule them all*. The disappearance of the minimum at negative curvature can be interpreted as a hint for a geometrical first-order phase transition with the number of scalars. In summary, together with earlier results on matter-gravity systems with only gravitationally interacting matter, also including fermions and gauge bosons, we conclude that there are indications for the global stability of gravity-matter systems. This is an important result as, for the first time, we can pinpoint why previous works [@Dona:2013qba; @Meibohm:2015twa; @Dona:2015tnf; @Eichhorn:2018akn; @Alkofer:2018fxj] have observed a seeming divergence in the Newton coupling with increasing $N_s$. With an expansion about the minimum at large positive curvature these systems should show the stability properties that have been derived from the path-integral representation in [@Christiansen:2017cxa]. 
This now allows for a reliable discussion of the stability of gravity-matter systems as well as phenomenological high-energy applications such as [@Eichhorn:2017ylw; @Eichhorn:2018whv; @Reichert:2019car]. [**Acknowledgements**]{} This work is supported by the Danish National Research Foundation under grant DNRF:90. It is part of and supported by the DFG Collaborative Research Centre SFB 1225 (ISOQUANT) as well as by the DFG under Germany’s Excellence Strategy EXC - 2181/1 - 390900948 (the Heidelberg Excellence Cluster STRUCTURES).
--- abstract: 'A critical appraisal is presented of developments in MOND from its introduction by Milgrom in 1983 to the present day.' --- [**[MOND - A REVIEW]{}**]{}\ [ Department of Physics, Queen Mary, University of London, London E1 4NS, UK]{}\ Introduction ============ It is not possible to reference all papers on MOND. The discussion here follows the historical perspective except in a few places where later papers provide explanations of details in earlier ones. Rotation curves of galaxies disagree with Newton’s laws. The standard cosmological model $\Lambda$CDM interprets this in terms of Cold Dark Matter. There is a wide range of proposals for what this Dark Matter may be. However, a long series of experiments has not located significant evidence for it. MOND (Modified Newtonian Dynamics) is a competing empirical scheme. Famaey and McGaugh [@Famaey] have produced a review of the data up to 2011 with extensive references, which will be updated here. By 1983, Tully and Fisher [@Tully] had established that rotation curves of galaxies over a wide range of masses $M$ have asymptotic rotational velocities $V_\infty = (GMa_0)^{1/4} $; $G$ is the gravitational constant and $a_0$ an empirical constant. Masses are derived assuming the mass $M$ is proportional to the luminosity $L$. Also Faber and Jackson [@Faber] had studied the velocity dispersion $\sigma$ of random motion of stars in high surface brightness elliptical galaxies and shown that $\sigma^4 \simeq MGa_0$. Milgrom based his scheme on these observations. He pointed out in his first paper [@MilgromA] that rotation curves remain flat down to radii well within galaxies. 
He parametrised the full rotational acceleration $a$ in terms of the Newtonian acceleration $g_N$ as $$a = g_N/\mu (\chi);$$ $\mu$ is an empirical smooth function of $\chi = a/a_0$, where $a_0$ is a universal constant $\sim 1.2 \times 10^{-10}$ m s$^{-2}$ for all galaxies; $\mu \to 1$ for accelerations $a \gg a_0$, and for $a \ll a_0$, $\mu (\chi) \propto \chi$. Three forms of $\mu$ are in common use, and all give $a \to \sqrt {a_0g_N}$ as $g_N \to 0$. From these relations, a star with rotational velocity $v$ in equilibrium with centrifugal force has $$v^2/r = \sqrt {a_0 GM/r^2};$$ the factor $r$ cancels and $v^4 = MGa_0$, the Tully-Fisher relation. Milgrom’s paper comments on the fact that the Milky Way has the shape of a thin disc, but it is unnecessary to account for motions of stars in the $z$ direction, perpendicular to the disc. These were studied by Oort [@Oort]. The $z$-excursions of stars are small compared with the orbital radius, and deviations of velocities from circular motion are small. One can determine $g_N$ and then via Poisson’s equation find the total gravitational mass density in the central plane or the surface mass density in the disc. Milgrom suggested that deviations from Newton’s law may arise from variations of inertial mass. He was led to this conclusion by the fact that within a star or nucleus, Newton’s law applies, while on the scale of a galaxy, the Tully-Fisher relation is required. Later, in 1997, he introduced the idea that equations of motion are invariant under the conformal transformation $(t, \bf {r}) \to (\lambda t, \lambda \bf {r})$ in the limit of weak gravity; radii and accelerations change under scaling, but velocities do not. The derivation is from Poisson’s equation for a 2-D distribution like a disc galaxy, but is rather technical [@Milgrom97]. In 1984, Bekenstein and Milgrom considered a non-relativistic potential $\phi$ for gravity differing from Newtonian gravity using a Lagrangian formalism [@Beckenstein]. 
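The relation $a = g_N/\mu(a/a_0)$ and the resulting Tully-Fisher asymptotics can be sketched numerically. The snippet below is purely illustrative: it assumes the "simple" interpolating function $\mu(\chi)=\chi/(1+\chi)$, one of the forms in common use (the text does not single out a specific one), and a rough galaxy mass chosen for illustration.

```python
import math

A0 = 1.2e-10   # m s^-2, Milgrom's acceleration constant
G = 6.674e-11  # m^3 kg^-1 s^-2, Newton's constant

def mond_acceleration(g_newton, a0=A0):
    # Solve a * mu(a/a0) = g_N with the "simple" function mu(x) = x/(1+x):
    # a^2 / (a0 + a) = g_N  =>  a = (g_N + sqrt(g_N^2 + 4 g_N a0)) / 2.
    # Limits: a -> g_N for g_N >> a0 (Newtonian), a -> sqrt(g_N a0) for g_N << a0.
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4.0 * g_newton * a0))

def rotation_velocity(mass, radius):
    # Circular-orbit velocity from centrifugal balance v^2 / r = a(r).
    g_newton = G * mass / radius**2
    return math.sqrt(radius * mond_acceleration(g_newton))

if __name__ == "__main__":
    m_galaxy = 1.0e41  # kg, rough stellar mass of a large spiral (assumption)
    v_inf = (G * m_galaxy * A0) ** 0.25  # Tully-Fisher asymptotic velocity
    for r_kpc in (10.0, 50.0, 200.0):
        r = r_kpc * 3.086e19  # kpc -> m
        print(r_kpc, rotation_velocity(m_galaxy, r) / v_inf)
```

At large radii the printed ratio tends to one: the factor $r$ cancels exactly as in the derivation above, so $v^4_\infty = MGa_0$ independently of radius.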
They derived a Poisson equation from this Lagrangian. One result from this work would later prove to be very important. Using the approximation that $\mu (x) \propto x$ for small $x$, they derived the result that $$\phi \to \sqrt {GMa_0} \ln (r/r_0) + O(r^{-1}),$$ where $r_0$ is an arbitrary radius. This leads to the asymptotically constant rotational velocity $V_\infty = (MGa_0)^{1/4}$. They also considered the center-of-mass motion of two bodies, an issue which later became important for QUMOND. Further progress was slow while astrophysicists accumulated statistics on galaxies and examined systematics. Meanwhile, Milgrom and collaborators made studies using models of galaxies. In 1996, Milgrom [@action] considered the virial equation for theories governed by an action, again laying the foundations for later developments. The same year, Milgrom [@MilgromFJ] studied low surface density galaxies and accounted for the Faber-Jackson relation $\sigma ^4 \propto MGa_0$ for the mean velocity dispersion $\sigma$ of self-gravitating systems supported by random motions. This later proved important in understanding elliptical galaxies. From this point onwards, various morphologies were explored. In rapidly rotating galaxies, thin discs evolve. Those with lower accelerations develop central bulges. Elliptical galaxies rotate slowly, as do many dwarf galaxies. In 1997, Sanders [@Sanders] reported work on two sets of data obtained in Groningen. He presented rotation curves of 22 spiral galaxies measured in the 21 cm line of neutral hydrogen, together with 11 galaxies from an earlier selected sample of Begeman et al. [@Begeman]. The MOND formula fitted the overall shape and amplitude of the 22 rotation curves. One free parameter was used per galaxy and a second if a bulge was present. A commentary is given on several galaxies and a figure shows fits to 22 galaxies with the three components of the fit. 
In 1998, de Blok and McGaugh produced data testing MOND with low surface brightness galaxies [@deBlok]. After some good detective work, they found that rotation curves of 15 galaxies fitted neatly to MOND after making small adjustments to the inclination angles of galaxies appearing nearly side-on; this affected the observed luminosities. Their Fig. 1 illustrates the Newtonian contribution from gas and stars and fits after correction for the inclination of the disc. This inclination needs fine tuning within the errors to conform with Milgrom’s fitting function $\mu (x)$. In 2000, McGaugh et al. explored the Tully-Fisher relation over 5 decades of stellar masses in galaxies [@McGaugh00]. They recognised that rotational velocities depend on the number of baryons in the galaxy after using observed HI masses for galaxies with large gas content. This is well illustrated in the difference between their Figs. 1(a) and 1(b). They comment that this direct connection with the number of baryons was an argument against a significant mass in ‘dark’ baryons of any form. In 2001, Milgrom [@Milgrom01] grappled with the question of how a Dark Matter halo could describe a thin disc. Such discs have large orbital angular momentum. He followed the approach used with Bekenstein in 1984, which derives equations of motion from a Lagrangian. MOND predicts that the ‘Dark halo’ needs to have a disc component and a rounder component with radius-dependent flattening, becoming spherical at large radii. He comments that this structure is at odds with what one naturally expects from advertised halo-formation simulations. This is at the heart of the question of how different morphologies of galaxies develop in the Dark Matter scenario. If Newton’s laws apply to the Dark Matter halo, it needs to obey a Poisson equation. This implies the structure outlined above for a thin disc. It will require different types of halo to describe large elliptical galaxies and dwarf spheroidal galaxies. 
That could happen, but looks weird. It is a question which still continues today: how Dark Matter can reproduce the observed range of morphologies. The problem is that the parameter $a_0$ does not appear in the Dark Matter scenario. This is a recurrent question in papers of Sanders. In a 2002 paper, Milgrom [@Milgrom02] moves away from the asymptotic range of MOND. He makes the point that galaxies with high central densities should show no non-Newtonian acceleration at small radius $r$. This emerges later as a fundamental point. He also makes the point that $a_0 \simeq cH_0/2\pi$, where $H_0$ is the Hubble constant and $c$ is the velocity of light. A simple explanation of this value will be presented later. In late 2003, Milgrom and Sanders pointed out a ‘Dearth of Dark Matter in Ordinary Elliptical Galaxies’ [@MilgromSa]. They refer to new data of Romanowsky et al. [@Romanovsky] on three elliptical galaxies. As pointed out by Milgrom in his first paper [@MilgromA], the shape of MOND rotation curves depends on $\xi = (MG/R_e^2 a_0)^{1/2} = V^2_{\infty}/(R_e a_0)$, where $R_e$ is a measure of the size of the baryonic galaxy; they take this as the half-mass radius. Galaxies with $\xi \gg 1$ have internal accelerations $\geq a_0$ in their main body, which is thus in the Newtonian regime. At the other end, low surface density galaxies with $\xi \ll 1$ are in the MOND regime throughout. This was the first time the high acceleration regime of galaxies had been probed. Line-of-sight dispersions vary slowly, as they show in their Fig. 1. In 2004, McGaugh [@McGaugh04] carried out a careful study of disk galaxies. In a masterly presentation of the data, he outlines what is well determined and what is not. He concentrates on rotationally supported disk galaxies. The essential conclusion is that the invisible Dark Matter contribution is proportional to the visible number of baryons: the tail wags the dog! Data in the upper four panels of his Fig. 
4 are close to scatter plots, whereas the bottom two show a clear correlation with acceleration. In his Fig. 5, the scatter in the mass-to-light ratio is tightest when the MOND prescription is used. High surface brightness gives results close to MOND. Only where the mass-to-light ratio falls is there large scatter. This indicates how baryons are crucial to the interpretation of the data. A continuation of this point appears in a following paper of McGaugh [@McGaugh05]. He comments that, including gas as well as stars, the Tully-Fisher relation works even for low brightness dwarf spherical galaxies. The basic point is that the Tully-Fisher relation depends on the acceleration. He shows that Cold Dark Matter halos give a poor fit using a parametrisation close to that of Navarro, Frenk and White [@Navarro]. In his Fig. 1, he shows that MOND gives an excellent fit to the rotation curve and mass distribution of NGC 2403. The Dark Matter prediction is far from the data fitted by MOND. In 2007, Milgrom and Sanders presented MOND analyses for several of the lowest mass disc galaxies, below $4 \times 10^8 M_\odot$ [@Milgrom07]. They show close fits to rotation curves of 4 galaxies from the work of Begum et al. [@Begum]. These galaxies are in the deep MOND regime, with low accelerations at all radii. They comment that the MOND result in such cases is close to a pure prediction, as opposed to a one parameter fit. Sanders and Noordermeer extend the MOND analysis to 17 high surface brightness, early-type disc galaxies derived from a combination of 21 cm HI line observations and optical spectroscopy data [@SandNoor]. These are data of Noordermeer and van der Hulst in Groningen [@Noordermeer]. Fits are close to the data, and they show the breakdown of the rotation velocity into Newtonian, stellar and gas components. In 2008, Sanders and Land [@SandLand] made use of data of Bolton et al. [@Bolton] from the Sloan Lens Survey. 
Foreground galaxies function as strong gravitational lenses, producing multiple images (the “Einstein ring”) of background sources. Bolton et al. measured the “fundamental plane”: an observed relation between effective radius, surface brightness and velocity dispersion, using 36 strongly lensing galaxies. The lensing analysis was combined with spectroscopic and photometric observations of individual lens galaxies, in order to generate a “more fundamental plane” based on mass surface density rather than surface brightness. They found that this [*mass-based*]{} fundamental plane exhibits less scatter and is closer to expectations of the Newtonian virial relation than the usual fundamental plane based on luminosity. The conclusion is that the implied Mass/Luminosity values within the Einstein ring do not require the presence of a substantial component of Dark Matter. Sanders and Land show in their Figs. 1 and 2 close linear correlations between masses derived from lensing and surface brightness. Milgrom himself reviewed the status of MOND at that time [@Milgrom08]. He argues that MOND predictions imply that baryons alone determine accurately the full field of each and every individual galaxy. He comments that this conflicts with the expectations of the Dark Matter paradigm, because of the haphazard formation and evolution of galaxies and the very different influences that baryons and Dark Matter are subject to during their evolution, e.g. the very small baryon to Dark Matter fraction assigned by $\Lambda\, CDM$. In MOND, all physics is predicted to remain the same under a change of units of length $\ell \to \lambda \ell$, of time $t \to \lambda t$ and no change in mass units, $m \to m$; in words, if a certain configuration is a solution of the equations, so is the scaled configuration. 
Likewise, if $\hat {\bf r}(t) = \lambda {\bf r}(t/\lambda)$ is a trajectory where the $m_i$ are at $\lambda {\bf r}_i (t/\lambda)$, the velocities on that trajectory are $\hat{\bf V}(t) = {\bf V}(t/\lambda)$; i.e. a point mass remains a point mass of the same value. Another relation is that $m_i \to \lambda m_i$, ${\bf r}_i \to {\bf r}_i$ and $t \to \lambda ^{-1/4} t$. So scaling all the masses leaves all trajectories unchanged, but all velocities scale as $m^{1/4}$ and accelerations then scale as $m^{1/2}$. The bottom line is that rotation curves of individual galaxies are based only on observed baryonic masses. In 2009, Stark, McGaugh and Swaters [@Stark] examined the Baryonic Tully-Fisher relation using gas dominated galaxies. They assembled a sample from 7 sources totalling $\sim 40$ low surface brightness and dwarf galaxies, which have a high percentage of gas. The stellar mass was not zero, so they considered a wide range of stellar population models. Since these galaxies are gas dominated, the difference in stellar mass-to-light ratio from the different models had little impact. They were careful to select galaxies with inclinations $\geq 45^\circ$, i.e. well away from face-on. They checked that observed rotation curves flattened out at large radii within errors. The conclusion is that the exponent in the data is $3.94 \pm 0.07(stat.) \pm 0.08(syst.)$, compared with the predicted value 4 from the Tully-Fisher relation. In 2010, Milgrom developed a new formulation of MOND called QUMOND [@qumon]. This handles, for example, a large galaxy distorting a small one. The fundamental problem goes back to the work of Bekenstein and Milgrom in 1984 [@Beckenstein]. It is not clear what Hamiltonian or Lagrangian controls MOND. Milgrom follows the idea that the MOND potential $\phi$ produced by a mass distribution $\rho$ satisfies the Poisson equation for a modified source density. This is an idea which appears to work. 
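The mass-scaling relations quoted above can be checked directly on the deep-MOND formulas; a minimal sketch, assuming the standard deep-MOND expressions $a = \sqrt{GMa_0}/r$ and $V_\infty = (GMa_0)^{1/4}$, with illustrative values for the mass, radius and scaling factor:

```python
import math

G, A0 = 6.674e-11, 1.2e-10  # SI units; a0 is an assumed typical fitted value

def a_deep(M, r):
    """Deep-MOND acceleration sqrt(G*M*a0)/r at radius r."""
    return math.sqrt(G * M * A0) / r

def v_inf(M):
    """Asymptotic rotation speed (G*M*a0)**0.25."""
    return (G * M * A0) ** 0.25

# Arbitrary illustrative galaxy mass, radius and mass-scaling factor
M, r, lam = 1e41, 3.0e20, 5.0

print(a_deep(lam * M, r) / a_deep(M, r), lam ** 0.5)   # accelerations scale as m**0.5
print(v_inf(lam * M) / v_inf(M), lam ** 0.25)          # velocities scale as m**0.25
```

Both printed pairs agree, confirming the stated $m^{1/2}$ and $m^{1/4}$ scalings in this regime.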
He develops the algebra in Sections 3 and 4 of his paper. It includes the constraint that the centre of mass motion of a pair of galaxies is correctly described. This deals with the important case of perturbations between a pair of interacting galaxies. The algebra is actually formulated in a general way, so that it can in principle cope with interactions between many galaxies. For a large galaxy like the Milky Way, the effects of General Relativity are at the level of $\sim 2 \times 10^{-4}$. A following version, BIMOND, included the constraints of General Relativity, so as to be able to cope with effects on the scale of the Universe [@Bimond]. These two procedures provide a formalism which becomes valuable when models of galaxies develop further. In 2011, Scarpa et al. made observations of 6 globular clusters [@Scarpa]. Hernandez, Jiménez and Allen report a detailed study of the velocity dispersions of stars at radii of 8 globular clusters [@Hernandez]. Like Scarpa et al., they conclude that tidal effects are significant only at radii larger by factors of 2–10 than the radius where MOND flattens the curves. They also show that the velocity dispersion $\sigma$ varies with the mass $M$ of the cluster as $M^{1/4}$ within errors; this is the expected analogue of the Tully-Fisher relation arising from Jeans’ Law. This result is independent of luminosity measurements used in interpreting galactic rotation curves. In galaxies, the mass $M$ within a particular radius is not easy to determine, and is usually taken as the mass where rotation curves flatten out. Further study of globular clusters is desirable. In 2012, McGaugh [@McGaugh12] reports an updated analysis of the Baryonic Tully-Fisher relation using 41 gas-rich galaxies. My Fig. 1 shows the observed results compared with MOND. McGaugh reports $a_0 = (1.3 \pm 0.3) \times 10^{-10}$ m s$^{-2}$, where the errors cover both statistics and systematics. Angus et al. [@Angus] report a useful code for calculating QUMOND. 
It solves the Poisson equation on a 2-dimensional grid. It uses Kuzmin disks, as defined by Binney and Tremaine, with a surface density $\Sigma (R) = \frac {aM}{2\pi}(R^2 + a^2)^{-3/2}$. Milgrom tested MOND over a wide acceleration range in X-ray ellipticals [@Milgrom12]. Two galaxies have been measured over very large galactic radii ($\sim 100$ and $\sim 200$ kpc), assuming hydrostatic balance of the hot gas enshrouding them. Measured accelerations span a wide range, from $\ge 10a_0$ to $\sim 0.1a_0$. He shows two figures comparing the fit to MOND with data up to unusually high masses of $10^{12.4} M_\odot$; contributions from stars and X-ray gas are shown. He comments that, in the context of the Dark Matter paradigm, it is unexpected that the relation between baryons and dark matter is described so accurately by the same formula that accounts for disc-galaxy dynamics. Milgrom himself summarised 10 cardinal points of MOND in December 2012 [@Milgromsum].

New results
===========

In the year 2013, there have been mounting criticisms of the standard cosmological model $\Lambda \, CDM$. It predicts that galaxies the size of the Milky Way should be accompanied by $100-600$ roughly isotropic haloes of smaller satellite galaxies, formed by random fluctuations of Dark Matter [@KroupaM] [@FamaeyM]. This has been known since 1999. In an article entitled ‘Where are the missing galactic satellites?’, Klypin et al. estimated 100-300 satellites, depending on radius, see their Fig. 4 [@Klypin]. Moore et al. estimated $\sim 500$ satellites, see their Fig. 2 [@Moore]. In fact, the Milky Way has $\sim 24$ satellites and Andromeda, our nearest large galaxy, has $\sim 28$ [@Collins], where one in each case could be an interloper. A further point is that the satellites are highly correlated in both radial and momentum phase space, rather than being spherically distributed as Dark Matter predicts. 
Milgrom [@qumon] is insistent that tidal effects of large galaxies have strong effects on their satellites, and are probably a key factor in determining their phase space distributions. The best form to use for such calculations is QUMOND [@QUMOND]. Yet another result comes from Lüghausen et al., who study an unusual type of galaxy called a ‘polar ring galaxy’ [@Lughausen]. The one they study has a small bright gas-poor disc with a central bulge, but in addition an orthogonal gas-rich disc, referred to as a polar disc. There are Coriolis forces between the two discs. Observed velocities in both discs are well predicted by MOND, whereas Dark Matter predicts a roughly spherical distribution inconsistent with the data. McGaugh and Milgrom present two papers on Andromeda dwarfs. In the first paper [@dwarf1], they compare recently published velocity dispersions of stars for 17 Andromeda dwarf spheroidals with estimates of MOND predictions, based on the luminosities of these dwarfs, with reasonable stellar $M/L$ values and no Dark Matter. The two are consistent within uncertainties. It is necessary to take account of tidal effects due to the Milky Way on Andromeda dwarf galaxies. For Andromeda, only red giants can be tracked, due to the distance. They predict the velocity dispersions of another 9 dwarfs for which only photometric data were available. In the second paper [@dwarf2], they test their predictions against new data. Results give reasonable stellar mass-to-light ratios, while Newtonian dynamics give large mass discrepancies. They comment that MOND distinguishes between regimes where the internal field of the dwarf or the external field of the host dominates. The data appear to recognise this distinction, which is a unique feature of MOND, not explicable in $\Lambda \, CDM$. There is a major result from Milgrom [@Milgrom13]. He has studied weak gravitational lensing of galaxies using data of Brimioulle et al. [@Brimioulle]. 
They examined foreground galaxies illuminated by a diffuse background of distant galaxies. They remove signals from the centres of foreground galaxies, so that their edges and haloes can be studied. Their objective was to study the Dark Halos of galaxies. An elementary prediction of MOND is that the asymptotic form $\sqrt {a_0 g_N}$ of the acceleration leads to a logarithmic tail $V(r) = -\sqrt {GMa_0} \ln _e(r/r_1)$ to the Newtonian potential; here $r_1$ is the mean radius for this term. This tail lowers the zero point of the Newtonian potential energy. Milgrom transforms this equation into the variables used by Brimioulle et al. I myself have checked his algebra and arithmetic and agree. Milgrom shows that their results obey MOND predictions accurately over a range of accelerations $10^{-9}-10^{-11}$ m s$^{-2}$. Averaged over this range, results are a factor $\sim 40$ larger than predicted by conventional Dark Matter haloes surrounding galaxies. Fig. 2 shows the ratio of observed acceleration $A$ to Newtonian acceleration as a function of $x = -\log_{10} g_N$. The curve is well known experimentally over the range Milgrom uses, but is less reliable beyond this. At the peak acceleration $a_0$, the effect is larger than Dark Matter predicts by a factor $\sim 65$. The unavoidable conclusion is that the standard $\Lambda CDM$ model needs serious modification. Fig. 3 displays Milgrom’s fit to the data of Brimioulle et al. The MOND predictions are shown for baryonic mass-to-light ratios 1, 1.5, 3 and 6. The measurements are reproduced from Fig. 28 of Brimioulle et al. Triangles mark red galaxies and squares blue galaxies. There is a difference in absolute magnitudes of velocity dispersions $\sigma$, due to different luminosities of red and blue galaxies. A corollary follows from Milgrom’s fit to the data of Brimioulle et al., which agree with MOND but are far from the prediction of $\Lambda CDM$. 
Here the photons come from distant galaxies. The Baryonic Acoustic Oscillations are likewise carried by photons, which in this case originate from the Cosmic Microwave Background. Surely these must be treated in the same way. The conventional assumption made in the work of Schmittfull et al. [@Challenor] is that photons from the Cosmic Microwave Background are bent in weak gravitational lensing only by Newtonian dynamics (including small corrections for General Relativity). However, since MOND fits the data of Brimioulle [*et al.*]{} while $\Lambda CDM$ misses by a large margin, the astrophysics community should be alert to the fact that an additional energy $\sqrt {GMa_0} \ln _e(r/r_0)$ originates from integrating the acceleration $\sqrt {GMa_0}/r$; $r_0$ is the radius where the acceleration is $a_0$. This is needed over the range of accelerations where MOND explains the galactic rotation curves, Fig. 2. It is not presently included in the fit to the Baryonic Acoustic Oscillations. Some form factor will be needed at intergalactic distances, beyond the range explored by Brimioulle et al. For very large $r$, the red shift gradually suppresses the logarithmic term, because the Universe was younger when light was emitted. At small $r$, the mass $M$ in the formula falls in a way which needs to be fitted empirically. A criticism levelled at MOND is that it does not fit accurately the third peak, so treating the Baryonic Acoustic Oscillations correctly is of prime importance. Two recent papers of Sanders make valuable reading [@Sanders1] [@Sanders2]. In the second, he argues that ‘MOND, as a theory, is inherently falsifiable’ and $\Lambda CDM$ is not, because of the flexibility in the way data are fitted. MOND has the merit that it gives a specific distribution of accelerations, depending on just one parameter $a_0$, over the range where rotation curves of galaxies deviate significantly from Newton’s laws. 
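The statement that the additional energy $\sqrt{GMa_0}\ln_e(r/r_0)$ follows from integrating the acceleration $\sqrt{GMa_0}/r$ is easy to verify numerically; a minimal sketch, where the amplitude and radii are arbitrary illustrations:

```python
import math

def extra_potential(r, r0, sqrt_GMa0, n=200000):
    """Midpoint-rule integral of the deep-MOND acceleration sqrt(G*M*a0)/s
    from r0 (the radius where the acceleration equals a0) out to r."""
    h = (r - r0) / n
    return sum(sqrt_GMa0 / (r0 + (i + 0.5) * h) * h for i in range(n))

# The numerical integral reproduces sqrt(G*M*a0) * ln(r/r0); the amplitude
# below is a placeholder for sqrt(G*M*a0), purely illustrative.
amp = 1.0
print(extra_potential(10.0, 1.0, amp), amp * math.log(10.0))
```

The two printed numbers agree to within the quadrature error, confirming the logarithmic form of the extra term.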
A new approach
==============

Up to this point, the discussion has largely concerned the peripheries of galaxies and the approach to the asymptotic limit of the acceleration. In Ref. [@cjp], a completely different viewpoint is developed, from which the conclusion is that quantum mechanics plays a fundamental role in forming galaxies. For astrophysicists this is an unfamiliar idea. However, from a Particle Physics viewpoint it is simple and precise. Quantum mechanics governs Atomic and Particle Physics. It plays a fundamental role in forming Black Holes. Why not galaxies too? From a Particle Physics perspective, the natural way to express gravitation is in terms of quantised gravitons. Before plunging into this story, it is necessary to correct one unfortunate piece of wording in the article [@cjp]. It refers to the Hubble acceleration. In fact the Hubble constant has dimensions of velocity per unit distance, i.e. inverse time. This has no effect on the algebra or results. The procedure used in Ref. [@cjp] is to adopt commonly used forms of Milgrom’s $\mu$ function to determine the non-Newtonian component of the acceleration observed at the edges of galaxies. This peaks at or close to $a_0$, where it is bigger than $g_N$ by a large factor $\sim 171$. This acceleration is then integrated to determine the associated energy function. It turns out that the result fits naturally to a Fermi function with the same negative sign as gravity. It can be interpreted as an interaction between gravitons and nucleons (or electrons). Fitting functions are available from the author. This Fermi function lowers the total energy by $0.5\, GM$ at radius $r_0$ where $g_N$ reaches $a_0$; here $M$ is the mass within radius $r_0$. It represents an energy gap like those observed in doped semiconductors and superconductors. There is information over the whole range of accelerations from $\sim 10^{-7}$ m s$^{-2}$ upwards; at the lowest point the acceleration is almost purely Newtonian. 
Results of this approach are shown in Fig. 4(a). Data are shown on a log-log plot. There are three functions in common use for the $\mu$ function used for MOND. They are illustrated in Fig. 19 of the review of Famaey and McGaugh [@Famaey]. The smoothest form, given by Milgrom, is $$\mu (\chi) = \sqrt {1 + 1/(4\chi ^2)} - 1/(2\chi),$$ where $\chi = a/a_0$. From algebra given in eqns. (7)-(9) of Ref. [@cjp], the result is $$g^2_N + a_0g_N = a^2.$$ This gives the full curve of Fig. 4(a). Its curvature $d^2y/dx^2$ is a measure of the additional acceleration. It is straightforward to derive a formula for $d^2y/dx^2$ (see [@cjp]); evaluating it gives the curve shown on Fig. 4(b). It peaks at $x = 10^{-10}$ m s$^{-2}$. It can be approximated by a Gaussian which drops to half-height at $9.6\%$ of the value of $x$ at the peak. The conclusion is that galaxies have considerable stability. Note that this conclusion applies to galaxies of all sizes, in view of the scaling relation used by Milgrom. The point of interest is that in Fig. 4(a) there appears to be a cross-over between Newtonian acceleration at low $x$ and another regime at large $x$. Asymptotically, the total acceleration, taken from MOND, is $\sqrt{GMa_0}/r$. Taking this as $-d\phi /dr$, the potential $\phi$ induced by the ‘extra’ acceleration is $$\phi = -\sqrt {GMa_0}\ln (r/r_1);$$ here $r_1$ is the mean radius of this term. Because $a_0 \sim 10^{-10}$, $\phi$ is very small. However, it does explain the asymptotic straight line at the right-hand edge of Fig. 4(a). It also explains the long range tail observed by Brimioulle et al. There is an ‘extra’ energy in addition to the Newtonian energy. This is obtained by integrating the ‘extra’ acceleration numerically. From Fig. 4(a), $g_N = e^x = GM/r^2$; $e^{x/2} = \sqrt {GM}/r$, so $E_1 = -GM/r = -\sqrt {GM} e^{x/2}$. The appropriate equation is $$H_{11}\Psi _1 + V\Psi_2 = E\Psi _1;$$ $V$ describes the ‘extra’ energy and $E$ is the total energy. 
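The relation $g_N^2 + a_0 g_N = a^2$ derived above can be checked against both asymptotic regimes; a minimal numeric sketch, with $a_0$ set to an assumed typical value:

```python
import math

A0 = 1.2e-10  # m s^-2, assumed typical value of Milgrom's constant

def total_acceleration(g_newton, a0=A0):
    """Total acceleration a solving g_N**2 + a0*g_N = a**2."""
    return math.sqrt(g_newton ** 2 + a0 * g_newton)

# Newtonian limit, g_N >> a0: a -> g_N
print(total_acceleration(1e-7) / 1e-7)
# Deep-MOND limit, g_N << a0: a -> sqrt(a0*g_N)
print(total_acceleration(1e-13) / math.sqrt(A0 * 1e-13))
```

Both ratios come out very close to 1, so this single algebraic relation interpolates smoothly between the Newtonian and deep-MOND regimes.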
In a quantum mechanical situation, there is a companion equation $$H_{22}\Psi _2 + V\Psi _1 = E\Psi _2.$$ These are a coupled pair of equations, with the two solutions given by $$(H_{11} - E)(H_{22} - E) - V^2 = 0.$$ This equation was first derived in 1931 by Breit and Rabi [@Breit]. The same formalism describes mixing between the three neutrinos $\nu _e$, $\nu _\mu$ and $\nu _\tau$, and also the CKM matrix of the weak interaction and CP violation in decays of $K^0$ mesons. For galaxies, classical expectation values $\langle H_{11}\rangle$ and $\langle H_{22}\rangle$ are to be substituted into Eq. (9). The two solutions of the Breit-Rabi equation are $$E = \frac {E_1 + E_2}{2} \pm \sqrt{\left( \frac {E_1 - E_2}{2} \right) ^2 + V^2}.$$ A clearer picture of what is happening is obtained by rotating Fig. 4(a) clockwise by $35.78 ^\circ$; this is the mean of the angle of the dashed line to the $x$-axis, $45^\circ$, and the angle $\tan^{-1}(0.5)$ of the dotted line. The rotation of axes is the Bogoliubov-Valatin transformation, first introduced by Bogoliubov and Valatin in the theory of superconductivity [@Bogoliubov] [@Valatin]. The upper curve in Fig. 5 shows the result. This equation arises in quantum mechanics whenever two energy levels cross as a function of $x$. The separation between the energy levels depends on the degree of mixing governed by $V$. Fig. 6 shows the rotation of axes; it is about the point $x = -10$, $y = -10$, where the two straight lines of Fig. 4(a) cross: $$\begin{aligned} x' &=& (x + 10) \cos \beta - (y + 10) \sin \beta \\ y' &=& (x + 10) \sin \beta + (y + 10) \cos \beta.\end{aligned}$$ Substituting Eq. (10) gives an exact expression for the curve in $x', y'$ axes. The full solution of the Breit-Rabi equation is given in Section 3.2 of Ref. [@cjp]. 
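The two-level secular equation above is simple enough to verify directly; a short sketch, with arbitrary illustrative values for $E_1$, $E_2$ and $V$, confirming that both roots of the quoted quadratic satisfy it and exhibit the usual level repulsion:

```python
import math

def breit_rabi_levels(E1, E2, V):
    """The two roots of (E1 - E)(E2 - E) - V**2 = 0."""
    mean, half = 0.5 * (E1 + E2), 0.5 * (E1 - E2)
    split = math.sqrt(half ** 2 + V ** 2)
    return mean + split, mean - split

# Arbitrary illustrative values: both roots satisfy the secular equation,
# and the mixing V pushes the two levels apart (level repulsion).
E1, E2, V = -1.0, -2.5, 0.7
for E in breit_rabi_levels(E1, E2, V):
    print((E1 - E) * (E2 - E) - V ** 2)  # ~ 0 up to rounding
```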
It is given by $$\begin{aligned} E_2 &=& \sqrt {GM}\epsilon (x) \\ V &=& \sqrt {GM}W(x),\end{aligned}$$ where $W$ is given by the standard form of the Fermi function: $$W(x) \propto \left[ 1 + \exp \left( \frac {E - E_F}{\beta E_F}\right) \right]^{-1} ;$$ $E_F$ is the energy at the centre of the Fermi function and $\beta$ is a fitted constant. The value of $E_2$ describes the asymptotic variation of energy and is given by $\sqrt {a_0g_N}$. The depth of the Fermi function is $-0.5\, GM$. The magnitude of this term can be traced to the factor 2 difference between the slope of $g_N$ and that of the asymptotic form $\sqrt {a_0 g_N}$. A Bose-Einstein condensate does not fit the data, demonstrating that the condensate is not in the gravitational field itself. Let us now return to Fig. 4(c). Here there is a slight complication. Fig. 4(b) is the acceleration measured in $x,y$ axes. However, in $x',y'$ axes the curvature increases by a factor 1.48. In addition, there is a small visible displacement of the centre of curvature in Fig. 4(a) by an offset of $-0.203 \times 10^{-10}$ in $x$. What then emerges from the Breit-Rabi equation is that $g_N$ is rather small near $x = a_0$, compared with that originating from the extra acceleration $dW/dx'$. This second term dominates by a large factor $\sim 171$ at the centre of the curve. This ratio falls by $50\%$ at $x=-10.6$, to 30 at $x=-11$, then 6.0 at $x=-11.5$ and $\sim 1$ at $x=-12$. Results for the ‘extra’ acceleration are symmetric about $a_0$, except for the term $\sqrt {a_0g_N}$ of Eq. (10). The conclusion is that the curved part of Fig. 4(a) is the dominant feature near $x = a_0$, where Newtonian gravitation is a rather small perturbation. Consider the effect of this result near the centre of the Fermi function at $E = E_F$ in Fig. 4(c). 
If we retain only the dominant terms in $W$ and $dW/dx$, $$\begin{aligned} dE/dx &\to (\sqrt {GM}/{\ln _e 10})\, 2\, dW/dx \\ E &\to (\sqrt {GM}/{\ln _e 10})\, 2W.\end{aligned}$$ Apart from the factor $\sqrt {GM}/\ln _e 10$, $dE/dx$ may be interpreted as the modulus of a Breit-Wigner resonance with $x$-dependent width: $$BW = \frac {\Gamma (x)/2}{E - E_F - i\Gamma (x)/2}.$$ The energy starts at zero, because there is no difference between Newtonian energy and total energy at the top of Fig. 4(c), and its central energy is shifted downwards by $0.25\, GM$. In Ref. [@cjp], the effect of using alternative forms of Milgrom’s $\mu$ function was tested. Although acceleration curves change significantly, as shown by the dotted curve of Fig. 4(c), the Fermi function is affected only at the ends of the range $x = 8$ to $12$, by at most $\pm 4\%$. The Fermi function acts as a funnel, attracting gas and dust into the periphery of the galaxy. The shape of the Breit-Wigner can alternatively be expressed as an Airy function with a modest form factor. It comes from the coherent interactions of gravitons with nucleons over a large volume. The form factor can arise, for example, from supernovae which heat considerable volumes. Such effects resemble defects like those observed in superconductors. What about the missing lower branch of the Breit-Rabi equation? On this branch, both $W$ and $dW/dx'$ change sign. The change of sign requires that this branch describes an excited state rather than a condensate. (Remember that the energies of both gravity and $W(x)$ are negative.) Such an excited state is likely to decay on a time scale much less than that of galaxies, so this branch has not been observed. It is interesting that phenomena appear on a log-log scale. The logarithmic dependence has a natural explanation in terms of the statistical mechanics of the interaction between gravitons and nucleons. 
Schrödinger shows, in a delightfully simple approach, that quantum mechanics requires the logarithmic dependence for both Fermi-Dirac and Bose-Einstein statistics [@Schrodinger]. This is further direct evidence for quantum mechanics at work, since it is a purely quantum phenomenon. A fit using Bose-Einstein statistics fails completely for galactic data. A prediction is that in Voids there will be no Fermi function lowering the energy. It is observed that many large galaxies appear at the edge of the Local Void [@Nusser]. This can occur by attracting gas and dust out of the neighbouring Void. The converse occurs in clusters of galaxies. Each galaxy in a cluster has a Fermi function, and this results in complex interferences between individual galaxies in the cluster and a general lowering of the energy. This may account for the fact that MOND falls short by a factor of 2 in predictions of accelerations in galaxy clusters. In Ref. [@cjp], a further conjecture is made about a connection to Dark Energy. Experiment tells us that in galaxies the asymptotic form of the acceleration is $\sqrt {a_0g_N}$. This leads to the question: what governs the asymptotic acceleration? If MOND successfully models the formation of galaxies and globular clusters, it raises the question of how to interpret Dark Energy. In a de Sitter universe, the Friedmann-Lemaître-Robertson-Walker (FLRW) model smooths out structures, using a $\Lambda CDM$ function which models the gross features. My suggestion is that galaxies create fine-structure, and the FLRW model should include into Dark Energy the sum over these structures. This sum increases as galaxies grow in the recent past. It has the potential to account for the late-time acceleration of the Universe. This remains to be tested. The way in which the Bogoliubov-Valatin transformation works in nuclei has been reviewed recently in a paper by Ring [@Ring]. It is intricate, but is well described by Ring. 
It depends on the spontaneous violation of symmetries, such as rotational symmetry in deformed nuclei and gauge symmetry in superfluid systems. The phenomenon is called ‘backbending’. In nuclei, more than one basis state exists, and there are towers of resonances separated by two units of spin, made from each of these basis states. At large excitations they cross, in a similar way to the two regimes in Fig. 7(a) here. Basically what happens is that the excited states can decay via emission of photons, and this damps the excited states as a function of energy, see Fig. 2 of [@Ring]. There is then quantum mechanical mixing amongst the excited states. [99]{} B. Famaey and S.S. McGaugh, arXiv: 1112.3960. R.B. Tully and J.R. Fisher, Astron. Astrophys. [**54**]{} 661 (1977). S.M. Faber and R.E. Jackson, Astrophys. J [**204**]{} 668 (1976). M. Milgrom, Astrophys. J [**270**]{} 371 (1983). J.H. Oort (1965), [Stars and Stellar Systems]{}, Vol. [**5**]{}, [Galactic Structures]{}, ed. A. Blaauw and M. Schmidt, (University of Chicago Press, Chicago), p 485. M. Milgrom, Phys. Rev. E [**56**]{} 1148 (1997). J. Bekenstein and M. Milgrom, Astrophys. J [**286**]{} 7 (1984). M. Milgrom, Phys. Lett. A [**190**]{} 17 (1994). M. Milgrom, arXiv: astro-ph/9601080. R.H. Sanders, Astrophys. J [**480**]{} 492 (1997). K.G. Begeman, A.H. Broeils and R.H. Sanders, MNRAS [**249**]{} 523 (1991). W.J.G. de Blok and S.S. McGaugh, Astrophys. J [**499**]{} 66 (1998). S.S. McGaugh, J.M. Schombert, G.D. Bothun and W.J.G. de Blok, Astrophys. J [**533**]{} L99 (2000). M. Milgrom, MNRAS [**326**]{} 1261 (2001). M. Milgrom, New Astron. Rev. [**46**]{} 741 (2002). M. Milgrom and R.H. Sanders, Astrophys. J [**599**]{} L25 (2003). A.J. Romanowsky [*et al.*]{}, Science [**301**]{} 1696 (2003). S.S. McGaugh, Astrophys. J [**609**]{} 652 (2004). S.S. McGaugh, invited review for the 21st IAP Colloquium: Mass Profiles of Cosmological Structures, Eds. G. Mamon, F. Combes, C. Deffayet, and B. Fort. J.F. Navarro, C.S. 
Frenk and S.D.M. White, Astrophys. J [**490**]{} 493 (1997). M. Milgrom and R.H. Sanders, Astrophys. J [**658**]{} L17 (2007). A. Begum, J. Chengalur and I.D. Karachentsev, Astron. Astrophys. [**433**]{} L1 (2005). R.H. Sanders and E. Noordermeer, MNRAS [**379**]{} 702 (2007). E. Noordermeer and J.M. van der Hulst, arXiv: astro-ph/0611494. R.H. Sanders and D.D. Land, MNRAS [**389**]{} 701 (2008). A.S. Bolton [*et al.*]{}, Astrophys. J [**665**]{} L105 (2007). M. Milgrom, arXiv: 0801.3133. D.V. Stark, S.S. McGaugh and R.A. Swaters, Astron. J [**138**]{} 392 (2009). M. Milgrom, MNRAS [**405**]{} 1129 (2010). M. Milgrom, Phys. Rev. D [**82**]{} 043523 (2010). R. Scarpa [*et al.*]{}, Astron. Astrophys. [**525**]{} A148 (2011). X. Hernandez and M.A. Jiménez, Astrophys. J [**750**]{} 9 (2012). X. Hernandez, M.A. Jiménez and C. Allen, MNRAS [**428**]{} 3196 (2013). S.S. McGaugh, Astron. J [**143**]{} 40 (2012). G.W. Angus [*et al.*]{}, MNRAS [**421**]{} 2598 (2012). M. Milgrom, Phys. Rev. Lett. [**109**]{} 131101 (2012). M. Milgrom, MNRAS [**437**]{} 2531 (2013). P. Kroupa, M. Pawlowski and M. Milgrom, arXiv: 1301.3907. B. Famaey and S.S. McGaugh, arXiv: 1301.0623. A.A. Klypin, A.V. Kravtsov, O. Valenzuela and F. Prada, Astrophys. J [**522**]{} 82 (1999). B. Moore [*et al.*]{}, Astrophys. J [**524**]{} L19 (1999). M.L.M. Collins [*et al.*]{}, Astrophys. J [**768**]{} 172 (2013). M. Milgrom, MNRAS [**403**]{} 886 (2010). F. Lüghausen [*et al.*]{}, arXiv: 1304.4931. S. McGaugh and M. Milgrom, Astrophys. J [**766**]{} 22 (2013). S. McGaugh and M. Milgrom, Astrophys. J [**775**]{} 139 (2013). M. Milgrom, Phys. Rev. Lett. [**111**]{} 041105 (2013). F. Brimioulle, S. Seitz, M. Lerchster, R. Bender and J. Snigula, MNRAS [**432**]{} 1046 (2013). M.M. Schmittfull, A. Challinor, D. Hanson and A. Lewis, Phys. Rev. D [**88**]{} 063012 (2013). R.H. Sanders, arXiv: 1310.6148. R.H. Sanders, arXiv: 1311.1744. D.V. 
Bugg, Canadian Journal of Physics, CJP-2013-0163 (2013). G. Breit and I.I. Rabi, Phys. Rev. [**38**]{} 2082 (1931). N.N. Bogolubov, J. Exptl. Theor. Phys. (U.S.S.R) [**34**]{} 50,73 (1958); translation: Soviet Phys. JETP [**34**]{} 41, 51. J.G. Valatin, Nu. Cim. [**7**]{} 843 (1958). E. Schr" odinger, [*Statistical Thermodynamics*]{}, Cambridge University Press, $2^{nd}$ Edition (1952). P.J.E. Peebles and A. Nusser, Nature [**465**]{} 565 (2010). P. Ring, arXiv: 1204.2681.
--- abstract: | We consider the class of non-Hermitian operators represented by infinite tridiagonal matrices, selfadjoint in an indefinite inner product space with one negative square. We approximate them with their finite truncations. Both infinite and truncated matrices have eigenvalues of nonpositive type: either a single one on the real axis or a couple of complex conjugate ones. As a tool to evaluate the reliability of the use of truncations in numerical simulations, we give bounds for the rate of convergence of their eigenvalues of nonpositive type. Numerical examples illustrate our results. MSC 2010 numbers: 47B36, 47B50 author: - 'Maxim Derevyagin${}^{*}$, Luca Perotti${}^{\dag}$, Michał Wojtylak${}^{\$}$' title: 'Truncations of a class of pseudo-Hermitian operators' --- Introduction ============ The Hamiltonian $H$ of a physical system represents its energy, which is a real observable. It is therefore required that the expectation values of the quantum operator $H$ be real [@exep]. This can be guaranteed by imposing that $H$ be Hermitian, $H=H^\dagger$, as it is known that the spectrum of a Hermitian operator is real and its eigenvectors form a complete orthogonal set [@dir]. It is on the other hand known that Hermiticity is not a necessary condition for a real spectrum [@bend]: a large number of one-dimensional non-Hermitian potentials, both real and complex, invariant under the simultaneous actions of the parity $P$ (space reflection) and time reflection $T$ operators [@note] have been found to admit energies that are real and discrete. 
The matter is not an idle one, as non-Hermitian PT-invariant operators find applications in many areas of theoretical physics: “optical” or “average” potentials in nuclear physics [@boh], quantum field theories [@Itz], scattering problems [@Fesh], localization-delocalization transitions in superconductors [@Fein], diffraction of atoms by standing light waves [@Berry], as well as the study of solitons on a complex Toda lattice [@toda]. Unfortunately, PT-invariance is neither necessary nor sufficient to ensure the reality of the spectrum; however, it has been conjectured [@bend] that PT-invariant Hamiltonians possess real discrete eigenvalues if the PT symmetry is unbroken, i.e., if the energy eigenstates are also eigenstates of the operator PT. When the PT-symmetry is broken and the Hamiltonian is real, the energy eigenvalues instead occur in complex conjugate pairs. However, no general condition has been found for the breakdown of the PT-symmetry. In this context, it has been pointed out that a necessary, but not sufficient, condition for the spectrum to be real and discrete is the $\eta$-pseudo-Hermiticity, $\eta H \eta^{-1} = H^\dagger$, of the Hamiltonian, where $\eta$ is a Hermitian linear automorphism [@Nevanlinna]. The property is also known as selfadjointness in an indefinite inner product space, see [@bognar; @langerio; @GLR]. The eigenvectors of $H$ are in this case $\eta$-orthogonal, i.e. they are orthogonal with respect to the $\eta$-distorted inner product $<\psi|\eta \psi>$.
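These notions can be made concrete on a $2\times 2$ toy model. The following Python sketch (the entries are hypothetical; it is only meant as an illustration) takes $\eta=\mathrm{diag}(-1,1)$, for which any real matrix of the form $\begin{pmatrix}a & -b\\ b & c\end{pmatrix}$ is $\eta$-pseudo-Hermitian; its eigenvalues are either both real (one eigenvector with negative $\eta$-norm) or a complex conjugate pair ($\eta$-neutral eigenvectors):

```python
import numpy as np

eta = np.diag([-1.0, 1.0])

def is_pseudo_hermitian(H):
    # eta H eta^{-1} == H^dagger; here eta is its own inverse
    return np.allclose(eta @ H @ eta, H.conj().T)

# hypothetical entries; eigenvalues are (a+c)/2 +/- sqrt(((a-c)/2)^2 - b^2)
H_real = np.array([[1.0, -0.5],
                   [0.5,  3.0]])   # (a-c)^2 > 4b^2: two real eigenvalues
H_pair = np.array([[1.0, -2.0],
                   [2.0,  1.0]])   # (a-c)^2 < 4b^2: conjugate pair 1 +/- 2i

for H in (H_real, H_pair):
    assert is_pseudo_hermitian(H)

w, V = np.linalg.eig(H_pair)
f = V[:, np.argmax(w.imag)]        # eigenvector of the eigenvalue in C^+
# the eigenvector is eta-neutral: <f|eta f> = 0
print(abs(np.vdot(f, eta @ f)))
```

Note that `np.vdot` conjugates its first argument, matching the convention (linear in the second variable) used throughout the paper.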
Several PT-symmetric potentials have been found to be P-pseudo-Hermitian [@Most], and classes of non-Hermitian Hamiltonians (both PT-symmetric and non-PT-symmetric) appear to be pseudo-Hermitian under $\eta = e^{-\theta p}$, where $\theta \in {\mathbb{R}}$ and $p = -i \frac{d}{dx}$, ($\hbar = 1$) is the momentum operator (the transformation generated by $\eta$ is an imaginary shift: $\eta x \eta^{-1} = x+i\theta$, $\eta p \eta^{-1} = p$) [@Ahmed], or under $\eta = e^{-\varphi(x)}$, where $\varphi(x)$ is a $C^1$ function of $x$ (the transformation is a complex gauge-like one) [@Ahmed2]. It is thus still a case-by-case procedure to check whether the eigenvalues of an operator are all real. This does not usually cause big practical problems when dealing with a single operator. The situation changes when we have to consider classes of operators. Procedures have been developed for families of operators acting on spaces with finite bases: see, e.g., Ref. [@Weig], whose author considers a one-parameter family of PT-symmetric matrices $M(\varepsilon)$, with a perturbation parameter $\varepsilon \in {\mathbb{R}}$ which destroys Hermiticity while respecting PT-invariance. Here we consider the case when, for numerical simulations, an operator $H$ acting on a space with an infinite basis needs to be truncated, and we study the rate at which those eigenvalues that for truncated matrices may happen to be non-real converge to their asymptotic value. The operator $H=H_{[0,\infty)}$ is given by a non-symmetric Jacobi matrix $$\label{H0infty} H_{[0,\infty)}= \begin{pmatrix} a_0 & -b_0 & & \\ b_0 & a_1 & b_1 & \\ & b_1 & a_2 & b_2 \\ & & b_2 & a_3 & \ddots\\ & & & \ddots & \ddots \\ \end{pmatrix},$$ with bounded, real sequences $(a_j)_{j=0}^\infty$, $(b_j)_{j=0}^\infty$, the sequence $(b_j)_{j=0}^\infty$ being additionally strictly positive.
Its finite truncations are of the form $$\label{H0n} H_{[0,n]}= \begin{pmatrix} a_0 & -b_0 & & \\ b_0 & a_1 & b_1 & \\ & b_1 & a_2 & \ddots\\ & & \ddots & \ddots & b_{n-1}\\ & & & b_{n-1} & a_n \\ \end{pmatrix},$$ and $$\eta=\diag(-1,1,1,\dots).$$ Due to the fundamental theorem of Pontryagin [@pontriagin], each of the operators $H_{[0,n]}$ has, generically, either a unique single eigenvalue $\lambda_n$ on the real axis with the eigenvector $f_n$ satisfying ${\left<f_n|\eta f_n\right>}\leq 0$, or a single couple of complex conjugate eigenvalues $\lambda_n\in{\mathbb{C}}^+$, $\bar\lambda_n\in{\mathbb{C}}^-$ (to avoid confusion with the conventions used in some of the papers we quote, we note that here and in the following ${\left<x|y\right>}$ always denotes the usual inner product, either in ${\mathbb{C}}^n$ or in $\ell^2$, [*linear with respect to the second variable*]{}). The remaining part of the spectrum of $H_{[0,n]}$ is real. The same is true for the spectrum of the infinite matrix $H_{[0,\infty)}$ with the eigenvalue $\lambda_\infty$, see Section \[Prel\] for details. The character of the convergence $\lambda_n\to\lambda_\infty$ is the main topic of our paper. Our approach makes use of analytic representations of the function $$\label{mform} m_{[0,\infty)}(z)=-{\left<e_0|(H_{[0,\infty)}-z)^{-1}e_0\right>},$$ which contains the full information about the spectrum of $H_{[0,\infty)}$, and of its $[n/n+1]$ Padé approximants $$\label{mnform} m_{[0,n]}(z)=-{\left<e_0|(H_{[0,n]}-z)^{-1}e_0\right>}.$$ In particular, $\lambda_n$ ($\lambda_\infty$) is a pole of $m_{[0,n]}(z)$ ($m_{[0,\infty)}$, respectively) and it can be characterized in analytic terms. Due to the locally uniform convergence of $m_{[0,n]}$ to $m_{[0,\infty)}$ [@DD07], the sequence $(\lambda_n)_{n=0}^\infty$ converges to $\lambda_\infty$ (Corollary \[lconvergence\]). Our main interest is the rate of this convergence.
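The structure just described is easy to probe numerically. The following Python sketch (the paper's own computations use Matlab; the coefficient sequences below are hypothetical) assembles a truncation $H_{[0,n]}$, verifies its $\eta_n$-pseudo-Hermiticity with $\eta_n=\diag(-1,1,\dots,1)$, and checks that at most one pair of non-real (conjugate) eigenvalues occurs, as dictated by the single negative square:

```python
import numpy as np

def H0n(a, b):
    """Truncation H_[0,n]: diagonal a_0..a_n, off-diagonals b_0..b_{n-1},
    with the single sign flip -b_0 in the (0,1) entry."""
    H = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    H[0, 1] = -b[0]
    return H

# hypothetical bounded sequences a_j real, b_j > 0
rng = np.random.default_rng(0)
n = 12
a = rng.uniform(-1.0, 1.0, n + 1)
b = rng.uniform(0.5, 1.5, n)

H = H0n(a, b)
eta = np.diag([-1.0] + [1.0] * n)

# eta-pseudo-Hermiticity: eta H eta^{-1} = H^T (= H^dagger, real entries)
assert np.allclose(eta @ H @ eta, H.T)

# at most one eigenvalue in the open upper half-plane
w = np.linalg.eigvals(H)
print(int(np.sum(w.imag > 1e-6)))   # 0 or 1 by the Pontryagin theorem
```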
In particular, we show its dependence on the position of the eigenvalue $\lambda_\infty$ in ${\mathbb{C}}^+\cup{\mathbb{R}}$. Our paper is organized as follows: - We give various analytic representations of the function $m_{[0,\infty)}$, choosing in particular as our starting point $$\frac{-1}{m_{[0,\infty)}(z)}={a_0-z+b_0^2 \int_{t_3}^{t_4} \frac{d\mu(t)}{t-z}},$$ where $\mu$ is some probability measure, see Theorem \[rep\]. - In the case $\lambda_\infty\notin[t_3,t_4]$ we show that the convergence rate of $\lambda_n$ to $\lambda_\infty$ is exponential, with the base of the exponent increasing with the distance of $\lambda_\infty$ from $[t_3,t_4]$: see Theorem \[main-geom\] below. - However, the $\lambda_n$’s tend to arrange themselves in branches spiraling into $\lambda_\infty$, and some of these branches can get trapped in the real axis for a number of iterations $n_0$ which can be relatively large when $\lambda_\infty$ is close to $[t_3,t_4]$. We show examples with different numbers of branches and compute an estimate for $n_0$ in Theorem \[N\]. - In the case when $\lambda_\infty\in[t_3,t_4]$ we build an example showing that the convergence rate is in general worse than exponential. - In the concluding remarks we review the possible cases from the numerical point of view. Holomorphic representations of the $m$-function {#Prel} =============================================== We start by reviewing the spectral properties of the matrices $H_{[0,n]}$ and $H_{[0,\infty)}$. The matrix $H_{[0,n]}$ is selfadjoint in the indefinite inner-product space with the fundamental symmetry given by $\eta_n=[-1]\oplus I_n$ ($\eta_n$-pseudo-Hermitian). Consequently, one of the following four possibilities applies: - $H_{[0,n]}$ is similar to a diagonal matrix with real entries, except two complex conjugate entries $\lambda_n\in{\mathbb{C}}^+$, $\bar\lambda_n\in{\mathbb{C}}^-$.
The eigenvectors $f_n, g_n$ corresponding to the eigenvalues $\lambda_n,\bar\lambda_n$ of $H_{[0,n]}$ satisfy ${\left<f_n|\eta_n f_n\right>}={\left<g_n|\eta_n g_n\right>}=0$, ${\left<f_n|\eta_n g_n\right>}\neq0$. - $H_{[0,n]}$ is similar to a diagonal matrix with real entries and there is precisely one eigenvalue $\lambda_n$ with the corresponding eigenvector $f_n$ satisfying ${\left<f_n|\eta_n f_n\right>}<0$. - $H_{[0,n]}$ is similar to a block-diagonal matrix with all the blocks real and one-dimensional, except one block of the form $$\begin{pmatrix}\lambda_n & 1\\ 0 &\lambda_n\end{pmatrix}\text{ with }\lambda_n\in{\mathbb{R}}.$$ The eigenvector $f_n$ corresponding to the eigenvalue $\lambda_n$ of $H_{[0,n]}$ satisfies ${\left<f_n|\eta_n f_n\right>}=0$. - $H_{[0,n]}$ is similar to a block-diagonal matrix with all the blocks real and one-dimensional, except one block of the form $$\begin{pmatrix}\lambda_n & 1 & 0\\ 0 &\lambda_n & 1\\ 0 & 0 & \lambda_n\end{pmatrix}\text{ with }\lambda_n\in{\mathbb{R}}.$$ The eigenvector $f_n$ corresponding to the eigenvalue $\lambda_n$ of $H_{[0,n]}$ satisfies ${\left<f_n|\eta_n f_n\right>}=0$. The cases (iii) and (iv) are non-generic, i.e. the set of all matrices $H_{[0,n]}$ for which one of them applies has measure zero. We refer the reader to [@GLR] for the full canonical form of matrices selfadjoint in indefinite inner-product spaces, which gives also a full description of the eigenvectors. We observe that the matrix $H_{[0,n]}$ may jump back and forth with $n$ among the four types above. The spectral properties of the infinite matrix $H_{[0,\infty)}$, understood as an operator on $\ell^2$, are more tricky: we refer the reader to [@JL85; @JLT] for a full description and for canonical models. 
Here we note only that again there are essentially two possibilities: - $H_{[0,\infty)}$ is similar to an orthogonal sum of a bounded selfadjoint operator in a Hilbert space and a diagonal matrix with two complex conjugate entries $\lambda_\infty\in{\mathbb{C}}^+$, $\bar\lambda_\infty\in{\mathbb{C}}^-$. The eigenvectors $f_\infty, g_\infty$ corresponding to the eigenvalues $\lambda_\infty,\bar\lambda_\infty$ of $H_{[0,\infty)}$ satisfy ${\left<f_\infty|\eta f_\infty\right>}={\left<g_\infty|\eta g_\infty\right>}=0$, ${\left<f_\infty|\eta g_\infty\right>}\neq0$. - The spectrum of $H_{[0,\infty)}$ is real and $H_{[0,\infty)}$ has a (unique) real eigenvalue $\lambda_\infty$ with the corresponding eigenvector $f_\infty$ satisfying ${\left<f_\infty|\eta f_\infty\right>}\leq0$. In the (ii’) case the Jordan chain corresponding to $\lambda_\infty$ is again of length not greater than three. Now we specify the theory developed in [@De03; @DD04; @DD07] to the case we are dealing with in the present work. Besides the matrices $H_{[0,\infty)}$ and $H_{[0,n]}$ defined above, we shall use the following truncations of the matrix $H_{[0,\infty)}$ $$\label{H1n} H_{[1,n]}= \begin{pmatrix} a_1 & b_1 & \\ b_1 & a_2 & \ddots \\ & \ddots & \ddots & b_{n-1}\\ & & b_{n-1} & a_n \\ \end{pmatrix},\qquad n=1,2,\dots.$$ Furthermore, $H_{[1,\infty)}$ will stand for the infinite, symmetric Jacobi matrix with $(a_j)_{j=1}^\infty$ on the main and $(b_j)_{j=1}^\infty$ on the second diagonals. In analogy with $m_{[0,\infty)}$ and $m_{[0,n]}$ we define the functions $$\label{m1} m_{[1,n]}(z)={\left< e_1|(H_{[1,n]}-z)^{-1}e_1\right>},\quad m_{[1,\infty)}(z)={\left<e_1 |(H_{[1,\infty)}-z)^{-1}e_1\right>}.$$ Here $e_j$ stands for the $j$–th vector of the canonical basis of $\ell^2$. We call these functions, together with $m_{[0,\infty)}$ and $m_{[0,n]}$, the *$m$–functions* of the corresponding Jacobi matrices. We refer the reader to [@GS] for a treatment of $m$–functions of symmetric Jacobi matrices.
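Since the $m$-functions are just resolvent matrix elements, they are straightforward to evaluate numerically. The Python sketch below (with hypothetical coefficients) computes $m_{[0,n]}$ and $m_{[1,n]}$ for a small truncation and checks the Schur-complement relation $m_{[0,n]}(z)=1/(z-a_0-b_0^2\, m_{[1,n]}(z))$, which is quoted next; note the minus sign in $m_{[0,n]}$ but not in $m_{[1,n]}$:

```python
import numpy as np

# hypothetical coefficients a_0..a_4 (real), b_0..b_3 (positive)
a = np.array([0.3, -0.2, 0.1, 0.4, 0.0])
b = np.array([0.9, 1.1, 0.8, 1.2])

H0 = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
H0[0, 1] = -b[0]          # the sign flip making H0 eta-pseudo-Hermitian
H1 = H0[1:, 1:]           # the symmetric truncation H_[1,n]

def resolvent00(H, z):
    """<e|(H - z)^{-1} e> for the first canonical basis vector e."""
    n = H.shape[0]
    e = np.zeros(n)
    e[0] = 1.0
    return e @ np.linalg.solve(H - z * np.eye(n), e)

z = 0.7 + 0.5j
m0 = -resolvent00(H0, z)   # m_[0,n](z), with the minus sign
m1 = resolvent00(H1, z)    # m_[1,n](z), no minus sign

# Schur-complement identity relating the two m-functions
assert np.isclose(m0, 1.0 / (z - a[0] - b[0]**2 * m1))
```

The identity holds exactly (up to rounding), since it is nothing but the Schur complement of the $(0,0)$ block of $H_{[0,n]}-z$.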
The functions $m_{[1,n]}$ ($n\in\mathbb Z_+$) and $m_{[1,\infty)}$ are analytic in the open upper half-plane ${\mathbb{C}}^+$. The function $m_{[0,\infty)}$ ($m_{[0,n]}$) is analytic in the upper half-plane, except at $\lambda_\infty$ ($\lambda_n$, respectively). Moreover, the Schur complement argument provides the following crucial relations [@DD04; @GS] $$\label{m01} m_{[0,n]}(z)=\frac1{z-a_0-b_0^2\ m_{[1,n]}(z)},\quad z\in\dC^+\setminus\{\lambda_n\},\ n\in{\mathbb{Z}_+},$$ $$\label{m01infty} m_{[0,\infty)}(z)=\frac1{z-a_0-b_0^2\ m_{[1,\infty)}(z)},\quad z\in\dC^+\setminus\{\lambda_\infty\}.$$ Let us now recall the definition of the class $\mathcal{N}_1$. By $\mathcal{N}_1$ we denote the set of generalized Nevanlinna functions with one negative square, that is, the functions of one of the three forms $$\label{GenDes1} \frac{(z-\alpha)(z-\overline{\alpha})}{(z-\beta)(z-\overline{\beta})}\varphi(z),$$ $$\label{GenDes2} \frac1{(z-\beta)(z-\overline{\beta})}\varphi(z),$$ $$\label{GenDes3} {(z-\alpha)(z-\overline{\alpha})}\varphi(z),$$ where $\alpha$, $\beta$ are complex numbers and $\varphi$ is a Nevanlinna function, i.e. $\varphi$ is holomorphic in $\dC_+$ and maps $\dC_+$ into $\dC^+\cup{\mathbb{R}}$. We refer the reader to [@DHS1; @DLLSh] for equivalent definitions. Let us now formulate the theorem which fixes the subclass of $\mathcal{N}_1$ functions to be investigated in the present work: \[rep\] Let $m$ be a meromorphic function in the open upper half plane. The following conditions are equivalent.
- There exist $\lambda_\infty\in{\mathbb{C}}^+\cup{\mathbb{R}}$, $d\in{\mathbb{R}}$ and a nontrivial Borel measure $\sigma$ having all moments finite and supported on an interval $[t_1,t_2]$ such that $$\label{n1model1} m(z)= \frac{1}{(z-\lambda_\infty)(z-\overline{\lambda}_\infty)}\left(z+d+\int_{t_1}^{t_2}\frac{d\sigma(t)}{t-z}\right),$$ - There exist $a_0\in{\mathbb{R}}$, $b_0>0$ and a nontrivial Borel probability measure $\mu$ having all moments finite and supported on an interval $[t_3,t_4]$ such that $$\label{n1model2} \frac{-1}{m(z)}={a_0-z+b_0^2 \int_{t_3}^{t_4} \frac{d\mu(t)}{t-z}}$$ - There exists a matrix $H_{[0,\infty)}$ as above, with bounded entries $a_j\in{\mathbb{R}}$, $b_j>0$, $j\in\mathbb{Z}_+$, such that $$\label{n1model4} m(z)=m_{[0,\infty)}(z):=-{\left<e_0|(H_{[0,\infty)}-z)^{-1}e_0\right>}.$$ Furthermore, the parameters $\lambda_\infty$ and $d$ and the measure $\sigma$ in (i), $a_0, b_0$ and $\mu$ in (ii), and $a_j, b_j$, $j\in\mathbb{Z}_+$ in (iii) are uniquely determined; the numbers $a_0$ and $b_0$ in statements (ii) and (iii) coincide and $\lambda_\infty$ from statement (i) is the (unique) eigenvalue of nonpositive type of the operator $H_{[0,\infty)}$ from statement (iii). The equivalence (ii)$\Leftrightarrow$(iii) is a consequence of the relation between $m_{[0,\infty)}$ and $m_{[1,\infty)}$ and of the classical theory which establishes a correspondence between the functions $m_{[1,\infty)}$ and the Jacobi matrices $H_{[1,\infty)}$, see e.g. [@Ach61; @GS]. (iii)${\Rightarrow}$(i) Let $m=m_{[0,\infty)}$. From the construction in [@JL85] it follows that $m$ belongs to the class $\mathcal{N}_1$ and hence it has one of the three forms above. Furthermore, expanding the resolvent into a geometric series at infinity, one sees that $m$ necessarily possesses an asymptotic expansion at infinity $$\label{asymp} m(z)=-{\left<e_0|(H_{[0,\infty)}-z)^{-1}e_0\right>}= \frac{1}{z}-\frac{s_{1}}{z^2} -\dots-\frac{s_{2n}}{z^{2n+1}}-\cdots,$$ with $s_j\in{\mathbb{R}}$ ($j=1,2,\dots$) (see [@DD04; @DD07]).
Comparing the three forms above with the asymptotic expansion, one gets by the Hamburger–Nevanlinna theorem [@Ach61] that the function $\varphi$ can be represented in the form $$\label{phi} \varphi(z)=z+d+\int_{t_1}^{t_2}\frac{d\sigma(t)}{t-z},$$ where $d\in\dR$, and $\sigma$ is a measure with all moments finite. Furthermore, comparing the expansions one can see that the second of the three forms applies. In consequence, $$\label{n1model} m(z)= \frac{1}{(z-\lambda_\infty)(z-\overline{\lambda}_\infty)}\left(z+d+\int_{t_1}^{t_2}\frac{d\sigma(t)}{t-z}\right).$$ Observe that the measure $\sigma$ cannot be a finitely supported (trivial) measure, since $b_j>0$ for all $j\in{\mathbb{Z}_+}$ and in consequence neither $m_{[1,\infty)}$ nor $m_{[0,\infty)}$ is a rational function. The uniqueness of the parameters $\lambda_\infty$ and $d$ and of the measure $\sigma$ follows from the theory of $\mathcal{N}_1$ functions, see e.g. [@DHS1]. The fact that $\lambda_\infty$ is the unique eigenvalue of nonpositive type of $H_{[0,\infty)}$ follows e.g. from Ref. [@DD07] or [@JL85]. (i)${\Rightarrow}$(ii) Using the algorithm proposed in [@De03] (see also [@ADL07]), one can find that the function $m$ defined in (i) can be uniquely represented as $$m(z)=\frac1{z-a_0-b_0^2\ m_1(z)},$$ where $ m_1(z)=\int_{t_3}^{t_4} \frac{d\mu(t)}{t-z}, $ is a Nevanlinna function with finite moments. \[gaps\] Already at this point we can say something about the influence of the measure $\mu$ (the spectrum of the matrix $H_{[1,\infty)}$) on the position of $\lambda_\infty$. 1\) Conditions (i) and (ii) in Ref. [@JL85] tell us that $\lambda_\infty\in[t_3,t_4]$ if and only if $$\label{JL85} \int_{t_3}^{t_4}|t-\lambda_\infty|^{-2} d\mu(t)\leq b_0^{-2}, \qquad a_0-{\lambda_\infty} +b_0^2\int_{t_3}^{t_4} (t-{\lambda_\infty})^{-1}d\mu(t)=0.$$ (In particular, since $b_0$ is strictly positive, for the first of these conditions to be true, $|t-\lambda_\infty|^{-2}$ needs to be a $\mu$-integrable function; the second condition is just the equation $-1/m(z)=0$ specialized
to the case $z=\lambda_{\infty}$, and we introduce it here to fully characterize $\lambda_\infty$ itself). It follows that if the measure $\mu$ is sufficiently dense, $\lambda_\infty$ cannot be on $[t_3,t_4]$, i.e. $\mu$ “repels” the point $\lambda_\infty$. This is the case for Examples \[jumping\] and \[jumping2\] below. 2\) If instead $\mu$ has gaps, these gaps tend to trap $\lambda_\infty$. To see this, let us assume that the spectrum of $H_{[1,\infty)}$ has a gap $(t_5,t_6)\subset [t_3,t_4]$ and that $a_0\in(t_5,t_6)$. From the definition of $H_{[0,\infty)}$ it is clear that for $b_0=0$ the point $a_0$ is an eigenvalue of $H_{[0,\infty)}$ with the corresponding eigenvector $f$ satisfying ${\left<f|\eta f\right>}\leq0$, i.e. $a_0=\lambda_\infty$. We now increase $b_0$ from zero; applying Rouché’s theorem to $-1/m_{[0,\infty)}$ and remembering that if $\lambda_\infty\notin{\mathbb{R}}$ then $\bar\lambda_\infty$ is also an eigenvalue, we see that $\lambda_\infty$ moves along the real axis until it meets another part of the spectrum of $H_{[0,\infty)}$ (that is, either an eigenvalue or a part of the continuous spectrum; see Refs. [@SWW11; @SWW14] for a detailed analysis of a similar problem). This means that if $\lambda_\infty \in (t_5,t_6)$, then for a small change of the parameters $a_0,b_0$ the eigenvalue $\lambda_\infty$ stays in the gap. We shall see one such case in Example \[jumping3\]. As already mentioned, $m_{[0,n]}$ is the $[n/n+1]$ Padé approximant of $m_{[0,\infty)}$ and it is an $\mathcal{N}_1$ function for $n\geq 1$. Consequently, it can be represented in one of the three forms above.
As we have just done in Theorem \[rep\] for $m_{[0,\infty)}$, one can specify this representation: \[rep2\] Each function $m_{[0,n]}$ $(n=1,2,\dots)$ admits a representation $$\label{n1modeln} m_{[0,n]}(z)= \frac{1}{(z-\lambda_n)(z-\overline{\lambda}_n)}\left(z+d_n+\int\frac{d\mu_n(t)}{t-z}\right),$$ with $\lambda_n\in{\mathbb{C}}^+$, $d_n\in{\mathbb{R}}$ and $\mu_n$ a finitely supported measure. Furthermore, the parameters $\lambda_n$ and $d_n$ and the measure $\mu_n$ are uniquely determined, and $\lambda_n$ is the unique eigenvalue of nonpositive type of $H_{[0,n]}$. The details of the proof of uniqueness can be found e.g. in [@DLLSh; @L; @DHS1]. The uniqueness in both Theorem \[rep\] and Proposition \[rep2\] guarantees that $\lambda_n$ ($n=1,2,\dots$) and $\lambda_\infty$ are properly defined. In the literature they are called the generalized poles of nonpositive type of the corresponding $\mathcal{N}_1$ function, see [@L]. Using the classical result saying that $m_{[1,n]}$ converges to $m_{[1,\infty)}$, see [@GS; @Ach61; @Simon98], one can prove, via the relations above, the following convergence result, cf. [@DD07]: \[Conv\] The functions $m_{[0,n]}$ $(n\in{\mathbb{Z}_+})$ converge to $m_{[0,\infty)}$ as $n\to\infty$, locally uniformly on $\dC_+\setminus([t_1,t_2]\cup{\left\{\lambda_\infty\right\}})$. Further generalizations to different types of $\eta$-selfadjoint Jacobi matrices can be found in [@DD07]. As a consequence we have the following corollary (cf. [@LaLuMa]): \[lconvergence\] The pole $\lambda_n$ of $m_{[0,n]}$ converges to the pole $\lambda_\infty$ of $m_{[0,\infty)}$ as $n\to\infty$. If $\lambda_\infty\in\dC_+$ or is an isolated eigenvalue, then this statement is a simple consequence of Proposition \[Conv\] and the Rouché theorem. If instead $\lambda_\infty$ is real and is not an isolated eigenvalue, then from Proposition \[Conv\] and the Rouché theorem we see that all the accumulation points of $\lambda_n$ lie in the spectrum of $H_{[0,\infty)}$, which is a compact set.
Since both the functions $m_{[0,n]}$ and $m_{[0,\infty)}$ belong to $\mathcal{N}_1$ and are of the second of the three forms above, we have $$m_{[0,n]}(z)=\frac1{(z-\lambda_n)(z-\overline{\lambda}_n)}\varphi_n(z),\quad m_{[0,\infty)}(z)=\frac1{(z-\lambda_{\infty})(z-\overline{\lambda}_{\infty})}\varphi(z),$$ where $\varphi_n$ and $\varphi$ are Nevanlinna functions. Suppose now that there is a subsequence such that $\lambda_{n_k}\to\lambda_0\ne\lambda_\infty$. As a consequence, $\varphi_{n_k}$ should also converge to a Nevanlinna function $\varphi_0 \neq \varphi$, which contradicts the uniqueness of $\varphi$. Convergence rates ================= Now we are in a position to ask the principal question of this paper: - *What is the character of the convergence $\lambda_n\to\lambda_\infty$?* We mainly consider the situation when $\lambda_\infty$ is a simple eigenvalue located outside the support of the measure $\mu$: we show a theoretical bound on the convergence rate and test it on examples. Theoretical results ------------------- In this section we consider the situation when $\lambda_\infty$ is a simple pole of $m$. Note that if $\lambda_\infty\in{\mathbb{C}}^+$, then it is necessarily a simple pole, due to Theorem \[rep\] (i); moreover, there exists $n_0\in{\mathbb{N}}$ such that $\lambda_n\in{\mathbb{C}}^+$ for $n>n_0$ (see Theorem \[N\] below). If instead $\lambda_\infty$ is a simple real pole, we show that $\lambda_n$ is real for sufficiently large $n$. We begin with a technical result, needed to prove our main theorem. \[3.2\] Let $m=m_{[0,\infty)}$ satisfy the (equivalent) conditions (i), (ii), (iii) of Theorem \[rep\].
If $\lambda_\infty\in{\mathbb{C}}\setminus[t_3,t_4]$ is a simple pole of $m$, then $${\lambda_\infty-\lambda_n} =\frac{b_0^2\left(m_{[1,\infty)}(\lambda_\infty)-m_{[1,n]}(\lambda_\infty)\right)}{1-b_0^2 m_{[1,\infty)}'(\lambda_\infty)}+\alpha_n,$$ where $\alpha_n$ is such that $$\frac{\alpha_n}{\sup_{x\in X} |m_{[1,n]}(x) - m_{[1,\infty)}(x) |}\to 0, \qquad n\to \infty$$ for any disc $X{\subseteq}{\mathbb{C}}\setminus[t_3,t_4]$ containing $\lambda_\infty$. If, additionally, $\lambda_\infty\in{\mathbb{R}}$, then $\lambda_n\in{\mathbb{R}}$ for sufficiently large $n$. Let $X$ be an open disc, such that $\lambda_\infty\in X{\subseteq}{\mathbb{C}}\setminus[t_3,t_4]$. Note that for sufficiently large $n$ the functions $$\label{mn} m_n(z):=\frac1{m_{[0,n]}(z)}=z-a_0-b_0^2 m_{[1,n]}(z),$$ as well as $$\label{minfty} m_\infty(z)=\frac1{m_{[0,\infty)}(z)}=z-a_0-b_0^2 m_{[1,\infty)}(z)$$ belong to $\mathcal{C}(X)$, the complex Banach space of continuous functions on $X$ with the supremum norm. Indeed, for sufficiently large $n$ the function $m_{[1,n]}(z)$ has no poles in $X\cap{\mathbb{R}}$. Also observe that $m_n$ converges to $m_{\infty}$ in $\mathcal{C}(X)$, since $m_{[1,n]}(z)$ converges to $m_{[1,\infty)}(z)$ locally uniformly on ${\mathbb{C}}\setminus[t_3,t_4]$. Consider the mapping $$F:\mathcal{C}(X)\times X \ni (m,x) \mapsto m(x) \in {\mathbb{C}}.$$ As $\lambda_\infty$ is a simple pole of $m_{[0,\infty)}$, one has $$m_\infty'(\lambda_\infty)=(1/m_{[0,\infty)})'(\lambda_\infty)\neq 0.$$ Therefore, $$\frac{\partial F}{\partial x} (m_{\infty},\lambda_\infty)=m'_{\infty}(\lambda_\infty)\neq 0,$$ and we can apply the implicit function theorem in Banach spaces to the mapping $F$ (see e.g. [@krantz]).
As a result we obtain in a neighborhood $U\times Y$ of $ (m_{\infty},\lambda_\infty)$ a differentiable function $\xi:U\to Y$ such that $${\left\{(m,x)\in U\times Y: m(x)=0\right\}}= {\left\{(m,\xi(m)): m\in U\right\}}.$$ We may take $Y$ so small that $Y{\subseteq}{\mathbb{C}}\setminus[t_3,t_4]$ and that $m_\infty$ has no other zeros in $Y$ except $\lambda_\infty$. Note that for sufficiently large $n$ one has $m_{n}\in U$. Hence, on the one hand, we have that for sufficiently large $n$ $$m_n(\xi(m_n))=F(m_n,\xi(m_n))=0,\quad m_n(x)\neq0,\ x\in Y\setminus{\left\{\xi(m_n)\right\}}.$$ On the other hand, $\lambda_n$ converges to $\lambda_\infty$ and $m_n(\lambda_n)=0$. Consequently, $\lambda_n=\xi(m_n)$ for $n$ large enough. Now note that $$\frac{ \partial \xi}{\partial m}(m_\infty)m= - \frac{\frac{\partial F}{\partial m}(m_\infty,\xi(m_\infty))m}{\frac{\partial F}{\partial x}(m_\infty,\xi(m_\infty)) }=-\frac{m(\lambda_\infty)}{m_\infty'(\lambda_\infty)}.$$ Furthermore, $$\begin{aligned} {\lambda_\infty-\lambda_n}& =&\frac{ \partial \xi}{\partial m}(m_\infty){(m_\infty-m_n)}+\alpha(m_\infty-m_n)\\ &=&-\frac{m_\infty(\lambda_\infty)-m_n(\lambda_\infty)}{m_\infty'(\lambda_\infty)}+\alpha(m_\infty-m_n),\end{aligned}$$ where $\alpha(h_n)/{\left\Vert h_n\right\Vert}_{\mathcal{C}(X)}\to 0$ as ${\left\Vert h_n\right\Vert}_{\mathcal{C}(X)}\to 0$. Set $h_n=m_n-m_\infty$ and $$\alpha_n=\alpha(m_\infty-m_n)=\lambda_\infty-\lambda_n +\frac{m_\infty(\lambda_\infty)-m_n(\lambda_\infty)}{m_\infty'(\lambda_\infty)}$$ and note that the right-hand side of the above does not depend on the initial choice of the disc $X$. This finishes the proof of the first statement. Now let $\lambda_\infty\in{\mathbb{R}}\setminus[{t_3},{t_4}]$. By the locally uniform convergence of $m_n$ to $m_\infty$ and by the Rouché theorem there is a small disc $Z$ centered at $\lambda_\infty$, such that each function $m_n(z)$ has precisely one zero $z_n$ in $Z$.
As $\lambda_n$ converges to $\lambda_\infty$, we must have $\lambda_n=z_n$ for large $n$. Therefore, $\lambda_n\in{\mathbb{R}}$: otherwise $\bar\lambda_n\in Z$ would be another zero of $m_n$ in $Z$, which is a contradiction. \[expl\] We are now able to prove the main result of our paper, Theorem \[main-geom\]. First, though, we would like to stress that there are two equivalent ways of seeing it, according to the objects we consider: A first interpretation takes as its main object the tridiagonal matrix, presented here in a block form $$H_{[0,\infty)}=\left( \begin{array}{c|ccc} a_0 & -b_0 & 0 & \cdots \\ \hline b_0 & & & \\ 0 & & H_{[1,\infty)} & \\ \vdots & & & \\ \end{array}\right);$$ $\lambda_\infty$ is then the (unique) eigenvalue of nonpositive type of $H_{[0,\infty)}$, $\lambda_n$ is the unique eigenvalue of nonpositive type of the finite truncation $H_{[0,n]}$ of $H_{[0,\infty)}$, and the spectrum of $H_{[1,\infty)}$ is contained, by assumption, in $[t_3,t_4]$. A second interpretation considers instead a meromorphic function $m(z)$ having the representations of Theorem \[rep\] and its $[n/n+1]$ Padé approximants $m_{[0,n]}$. The point $\lambda_\infty$ ($\lambda_n$) is then the unique pole of nonpositive type of $m(z)$ ($m_{[0,n]}$, respectively). In both settings Theorem \[main-geom\] gives the convergence rate of $\lambda_n$ to $\lambda_\infty$ in terms of the “distance” of $\lambda_\infty$ from the interval $[t_3,t_4]$: the rate of convergence is at least exponential, $\mathcal{O}(q^{-2n})$, where the number $q$ is such that $\lambda_\infty$ lies on the ellipse with foci at $t_3,t_4$ and sum of its semi-axes equal to $({t_4}-{t_3})q/2$. Consequently, as confirmed by our numerical tests below, the convergence rate gets worse the larger the eccentricity of this ellipse, i.e. the convergence slows down when $\lambda_\infty$ is “close” to the interval $[t_3,t_4]$. \[main-geom\] Let $\lambda_\infty$, $\lambda_n$, $t_3$, $t_4$ be as in Theorem \[rep\] and Remark \[expl\] above.
If $\lambda_\infty\in{\mathbb{C}}\setminus[t_3,t_4]$ is a simple eigenvalue, then $$\limsup_{n\to\infty}|{\lambda_\infty-\lambda_n} |^{1/n}\leq \frac{1}{q^2},$$ where $q=g+\sqrt{g^2-1}$, and $$\label{g} g=\frac{|\lambda_\infty-t_4|+|\lambda_\infty-t_3|}{t_4-t_3}>1$$ is the reciprocal of the eccentricity of the ellipse through $\lambda_\infty$ with foci at $t_3,t_4$. If, additionally, $\lambda_\infty\in{\mathbb{R}}$, then $\lambda_n\in{\mathbb{R}}$ for sufficiently large $n$. For $R>1$ let $L_R$ denote the closed set bounded by the ellipse with foci at ${t_3},{t_4}$ and the sum of its semi-axes equal to $({t_4}-{t_3})R/2$. Due to Theorem (2.6.2) in Ref. [@niso], one has $$\label{elipse} \limsup_{n\to\infty}\sup_{z\in {\mathbb{C}}\setminus L_R} |m_{[1,n]}(z) - m_{[1,\infty)}(z) |^{1/n} \leq \frac 1{R^2}.$$ Note that $$\label{LR} \lambda_\infty\notin L_R \iff R< g +\sqrt{g^2-1}.$$ Take any $R\in(1,g+\sqrt{g^2-1})$ and a small disc $X$, such that $\lambda_\infty\in X{\subseteq}{\mathbb{C}}\setminus L_R$. From Proposition \[3.2\] we obtain that $$\begin{aligned} |{\lambda_\infty-\lambda_n} | &\leq & C_1 |m_{[1,n]}(\lambda_\infty) - m_{[1,\infty)}(\lambda_\infty) | +\alpha_n\\ &\leq& C_2 \sup_{z\in X} |m_{[1,n]}(z) - m_{[1,\infty)}(z) |,\end{aligned}$$ where $C_1,C_2$ are constants, dependent on $H_{[0,\infty)}$ and $X$ only. As a consequence, $$\limsup_{n\to\infty}|{\lambda_\infty-\lambda_n} |^{1/n}\leq \frac{1}{R^2}.$$ Letting $R\to g +\sqrt{g^2-1}$ finishes the proof. Note that Theorem \[main-geom\] cannot be easily generalized to the case when $\lambda_\infty\in[t_3,t_4]\setminus\supp \mu$. 
For example, if the support of the measure $\mu$ consists of two disjoint intervals $[t_3,t_5]\cup[t_6,t_4]$, the estimate above, which was the key point in proving Theorem \[main-geom\], still holds only outside the ellipse with foci at $t_3,t_4$, the reason being that the union of the poles of the Padé approximants of $\int_{t_3}^{t_4} (t-z)^{-1}\mu(dt)$ may be dense in $[t_5,t_6]$, see for instance Ref. [@Suetin02]. Examples -------- In our examples we want to be able to choose the position of $\lambda_\infty$; it is therefore convenient to consider cases where it is possible to calculate it without resorting at first to truncated matrices. One way is to look for matrices $H_{[1,\infty)}$ such that the corresponding functions $m_{[1,\infty)}$ have a closed, analytic form. This will allow us to use (numerical) root-finding methods to calculate $\lambda_\infty$ by solving the equation $$\label{mathe} z-a_0-b_0^2\ m_{[1,\infty)}(z)=0.$$ Remembering that $$\label{H-m} m_{[1,\infty)}=\int_{t_3}^{t_4} \frac{d\mu(t)}{t-z} ,$$ this reduces to finding a suitable measure $\mu(t)$ supported on a finite interval. The choice $$\label{H-m1} d\mu=d\sigma_{\alpha,\beta}(t)=\chi_{[-1,1]}(t)\cdot \frac{(1-t)^\alpha(1+t)^\beta dt}{\int_{-1}^1 (1-s)^\alpha(1+s)^\beta ds },$$ where $\chi_{[-1,1]}(t)$ is the characteristic function of the interval $[-1,1]$, gives us the matrix $H_{[1,\infty)}$ corresponding to the Jacobi polynomials with parameters $\alpha,\beta$. To construct $H_{[1,\infty)}$ we consider the Jacobi orthogonal polynomials, which form an orthogonal basis in $L^2(\sigma_{\alpha,\beta})$, and the multiplication operator $p\mapsto xp$ in $L^2(\sigma_{\alpha,\beta})$. The three-term recurrence relation (4.5.1) and the normalization factors (4.3.3) in Ref. [@Szego] provide the tridiagonal representation $H_{[1,\infty)}$ of the multiplication operator. The spectral theorem for selfadjoint operators guarantees that the identity $m_{[1,\infty)}=\int_{t_3}^{t_4}\frac{d\mu(t)}{t-z}$ then holds with $d\mu=d\sigma_{\alpha,\beta}$.
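For the Legendre case $\alpha=\beta=0$, the recurrence gives a particularly simple matrix: zero diagonal and off-diagonal entries $b_k=k/\sqrt{4k^2-1}$ (a standard fact for the Legendre weight). A Python sketch (the paper's own computations use Matlab) checking that the truncated $m$-function reproduces the closed form $m_{[1,\infty)}(z)=\tfrac12\int_{-1}^1\frac{dt}{t-z}=\tfrac12\log\tfrac{z-1}{z+1}$ for real $z>1$:

```python
import numpy as np

# Jacobi matrix H_[1,n] for the (normalized) Legendre weight on [-1,1]:
# zero diagonal, off-diagonals b_k = k / sqrt(4 k^2 - 1)
n = 60
k = np.arange(1, n)
bk = k / np.sqrt(4.0 * k**2 - 1.0)
H1 = np.diag(bk, 1) + np.diag(bk, -1)

def m_function(H, z):
    """m_[1,n](z) = <e_1|(H_[1,n] - z)^{-1} e_1> (index 0 in the code)."""
    e = np.zeros(H.shape[0])
    e[0] = 1.0
    return e @ np.linalg.solve(H - z * np.eye(H.shape[0]), e)

z = 2.0
exact = 0.5 * np.log((z - 1.0) / (z + 1.0))   # = -log(3)/2 at z = 2
assert abs(m_function(H1, z) - exact) < 1e-10
```

At $z=2$, well outside $[-1,1]$, the convergence of the truncations is so fast that a modest $n$ already reaches machine precision.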
The choice $$\label{H-m2} d\mu=\chi_{[-2,2]}(t)\cdot \frac{\sqrt{4-t^2}dt}{2\pi}$$ instead gives the matrix $H_{[1,\infty)}$ with $a_j=0$, $b_j=1$, $j=1,2,\dots$ corresponding to the orthogonal polynomials associated with the Wigner semicircle measure. To calculate $\lambda_n$ we use Matlab [@matlab]: first, we calculate all eigenvalues and eigenvectors of $H_{[0,n]}$; we then find $\lambda_n$ as the only eigenvalue of nonpositive type of $H_{[0,n]}$, i.e. the only eigenvalue which is either in the upper half-plane or is real with the corresponding eigenvector ${\bf x}$ satisfying $$-|x_0|^2+\sum_{j=1}^n |x_j|^2\leq 0.$$ A summary of the relevant results for all our examples can be found in Table \[Table\]. Only graphs useful to our discussion are shown here; for the remaining cases quoted, pictures can be found in the supplementary files [@S]: the file names there refer to those given in Table \[Table\].

\[jumping\] As our first example we take $\alpha=\beta=0$ in eq. so that $$\label{ana} m_{[1,\infty)}=\frac12\int_{-1}^1\frac{dt}{t-z}=\frac12(\log(1-z)-\log(-1-z)),$$ where the branch of the logarithm is chosen in such a way that the above function is a Nevanlinna function. In this case $H_{[1,\infty)}$ corresponds, in the way described in the remarks above, to the Legendre polynomials. We now vary the only remaining free parameters $a_0$ and $b_0$ of $H_{[0,\infty)}$. Note that in any case, due to Remark \[gaps\], we have $\lambda_\infty\notin[t_3,t_4]=[-1,1]$. Let us start with $a_0=0.5$, $b_0=0.05$. It is immediately evident from Figure \[Legendre\_compl\] that the points $\lambda_n$ arrange themselves on three branches which spiral into $\lambda_\infty$. The point $\lambda_n$ jumps in a regular fashion from one branch to another: all $\lambda_n$’s with $n \mod 3=\operatorname{const}$ fall on the same branch.
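An alternative to the eigendecomposition, closer in spirit to the Padé viewpoint, follows from the identity behind the equation $z-a_0-b_0^2\,m_{[1,n]}(z)=0$: $\lambda_n$ is its unique root in the upper half-plane, and $m_{[1,n]}$ can be evaluated by the backward continued fraction $m_{[k,n]}=1/(a_k-z-b_k^2\,m_{[k+1,n]})$. A plain-Python sketch for the present parameters (our own code, not the Matlab implementation; the Legendre entries $b_k=k/\sqrt{4k^2-1}$ are standard):

```python
import cmath

A0, B0, N = 0.5, 0.05, 600          # truncation H_[0,N] of this example

def b2(k):
    """b_k^2 for the Legendre Jacobi matrix: b_k = k / sqrt(4 k^2 - 1)."""
    return k * k / (4.0 * k * k - 1.0)

def m_and_deriv(z, n):
    """m_[1,n](z) and its z-derivative via the backward continued fraction
    m_[k,n] = 1 / (a_k - z - b_k^2 m_[k+1,n]), with a_k = 0 here."""
    v = 1.0 / (-z)                   # deepest level: 1 / (a_n - z)
    dv = v * v
    for k in range(n - 1, 0, -1):
        v_new = 1.0 / (-z - b2(k) * v)
        dv = (1.0 + b2(k) * dv) * v_new * v_new
        v = v_new
    return v, dv

# Newton iteration on f(z) = z - a0 - b0^2 m_[1,N](z), started in C^+
z = 0.5 + 0.05j
for _ in range(60):
    v, dv = m_and_deriv(z, N)
    z -= (z - A0 - B0 ** 2 * v) / (1.0 - B0 ** 2 * dv)
# z is lambda_600, close to lambda_inf = 0.498631 + 0.00391397i
```

For $\operatorname{Im}z>0$ every denominator in the recursion has negative imaginary part, so the backward evaluation is stable.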
The two branches starting on the real axis leave it at $n=45$ and $n=190$ respectively, as can be seen from the plot of the imaginary part of $\lambda_n$ in Figure \[Legendre\_ReIM\]; the plots of the corresponding real parts each have a cusp at the same $n$, due to the inversion of the direction of motion of $\lambda_n$. The plot in Figure \[Legendre\_conv\] shows exponential convergence of $\lambda_n$ to $\lambda_\infty$ setting in soon after all three branches leave the real line. The black reference line with slope $-2\log(q)$ represents the bound from Theorem \[main-geom\]; the intercept is chosen so that the line is superimposed on the numerical data. In this example (as well as in subsequent examples with measures having no gaps) we see that the estimate of the convergence rate in Theorem \[main-geom\] is sharp and consequently cannot be improved in general. On the other hand, the estimate is not always sharp, cf. Example \[jumping3\]. We have already mentioned that the points $\lambda_n$ arrange themselves regularly on branches. This behavior is common to most of our examples, except the cases when $\lambda_\infty$ is on the real axis, where because of Proposition \[3.2\] there is one branch only. Since the branches appear to approach $\lambda_\infty$ isotropically, as can be seen, e.g., in the zoomed picture of Figure \[Legendre\_compl\], this suggests a convenient way of calculating the numerical value of $\lambda_\infty$ as the mean $$\label{mean} \lambda_\infty^{(N)}=k^{-1}\sum_{n=N-k+1}^N \lambda_n,$$ where $k$ denotes the number of branches and $N$ the last $n$ for which $\lambda_n$ is calculated. The value of $\lambda_\infty$ obtained taking the average of the three last points (one for each branch) equals $0.4999 + 0.0039{\mathrm i}$, which agrees well with the value $0.498631+0.00391397{\mathrm i}$ obtained by solving eq.  with Mathematica [@mathematica].
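The reference value can be reproduced with any root finder applied to the equation $z-a_0-b_0^2\,m_{[1,\infty)}(z)=0$ using the closed form above: for $\operatorname{Im}z>0$ the principal logarithms realize the Nevanlinna branch, and a few Newton steps suffice. A plain-Python sketch (not the Mathematica computation; it also recovers the rate $q$ of Theorem \[main-geom\] listed in Table \[Table\]):

```python
import math, cmath

A0, B0 = 0.5, 0.05

def m(z):
    # closed form of m_[1,inf); principal logs give the Nevanlinna branch
    # in the upper half-plane
    return 0.5 * (cmath.log(1 - z) - cmath.log(-1 - z))

def dm(z):
    return -1.0 / (1 - z * z)            # m'(z), by direct differentiation

z = 0.5 + 0.05j                          # start in the upper half-plane
for _ in range(50):
    z -= (z - A0 - B0 ** 2 * m(z)) / (1.0 - B0 ** 2 * dm(z))
# z ≈ 0.498631 + 0.00391397i, the value quoted above

# the corresponding rate of Theorem [main-geom], with t3, t4 = -1, 1:
g = (abs(z - 1) + abs(z + 1)) / 2.0
q = g + math.sqrt(g * g - 1.0)
# q ≈ 1.0045, the constant in the reference-line slope -2 log(q)
```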
We conclude by noting that –here and in all our other examples– the real axis forms a barrier for $\lambda_n$: it never crosses it, and branches which touch it get stuck in it; this is clearly visible in the movie [Legendre\_spirals\_a\_0.5.avi]{} that can be found among the supplementary files to this paper [@S]. This behavior is related to the symmetry of the spectrum with respect to the real axis. If we now keep $a_0=0.5$ constant and vary $b_0$ to assume the values $b_0=0.5, 0.1$, and $0.01$, in all cases the $\lambda_n$’s arrange themselves over three spiraling branches and the value $\lambda_\infty$ obtained from the average of the last three values of $\lambda_n$ agrees with the one calculated solving eq. with Mathematica to the last digit shown in Table \[Table\]. In all cases convergence is exponential and the slopes of logarithmic plots similar to that in Figure \[Legendre\_conv\] are in agreement with the value $-2\log(q)$ from Theorem \[main-geom\] (see Table \[Table\]). There are some case-to-case differences, but they do not affect the picture given above: for $b_0=0.5$ the three branches are not clearly visible, due to the very fast convergence of $\lambda_n$; for $b_0=0.1$ only one branch spends some time on the real axis, up to $n=46$; and for $b_0=0.01$ even after 1000 iterations one of the three branches has not yet left the real axis (for figures see the supplementary material [@S1]). Our last example with the measure of eq. is a case in which $\lambda_\infty\in{\mathbb{R}}\setminus [t_3,t_4]$: if we take $a_0=1.001$ and $b_0=0.001$ we get $\lambda_\infty\simeq 1.001$ (Mathematica has problems solving eq. in this case). The values of $\lambda_n$ are all real; we therefore have a single branch. Convergence is again exponential (for figures see the supplementary material [@S2]).
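The averaging recipe $\lambda_\infty^{(N)}=k^{-1}\sum_{n=N-k+1}^{N}\lambda_n$ is easy to exercise on synthetic data: for $k$ spiraling branches the angular offsets of the last $k$ points nearly cancel, so the mean lands much closer to $\lambda_\infty$ than the last iterate alone. A toy sketch with fabricated spiral data (not output of the actual examples):

```python
import cmath, math

def branch_mean(lams, k):
    """Average the last k computed lambda_n, one per branch."""
    return sum(lams[-k:]) / k

# fabricated data: three branches spiraling into lam_inf, radius ~ 0.99^n
lam_inf = 0.5 + 0.004j
lams = [lam_inf + 0.99 ** n * cmath.exp(2j * math.pi * n / 3)
        for n in range(300)]
err_last = abs(lams[-1] - lam_inf)              # error of the final iterate
err_mean = abs(branch_mean(lams, 3) - lam_inf)  # error of the branch mean
# err_mean is about two orders of magnitude smaller than err_last
```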
It is instructive to compare this case to the case $a_0=0.5$, $b_0=0.05$: the distance of $\lambda_\infty$ from the interval $[-1,1]$ is of the same order, but the convergence is much faster in the present case. This is due to the different eccentricities of the ellipses from Theorem \[main-geom\]: in the present case the eccentricity is smaller, and therefore $q$ is larger and in consequence the convergence rate is better. Finally, in Figure \[Ns\] we summarize the convergence behavior when the submatrix $H_{[1,\infty)}$ corresponds to the measure of eq. : we vary the parameters $a_0\in(-1,1)$ and $b_0\in(0.05,0.5)$ and for each pair $(a_0,b_0)$ we compute the value $n_0$ where the last $\lambda_n$ branch leaves the real axis. Each pair $(a_0,b_0)$ determines a single point $\lambda_\infty$ in the upper half-plane, which we plot color coded according to $n_0$. The figure appears to have a fractal character for which we do not yet have an explanation but which we suspect to be related to the way the number of $\lambda_n$ branches varies with varying $a_0$. This can be seen in the movie [Legendre\_spirals\_b\_0.01.avi]{} [@S] where we keep $b_0=0.01$ and vary $a_0$. We instead observe no change in the number of branches when varying $b_0$ at constant $a_0$ (see e.g. the movie [Legendre\_spirals\_a\_0.5.avi]{} [@S]). As we have already mentioned, branches are a common occurrence, not limited to the example just given. We give here a couple more examples.

\[jumping2\] We now take $H_{[1,\infty)}$ corresponding to the orthogonal polynomials associated with the Wigner semicircle measure, i.e. the measure given by eq. , and we choose $a_0=0.5$, $b_0=0.1$ as the remaining parameters for $H_{[0,\infty)}$. Note that due to Remark \[gaps\] we again have $\lambda_\infty\notin[t_3,t_4]=[-2,2]$. In Figure \[Wigner\_compl\] it is possible to see that the $\lambda_n$’s form twelve branches. Knowing this, we can use the recipe given above to calculate $\lambda_\infty$ as the mean eq.
of the last twelve values of $\lambda_n$; the result agrees to the last digit shown in Table \[Table\] with the value obtained solving eq. with Mathematica. Figure \[Wigner\_conv\] shows exponential convergence of $\lambda_n$ to $\lambda_\infty$ with the rate predicted by Theorem \[main-geom\]; other plots concerning this example can be found as supplementary files [@S3]. The video [Wigner\_spirals\_b\_0.01.avi]{} in [@S] shows the evolution of the $\lambda_n$ branches under the change of the parameter $a_0$. As in Example \[jumping\], it is evident that here too the number of branches changes with $a_0$. Looking at the first few frames of the movie it is also evident that there are branches that start off the real axis, hit it and –instead of continuing into ${\mathbb{C}}^-$– get trapped in it, moving horizontally for a number of steps, and then leave it when the spiral reenters ${\mathbb{C}}^+$.

\[jumping3\] Here we present an example of a different nature. The matrix $H_{[1,\infty)}$ is constructed in such a way that its spectrum is a totally disconnected, Cantor-like set, see [@cantor] for details; the other parameters are $a_0=0.5$ and $b_0=0.1$. In this particular case we could count $27$ branches of $\lambda_n$; such a high number is hardly visible when plotting $\lambda_n$ in the complex plane but can be seen in the plot of $\log|\lambda_n-\lambda_\infty|$ in Figure \[Cantor\_conv\]: the $\lambda_n$’s form regular clusters of $27$ points. What is particularly noteworthy is that –contrary to the previous examples– the convergence rate is faster than the one predicted by Theorem \[main-geom\], as can be seen comparing the theoretical bound (black line) with the numerical points in Figure \[Cantor\_conv\]. This is probably due to the fact that the spectrum of $H_{[1,\infty)}$ contains gaps.
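For Example \[jumping2\] above, the limit is again available in closed form: the Stieltjes transform of the semicircle measure is $m_{[1,\infty)}(z)=(\sqrt{z^2-4}-z)/2$, with the branch behaving like $-1/z$ at infinity, so the equation $z-a_0-b_0^2\,m_{[1,\infty)}(z)=0$ reduces to a quadratic. A plain-Python check (the principal square root realizes the correct branch in the quadrant $\operatorname{Re}z>0$, $\operatorname{Im}z>0$ containing the root):

```python
import cmath

A0, B0 = 0.5, 0.1

def m(z):
    # Stieltjes transform of the semicircle measure, branch with m(z) ~ -1/z
    return (cmath.sqrt(z * z - 4.0) - z) / 2.0

def dm(z):
    return (z / cmath.sqrt(z * z - 4.0) - 1.0) / 2.0

z = 0.5 + 0.05j                     # start in the upper half-plane
for _ in range(50):
    z -= (z - A0 - B0 ** 2 * m(z)) / (1.0 - B0 ** 2 * dm(z))
# z ≈ 0.4975 + 0.0096i, the value of lambda_inf reported in Table [Table]
```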
Other pictures corresponding to this example (called [Cantor\_a\_0.5\_b\_0.1\_xxx.eps]{}), as well as the video [Cantor\_spirals\_b\_0.2.avi]{} showing the evolution of branches under the change of the parameter $a_0$, can be found in the supplementary files [@S]: again the number of branches changes with $a_0$; moreover, intervals in $a_0$ where both $\lambda_\infty$ and all the $\lambda_n$ become real are evident and correspond to the gaps in $H_{[1,\infty)}$, see Remark \[gaps\]. Although Theorem \[main-geom\] proves the exponential rate of convergence of $\lambda_n$ to $\lambda_\infty$, it does not say when this convergence starts manifesting itself: at least in theory we could have $|{\lambda_\infty-\lambda_n} |> q^{-2n}$ for some $n$. Our numerical tests indicate that the convergence is somehow “better” than the above bound, as the asymptotic behavior is $K \cdot q^{-2n}$ with $K<1$. Even the curious initial behavior connected with the real axis that we observed in several cases above does not bring $|{\lambda_\infty-\lambda_n}|$ to exceed $q^{-2n}$. On the other hand, it would be convenient to have an estimate of $n_0$ such that for $n>n_0$ the point $\lambda_n$ is sure to be outside the support $[t_3,t_4]$ of the measure $\mu$: if one or more of the $\lambda_n$ branches are still on $[t_3,t_4]$, our justification for estimating $\lambda_\infty$ by the mean eq.  is compromised. We give an upper bound for $n_0$ in Appendix \[app:uno\].

$\lambda_\infty\in{\mathbb{R}}$ is embedded in the spectrum of the representing measure {#realcase}
---------------------------------------------------------------------------------------

We shall limit our investigation of the case when $\lambda_\infty\in[t_3,t_4]$ to an example where $\lambda_\infty=t_3$, to show that convergence in this case is in general worse than exponential. To build our example, we start by recalling Remark \[gaps\].
The only examples known to us of classical orthogonal polynomials satisfying condition with $\lambda_\infty\in[t_3,t_4]$ are the Jacobi polynomials with parameters $\alpha\geq 2$ or $\beta\geq 2$ and with $\lambda_\infty=t_4$ or $\lambda_\infty=t_3$, respectively.

\[realev\] We take $H_{[1,\infty)}$ such that $$m_{[1,\infty)}=\int _{-1}^1 \frac{\frac34(1+t)^2(1-t) dt}{t-z},$$ i.e. we take $\alpha=1, \beta=2$ in eq. . We then choose $a_0=-5/3$, $b_0=\sqrt{2/3}$, so that $\lambda_\infty=-1$. This can be seen e.g. by analyzing the matrix $H_{[0,\infty)}$ itself: it follows from conditions (i) and (ii) of [@JL85] that $-1$ is an algebraically simple eigenvalue of $H_{[0,\infty)}$ with the corresponding eigenvector $x$ satisfying $ -|x_0|^2+\sum_{j=1}^\infty |x_j|^2=0, $ i.e. $\lambda_\infty=-1$ is the unique eigenvalue of nonpositive type of $H_{[0,\infty)}$. Furthermore, due to Theorem 2.2 ($p_{1s}$) of [@JL85], $-1$ is a singular critical, algebraically simple eigenvalue (see [@JL85] for a classification of eigenvalues of nonpositive type). The log-log plot of $|\lambda_n-\lambda_\infty|$ vs. $n$ in Figure \[Jacobicrit\_conv\] shows clearly that convergence in this case is only polynomial: $|\lambda_n-\lambda_\infty|\simeq n^{-2}$, as can be seen comparing the numerical data with the black reference line whose slope is $-2$. The plot of the real and imaginary part of $\lambda_n$ can be found in [@S] as the file [Jacobi12crit\_ReIm.eps]{}.

Concluding remarks {#Conclusions}
==================

As already stated above, the main question we tried to answer here is: “Suppose we are only able to compute the spectrum of finite truncations of a pseudo-Hermitian tridiagonal matrix $H_{[0,\infty)}$; can we say something about the spectrum of the full operator $H_{[0,\infty)}$? Most interestingly, can we predict if the spectrum of $H_{[0,\infty)}$ is real or not?”
Suppose we have performed $N$ iterations, increasing step by step the size of the truncated matrix $H_{[0,n]}$; one of the following four cases applies.

- $\lambda_n$ is real for all $n=1{,\dots,}N$, is either the minimum or the maximum of the spectrum of $H_{[0,n]}$ and is separated from the other eigenvalues. Then the limit point $\lambda_\infty$ is also a real simple eigenvalue, separated from the other eigenvalues of $H_{[0,\infty)}$.

- $\lambda_n$ is complex for all $n$ larger than some $n_0<N$. Then the limit point $\lambda_\infty$ is also a complex simple eigenvalue. $\lambda_\infty$ itself can be evaluated by finding the number of $\lambda_n$ branches and then using eq. .

- $\lambda_n$ oscillates between the real line and the complex plane up to $n=N$, a common occurrence when $\lambda_\infty$ is very close to the support of the measure $\mu$. Then the situation is in principle unclear: the limit eigenvalue $\lambda_\infty$ might be a complex point, a real critical point, or a real point in a relatively small gap of the spectrum. Still, if the $\lambda_n$ branches can be found and not too many of them are still trapped on ${\mathbb{R}}$ for $n\simeq N$, it is still possible to give a numerical evaluation $\lambda_\infty^{(N)}$ of $\lambda_\infty$ by a careful use of eq. . If the plot of $\log|\lambda_n-\lambda_\infty^{(N)}|$ is then approximately a straight line, this is another indication for $\lambda_\infty$ being a simple eigenvalue out of the support of $\mu$.

- The sequence $\lambda_n$ converges to a point $\lambda_\infty\in{\mathbb{R}}$, but the convergence is not exponential, which again can be seen by the study of the plot of $\log|\lambda_n-\lambda_\infty|$. Then $\lambda_\infty$ is a critical point on the real line, embedded in the support of $\mu$. In view of the second equation of , this case seems to be non-generic, i.e. a small change of the entries of the matrix will lead to a different case.
However, we were able to clearly observe this case in a numerical simulation, which in our opinion is an argument for considering this possibility as well. While, for the sake of simplicity, we restricted ourselves to the case of matrices with a single eigenvalue of nonpositive type, we believe that the results derived above can be generalized to a wider class of operators, at least to those considered in Ref. [@DD07]. In the context of random matrices [@Wojtylak12b] or Nevanlinna functions, our research can be viewed as concerning the problem of predicting whether or not $\lambda_\infty\in [t_3,t_4]$ by calculating a finite number of Padé approximants. A connection can also be found to Padé approximation of the Z-transform, considered in [@BessisPerotti09], where the real line is replaced by the unit circle.

Acknowledgments {#acknowledgments .unnumbered}
===============

Michał Wojtylak gratefully acknowledges the financial assistance of the Alexander von Humboldt Foundation with a Research Grant for Experienced Scientists, carried out at TU Berlin, and with a Return Home Scholarship, carried out at Jagiellonian University, Kraków. Maxim Derevyagin gratefully acknowledges the financial assistance of the European Research Council under the European Union Seventh Framework Programme (FP7/2007-2013)/ERC, grant agreement no. 259173. Maxim Derevyagin and Michał Wojtylak are indebted to Professor Olga Holtz for her encouragement and support.

Estimate of the maximum number of real $\lambda_n$, when $\lambda_\infty\in{\mathbb{C}}^+$ {#app:uno}
==========================================================================================

\[N\] Let $m=m_{[0,\infty)}$ be of the forms and with $\lambda_\infty\in{\mathbb{C}}^+$.
Then for any ${\varepsilon}\in(0,1)$ and for $$\label{bigN} n>N_{{\varepsilon}}:=\frac{\log\left( \frac{8b_0^2g\max_{z\in g L_R} |m_{[0,\infty)}(z)|}{(g-1)^2(t_4-t_3)}\right)}{\log\left( 1+{\varepsilon}\sqrt{1-g^{-2}} \right)}$$ one has $$\lambda_n\in{\mathbb{C}}\setminus L_{R({\varepsilon})},$$ where $g$ is again given by eq. , $R({\varepsilon})=g+{\varepsilon}\sqrt{g^2-1}$ and $L_{R({\varepsilon})}$ again denotes the closed set bounded by the ellipse with foci at ${t_3},{t_4}$ and the sum of its semi-axes equal to $({t_4}-{t_3})R({\varepsilon})/2$. In particular, if $n>N_{\varepsilon}$ for some ${\varepsilon}\in(0,1)$ then $\lambda_n\notin[t_3,t_4]$. Consider the functions $$m_n(z):=\frac1{m_{[0,n]}(z)},\quad m_\infty(z)=\frac1{m_{[0,\infty)}(z)}.$$ We now recall the estimate given in [@niso] as formula (6.10): for every $n\in\mathbb{Z}_+$ and for every $\delta\in (1,R)$ one has $$\label{deltaR} \max_{z\in\partial L_R} | m_{[1,\infty)}(z)-m_{[1,n]}(z)|\leq \frac {8\delta}{(t_4-t_3)(\delta-1)^2}\left(\frac\delta R\right)^{2n}.$$ Applying equation with $\delta=g$, $R=R({\varepsilon})=g+{\varepsilon}\sqrt{g^2-1}$ we get, after elementary transformations of and for $n>N_{\varepsilon}$, that $$\begin{aligned} |m_n(z) - m_\infty(z)|& =& b_0^2 |m_{[1,n]}(z) - m_{[1,\infty)}(z)|\\ &\leq& \frac{8b_0^2g} {({t_4}-{t_3})(g-1)^2 } \left( \frac{g}{g+{\varepsilon}\sqrt{g^2-1}}\right)^{2n}\\ &\leq& \frac{1}{\max_{z\in\delta L_R} |m_{[0,\infty)}(z)|} = \min _{z\in\delta L_R} |m_{\infty}(z)|. \end{aligned}$$ Hence, by the Rouché theorem, $m_n$ and $m_\infty$ have the same number of zeros in $\bar{\mathbb{C}}\setminus L_R$. However, $m_\infty$ has precisely two zeros in $\bar{\mathbb{C}}\setminus L_R$, namely $\lambda_\infty$ and $\bar\lambda_\infty$. As $\lambda_n$ is the only zero of $m_n$ in the upper half-plane, we get $\lambda_n\in {\mathbb{C}}\setminus L_R$.
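The bound is straightforward to evaluate numerically. The sketch below uses our own reading of the statement (an assumption, since the maximization set is stated loosely): the maximum of $|m_{[0,\infty)}|=1/|z-a_0-b_0^2 m_{[1,\infty)}(z)|$ is taken over the boundary $\partial L_{R(\varepsilon)}$, parametrized by the Joukowski map, here for the Legendre case $a_0=b_0=0.5$:

```python
import cmath, math

A0, B0, EPS = 0.5, 0.5, 0.5          # the a0 = b0 = 0.5 Legendre case
T3, T4 = -1.0, 1.0
LAM = 0.4045 + 0.3064j               # lambda_inf for this case

def m1(z):
    # closed form of m_[1,inf) for the Legendre measure, valid off [-1, 1]
    return 0.5 * (cmath.log(1 - z) - cmath.log(-1 - z))

g = (abs(LAM - T4) + abs(LAM - T3)) / (T4 - T3)
R = g + EPS * math.sqrt(g * g - 1.0)            # R(eps)

# sample |m_[0,inf)| on the ellipse boundary: the Joukowski map sends the
# circle |w| = R onto the ellipse with foci -1, 1 and semi-axis sum R
M = 0.0
for k in range(4000):
    w = R * cmath.exp(2j * math.pi * (k + 0.5) / 4000)
    z = (w + 1.0 / w) / 2.0
    M = max(M, 1.0 / abs(z - A0 - B0 ** 2 * m1(z)))

N_eps = math.log(8 * B0 ** 2 * g * M / ((g - 1) ** 2 * (T4 - T3))) \
        / math.log(1 + EPS * math.sqrt(1 - g ** -2))
```

Under this reading, `N_eps` comes out close to the value $N_{0.5}=52$ quoted in the text for this case.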
If we now apply Theorem \[N\] to Example \[jumping\], we get $N_{0.5}=52$ for $b_0=0.5$, $N_{0.5}=2155$ for $b_0=0.1$, $N_{0.5}=10917$ for $b_0=0.05$, and $N_{0.5}\simeq 4\cdot 10^5$ for $b_0=0.01$. Comparing with Example \[jumping\], where $n_0\simeq 1,46,190$, and $n_0 \gg 1000$ respectively, we see that the estimate $N_{0.5}$ is a far from tight upper bound whose ratio to the numeric value $n_0$ appears to grow with decreasing $b_0$. There are in practice some rare exceptions when a real spectrum is not sought – as for example when looking for resonances through complex dilatation of a Hamiltonian [@Buch] – and it has to be kept in mind that, even when the Hamiltonian can be made to be Hermitian by symmetrization, different linear combinations of orderings of the operators in the Hamiltonian itself can be possible (see e.g. the case of the double pendulum [@per]) corresponding to different physical systems having the same classical Hamiltonian. See any quantum mechanics text, e.g. P. Dirac, “The Principles of Quantum Mechanics", Oxford University Press, Oxford, 4th ed. (1958). C.M. Bender and S. Boettcher, Phys. Rev. Lett. 80 (1998) 5243. Under T reflection one replaces $i$ by $-i$ in the operator, while under P reflection one replaces $x$ by $a-x$, where $a/2$ is the origin about which one is performing the parity reflection. A. Bohr and B.R. Mottelson, Nuclear Structure, Vol. I, Sect. 2.4, (W.A. Benjamin Inc., New York, 1969). C. Itzykson and J.-M. Drouffe, Statistical field theory, Vol. 1, Sect. 3.2.3, (Cambridge University Press, Cambridge, 1989). H. Feshbach, C. E. Porter and V. F. Weisskopf, Phys. Rev. 96 448 (1954). J. Feinberg and A. Zee, cond-mat/9706218. M.V. Berry and D.H.J. O’Dell, J. Phys. A 31 (1998) 2093. C. M. Bender, G. V. Dunne, and P. N. Meisinger, “Complex periodic potentials with real band spectra", Phys. Lett. A 252 (1999) 272. R. Nevanlinna, Ann. Ac. Sci. Fenn. 1 (1952) 108; 163 (1954) 222; L.K.
Pandit, Nuovo Cimento (Supplemento) 11 (1959) 157; E.C.G. Sudarshan, Phys. Rev. 123 (1961) 2183; M.C. Pease III, Methods of matrix algebra (Academic Press, New York, 1965); T.D. Lee and G.C. Wick, Nucl. Phys. B 9 (1969) 209; F.G. Scholtz, H. B. Geyer and F.J.H. Hahne, Ann. Phys. 213 (1992) 74. J. Bognár, [*Indefinite Inner Product Spaces*]{}, Springer–Verlag, New York–Heidelberg, 1974. I.S. Iohvidov, M.G. Krein, H. Langer, [*Introduction to spectral theory of operators in spaces with indefinite metric*]{}, Mathematical Research, vol. 9, Akademie-Verlag, Berlin, 1982. I. Gohberg, P. Lancaster and L. Rodman, *Indefinite Linear Algebra and Applications*, Birkhäuser–Verlag, 2005. A. Mostafazadeh, J. Math. Phys. 43 (2002) 205; 43 (2002) 2814; 43 (2002) 3944. Z. Ahmed, Phys. Lett. A 290 (2001) 19. Z. Ahmed, Phys. Lett. A 294 (2002) 287. S. Weigert, “An algorithmic test for diagonalizability of finite-dimensional PT-invariant systems", J. Phys. A: Math. Gen. 39 (2006) 235–245. L.S. Pontryagin, “Hermitian operators in spaces with indefinite metric", *Izv. Akad. Nauk SSSR*, Ser. Math. 8 (1944), 243–280 \[Russian\]. M.S. Derevyagin, V.A. Derkach, “On the convergence of Padé approximations for generalized Nevanlinna functions", *Trans. Moscow Math. Soc.* 68 (2007), 119–162. P. Jonas and H. Langer, “A model for $\pi$–selfadjoint operator in a $\Pi_1$ space and a special linear pencil", *Integr. Equ. Oper. Th.* 8 (1985), 13–35. M.S. Derevyagin, “On the Schur algorithm for indefinite moment problem", Methods of Functional Analysis and Topology, Vol. 9 (2003), No. 2, 133–145. , “Spectral problems for generalized Jacobi matrices", Linear Algebra Appl., Vol. 382 (2004), 1–24. F. Gesztesy, B. Simon, “$m$-functions and inverse spectral analysis for finite and semi-infinite Jacobi matrices", Journal d’Analyse Math. 73 (1997) 267–297. V. Derkach, S. Hassi, and H.S.V.
de Snoo, “Operator models associated with Kac subclasses of generalized Nevanlinna functions", Methods of Functional Analysis and Topology, 5 (1999), 65–87. A. Dijksma, H. Langer, A. Luger, and Yu. Shondin, “A factorization result for generalized Nevanlinna functions of the class ${\mathbf{N}}_\kappa$", Integral Equations Operator Theory, 36 (2000), 121–125. N.I. Achiezer, *The classical moment problem*, Oliver and Boyd, Edinburgh, 1965. D. Alpay, A. Dijksma, H. Langer, “The Transformation of Issai Schur and Related Topics in an Indefinite Setting", System Theory, the Schur Algorithm and Multidimensional Analysis, Operator Theory: Advances and Applications, Volume 176, 2007, 1–98. H. Langer, “A characterization of generalized zeros of negative type of functions of the class $\mathbf{N}_{\kappa}$", Oper. Theory Adv. Appl., 17 (1986), 201–212. B. Simon, “The classical moment problem as a self-adjoint finite difference operator", *Adv. Math.*, 137 (1998), 82–203. H. Langer, A. Luger, V. Matsaev, “Convergence of generalized Nevanlinna functions", *Acta Sci. Math.* (Szeged), 77 (2011), 425–437. S.G. Krantz, H.R. Parks, *The Implicit Function Theorem: History, Theory, and Applications*, Springer Science+Business Media, 2013. S. P. Suetin, “On the dynamics of “wandering” zeros of polynomials that are orthogonal on certain intervals", Russian Mathematical Surveys 57(2), 425–427. MATLAB, R2014b, The MathWorks, Inc., USA. http://www2.im.uj.edu.pl/MichalWojtylak/convergence.html Mathematica, 10.0, Wolfram Research, Inc., USA. The picture names in [@S] are [Legendre\_a\_0.5\_b\_0.xxx\_compl.eps]{} for $\lambda_n$ on the complex plane, [Legendre\_a\_0.5\_b\_0.xxx\_ReIm.eps]{} for the real and imaginary part of $\lambda_n$, and [Legendre\_a\_0.5\_b\_0.xxx\_conv.eps]{} for the logarithmic plot of the convergence rate.
The picture names in [@S] are [Legendre\_a\_1.001\_b\_0.001\_xxxx.eps]{} for $\lambda_n$ on the complex plane ([xxxx=compl]{}), the real and imaginary part of $\lambda_n$ ([xxxx=ReIm]{}), and the convergence rate ([xxxx=conv]{}). The names are [Wigner\_a\_0.5\_b\_0.1\_xxxx.eps]{}. G. Szego, *Orthogonal polynomials*, AMS, Providence, Rhode Island, 1939. M.F. Barnsley, J.S. Geronimo, N.A. Harrington, “Infinite-dimensional Jacobi matrices associated with Julia sets”, Proc. AMS, 88 (1983), 625–630. J. Wimp, “Explicit formulas for the associated Jacobi polynomials and some applications", *Can. J. Math.*, 39 (1987), 983–1000. M. Wojtylak, “On a class of $H$–selfadjoint random matrices with one eigenvalue of nonpositive type", Electron. Commun. Probab. 17 (2012), no. 45, 1–14. D. Bessis, L. Perotti, “Universal analytic properties of noise: introducing the J-matrix formalism", *J. Phys. A: Math. Theor.* 42 (2009) 365202. A. Buchleitner and D. Delande, Phys. Rev. Lett. [**70**]{}, 33 (1993). L. C. Perotti, Phys. Rev. [**E70**]{}, 066218 (2004).

|             | $a_0$ | $b_0$        | $\max n$ | $\lambda_\infty$             | $n_0$   | $q$    | $|\lambda_\infty-\lambda_n|$ |
|-------------|-------|--------------|----------|------------------------------|---------|--------|------------------------------|
| Legendre    | 0.5   | 0.5          | 50       | $0.4045 + 0.3064{\mathrm i}$ | 1       | 1.3855 | $\simeq q^{-2n}$             |
|             | 0.5   | 0.1          | 200      | $0.4946 + 0.0155{\mathrm i}$ | 46      | 1.0180 | $\simeq 3.5^{-1} q^{-2n}$    |
|             | 0.5   | 0.05         | 1000     | $0.4986 + 0.0039{\mathrm i}$ | 190     | 1.0045 | $\simeq 4.6^{-1} q^{-2n}$    |
|             | 0.5   | 0.01         | 1000     | $0.4999 + 0.0002{\mathrm i}$ | $>1000$ | 1.0002 | $\simeq 6.5^{-1} q^{-2n}$    |
|             | 1.001 | 0.001        | 200      | 1.0010                       | —       | 1.0456 | $\simeq 12.5^{-1} q^{-2n}$   |
| Wigner      | 0.5   | 0.1          | 1000     | $0.4975 + 0.0096{\mathrm i}$ | 175     | 1.0050 | $\simeq 4^{-1} q^{-2n}$      |
| Cantor      | 0.5   | 0.1          | 400      | $0.5074 + 0.0096{\mathrm i}$ | 81      | 1.0044 | $\ll q^{-2n}$                |
| Jacobi(1,2) | -5/3  | $\sqrt{2/3}$ | 300      | -1                           | —       | 1      | $\simeq n^{-2}$              |

: Parameters and numerical results for Examples \[jumping\], \[jumping2\], \[jumping3\] and \[realev\].
If $\lambda_\infty\notin{\mathbb{R}}$ then $n_0$ denotes the maximal $n$ for which $\lambda_n\in{\mathbb{R}}$; $q$ was computed according to Theorem \[main-geom\].[]{data-label="Table"}

![The only eigenvalue $\lambda_n$ of nonpositive type of the matrix $H_{[0,n]}$ given in eq. with $a_0=0.5$, $b_0=0.05$ and $H_{[1,n]}$ being the Jacobi matrix corresponding to the Legendre polynomials. The points, each corresponding to a different $n$, are plotted on the complex plane color coded according to $n$ in the upper picture, and according to $n \mod 3$ ($0$–blue, $1$–cyan, $2$–yellow) in the lower picture, where we show a zoomed detail around $\lambda_\infty$. The three branches of $\lambda_n$ mentioned in the text are clearly visible. See Example \[jumping\].[]{data-label="Legendre_compl"}]({Legendre_a_0.5_b_0.05_compl2}.eps "fig:"){width="300pt"} ![The only eigenvalue $\lambda_n$ of nonpositive type of the matrix $H_{[0,n]}$ given in eq. with $a_0=0.5$, $b_0=0.05$ and $H_{[1,n]}$ being the Jacobi matrix corresponding to the Legendre polynomials. The points, each corresponding to a different $n$, are plotted on the complex plane color coded according to $n$ in the upper picture, and according to $n \mod 3$ ($0$–blue, $1$–cyan, $2$–yellow) in the lower picture, where we show a zoomed detail around $\lambda_\infty$. The three branches of $\lambda_n$ mentioned in the text are clearly visible. See Example \[jumping\].[]{data-label="Legendre_compl"}]({Legendre_a_0.5_b_0.05_compl3}.eps "fig:"){width="300pt"}

![Real and imaginary part of the eigenvalue $\lambda_n$ from Figure \[Legendre\_compl\] vs. $n$. The points are color coded according to $n \mod 3$. The cusps in the real parts of the yellow and cyan branches correspond to the change of direction when a branch leaves the real line, see Figure \[Legendre\_compl\]. See Example \[jumping\].[]{data-label="Legendre_ReIM"}]({Legendre_a_0.5_b_0.05_ReIm2}.eps){width="300pt"}

![Same parameters as in Figure \[Legendre\_compl\].
The black line represents the theoretical bound given in Theorem \[main-geom\], shifted down so as to be superimposed on the numerical data. The logarithmic scale on the vertical axis makes evident the exponential convergence of $\lambda_n$ to $\lambda_\infty$. See Example \[jumping\].[]{data-label="Legendre_conv"}]({Legendre_a_0.5_b_0.05_conv2}.eps){width="300pt"} ![The value $n_0$ where the last $\lambda_n$ branch leaves the real axis as a function of $\lambda_\infty$. $H_{[1,n]}$ is the Jacobi matrix corresponding to the Legendre polynomials and $a_0$ and $b_0$ vary to get the different values of $\lambda_\infty$. See Example \[jumping\].[]{data-label="Ns"}](N2.eps){width="300pt"} ![The eigenvalues $\lambda_n$ for $H_{[1,n]}$ associated with the Wigner measure and parameters $a_0=0.5$, $b_0=0.1$, plotted in the complex plane, color coded according to $n \mod 12$ where twelve is the number of $\lambda_n$ branches. See Example \[jumping2\]. []{data-label="Wigner_compl"}]({Wigner_a_0.5_b_0.1_compl2}.eps){width="300pt"} ![Same parameters and color code as in Figure \[Wigner\_compl\]. The black line represents the theoretical bound given in Theorem \[main-geom\], shifted down so as to be superimposed on the numerical data. The logarithmic scale on the vertical axis makes evident the exponential convergence of $\lambda_n$ to $\lambda_\infty$. See Example \[jumping2\].[]{data-label="Wigner_conv"}]({Wigner_a_0.5_b_0.1_conv2}.eps){width="300pt"} ![Exponential convergence of $\lambda_n$ to $\lambda_\infty$. Here $H_{[1,\infty)}$ is associated with a measure on a Cantor-like set and $a_0=0.5$, $b_0=0.1$. The black line represents the theoretical bound given in Theorem \[main-geom\]. Color coding according to $n \mod 27$ evidences a repeated pattern of 27 points. The observed convergence is again exponential but much faster than the bound given by Theorem \[main-geom\]: the observed $q$ is approximately $1.1147$, much larger than the theoretical one $1.0044$. 
See Example \[jumping3\].[]{data-label="Cantor_conv"}]({Cantor_a_0.5_b_0.1_conv2}.eps){width="300pt"} ![Log-log plot showing $\mathcal{O}(n^{-2})$ convergence of $\lambda_n$ to $\lambda_\infty=-1$ in a case when $\lambda_\infty$ is embedded in the spectrum of $H_{[1,\infty)}$. See Example \[realev\].[]{data-label="Jacobicrit_conv"}]({Jacobi12crit_conv2}.eps){width="300pt"}
--- abstract: 'Domain Adaptation (DA) has the potential to greatly help the generalization of deep learning models. However, the current literature usually assumes that knowledge is transferred from the source domain to a specific, known target domain. Domain Agnostic Learning (DAL) proposes a new task of transferring knowledge from the source domain to data from multiple heterogeneous target domains. In this work, we propose the Domain-Agnostic Learning framework with Anatomy-Consistent Embedding (DALACE) that works on both domain-transfer and task-transfer to learn a disentangled representation, aiming to not only be invariant to different modalities but also preserve anatomical structures for the DA and DAL tasks in cross-modality liver segmentation. We validated and compared our model with state-of-the-art methods, including CycleGAN, Task Driven Generative Adversarial Network (TD-GAN), and Domain Adaptation via Disentangled Representations (DADR). For the DA task, our DALACE model outperformed CycleGAN, TD-GAN, and DADR with DSC of 0.847 compared to 0.721, 0.793 and 0.806. For the DAL task, our model improved the performance with DSC of 0.794 from 0.522, 0.719 and 0.742 by CycleGAN, TD-GAN, and DADR. Further, we visualized the success of disentanglement, which added human interpretability to the learned representations. Through ablation analysis, we specifically showed the concrete benefits of disentanglement for downstream tasks and the role of supervision in learning a better disentangled representation: segmentation consistency with the proposed Domain-Agnostic Module (DAM) enforces invariance to domains, while the proposed Anatomy-Preserving Module (APM) preserves anatomical information.' author: - Junlin Yang - 'Nicha C. Dvornek' - Fan Zhang - Juntang Zhuang - Julius Chapiro - MingDe Lin - 'James S.
Duncan' bibliography: - 'egbib.bib' title: 'Domain-Agnostic Learning with Anatomy-Consistent Embedding for Cross-Modality Liver Segmentation' --- Introduction ============ Domain Adaptation (DA) has emerged as an effective technique to help the generalization of deep learning models [@wang2018deep]. Although supervised deep learning models have been very successful in a variety of computer vision tasks, such as image classification and semantic segmentation, they usually require large amounts of labeled data and assume that training and testing data are sampled *i.i.d.* from the same distribution. In practice, it is expensive and time-consuming to collect annotated data for every new task and new domain. At the same time, domain shift is common, meaning that training and testing data typically come from different distributions in related domains. In medical imaging, domain shift can be caused by different scanners, sites, protocols and modalities, adding to the high cost and difficulties of collecting large medical imaging datasets annotated by experts. Progress has been achieved in tackling this problem, especially for the domain shift caused by different scanners, sites and protocols. Yet, DA between different modalities is more challenging and has yet to be extensively explored due to the large domain shift between different modalities [@dou2018pnp]. Once achieved, it will not only solve the scarcity of annotated data for medical imaging, but also greatly improve current clinical workflows and the integration of different modalities. For example, both CT and MR play an important role in the diagnosis and follow-up after treatment of hepatocellular carcinoma (HCC), and they provide entirely different information. MR provides better specificity and multi-parametric tissue characterization along with better soft tissue contrast, which helps identify fat, diffusion, and enhancement in a much more dynamic way, while CT merely measures perfusion and density of tissue.
CT is quantitative due to calibration of density with Hounsfield units, while MRI is not [@oliva2004liver]. Domain adaptation from CT to MR is desirable since CT is cheaper and more widely available in practice, and many tasks, such as liver segmentation, are required on each modality. Most current works on the domain shift problem assume that the target domain is specific and known a priori, and try to adapt the source domain into a distinct target domain. Domain Agnostic Learning (DAL) [@peng2019domain] proposes a novel task of transferring knowledge from a labeled source domain to unlabeled data from arbitrary target domains, a difficult yet practical problem. For example, target data could consist of images from different medical sites, from different scanners and protocols, or even from different modalities. The main challenge is that the target data is highly heterogeneous and comes from mixed domains. Mainstream DA methods for semantic segmentation in medical imaging, such as CycleGAN [@zhu2017unpaired] and its variant TD-GAN [@zhang2018task], work at the pixel level. However, they assume a one-to-one mapping between source and target, and thus are unable to recover the complex cross-domain relations in the DAL task [@almahairi2018augmented; @huang2018multimodal]. Furthermore, translating pixel-level information by making the marginal distributions of the two domains as similar as possible does not necessarily guarantee semantic consistency [@tzeng2015simultaneous]. This is also the case for methods that incorporate feature-level marginal distribution alignment without explicitly enforcing semantic consistency, such as DADR [@yang2019domain]. In this work, we propose an end-to-end trainable model that not only solves the problem of unsupervised DA, but also works for DAL. Our DALACE model learns domain-agnostic anatomical embeddings by disentanglement under the supervision of a Domain Agnostic Module (DAM) and an Anatomy Preserving Module (APM).
It enforces semantic consistency to ensure that the disentangled domain-agnostic feature space is meaningful and interpretable, instead of simply aligning marginal distributions via adversarial training. Our model outperforms the state-of-the-art models on DA and generalizes naturally to the DAL task. We show the success of disentangling anatomical information and modality information by visualizing domain-agnostic images and modality-transferred images. Our model thus improves the interpretability of black-box deep neural network models. Through ablation studies, we show that the performance of the downstream task benefits from the learned disentangled representations, and that the proposed supervision modules DAM and APM boost the disentanglement. Furthermore, domain-agnostic images generated by our DALACE model have the potential for training a better joint learning model that utilizes the annotations from all modalities and works best on each modality at the same time. This initial effort to help the integration of different modalities is valuable, as each modality has its unique strengths and plays its unique role in clinical practice. The main contributions are summarized below. First, this work explicitly proposes and tackles the DAL task for medical image segmentation. With the supervision of DAM and APM, the proposed end-to-end model learns a domain-agnostic anatomical embedding that reduces the domain shift while preserving the anatomy. Second, numerous experiments were conducted to show the effectiveness of our proposed model for the DA, DAL and joint learning tasks with large CT and small MR datasets. Third, we show through visualization that the model, designed around disentanglement, is more interpretable. Ablation studies show the benefit of disentanglement for the downstream task and the role of supervision for disentanglement.
Related Work ============ **Domain Adaptation** has been a popular topic and is a potential solution for the generalization of deep learning models. There are two main categories: feature-level domain adaptation, which aligns features between domains, and pixel-level domain adaptation, which performs style transfer between domains [@wang2018deep]. For medical images, domain adaptation between domains arising from different scanners, medical sites and modalities is quite important, considering the high cost of collecting and annotating medical images from different domains and the valuable, unique roles of different modalities in clinical practice. Most state-of-the-art domain adaptation methods for medical image segmentation reduce the domain shift through adversarial learning. For example, CycleGAN [@zhu2017unpaired] and its variants TD-GAN [@zhang2018task] and TA-ADA [@jiang2018tumor] rely on the cycle-consistency loss and have led to impressive results. However, they assume a one-to-one mapping, instead of many-to-many, between data with complex cross-domain relations. Thus, they fail to capture the true structured conditional distribution. Instead, these models learn an arbitrary one-to-one mapping and generate translated output lacking in diversity [@almahairi2018augmented; @huang2018multimodal]. DADR [@yang2019domain] achieves DA by disentangling medical images into a content space and a style space. However, anatomy consistency is not always guaranteed without explicitly enforcing semantic consistency on the content space. As for feature-level adaptation, while it seems effective for tasks like classification, it is unclear how well it might scale to dense structured domain adaptation [@ramirez2018exploiting; @luo2019taking]. **Domain Agnostic Learning.** Compared to Domain Adaptation, Domain Agnostic Learning aims to learn from a source domain and map to arbitrary target domains instead of one specific, known target domain [@peng2019domain].
In the field of medical imaging, it is an interesting task to explore since it is common to get test data from different domains caused by different scanners, sites, protocols and modalities [@dou2018pnp]. For cross-modality liver segmentation, the DAL task is particularly useful since images from many different modalities (e.g. CT, MR with different phases, etc.) are routinely acquired for better diagnosis, image guidance during treatment and follow-up after treatment [@oliva2004liver]. Mainstream DA methods align the source and target domains by adversarial training [@long2015learning; @tzeng2017adversarial]. However, with highly entangled representations, these models have limited capacity to tackle the DAL task. [@peng2019domain] proposes to solve the DAL task for classification by learning disentangled representations. **Disentangled Representation Learning.** Disentangled representation learning aims to model the different factors of data variation [@higgins2018towards]. Several methods have been proposed to learn disentangled representations [@chen2016infogan; @higgins2017beta]. Some focus on disentangling style from content [@huang2018multimodal]. In our case, we define content as anatomy information, i.e., spatial structure, and style as modality information, i.e., the rendering of the image. Recent work [@locatello2019challenging] suggests that future research on disentangled representation learning should investigate the concrete benefits of enforcing disentanglement of the learned representations and be explicit about the role of inductive biases and supervision. In our work, we discuss the performance boost from disentanglement learning and the role of supervision from our proposed anatomy preserving module (APM) and domain agnostic module (DAM) through ablation studies. Disentanglement learning also plays an important role in going from the DA task to the DAL task.
**Interpretation by Disentanglement.** Deep neural networks are generally considered black-box models. However, there has been much recent work on the interpretation of deep learning models, particularly in medical imaging [@li2019efficient; @codella2018collaborative]. [@gilpin2018explaining] summarizes these works into three main categories: emulating the processing of the data to draw connections between the inputs and outputs, explaining the representation of data inside the network, and designing neural networks to be easier to explain. Disentangled representation learning falls into the last category, since these networks are designed to explicitly learn meaningful disentangled representations [@chen2016infogan; @higgins2017beta]. Through visualization of transferred images and domain-agnostic images and experiments on downstream tasks, we not only show the success of disentanglement between anatomy information and modality information, but also show that the representation has the potential for task transfer and data reconstruction. Furthermore, experimental results demonstrate that the downstream tasks benefit from the learned disentangled representation. These results show that our model is designed to learn meaningful, interpretable representations. Method ====== We propose the end-to-end trainable Domain-Agnostic Learning with Anatomy-Consistent Embedding (DALACE) model to tackle the DA and DAL tasks. Of note, the DA task is defined as transferring knowledge from a given labeled source dataset belonging to domain $\mathcal{D}_s$ to an unlabeled target dataset that belongs to a specific, known domain $\mathcal{D}_t$. The DAL task is defined in a similar way, except that the unlabeled target dataset consists of data from multiple domains $\{\mathcal{D}_{t1}, \mathcal{D}_{t2}, ..., \mathcal{D}_{tn}\}$ without any per-sample labels annotating which domain each sample belongs to. The ultimate goal is to minimize the target risk for downstream tasks [@peng2019domain].
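As a concrete illustration of the two task definitions above, the following minimal Python sketch (all function names are illustrative, not from the paper) builds DA and DAL training sets; the key difference is that the DAL target set mixes samples from several domains and discards per-sample domain labels:

```python
import random

def make_da_task(labeled_source, target_domain):
    """DA: labeled source plus an unlabeled target from ONE known domain."""
    return {
        "source": list(labeled_source),           # (image, mask) pairs kept
        "target": [x for x, _ in target_domain],  # target masks discarded
    }

def make_dal_task(labeled_source, target_domains, seed=0):
    """DAL: labeled source plus a shuffled mix of unlabeled targets.
    Per-sample domain identity is intentionally NOT retained."""
    mixed = [x for domain in target_domains for x, _ in domain]
    random.Random(seed).shuffle(mixed)            # domain labels are lost here
    return {"source": list(labeled_source), "target": mixed}
```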
In our application to medical images from patients with HCC, CT has true segmentation masks while MR does not. CT and MR are unpaired with each other. The DA task is to transfer knowledge from CT data to pre-contrast phase MR data, the DAL task is to transfer knowledge from CT data to heterogeneous multi-phasic MR data from mixed domains, and the downstream task of interest is cross-modality liver segmentation. Please see the visualization of DA and DAL in Fig. \[fig:DA\_DAL\]. ![Schematic diagram of the domain adaptation task and the domain agnostic learning task.[]{data-label="fig:DA_DAL"}](DA_DAL.png){width="0.623\linewidth"} ![image](pipeline.png){width="0.78\linewidth"} End-to-End Pipeline ------------------- Fig. \[fig:pipeline\] shows the end-to-end DALACE pipeline to learn a domain-agnostic anatomical embedding, which is invariant to domains but discriminative of the classes for the segmentation task. Input CT and MR images are denoted as $X_{CT}$ and $X_{MR}$. Inspired by the MUNIT [@huang2018multimodal] and DADR [@yang2019domain] models, DALACE consists of two anatomy encoders $E_a^{CT}$ and $E_a^{MR}$, two modality encoders $E_m^{MR}$ and $E_m^{CT}$, and two style-based generators with multi-layer perceptron (MLP) and adaptive instance normalization (AdaIN) [@karras2019style], $G^{CT}$ and $G^{MR}$. We propose the Anatomy Preserving Module (APM) and Domain Agnostic Module (DAM) to generate domain-agnostic anatomical images for the DA and DAL tasks. To start the pipeline, both source data $x_{CT}$ and target data $x_{MR}$ are fed into the encoders ($E_a^{CT}$, $E_m^{CT}$, $E_a^{MR}$ and $E_m^{MR}$) and embedded into anatomy codes $a^{CT}$ and $a^{MR}$ (feature maps) and modality codes $m^{CT}$ and $m^{MR}$ (vectors). In the next step, the anatomy codes and modality codes are fed into the style-based generators $G^{CT}$ and $G^{MR}$ for self-reconstruction via optimizing the $L_{img}$ term in equation \[eq:recon\].
Then the modality codes $m^{CT}$ and $m^{MR}$ are swapped and, together with the original anatomy codes, are fed into the style-based generators for cross-reconstruction/modality-transfer generation, which is constrained by the $L_{latent}$ loss term in equation \[eq:recon\]. Please refer to equations \[eq:img\] and \[eq:latent\] for details about $L_{img}$ and $L_{latent}$. Expectation is taken with respect to $x_{CT} \sim X_{CT}$ and $x_{MR} \sim X_{MR}$. $$\label{eq:recon} \begin{split} L_{recon} &= \alpha L_{img} + \beta L_{latent} \\&= \alpha (L_{CT}+L_{MR}) \\&+ \beta (L_{a}^{CT}+L_{m}^{CT}+L_{a}^{MR}+L_{m}^{MR}) \end{split}$$ $$\label{eq:img} \begin{split} &\;\;\;\;\,L_{CT}+L_{MR} \\&=\mathbb{E}||G^{CT}(E_{a}^{CT}(x_{CT}),E_m^{CT}(x_{CT}))-x_{CT}||_1 \\&+ \mathbb{E}||G^{MR}(E_{a}^{MR}(x_{MR}),E_m^{MR}(x_{MR}))-x_{MR}||_1 \end{split}$$ $$\label{eq:latent} \begin{split} &\;\;\;\;\,L_{a}^{CT}+L_{m}^{CT}+L_{a}^{MR}+L_{m}^{MR} \\&=||E_{a}^{MR}(x_{CT\rightarrow MR})-a^{CT}||_1 \\&+||E_{m}^{CT}(x_{MR\rightarrow CT})-m^{CT}||_1 \\&+||E_{m}^{MR}(x_{CT\rightarrow MR})-m^{MR}||_1 \\&+||E_{a}^{CT}(x_{MR\rightarrow CT})-a^{MR}||_1 \end{split}$$ To generate anatomy-preserving domain-agnostic images, only the anatomy codes are fed into the generators, without modality codes. DAM encourages the anatomy embedding to be domain-agnostic by adversarial training, while APM encourages the anatomy embedding to be anatomy-preserving by adversarial training [@huang2018multimodal; @zhang2018task]. In this way, the model is designed to learn meaningful and interpretable disentangled representations, thus helping us to understand the learned representations and the model better. Feedback Supervision Modules ---------------------------- ### Domain Agnostic Module This module encourages the embedding to be domain-agnostic through adversarial training.
It consists of two discriminators $D^{CT}$ and $D^{MR}$, which try to discriminate between real CT $X_{CT}$ and fake CT transferred from MR $X_{MR\rightarrow CT}$, and between real MR $X_{MR}$ and fake MR transferred from CT $X_{CT\rightarrow MR}$, respectively. The discriminators compete with the encoders and style-based generators to encourage the disentanglement of modality and anatomical information by driving modality information into the modality codes, thus forcing the anatomy embedding to be domain-agnostic. Please see Equation \[eq:DAM\] for details. ![Domain-Agnostic Module, which encourages the embedding to be domain-agnostic by adversarial training.[]{data-label="fig:long"}](DAM.png){width="0.525\linewidth"} $$\label{eq:DAM} L_{adv}^{cross}=L_{adv}^{CT\rightarrow MR}+L_{adv}^{MR\rightarrow CT}$$ $$\label{eq:CTtoMR} \begin{split} L_{adv}^{CT\rightarrow MR} &= \mathbb{E}[\log(1-D^{MR}(x_{CT\rightarrow MR}))]\\&+ \mathbb{E}[\log(D^{MR}(x_{MR}))] \end{split}$$ $$\label{eq:MRtoCT} \begin{split} L_{adv}^{MR\rightarrow CT} &=\mathbb{E}[\log(1-D^{CT}(x_{MR\rightarrow CT}))]\\&+\mathbb{E}[\log(D^{CT}(x_{CT}))] \end{split}$$ ### Anatomy Preserving Module (APM) The Anatomy Preserving Module helps the embedding to preserve and align high-level semantic information across modalities. It consists of two steps. In the first step, the anatomical images from CT and MR, $X_{a}^{CT}$ and $X_{a}^{MR}$, are fed into a segmentation module $S$, i.e., a U-Net-based model, to generate segmentation masks $\hat{M}_{a}^{CT}$ and $\hat{M}_{a}^{MR}$ for both $X_{a}^{CT}$ and $X_{a}^{MR}$. For $\hat{M}_{a}^{CT}$, we compute the pixel-wise cross-entropy loss (Equation \[eq:CE\]) between $\hat{M}_{a}^{CT}$ and the ground truth mask of the original CT image, $M_{a}^{CT}$, to encourage the encoders and style-based generators to keep the anatomy information.
In the second step, for $\hat{M}_{a}^{MR}$, we train a conditional GAN to differentiate between the pair of $X_{a}^{MR}$ and $\hat{M}_{a}^{MR}$ and the pair of $X_{a}^{CT}$ and $\hat{M}_{a}^{CT}$ (Equation \[eq:CGAN\]), thus encouraging the pairs of anatomical images and prediction masks originating from CT and MR to be indistinguishable, so that the anatomical images from MR become anatomy-preserving in an adversarial way. ![Anatomy-Preserving Module, which encourages the embedding to be anatomy-preserving by adversarial training. D is the discriminator, S is the U-Net segmentation module.[]{data-label="fig:long"}](APM.png){width="0.725\linewidth"} $$\label{eq:CE} L_{CE} = -\sum y_{true}\log(y_{pred})$$ $$\label{eq:CGAN} \begin{split} L_{adv}^{pair} &= \mathbb{E}[\log(1-D(x_a^{MR}, \hat{M}_a^{MR}))]\\&+ \mathbb{E}[\log(D(x_a^{CT}, \hat{M}_a^{CT}))] \end{split}$$ Implementation Details ----------------------- Anatomy encoders consist of 1 convolutional layer of stride 1 with 64 filters, 2 convolutional layers of stride 2 with 128 and 256 filters, respectively, and 4 residual layers with 256 filters followed by batch normalization, while modality encoders are composed of 1 convolutional layer of stride 1 with 64 filters, 4 convolutional layers of stride 2 with 128, 256, 256, and 256 filters, a global average pooling layer, and a fully-connected layer with 8 filters, without any batch normalization. Style-based generators with MLP take the anatomy codes (feature maps of size 64x64x256) and modality codes (vectors of length 8) as inputs, and consist of 4 residual layers with 256 filters, 2 upsampling layers of 2x, and 1 convolutional layer of stride 1. The modality codes are used as inputs to the MLP to generate affine transformation parameters. Residual blocks in the style-based generators are equipped with an Adaptive Instance Normalization (AdaIN) layer to take the affine transformation parameters from the modality codes via the MLP.
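Since AdaIN is how the modality code modulates the generator, a minimal NumPy sketch of adaptive instance normalization may help: the content feature map is instance-normalized per channel, then scaled and shifted by affine parameters produced from the modality code (function name and toy shapes are illustrative; the paper's generators operate on 64x64x256 feature maps):

```python
import numpy as np

def adain(content, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization: per-channel instance-normalize the
    content feature map, then apply the style scale/shift.
    content: (C, H, W) feature map; gamma, beta: (C,) affine parameters
    that would come from the MLP applied to the modality code."""
    mu = content.mean(axis=(1, 2), keepdims=True)      # per-channel mean
    sigma = content.std(axis=(1, 2), keepdims=True)    # per-channel std
    normalized = (content - mu) / (sigma + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]
```

After this operation each channel of the output has (approximately) mean `beta[c]` and standard deviation `gamma[c]`, regardless of the input statistics, which is what lets the modality code control the rendering.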
Discriminators are convolutional neural networks for binary classification. As for the DAM and APM modules, the segmentation network is a standard U-Net [@ronneberger2015u] architecture and the discriminators are also convolutional binary classifiers. The Adam optimizer [@kingma2014adam] is used for optimization. The parameters of the DALACE model are updated as follows. First, $\alpha$ and $\beta$ are set to 2.5 and 0.01 for minimization of the loss function in equation \[eq:recon\]: $\min_{E_a, E_m, G}\, L_{recon}= \alpha L_{img} + \beta L_{latent}$. Second, adversarial training is applied to the loss function in equation \[eq:DAM\]: $\min_{E_a, E_m, G} \, \max_{D} \, L_{adv}^{cross}$. Third, the loss functions in equations \[eq:CE\] and \[eq:CGAN\] are optimized as $\min_S\,L_{CE}$ and $\min_{E_a, G} \, \max_{D} \, L_{adv}^{pair}$, where $S$ denotes the segmentation module in APM. The learning rate is set to 0.001, except for 0.0001 for $\min_{E_a, G} \, \max_{D} \, L_{adv}^{pair}$. In total, 2600 epochs are trained per fold. In the first 600 epochs, $L_{adv}^{pair}$ is not optimized. Experiments were conducted on two Nvidia 1080ti GPUs. The training time per fold is $\sim2.5\,h$; the testing time per fold is within a minute. Experimental Results ==================== Data and Preprocessing ---------------------- We tested our DALACE model on slices from unpaired CT and MR scans: 130 CT scans from the LiTS challenge at ISBI and MICCAI 2017 [@christ2017lits] and multi-phasic MR scans from 20 patients at a local medical center, including pre-contrast phase MR, 20s post-contrast phase (arterial phase) MR and 70s post-contrast phase (venous phase) MR. Please see Fig. \[fig:DATA\] for image examples. Not only is a huge domain shift from CT to MR observed, but the domain shifts between multi-phasic MR images also cannot be neglected. The multi-phasic MR dataset of 20 patients was collected with Institutional Review Board (IRB) approval, and manual liver segmentation masks were created by a radiology expert.
Both MR and CT data are normalized and resliced to be isotropic in three dimensions. Bias field correction is applied to the MR data. For all the experiments, both the CT and MR datasets are partitioned into 5 folds for cross-validation. ![Examples of images from different modalities, from left to right: CT, pre-contrast phase MR, 20s post-contrast phase MR and 70s post-contrast phase MR.[]{data-label="fig:DATA"}](DATA.png){width="0.5\linewidth"} Domain Adaptation ----------------- For the DA task, to transfer knowledge from CT to pre-contrast phase MR, competing models are trained with labeled CT images and unlabeled pre-contrast phase MR images. Model performance was assessed using the dice similarity coefficient (DSC) between true and predicted liver segmentations. To better understand the data and the DA task, we trained and tested a supervised U-Net on the small pre-contrast phase MR dataset to serve as the upper bound. Another supervised U-Net, trained on CT and tested on pre-contrast phase MR, serves as the lower bound for each task. Please see Table \[tab:Seg\_DAL\] for details. The reported upper bound might be lower than the true upper bound since the MR training data for 5-fold cross-validation is small and noisy. Compared to the MR data, CT data is much more available and robust to artifacts. ***Settings*** For each cross-validation split, four folds of CT data with segmentation masks and pre-contrast MR data without segmentation masks are used to train, and one fold of pre-contrast MR data without segmentation masks is used to test. The state-of-the-art models CycleGAN [@zhu2017unpaired], TD-GAN [@zhang2018task], and DADR [@yang2019domain] are trained with the same partition of data for the DA task. DALACE finds a shared space to embed both CT and MR and transfers both modalities into anatomical images, while CycleGAN and TD-GAN try to transfer directly between CT and MR.
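The DSC used for evaluation throughout this section has the standard definition $\mathrm{DSC} = 2|P \cap T| / (|P| + |T|)$ for predicted mask $P$ and true mask $T$; a minimal sketch (the small epsilon guarding against two empty masks is an implementation choice, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |P & T| / (|P| + |T|), in [0, 1], 1 for identical masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```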
***Results*** As shown in Table \[tab:Seg\_DA\], our DALACE model outperforms the current state-of-the-art models with a DSC of 0.847 compared to DSCs of 0.721 for CycleGAN, 0.793 for TD-GAN, and 0.806 for DADR. Please see Fig. \[fig:Seg\] for a visual comparison of qualitative results from the different models on cross-modality liver segmentation with DA. ![Two examples of the DA task for cross-modality liver segmentation with different methods. From left to right: original pre-contrast phase MR images, ground truth masks, U-Net w/o DA results, CycleGAN results, TD-GAN results, DADR results, DALACE results.[]{data-label="fig:Seg"}](Seg.png){width="0.775\linewidth"} Domain Agnostic Learning ------------------------ For the DAL task, to transfer knowledge from CT to multi-phasic MR, competing models are trained with labeled CT images and unlabeled MR images in three different phases. To better assess performance on the DAL task, we trained and tested a supervised U-Net on MR from each phase separately to serve as the upper bound. Another supervised U-Net, trained on CT and tested on MR from all phases, serves as the lower bound. Please see Table \[tab:Seg\_DAL\] for details. The reported upper bound might be lower than the true upper bound given the noisy and small MR dataset used for training with 5-fold cross-validation. Among the different phases, the liver in arterial phase MR is visually more inhomogeneous than in the other MR phases, which might lead to degraded performance. ***Settings*** For each cross-validation split, four folds of CT data with segmentation masks and multi-phasic MR data including pre-contrast phase, 20s post-contrast phase and 70s post-contrast phase without segmentation masks are used to train, and one fold of the multi-phasic MR data is used to test. The state-of-the-art models CycleGAN [@zhu2017unpaired], TD-GAN [@zhang2018task] and DADR [@yang2019domain] are trained with the same partition of data for the DAL task.
***Results*** As shown in Table \[tab:Seg\_DAL\], our DALACE model outperforms the current state-of-the-art models with a DSC of 0.794 compared to DSCs of 0.522 for CycleGAN, 0.719 for TD-GAN, and 0.742 for DADR. The shared embedding space from our DALACE model is modality-invariant to CT and multi-phasic MR, thus it is effective on the DAL task, where target data comes from heterogeneous, mixed domains. CycleGAN and TD-GAN performed poorly in transferring between CT and multi-phasic MR since they assume the multi-phasic MR data to be from a single domain. DADR accommodates mixed domains, but does not enforce anatomy-consistent representations, which results in lower performance compared to our DALACE model. Joint Learning -------------- For joint learning, instead of transferring knowledge from CT to MR, knowledge from CT and MR is jointly learned to obtain a better model on both CT and MR. Specifically, not only the CT images but also the MR images have ground truth masks for training. ***Settings*** CT and pre-contrast phase MR are used in this experiment. For each cross-validation split, four folds of CT with segmentation masks and four folds of MR with segmentation masks are used to train the DALACE model, and the remaining fold of CT and MR is used to test the model. A U-Net trained on four folds of CT with segmentation masks and tested on the remaining CT fold, and a U-Net trained on four folds of MR with segmentation masks and tested on the remaining MR fold, were used for comparison. ***Results*** As shown in Table \[tab:Joint\], the DALACE model for joint learning simultaneously outperforms the fully-supervised U-Net models separately trained and tested on each modality, with DSCs of 0.911 tested on CT and 0.907 tested on MR, compared to 0.901 on CT and 0.869 on MR using the fully-supervised U-Nets. Of note, 0.869 (0.044) is the estimated upper bound for the DA task from Table \[tab:Seg\_DA\].
Overall, our DALACE model outperformed the other methods for the joint learning task, especially in terms of the DSC tested on MR, which is of most interest. Only two methods, DADR and DALACE, exceeded the upper bound for DA in joint learning and showed synergy from effectively integrating information from both CT and MR. Since our DALACE model for joint learning uses the domain-agnostic images of CT and MR as the inputs for the segmentation module, this shows that the DALACE model successfully disentangles the anatomy information from the modality information. Achieving liver segmentation does not require information from the modality codes; only the anatomical information is relevant to the segmentation task. Analysis ======== Results Analysis ---------------- We tested our DALACE model on unpaired CT and MR data in three experiments and showed that DALACE is superior to the current state-of-the-art models in the literature, such as CycleGAN, TD-GAN and DADR. The DALACE model, which works on both domain-transfer and task-transfer to learn a disentangled representation, not only aims to be invariant to different modalities but also preserves anatomical structures. In the DAL experiment, the main challenge is that target data come from multiple target domains, which violates the assumptions made by CycleGAN and TD-GAN. Features in CycleGAN and TD-GAN are highly entangled, so it is hard for them to learn a domain-invariant representation given multiple domains. However, the DALACE model generalizes easily to the DAL experimental settings and demonstrates superior performance in terms of DSC due to disentanglement learning. Compared to DADR, the DALACE model achieved improved performance for the downstream tasks through explicitly enforcing semantic consistency.
In the Joint-Learning experiment, we show the potential of our DALACE model to integrate different modalities, which shows the meaningful disentangled representations from each domain are domain-agnostic and aligned to preserve the anatomy structures. Visualization of Disentanglement -------------------------------- Through the above experiments, we have shown that our DALACE model outperforms the current state-of-the-art models on both DA and DAL tasks. In this section, we will show that anatomy information and modality information are disentangled by the DALACE model through visualization of domain-agnostic images and modality-transferred images. ***Domain-Agnostic Images*** ![Two sets of examples of domain-agnostic images. In each set, the first row from right to left is CT, pre-contrast MR, 20s post-contrast MR, and 70s post-contrast MR, and the second row is its corresponding domain-agnostic images.[]{data-label="fig:DAIMG"}](Domain_Agnostic_Images.png){width="0.9\linewidth"} To generate domain-agnostic images, CT and MR images are embedded by encoders into anatomy codes and modality codes. Then only anatomy codes are fed into style-based generators without modality codes to get the outputs as domain-agnostic images. Please see Fig. \[fig:DAIMG\] for domain-agnostic anatomical images generated by anatomy codes from different domains including CT and multi-phasic MR. As demonstrated in the figure, the modality information is erased while the anatomical structures are preserved in the domain-agnostic images. In other words, the anatomy information is extracted and preserved in the domain-agnostic anatomical embeddings. ![CT images are transferred to multi-phasic MR images in three phases. 
From left to right, each column is the multi-phasic MR images (from top to bottom: 70s post-contrast phase MR, 20s post-contrast phase MR, pre-contrast phase MR), the CT images, the modality-transferred images with anatomy structure from the CT images in the second column and modality rendering from the multi-phasic MR images in the first column.[]{data-label="fig:STIMG"}](DALACE_ST.png){width="0.36\linewidth"} ***Modality-Transferred Images*** To perform modality transfer, both input images and reference images are embedded by encoders into anatomy codes and modality codes. Then we maintain the anatomy codes from input images and modality codes from reference images and feed them into the style-based generators to get modality-transferred images. The generated modality-transferred images will inherit the anatomy structure from input images and modality rendering from reference images. Please see Fig. \[fig:STIMG\] for CT images transferred to multi-phasic MR images in three phases. It shows the successful disentanglement of modality information into modality code. Interpretation -------------- According to the categories for deep learning model explanation methods in [@gilpin2018explaining], the DALACE model is designed to be easier to interpret by explicitly learning meaningful and interpretable representations. The successful disentanglement of anatomy and modality information, as shown in Fig. \[fig:DAIMG\] and Fig. \[fig:STIMG\], adds transparency to the black-box model. Furthermore, in the previous experiments, it was shown that the learned meaningful and interpretable representation is able to generalize and is useful for reconstruction and downstream tasks. 
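The two generation procedures in this section reduce to a small amount of wiring around the trained networks. The following Python sketch makes the code-swapping explicit; all callables are hypothetical stand-ins for the trained encoders and style-based generators, and passing `None` for the absent modality code is only an illustration of feeding anatomy codes alone:

```python
def modality_transfer(x_input, x_reference, enc_anatomy, enc_modality, generator):
    """Keep the anatomy code of the input image, borrow the modality code
    of the reference image, and decode with the style-based generator."""
    anatomy_code = enc_anatomy(x_input)        # spatial structure of the input
    modality_code = enc_modality(x_reference)  # rendering of the reference
    return generator(anatomy_code, modality_code)

def domain_agnostic_image(x_input, enc_anatomy, generator):
    """Domain-agnostic image: decode the anatomy code alone, without any
    modality code (represented here as None for illustration)."""
    return generator(enc_anatomy(x_input), None)
```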
Ablation Studies ================ Recent work on disentanglement learning [@locatello2019challenging] suggests that, besides demonstrating successful disentanglement, two important directions of future research are: (1) to investigate the concrete benefits of enforcing disentanglement learning for downstream tasks; and (2) to explicitly discuss the role of supervision in disentanglement. In accordance with these two points, ablation studies are performed to analyze the role of each component of the proposed model. Effectiveness of Disentanglement -------------------------------- To investigate the concrete benefits of enforcing disentanglement of the learned representations, we removed the disentanglement from our model by replacing the anatomy encoders, modality encoders and style-based generators with CycleGAN, keeping the other parts of the model the same, except that there are no domain-agnostic images and direct modality transfer is applied between CT and MR; this is essentially the TD-GAN [@zhang2018task] model with the segmentation module pretrained on CT. The ablation experiment showed that, without the disentanglement component, the performance decreased from 0.847 to 0.793 on the DA task and from 0.794 to 0.719 on the DAL task, which indicates that disentanglement benefits the performance of downstream tasks. Role of Supervision on Disentanglement -------------------------------------- To be explicit about the role of supervision for disentanglement, as well as to investigate the roles of the APM and the DAM in the DALACE model, we remove the APM and the DAM in turn. Removing the APM separates the end-to-end DALACE model into a two-stage model without enforced semantic consistency, which is essentially DADR [@yang2019domain]. Removing the DAM weakens the model's ability to learn a domain-agnostic representation, thus degrading the performance. Please see Table \[tab:supervision\] for details. 
It shows the important role of supervision on disentanglement and performance. Conclusions and Limitations =========================== For medical image analysis, in practice, it is expensive and time-consuming to collect and annotate medical images. DA can be an effective solution for the generalization of deep learning models for medical image analysis. However, the target data itself can come from different scanners, medical sites, protocols and modalities with domain shifts, demonstrating the importance of the proposed DAL task. In addition, each modality plays a unique role in diagnosis and after-treatment follow-up. An accurate model for the DAL task not only addresses the scarcity of labeled training data for medical image analysis using deep learning, but also improves the current clinical workflow and greatly helps the integration of different modalities. This work explicitly proposed the DAL task for medical image analysis and introduced DALACE, an end-to-end trainable model that utilizes disentanglement to preserve anatomical information and promote domain adaptation for the new DAL task. Through ablation studies, we explicitly investigated the effectiveness of disentanglement and the role of supervision for a disentangled representation that is domain-agnostic and anatomy-preserving. By visualization, we showed that the disentanglement promotes the interpretability of the learned representation. While DALACE is proposed to tackle the DA and DAL tasks, it also has the potential to realize style transfer. Obtaining one model that works on the style transfer, DA and DAL tasks is difficult but desirable, and an interesting direction for future work. Without paired CT and MR to serve as ground truth, style transfer results are difficult to evaluate quantitatively. The joint-learning experiment also points to a potential direction for future studies: the integration of modalities.
--- abstract: 'The Hooghly-Matla estuarine complex is a unique estuarine system of the world. Nutrients from litterfall enrich the adjacent estuary through tidal influence, which in turn regulates the phytoplankton, zooplankton and fish population dynamics. Environmental factors regulate the biotic components of the system, among which salinity plays a leading role in the regulation of the phytoplankton, zooplankton and fish dynamics of the estuary. In this article, a $PZF$ model is considered with a Holling type-II response function. The present model considers salinity-based equations for the plankton dynamics of the estuary. The interior equilibrium is considered the most important equilibrium state of this model. The model equations are solved both analytically and numerically using the real database of the Hooghly-Matla estuarine system. The essential mathematical features of the present model have been analyzed through local and global stability and the bifurcations arising in some selected situations. A set of values of the salinity of the estuary is identified that helps to determine the sustenance of the fish population in the system. The ranges of salinity under which the system undergoes Hopf bifurcation are determined. Numerical illustrations are performed in order to validate the applicability of the model under consideration.' author: - | Ujjwal Roy$^{(1)}$, S. Sarwardi$^{(2)}$[^1], N. C. 
Majee$^{(1)}$, Santanu Ray$^{(3)}$\ $^{(1)}$ Department of Mathematics, Jadavpur University\ Kolkata-700 032, West Bengal, India\ email: \ $^{(2)}$ Department of Mathematics, Aliah University, IIA/27, New Town\ Kolkata - 700 156, West Bengal, India\ email: \ $^{(3)}$Ecological Modelling Laboratory, Department of Zoology, Visva-\ Bharati University, Santiniketan-731 235, West Bengal, India.\ email:  title: '**Effect of salinity and fish predation on zooplankton dynamics in Hooghly-Matla estuarine system, India**' --- [**Mathematics Subject Classification**]{}: [92D25, 92D30, 92D40]{} **Keywords:** $PZF$ model; Equilibria; Local stability; Global stability; Hopf bifurcation; Numerical simulation Introduction ============ The delta of the Hooghly-Matla estuarine system is networked by seven major rivers along with their tributaries and creeks. This deltaic system harbours luxuriant mangroves and constitutes the Sundarban mangrove ecosystem. The mangroves are the major sources of detritus and nutrients to the adjacent estuary, which make up a favourable habitat for the growth of shell fish and fin fish (cf. Mandal et al., (2009)). This estuary acts as a route and refuge area for a variety of migratory fish species. The estuary supports 53 species of pelagic fish belonging to 27 families and 124 species under 49 families of demersal fish (cf. Hussain and Acharya, (1994)). Fishery in this estuarine water contributes to the economy of the state of West Bengal, India. The organic matter that passes from litterfall to the adjacent estuary supports both the grazing and detritus food chains of this lotic system. Among the chemical components studied so far, salinity plays a crucial role in the abundance (cf. Bhunia, (1979)) and dynamics of the zooplankton of the estuary (cf. Ghosh, (2001); Ketchum, (1951)), because this community lies in the middle of the grazing food chain: phytoplankton-zooplankton-fish ($PZF$). 
In addition, perturbation to this trophic level may trigger imbalance in the food chain, which in turn affects the phytoplankton and fish communities. The dynamics of salinity is season dependent. During the monsoon and early post-monsoon, huge amounts of fresh water enter the estuary from the upstream, resulting in the lowering of salinity. In the pre-monsoon, fresh water runoff from the upstream becomes very low and, due to the tidal influence of the adjacent Bay of Bengal, the salinity increases. Throughout the year, a gradient of salinity is observed between the upstream and downstream areas of the estuary (cf. Mandal et al., (2009)). The abundance of different species of zooplankton varies according to the salinity of the estuary throughout the year; as a result the grazing rate also changes with the seasons. In the $PZF$ system, the grazing rate of zooplankton is one of the most sensitive parameters in the Hooghly-Matla estuarine system, as the dynamics of zooplankton also depends on the lower trophic level of phytoplankton as well as on the fish population (cf. Mandal et al., (2009)). Moreover, fish predation exhibits a top-down effect on the zooplankton community. Therefore, in the $PZF$ system, the two important parameters that shape the zooplankton dynamics of the estuary are the salinity-dependent grazing rate of zooplankton and the fish predation rate (cf. Dube et al., (2010)). There have been only a few models based on the effects of salinity on plankton dynamics. Few studies have examined the relationship between salinity levels and the types of species that can occur in algal blooms (cf. Griffin et al., (2001); Marcarelli et al., (2006); Quinlan and Phlips, (2007)). Many $PZF$ models have been constructed for estuarine systems (cf. Ray et al., (2001); Cottingham, (2004); Dube and Jayaraman, (2008); Dube et al., (2010)) and for the top-down effect (cf. 
Morozov et al., (2005); Irigoien et al., (2005); Calbet and Saiz, (2007)), but none of these models have considered the salinity-dependent grazing rate of zooplankton, which plays an important role in the zooplankton dynamics of the estuary. The present account deals with a $PZF$ model, where the salinity-dependent grazing rate of zooplankton is taken into consideration along with the variation of upstream and downstream salinity. Mathematical model formulation ------------------------------ Let $P$, $Z$, $F$ denote the populations of phytoplankton, zooplankton and fish respectively. In the present $PZF$ model, the light- and temperature-dependent photosynthesis rate of phytoplankton and the salinity-induced grazing rate of zooplankton have been incorporated into the model proposed by Mandal et al., (2011). The modified model under consideration is as follows: $$\begin{aligned} \left \{ \begin{array}{lll} \frac{dP}{dt} = m_{1}P\Bigl(1-\frac{P}{k_{P}}\Bigr)-g_{s}\frac{PZ}{P+k_{Z}}\\ \frac{dZ}{dt} = a g_{s}\frac{PZ}{P+k_{Z}}-g_{f}\frac{ZF}{Z+k_{F}}-m_{2}Z\\ \frac{dF}{dt} = g_{f}\frac{ZF}{Z+k_{F}}-m_{3}F.\end{array}\right. \label{eq1}\end{aligned}$$   Here $g_s=\delta g_Z,~\text{where}~~ \delta =\frac{s_{u}}{s_{u}-s_{d}}$ is the dilution factor, and $m_{1},$  $s_{d},$ $s_{u},$ $g_{Z},$ $k_{Z},$ $k_{P}$ are the net growth rate of phytoplankton, the downstream salinity, the upstream salinity, the grazing rate of zooplankton, the half-saturation constant for zooplankton grazing on phytoplankton, and the carrying capacity of phytoplankton, respectively. Since the estuary is a transition zone between river and sea, there is always fluctuation of salinity throughout the year, which is due to dilution by upstream river water and/or mixing with downstream sea tidal water. $\delta$ is calculated by following the equation of Ketchum, (1951). 
Besides grazing, the abundance of zooplankton also depends on the losses due to excretion $E_{zoo}=Z E_{zo},$ respiration $R_{zoo}=Z r_{zo},$ fish predation $F_p=Z r_{fp}$ and mortality $M_{zoo}=Z M_{Z}$, where $E_{zo},$ $r_{zo},$ $r_{fp}$ and $M_{Z}$ are the zooplankton excretion, respiration, fish-predation and mortality rates, respectively. Accordingly,  $m_2=(E_{zo}+r_{zo} + r_{fp} +M_{Z}).$ The abundance of $F$ is governed by many processes in the estuary. Fish predation $F_{p}$ on $Z$ follows Michaelis-Menten kinetics (Holling type II), which enriches the fish pool of the estuarine system; it depends on $k_{F}$ and $g_f$. The $F$ population is reduced by mortality $F M_{f}$, respiration $F r_{f}$, harvest by fishing $F H_{f}$ and excretion $F E_{f}$, where $M_{f},$ $r_{f},$ $H_{f}$ and $E_{f}$ are the mortality, respiration, harvest and excretion rates of the carnivorous fish, respectively. Accordingly,  $m_3=(E_{f} +M_{f}+r_{f}+H_{f}).$ Existence and positive invariance --------------------------------- Letting $ X \equiv(P,Z,F)^T$, $~f: \mathbb{R}^3\rightarrow \mathbb{R}^3, f=(f_{1},f_{2}, f_{3})^T$, the system (\[eq1\]) can be rewritten as $\dot{X}=f(X)$. Here $f_{i}\in C^\infty (\mathbb{R})$ for $i=1,2,3,$ where $f_{1}= m_{1}P(1-\frac{P}{k_{P}})-g_{s}\frac{PZ}{P+k_{Z}}$, $f_{2}= {a} g_{s}\frac{PZ}{P+k_{Z}}-g_{f}\frac{ZF}{Z+k_{F}}-m_{2}Z$, $f_{3}= g_{f}\frac{ZF}{Z+k_{F}}-m_{3}F$. Since the vector function $f$ is a smooth function of the variables $P$, $Z$ and $F$ in the positive octant $ \Omega^0 =\{(P,Z,F);P>0,Z>0,F>0\},$ the local existence and uniqueness of the solutions hold. Boundedness of the system ------------------------- Boundedness of a system guarantees its biological validity. The following theorem establishes the uniform boundedness of the system (\[eq1\]). 
**Theorem 1.** *All the solutions of the system (\[eq1\]) which start in $\mathbb{R}_{+}^3$ are uniformly bounded.* **Proof.** Let $(P(t), Z(t), F(t))$ be any solution of the system with positive initial conditions. From real ecological field studies one can assume   $\max \{ ~ m_{2},~ m_3\} < 1.$ Now let us define the function $X =aP+Z+F.$  The time derivative of $X$ gives $$\begin{aligned} \frac{d X}{d t} &=&a\frac{d P}{d t} + \frac{d Z}{d t} + \frac{d F}{d t},\nonumber\\ &=& am_{1}P(1-\frac{P}{k_{P}})-ag_{s}\frac{P Z}{P+k_{Z}} +a g_{s}\frac{P Z}{P+k_{Z}}-g_{f}\frac{Z F}{Z+k_{F}}-m_{2}Z + g_{f}\frac{Z F}{Z+k_{F}}-m_{3}F \nonumber\\ &=&am_{1}P(1-\frac{P}{k_{P}}) - m_2 Z - m_3 F.\end{aligned}$$ Now for each $v > 0$, we have $$\begin{aligned} \frac{d X}{d t} + v X &=&am_{1}P(1-\frac{P}{k_{P}}) - m_2 Z - m_3 F + v(aP+Z+F)\nonumber\\ &=& a(m_{1}+v)P-\frac{am_{1}}{k_{P}}P^2+(v-m_2)Z+(v-m_3)F\nonumber\\ &\leq& a(m_{1}+v)P-\frac{am_{1}}{k_{P}}P^2,~ \text{if}~ v \leq ~\min \{ ~ m_{2},~ m_3\},\nonumber\\ ~&\leq &\frac{a (m_{1}+v)^2k_{P}}{4m_{1}}.\end{aligned}$$ Using the theory of differential inequalities (cf. Birkhoff and Rota, (1982); Sarwardi et al. (2011)), one can easily obtain $$\begin{aligned} \limsup_{t\rightarrow +\infty}X(t)\leq \frac{a (m_{1}+v)^2k_{P}}{4m_{1}v}=\rho.\end{aligned}$$ Therefore, all the solutions of the system (\[eq1\]) enter into the compact region $\mathcal{B}=\Bigl\{(P,Z,F)\in \mathbb{R}_{+}^3 : aP+Z+F \leq \rho\Bigr\},$ which completes the proof. Equilibria and their feasibility -------------------------------- To determine the equilibrium points of the system (\[eq1\]) we put $\dot{P}=\dot{Z}=\dot{F}=0$. 
The equilibria of the system (\[eq1\]) are (i) the null equilibrium $E_{0}=(0,0,0);$\ (ii) the axial equilibrium $E_{1}=(k_{P},0,0);$\ (iii) the boundary equilibrium $E_{2}=(\frac{m_{2}k_{Z}}{(a g_{s} -m_{2})},~\frac{(a m_{1} k_{Z})(ak_{P}g_{s}-k_{P}m_{2}-m_{2}k_{Z})}{(a g_{s}-m_{2})^2 k_{P}},0);$ and\ (iv) the interior equilibria   $E_{*}^i=(P_i^*, Z^*, F_i^*),~i = 1,2,$ where $P_i^*$ are the roots of the quadratic equation (\[eq3\]) below, $Z^*=\frac{m_3k_F}{(g_f-m_3)}$ and $F_i^* = \frac{a m_1 P_i^*(k_P-P_i^*)}{m_3 k_P}-\frac{m_2 k_F}{(g_f-m_3)}.$ Existence of planar and interior equilibria and their stability --------------------------------------------------------------- Equilibria of the model (\[eq1\]) can be obtained by solving $\dot{P}=\dot{Z}=\dot{F}=0$. Though the system (\[eq1\]) has several non-negative steady states, we discuss the existence of the interior equilibrium only, for its biological importance.\ The interior equilibrium point $E_{*}=(P^*,Z^*,F^*)$ of the system (\[eq1\]) exists if $ (g_f-m_3)>0$. When this condition is satisfied, $P^*$ is a positive root of the following quadratic equation: $$A_0 {P^*}^2 - A_1 P^* - A_2=0,\label{eq3}$$ where $$\begin{aligned} A_0 =m_1(g_f-m_3),~ \, A_1 =m_1(g_f-m_3)(k_{P}-k_Z), ~\, A_2 = k_{P}k_Zm_1(g_f-m_3)-m_3k_Fg_sk_P.\nonumber\end{aligned}$$ Since $g_f>m_3,$ $A_0$ is positive. Therefore, one positive root of (\[eq3\]) can be found as $$P^*=\frac{1}{2A_0}[A_1+\sqrt{A_1^2+4A_0A_2}],$$ which exists if $A_2>0$ and $A_0>0$. Therefore, the equation (\[eq3\]) has a unique positive solution if $g_{f} > m_{3}$ and $A_2>0.$ Once we get the unique positive solution $P^*$ of equation (\[eq3\]), it is easy to obtain the other components of the interior equilibrium $E_*.$ **Feasibility:** It is clear that the axial equilibrium is feasible. 
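As a numerical cross-check of these closed-form expressions, the sketch below computes $Z^*$ from the fish equation, $P^*$ from the quadratic, and $F^*$ from the zooplankton equation, then verifies that the right-hand side of (\[eq1\]) vanishes there. The parameter values are those quoted later in the numerical-simulation section, with $g_s=0.75$ read directly as the effective grazing rate (our assumption, not the authors' code).

```python
import math

# Parameters from the simulation section; g_s = 0.75 is used directly
# as the effective (salinity-scaled) grazing rate.
m1, m2, m3 = 0.6, 0.0698, 0.324
a, gs, gf = 0.8, 0.75, 0.6894
kP, kZ, kF = 12.0, 38.0, 10.1

# Z* is fixed by the fish equation: gf*Z/(Z + kF) = m3.
Zs = m3 * kF / (gf - m3)

# P* solves the phytoplankton nullcline m1*(1 - P/kP)*(P + kZ) = gs*Z*,
# written as the quadratic A0*P^2 - A1*P - A2 = 0.
A0 = m1 * (gf - m3)
A1 = m1 * (gf - m3) * (kP - kZ)
A2 = kP * kZ * m1 * (gf - m3) - m3 * kF * gs * kP
Ps = (A1 + math.sqrt(A1 ** 2 + 4 * A0 * A2)) / (2 * A0)

# F* then follows from the zooplankton equation.
Fs = (Zs + kF) / gf * (a * gs * Ps / (Ps + kZ) - m2)

def rhs(P, Z, F):
    # Right-hand side of system (eq. 1).
    dP = m1 * P * (1 - P / kP) - gs * P * Z / (P + kZ)
    dZ = a * gs * P * Z / (P + kZ) - gf * Z * F / (Z + kF) - m2 * Z
    dF = gf * Z * F / (Z + kF) - m3 * F
    return (dP, dZ, dF)

residuals = rhs(Ps, Zs, Fs)
```

With these values the computed interior equilibrium is approximately $(9.15,\,8.96,\,1.29)$; differences from the equilibrium point quoted later in the text can arise from how $g_s$ is obtained from the salinities.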
The interior equilibrium is feasible if the conditions (i) $ (g_{f} - m_{3}) > 0 $ and (ii) $(k_{P}-k_{Z}) < 0 $ hold. Local stability of equilibrium ------------------------------ In order to find out about the stability of the equilibrium points we need to linearize the system (\[eq1\]). Since the axial and boundary equilibria are of lesser importance for this system, we do not carry out a detailed analysis around the equilibria other than the interior equilibrium. To study the stability of the interior equilibrium, we compute the Jacobian matrix. For this purpose we write the system (\[eq1\]) as follows: $$\frac{dX}{dt}= f(X),$$ where $f(X) = \left[ \begin{array}{c} m_{1}P(1-\frac{P}{k_{P}})-g_{s}\frac{PZ}{P+k_{Z}}\\ a g_{s}\frac{PZ}{P+k_{Z}}-g_{f}\frac{ZF}{Z+k_{F}}-m_{2}Z \\ g_{f}\frac{ZF}{Z+k_{F}}-m_{3}F \end{array} \right]$     and     $X=\left[\begin{array}{c} P\\ Z\\ F \end{array} \right].$ The variational matrix of the system at any arbitrary point ( $\tilde{P}$,$\tilde{Z}$,$\tilde{F}$ ) is $\tilde{J}=\frac{\partial f}{\partial X} \mid_{(\tilde{P},\tilde{Z},\tilde{F})} =\left[ \begin{array}{ccc} \displaystyle m_1\Bigl(1-\frac{2\tilde{P}}{k_P}\Bigr)-\frac{g_s \tilde{Z} k_{Z}}{(\tilde{P} +k_{Z})^2} &\displaystyle -\frac{g_s \tilde{P} }{(\tilde{P} + k_{Z})} &\displaystyle 0 \\ \displaystyle \frac{ag_s k_{Z} \tilde{Z}}{(\tilde{P} +k_{Z})^2} &\displaystyle \frac{a g_s \tilde{P}}{(\tilde{P}+k_{Z})}-m_2-\frac{g_{f} \tilde{F} k_{F}}{(\tilde{Z} + k_{F})^2} & \displaystyle -\frac{g_{f} \tilde{Z}}{(\tilde{Z} + k_{F})} \\ \displaystyle 0 & \displaystyle \frac{g_{f} \tilde{F} k_{F}}{(\tilde{Z} + k_{F})^2} &\displaystyle \frac{g_{f} \tilde{Z}}{(\tilde{Z} + k_{F})}-m_3 \\ \end{array} \right].$ Evaluating at $E_*$ and using the equilibrium relations $\frac{g_s Z^*}{P^*+k_Z}=m_1\bigl(1-\frac{P^*}{k_P}\bigr)$ and $\frac{g_f Z^*}{Z^*+k_F}=m_3$, the Jacobian matrix for the system (\[eq1\]) at $E_*$ is given by $$J_{*} =\left[ \begin{array}{ccc} \displaystyle R_1-R_2-m_1 &\displaystyle -K_{1} &\displaystyle 0 \\ \displaystyle aR_2 &\displaystyle K_{2}-R_3 & \displaystyle -m_{3} \\ \displaystyle 0 & \displaystyle R_{3} &\displaystyle 0 \end{array} \right],$$ where 
$R_{1}=\frac{2 g_s Z^*}{(P^* +k_Z)},~ R_{2}=\frac{g_s Z^* k_Z}{(P^* +k_Z)^2},~ R_{3}=\frac{g_f F^* k_F}{(Z^* + k_F)^2},~ K_{1}=\frac{g_s P^*}{(P^* + k_Z)},~ K_{2} =\frac{g_f F^* }{(Z^* + k_F)}.$ The characteristic equation corresponding to the Jacobian matrix $J_*$ is $$\Delta(\lambda)=\lambda^3 + D_1\lambda^2 + D_2 \lambda + D_3 = 0.$$ For the stability of the solutions of the system (\[eq1\]), all of the roots of the characteristic equation of the Jacobian matrix at $E_{*}$, i.e. $J_{*}$, should have negative real parts. This can be ascertained without actually solving for all the roots of the characteristic equation, by applying the Routh-Hurwitz conditions for negative real parts of the characteristic roots. According to the Routh-Hurwitz conditions, the solutions of (\[eq1\]) will be asymptotically stable if the conditions $ D_1 > 0,~~ D_1D_2 >D_3,~ D_3 > 0$ are satisfied. Global Stability ================ Now we examine the global stability issue. Here we apply the general method of Li and Muldowney, (1996) for showing that an $n$-dimensional autonomous dynamical system $$\frac{dx}{dt}= f(x)\label{eq4}$$ with $f:D\rightarrow \mathbb{R}^n$, where $D\subset \mathbb{R}^n$ is an open and simply connected set and $f\in C^{1}(D)$, is globally stable under certain parametric conditions (cf. Jin and Haque, (2005); Haque et al., (2009) and Buonomo et al., (2008)).\ Now we make the following assumptions:\ (A1) The autonomous dynamical system has a unique interior equilibrium point x$^*$ in D.\ (A2) The domain D is simply connected.\ (A3) There is a compact absorbing set $\Omega\subset$ D.\ Now we give a definition due to Haque et al., (2005). **Definition 1.** The unique equilibrium point x$^*$ of the dynamical system (\[eq4\]) is globally asymptotically stable in the domain D if it is locally asymptotically stable and all the trajectories in D converge to x$^*$. 
Let $J=(J_{ij})_{3\times 3}$ be the variational matrix of the system (\[eq4\]) and J$^{[2]}$ be its second additive compound matrix, of order $^{n}C_{2} \times$ $^{n}C_{2}.$ In particular, for $n=3$ we can write\ $J^{[2]} =\frac{{\partial f}^{[2]}}{\partial X}=\left[ \begin{array}{ccc} J_{11}+J_{22} & J_{23} & -J_{13} \\ J_{32} & J_{11}+J_{33} & J_{12}\\ -J_{31} & J_{21} & J_{22}+J_{33} \\ \end{array} \right].$ Let $M(x) \in C^{1}(D)$ be an $^{n}C_{2}$ $\times$ $^{n}C_{2}$ matrix-valued function. Moreover, we consider the matrix B=M$_{f} M^{-1}+MJ^{[2]}M^{-1},$ where the matrix M$_{f}$ is represented by $$(M_{ij}(x))_{f}= {\Bigl(\frac{\partial M_{ij}}{\partial x}\Bigr)}^t.f(x)=\nabla M_{ij}.f(x).\label{eq5}\\$$ Again, following Martin Jr., (1974), we consider the Lozinskiĭ measure $\Gamma$ of $B$ with respect to a vector norm $|.|$ in $\mathbb{R}^N$, $N=$ $^{n}C_{2}$: $$\Gamma (B)=\lim_{h \rightarrow 0^+}\frac{|I+h B|-1}{h}. \notag{}$$ Li and Muldowney (1996) showed that the system (\[eq4\]) will be globally stable if the conditions (A1), (A2) and (A3) together with $$\limsup_{t \rightarrow \infty} \sup \frac{1}{t}\displaystyle\int_{0}^{t}\Gamma (B(x(s,x_{0})))ds<0\label{eq6}$$ hold simultaneously. The above condition not only assures that there is no simple closed rectifiable curve in D (i.e., homoclinic orbit, heteroclinic cycle or periodic orbit) invariant for the system (\[eq4\]), but is also a robust Bendixson criterion: together with local stability, condition (\[eq6\]) is sufficient for the global stability of the system (\[eq4\]) around $E_{*}.$ Now we use the above discussion to show that our system (\[eq1\]) is globally stable around its interior equilibrium. 
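A property worth keeping in mind is that the eigenvalues of $J^{[2]}$ are the pairwise sums $\lambda_i+\lambda_j$ of the eigenvalues of $J$, which is what links the compound matrix to the contraction of two-dimensional areas. A minimal sketch, transcribing the displayed $3\times 3$ formula into code and checking this property on a diagonal matrix:

```python
def second_additive_compound(V):
    # Second additive compound J^[2] of a 3x3 matrix, per the formula
    # in the text, with rows/columns ordered (1,2), (1,3), (2,3).
    return [
        [V[0][0] + V[1][1], V[1][2],            -V[0][2]],
        [V[2][1],           V[0][0] + V[2][2],   V[0][1]],
        [-V[2][0],          V[1][0],             V[1][1] + V[2][2]],
    ]

# For a diagonal matrix with eigenvalues 1, 2, 3 the compound must be
# diagonal with the pairwise sums 3, 4, 5.
D = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
C = second_additive_compound(D)
```

For $J=\mathrm{diag}(1,2,3)$ the compound comes out as $\mathrm{diag}(3,4,5)$, the pairwise sums, as expected.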
We make the following transformation of the variables $x\rightarrow P^{-1}$, $y\rightarrow Z$,  $z\rightarrow F$, for which the autonomous system (\[eq1\]) is transformed to the following one: $$\frac{dX}{dt}= f(X),\label{eq7}$$ where $f(X) =\left[ \begin{array}{c} -m_1x\bigl(1-\frac{1}{xk_P}\bigr)+\frac{g_s x^2 y}{1+xk_Z}\\ \frac{ag_sy}{1+xk_Z}-\frac{g_f yz}{y+k_F}-m_2y \\ \frac{g_f yz}{y+k_F}-m_3 z\\ \end{array} \right]$   and   $X=\left[ \begin{array}{c} x\\ y\\ z\\ \end{array} \right].$ The variational matrix of the system (\[eq7\]) can be written as $V =\frac{\partial f}{\partial X}=\left[ \begin{array}{ccc} \displaystyle \frac{x g_sy(2+xk_Z)}{(1 +xk_Z)^2}-m_1 &\displaystyle \frac{g_s x^2 }{(1 + xk_Z)} &\displaystyle 0 \\ \displaystyle -\frac{ag_s y k_Z}{(1 +xk_Z)^2} &\displaystyle \frac{ag_s}{(1 + xk_Z)}-\frac{a g_s y}{(1 + xk_Z)}+\frac{g_fyz}{(y+k_F)^2} & \displaystyle -\frac{g_fy}{(y + k_F)} \\ \displaystyle 0 & \displaystyle \frac{g_f z k_F}{(y + k_F)^2} &\displaystyle 0 \\ \end{array} \right].$ If $V^{[2]}$ is the second additive compound matrix of V then, following Buonomo et al. (2008), we can write $V^{[2]} =\left[ \begin{array}{ccc} \displaystyle a_{11} &\displaystyle -\frac{g_f y}{(y + k_F)} &\displaystyle 0 \\ \displaystyle \frac{g_f z k_F}{(y + k_F)^2} &\displaystyle a_{22} & \displaystyle \frac{g_s x^2 }{(1 + xk_Z)} \\ \displaystyle 0 & \displaystyle -\frac{a g_s y k_Z}{(1 +xk_Z)^2} &\displaystyle a_{33} \end{array} \right],$ where $$\begin{aligned} a_{11}&=&\frac{x g_sy(2+xk_Z)}{(1 +xk_Z)^2}-m_1+\frac{g_f y z }{(y + k_F)^2},\nonumber\\ a_{22}&=&\frac{x g_sy(2+xk_Z)}{(1 +xk_Z)^2}-m_1,\nonumber\\ a_{33}&=&\frac{g_fyz}{(y+k_F)^2} . 
\nonumber\end{aligned}$$ We consider $M(X)$ in $C^{1}(D)$ in such a way that $M=\text{diag}\bigl(1, \frac{y}{z}, \frac{y x^2}{z}\bigr)$; then we have $M^{-1} = \text{diag} \bigl(1, \frac{z}{y}, \frac{z}{yx^2}\bigr)$, $M_f M^{-1} = \text{diag} \{0, \frac{\dot{y}}{y}- \frac{\dot{z}}{z}, ~ \frac{2\dot{x}}{x} + \frac{\dot{y}}{y}-\frac{\dot{z}}{z}\}$ and $MV^{[2]}M^{-1} =\left[ \begin{array}{ccc} a_{11} & -\frac{g_fy}{(y + k_F)} & 0 \\ \frac{g_f k_F z}{(y + k_F)^2} & a_{22} & \frac{g_sx^2}{(1+ x k_Z)} \\ 0 & - \frac{a g_s y k_Z}{(1 +xk_Z)^2} & a_{33} \end{array} \right].$ After some algebraic calculation, we get $B = M_f M^{-1} + M V^{[2]} M^{-1}=\left[\begin{array}{cc} B_{11}&B_{12} \\ B_{21} &B_{22} \\ \end{array} \right]$, where $B_{11}=a_{11}= \frac{x g_sy(2+xk_Z)}{(1 +xk_Z)^2}-m_1+\frac{g_f y z }{(y + k_F)^2}$ ,\ $B_{12}= \left[ \begin{array}{ccc} -\frac{g_f y}{y+k_F} & 0 \\ \end{array} \right]$, $B_{21}= \left[ \begin{array}{ccc} \frac{g_f z k_F}{(y + k_F)^2} & 0 \\ \end{array} \right]^T$, $B_{22}= \left[ \begin{array}{ccc} a_{22}+ \frac{\dot{y}}{y}-\frac{\dot{z}}{z} & \frac{g_sx^2}{(1 + x k_Z)} \\ - \frac{a g_s y k_Z}{(1 + x k_Z)^2} & a_{33}+\frac{2\dot{x}}{x}+\frac{\dot{y}}{y}-\frac{\dot{z}}{z} \\ \end{array} \right]$. Now let us define the following vector norm in $\mathbb{R}^3$: $|(u,v,w)|=\max \{ |u|,|v|+|w|\}$, and let $\Gamma$ denote the Lozinskiĭ measure with respect to this norm. Therefore, $\Gamma(B)\leq\sup\{l_1,l_2\}$, where $l_i=\Gamma_1($B$_{ii})+|$B$_{ij}|$ for i$=$1, 2 and i$\neq$ j; here $|$B$_{12}|$ and $|$B$_{21}|$ are the matrix norms with respect to the L$^1$ vector norm and $\Gamma_1$ is the Lozinskiĭ measure with respect to that norm. 
Therefore, we can easily obtain the following terms: $\Gamma_1(B_{11})=\frac{x g_s y(2 + x k_Z)}{(1 +x k_Z)^2}-m_1+\frac{g_f y z }{(y + k_F)^2}, |B_{12}|= \max \{ \frac{g_f y}{y+k_F}, \hspace{0.2cm} 0 \},\\ |B_{21}|= \max \{ \frac{g_{f} z k_{F}}{(y + k_F)^2},\hspace{0.2cm} 0\},\\ \Gamma_1(B_{22})= \frac{\dot{y}}{y}-\frac{\dot{z}}{z} + \max\left\{\frac{g_s y (2x+x^2 k_Z-a k_Z)}{(1+x k_Z)^2},\hspace{0.2cm} \frac{g_s x^2}{(1+ xk_Z)}+\frac{g_f y z}{(y+k_F)^2}+\frac{2\dot{x}}{x} \right \}.$ Using the system equation (\[eq7\]), we have $l_1 = \frac{x g_s y(2 + x k_Z)}{(1 +x k_Z)^2}-m_1+\frac{g_f y z }{(y + k_F)^2} +\frac{g_f y}{y+k_F},$   $l_2 = \frac{\dot{y}}{y}-\frac{\dot{z}}{z} -2m_{1}(1-\frac{1}{xk_P})+ \frac{g_s y x}{1+xk_Z}+\frac{g_f y k_F}{(y+ k_F)^2}.$\ Now from the expressions of $l_1$ and $l_2$ one can obtain $\Gamma(B)=\max\biggl\{ \frac{x g_s y(2 + x k_Z)}{(1 +x k_Z)^2}-m_1+\frac{g_f y z }{(y + k_F)^2} +\frac{g_f y}{y+k_F}, \frac{\dot{y}}{y}-\frac{\dot{z}}{z} -2m_{1}(1-\frac{1}{xk_P})+ \frac{g_s y x}{1+xk_Z}+\frac{g_f y k_F}{(y+ k_F)^2}\biggr\}\\ ~~~~~~~~=\frac{\dot{y}}{y}-\frac{\dot{z}}{z} -2m_{1}(1-\frac{1}{xk_P})+ \frac{g_s y x}{1+xk_Z}+\frac{g_f y k_F}{(y+ k_F)^2},$ if $1<x k_P<2.$ Thus, $ \Gamma(B)\leq\frac{\dot{y}}{y}-\biggl[\frac{2g_s}{\rho+k_F}-(\frac{g_s}{k_Z}+\frac{g_f}{k_F})\rho-(m_1+m_3)\biggr]\hspace{4.2cm}\\ \,~~~~ ~~~~\leq\frac{\dot{y}}{y} -\mu,$ where $\mu=\frac{2g_s}{\rho+k_F}-(\frac{g_s}{k_Z}+\frac{g_f}{k_F})\rho-(m_1+m_3)>0.$ Taking the average value of $\Gamma(B)$ over the time interval $[0,t]$ one can have $$\hspace{0.2cm} \frac{1}{t}\int^t_0\Gamma(B)ds\leq\frac{1}{t}\log\frac{y(t)}{y(0)}-\mu, \notag{}$$ which gives $$\limsup_{t \rightarrow \infty}\sup\hspace{0.2cm}\frac{1}{t}\int^t_0\Gamma(B(x(s,x_0)))ds<-\mu<0, \label{eq8}$$ whenever $1>\frac{1}{x k_{P}}>\frac{1}{2}.$ Now we are in a position to state the following theorem: The model system (\[eq1\]) is globally asymptotically stable around its interior 
equilibrium if $\mu>0$, i.e. $\frac{2g_s}{\rho+k_F}-(\frac{g_s}{k_Z}+\frac{g_f}{k_F})\rho-(m_1+m_3)>0.$ Numerical simulation -------------------- For the purpose of making a qualitative analysis, numerical simulations have been carried out using MATLAB R2010a and Maple 18. In Table 1, we summarize the parameters used, with their admissible values and their biological interpretations. All these results have also been verified by numerical simulations, some of which we report in the figures. The simulations were run using the standard MATLAB differential equation integrator ODE45, an implementation of the Runge-Kutta method. The numerical experiments were performed on the system (\[eq1\]) using experimental data taken from different sources to confirm our theoretical findings. In particular, we focus on the values of the system parameters shown in Table 1. The problem described by the system (\[eq1\]) is well posed. The $P$, $Z$, $F$ axes are invariant under the flow of the governing system. In the present system the total environmental population is bounded above (cf. Subsection 1.3). Therefore, any solution starting in the interior of the first octant never leaves it. This mathematical fact is consistent with a biologically well-behaved system and is common in modern research work on mathematical biology. We took the set of parameter values $m_1=0.6,\, m_2=0.0698,\, m_3=0.324,\, a=0.8,\, g_s=0.75,\, g_f=0.6894, \,s_d=12.30,\, s_u=8.23,\, k_P=12,\, k_Z=38,\, k_F=10.1.$ For this set of parameter values the system possesses a unique interior equilibrium point $E_* (1.809, 8.964, 3.112)$. We have established sufficient conditions for the global stability of the coexistence equilibrium. The coexistence equilibrium point $E_{*}$ has been found through numerical simulations and its global asymptotic stability is depicted in Figures 1(a) and 2(a). 
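Local stability at this parameter set can also be confirmed by evaluating the Routh-Hurwitz conditions of the local-stability section numerically. The sketch below is ours, not the authors' MATLAB code: it computes the interior equilibrium from the nullclines (reading $g_s=0.75$ directly as the effective grazing rate, an assumption), forms the Jacobian by central differences, and checks $D_1>0$, $D_3>0$ and $D_1D_2>D_3$.

```python
import math

# Parameters quoted in the simulation section.
m1, m2, m3 = 0.6, 0.0698, 0.324
a, gs, gf = 0.8, 0.75, 0.6894
kP, kZ, kF = 12.0, 38.0, 10.1

def rhs(P, Z, F):
    # Right-hand side of system (eq. 1).
    dP = m1 * P * (1 - P / kP) - gs * P * Z / (P + kZ)
    dZ = a * gs * P * Z / (P + kZ) - gf * Z * F / (Z + kF) - m2 * Z
    dF = gf * Z * F / (Z + kF) - m3 * F
    return (dP, dZ, dF)

# Interior equilibrium from the nullclines.
Zs = m3 * kF / (gf - m3)
A0 = m1 * (gf - m3)
A1 = m1 * (gf - m3) * (kP - kZ)
A2 = kP * kZ * m1 * (gf - m3) - m3 * kF * gs * kP
Ps = (A1 + math.sqrt(A1 ** 2 + 4 * A0 * A2)) / (2 * A0)
Fs = (Zs + kF) / gf * (a * gs * Ps / (Ps + kZ) - m2)

def jacobian(P, Z, F, h=1e-6):
    # Jacobian of rhs at (P, Z, F) by central differences.
    x = [P, Z, F]
    J = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = rhs(*xp), rhs(*xm)
        for i in range(3):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

J = jacobian(Ps, Zs, Fs)
# Characteristic polynomial: lambda^3 + D1*lambda^2 + D2*lambda + D3.
D1 = -(J[0][0] + J[1][1] + J[2][2])
D2 = (J[0][0] * J[1][1] - J[0][1] * J[1][0]
      + J[0][0] * J[2][2] - J[0][2] * J[2][0]
      + J[1][1] * J[2][2] - J[1][2] * J[2][1])
det = (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
       - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
       + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))
D3 = -det
stable = (D1 > 0) and (D3 > 0) and (D1 * D2 > D3)
```

With these values all three Routh-Hurwitz conditions hold, so the interior equilibrium is locally asymptotically stable, consistent with the stable behaviour shown in Figures 1(a) and 2(a).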
It is observed from Figures 1(b)-(c) and 2(b)-(c) that the fish population gradually decreases as the upstream salinity increases from $s_u=8.33$ to $s_u = 8.51$ while the downstream salinity is kept fixed at $s_d=12.30.$ A higher upstream salinity ($s_u=8.51$, Figure 1(c)) in the estuary is detrimental to the present zooplankton population. Zooplankton in this particular region are stenohaline; thus the fish population cannot persist at such high salinity and gradually decreases to zero. Therefore, salinity has an important role in regulating the fish population in the present estuarine system. Representative numerical simulations are shown in Figures 3-5. In Figure 3(a), the solution plots describe the Hopf bifurcation, and Figure 3(b) exhibits the corresponding phase space diagram. The present dynamical system also exhibits chaotic behaviour, which is presented in Figure 4(a)-(b). Figure 5(a)-(f) shows different phase space diagrams for different values of the upstream salinity within the range 4.50-7.50, while the downstream salinity has been kept fixed at $s_d=10.30$ and the remaining parameter values are the same as in Figure 1. A catastrophic change in the behaviour of the solution plots is observed as the parameter $s_u$ varies from 4.50 to 8.0 (cf. Figure 5(a)-(f)). More specifically, Figure 5(a) shows a periodic orbit with period 4; Figure 5(b) shows a chaotic orbit; Figure 5(c) shows a periodic orbit with period 6; Figure 5(d) shows the period-doubling bifurcation; Figure 5(e) demonstrates a limit cycle; and finally Figure 5(f) ensures the stability of the system around an interior equilibrium point $(2.622, 8.878, 10.12)$ of the system (\[eq1\]). 
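The sweep over upstream salinity in Figure 5 can be mimicked crudely by scanning the effective grazing rate $g_s$, which is the channel through which salinity enters the model; the mapping from $(s_u, s_d)$ to $g_s$ is left aside here, so the sketch below is purely illustrative and not the authors' procedure. It records the late-time spread of $Z(t)$: a small spread indicates settling to equilibrium, a large one indicates sustained oscillations or chaos.

```python
# Sketch: scan the effective grazing rate g_s and record the late-time
# spread of the zooplankton series Z(t) as a crude stability diagnostic.
m1, m2, m3 = 0.6, 0.0698, 0.324
a, gf = 0.8, 0.6894
kP, kZ, kF = 12.0, 38.0, 10.1

def rhs(state, gs):
    P, Z, F = state
    dP = m1 * P * (1 - P / kP) - gs * P * Z / (P + kZ)
    dZ = a * gs * P * Z / (P + kZ) - gf * Z * F / (Z + kF) - m2 * Z
    dF = gf * Z * F / (Z + kF) - m3 * F
    return (dP, dZ, dF)

def rk4_step(state, dt, gs):
    # Classical fourth-order Runge-Kutta step.
    k1 = rhs(state, gs)
    k2 = rhs(tuple(x + dt / 2 * k for x, k in zip(state, k1)), gs)
    k3 = rhs(tuple(x + dt / 2 * k for x, k in zip(state, k2)), gs)
    k4 = rhs(tuple(x + dt * k for x, k in zip(state, k3)), gs)
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + s)
                 for x, p, q, r, s in zip(state, k1, k2, k3, k4))

def z_spread(gs, dt=0.05, days=400.0, tail_days=100.0):
    # Integrate for `days` days; return max(Z) - min(Z) over the tail.
    state = (1.0, 1.0, 1.0)
    steps = int(days / dt)
    tail_from = steps - int(tail_days / dt)
    zmin, zmax = float("inf"), float("-inf")
    for n in range(steps):
        state = rk4_step(state, dt, gs)
        if n >= tail_from:
            zmin = min(zmin, state[1])
            zmax = max(zmax, state[1])
    return zmax - zmin

spreads = {gs: z_spread(gs) for gs in (0.5, 0.75, 1.0, 1.25)}
```

By Theorem 1 the weighted total $aP+Z+F$ is eventually bounded, so every recorded spread is finite and bounded regardless of the grazing rate chosen.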
0.5cm

  Parameter     Value        Unit
  ------------  -----------  ------------
  $m_{1}$       0.6          $day^{-1}$
  $k_{P}$
  $g_{z}$                    $day^{-1}$
  $E_{zo}$                   $day^{-1}$
  $r_{zo}$                   $day^{-1}$
  $r_{fp}$                   $day^{-1}$
  $M_{z}$       0.0145       $day^{-1}$
  $E_{f}$                    $day^{-1}$
  $M_{f}$                    $day^{-1}$
  $r_{f}$                    $day^{-1}$
  $H_{f}$                    $day^{-1}$
  $g_{f}$                    $day^{-1}$
  $k_{F}$
  $k_{Z}$

\[Table:t\_1\]

\[f1\] [Figure 1: (a) (b) (c)] [Figure 2: (a) (b) (c)] [Figure 3: (a) (b)] [Figure 4: (a) (b)]

Discussion: =========== The Hooghly-Matla estuarine ecosystem is highly dynamic, as the environment changes (changes in water temperature and salinity) and as more new species invade (cf. Rosith, (2013) and Bhaumik, (2013)). This estuarine system is mixohaline in nature, with the salinity ranging from freshwater conditions ($< 2$ ppt) to 22 ppt at various points along the stretch of the estuary (cf. Mandal et al., (2012)). In the pre-monsoon, fresh water runoff from the upstream becomes very low and, due to the tidal influence of the adjacent Bay of Bengal, the salinity increases. Throughout the year, a gradient of salinity is observed between the upstream and downstream areas of the estuary (cf. Mandal et al., (2009)). Previous studies indicate that changes in the estuarine fish assemblage are regulated by associated changes in the salinity and the estuarine mouth morphology (cf. Gillanders et al., (2011)). The Hooghly-Matla estuarine system has a funnel-shaped sea face which is appropriate for optimum tidal flux (cf. Rosith, (2013) and Bhaumik, (2013)).\ Several research works have been published on fish depletion in the Hooghly-Matla estuary. It is observed that there has been a gradual regime shift of saline water from the upper stretches of the estuary to the downstream during the past three decades, which happened when the Farakka Barrage was commissioned in 1975 (cf. Sinha, (1996)). 
The shift of the estuarine zone towards the downstream is due to the increased ingress of fresh water into the Hooghly-Matla estuary, which resulted in the extension of the freshwater zone. This led to the disappearance of stenohaline fishes from the system. Barron et al., (2002) reported that phytoplankton species of estuarine origin are more tolerant of low salinity than oceanic species. This study suggests the co-existence of euryhaline fishes and estuarine phytoplankton in the Hooghly-Matla estuarine system. The present study is in agreement with the above findings.\
Another study which supports the present research concerns the ionic balance of zooplankton in the estuarine environment. It is observed that invertebrate species richness near the upstream of the estuary decreases as the salinity reaches critical values (5 to 8 ppt). This occurs due to the inability of invertebrates to regulate specific ionic concentrations at and below the critical salinity (cf. Khlebovich, (1969)). Freshwater ingress in the monsoon period changes the salinity, and this in turn indirectly affects the dynamics of the fish population of the estuary.\
The present situation of the Hooghly-Matla estuarine system represents a stressed condition for the fish species in the estuary. The model shows that the existence of fish in the system is possible only when the growth rate of the carnivorous fish population is greater than the cumulative effect of the excretion and respiration rates ($g_f > m_3$). The growth rate depends on the predation of zooplankton by the carnivorous fish population. Energy allocation depends on the life-history strategy of a fish species. It is widely accepted that both material and energy are mobilized and reallocated for reproduction in fishes (cf. Jobling, (1995)). This indicates that there exists an energy trade-off in the physiological processes of the estuarine species, where most of the energy gained through predation is allocated for reproduction rather than for respiration and excretion.
Hence the fish population grows in the estuary. We have also shown that the salinity effect plays a prominent role in stabilizing the coexistence equilibrium in our model (cf. Figure 1 and Figure 2). [**Acknowledgement:**]{} Mr. Ujjwal Roy and Dr. N. C. Majee are thankful to the Department of Mathematics, Jadavpur University for providing all the facilities to complete this research work. The corresponding author Dr. S. Sarwardi is thankful to the Department of Mathematics, Aliah University for providing opportunities to perform the present work. He is also thankful to his Ph.D. supervisor Prof. Prashanta Kumar Mandal, Department of Mathematics, Visva-Bharati for his generous help and continuous encouragement while preparing this manuscript. Prof. Santanu Ray is thankful to the Department of Zoology, Visva-Bharati University (a Central University) for opportunities to perform the present work. [99]{} Hussain, K.Z., Acharya, G. Mangroves of the Sundarbans. Bangladesh. IUCN, Bangkok, Thailand. Vol. [**2**]{}. (1994) Bhunia, A.B. Ecology of tidal creeks and mudflats of Sagar Island (Sundarbans), West Bengal. PhD dissertation, Calcutta University. (1979) Rosith, C.M., Sharma, A.P., Manna, R.K., Satpathy, B.B., Bhaumik, U. Ichthyofaunal diversity, assemblage structure and seasonal dynamics in the freshwater tidal stretch of Hooghly estuary along the Gangetic delta. *Aquatic Ecos. Health Mgmt.* Vol. [**16**]{}, 445–453. (2013) Khlebovich, V.V. Aspects of animal evolution related to critical salinity and internal state. *Marine Biol.* Vol. [**2**]{}, 338–345. (1969) Gillanders, B.M., Elsdon, T.S., Halliday, I.A., Jenkins, G.P., Robins, J.B., Valesini, F.J. Potential effects of climate change on Australian estuaries and fish utilising estuaries: a review. *Marine and Freshwater Research.* Vol. [**62**]{}, 1115–1131.
(2011) Ghosh, P.B. Role of macrofauna in energy partitioning and nutrient recycling in a tidal creek of Sunderbans mangrove forest, India. In: Kumar, A. (ed.), Ecology and Ethology of Aquatic Biota. Daya Publishing House, New Delhi. 90–97. (2001) Ketchum, B.H. The exchange of fresh and salt water in tidal estuaries. *J. Mar. Res.* Vol. [**10**]{}, 18–38. (1951) Griffin, A.L., Michael, H., Hamilton, D.P. Modelling the impact of zooplankton grazing on phytoplankton biomass during a dinoflagellate bloom in the Swan river estuary. *West. Aust. Ecol. Eng.* Vol. [**16**]{}, 373–394. (2001) Marcarelli, A.M., Wurtsbaugh, W.A., Griset, O. Salinity controls phytoplankton response to nutrient enrichment in the Great Salt Lake, Utah, USA. *Canad. J. Fish Aquat. Sci.* Vol. [**63**]{}, 2236–2248. (2006) Quinlan, E.L., Phlips, E.J. Phytoplankton assemblages across the marine to low-salinity transition zone in a blackwater dominated estuary. *J. Plankton Res.* Vol. [**29**]{}, 401–416. (2007) Ray, S., Berec, L., Straskraba, M., Jorgensen, S.E. Optimization of exergy and implications of body sizes of phytoplankton and zooplankton in an aquatic ecosystem model. *J. Ecol. Model* Vol. [**140**]{}, 219–234. (2001) Cottingham, K.L., Glaholt, S., Brown, A.C. Zooplankton community structure affects how phytoplankton respond to nutrient pulses. *J. Ecol.* Vol. [**85**]{}, 158–171. (2004) Dube, A., Jayaraman, G. Mathematical modelling of the seasonal variability of plankton in a shallow lagoon. *J. Nonl. Anal.* Vol. [**69**]{}, 850–865. (2008) Dube, A., Jayaraman, G., Rani, R. Modelling the effects of variable salinity on the temporal distribution of plankton in shallow coastal lagoons. *J. Hydro-environment Res.* Vol. [**4**]{}, 199–209. (2010) Morozov, A.Y., Nikolay, P.N., Sergei, V.P. Invasion of a top predator into an epipelagic system can bring a paradoxical top-down trophic control. *Biol. Invasions.* Vol. [**7**]{}, 845–861.
(2005) Irigoien, X., Flynn, K.J., Harris, R.P. Phytoplankton blooms: a 'loophole' in microzooplankton grazing impact. *J. Plankton Res.* Vol. [**27**]{}, 313–321. (2005) Calbet, A., Saiz, E. The ciliate-copepod link in marine ecosystems. *Aquat. Microb. Ecol.* Vol. [**38**]{}, 157–167. (2005) Mandal, S., Ray, S., Ghose, P.B. Modeling nutrient (dissolved inorganic nitrogen) and plankton dynamics at Sagar Island of the Hooghly-Matla estuarine system, West Bengal, India. *J. Nat. Resour. Model.* Vol. [**25**]{}, (2012) Jin, Z., Haque, M. Global stability analysis of an ecoepidemiological model of the Salton Sea. *J. Biol. Syst.* Vol. [**14**]{}, 373–385. (2005) Sarwardi, S., Haque, M., Venturino, E. A Leslie-Gower Holling-type II ecoepidemic model. *J. Appl. Math. Comput.* Vol. [**35**]{}, 263–280. (2011) Ray, S., Berec, L., Straskraba, M., Ulanowicz, R.E. Evaluation of system performance through optimizing ascendency in an aquatic ecosystem model. *J. Biol. Syst.* Vol. [**9**]{}, 269–290. (2001) Birkhoff, G., Rota, G.C. Ordinary differential equations. Ginn. (1982) Li, M.Y., Muldowney, J.S. A geometric approach to global stability problems. SIAM *J. Math. Anal.* Vol. [**27**]{}, 1070–1083. (1996) Martin, R.H., Jr. Logarithmic norms and projections applied to linear differential systems. *J. Math. Anal. Appl.* Vol. [**45**]{}, 432–454. (1974) Buonomo, B., Onofrio, A., Lacitignola, D. Global stability of an SIR epidemic model with information dependent vaccination. *Math. Biosc.* Vol. [**216**]{}, 9–16. (2008) Sinha, M., Mukhopadhyay, M.K., Mitra, P.M., Bagchi, M.M., Karamkar, H.C. Impact of Farakka barrage on the hydrology and fishery of Hooghly estuary. *Estuaries*, Vol. [**19**]{}, 710–722. (1996) Jobling, M. Environmental biology of fishes. Fish and fisheries series 16, New York: Chapman Hall. (1995) Barron, S., Weber, C., Marino, R., Davidson, E., Tomasky, G., Howarth, R.
Effects of varying salinity on phytoplankton growth in a low-salinity coastal pond under two nutrient conditions. *Biol. Bull.* Vol. [**203**]{}, 260–261. (2002) Haque, M., Zhen, J., Venturino, E. An ecoepidemiological predator-prey model with standard disease incidence. *J. Math. Meth. Appl. Sci.* Vol. [**32**]{}, 875–898. (2009) [^1]: Author to whom all correspondence should be addressed
--- abstract: 'An integro-differential Kolmogorov equation is considered in Hölder-type spaces defined by a scalable Lévy measure. Some properties of those spaces and estimates of the solution are derived by using probabilistic representations.' address: 'University of Southern California, Los Angeles' author: - 'R. Mikulevičius and Fanhui Xu' date: 'March 22, 2018' title: 'On the Cauchy problem for parabolic integro-differential equations in generalized Hölder spaces' --- Introduction ============ Let $\left( \Omega,\mathcal{F},\mathbf{P}\right)$ be a complete probability space. Given a Lévy measure $\nu$ on $\mathbf{R}^d_0=\mathbf{R}^d\backslash\{0\}$, we suppose there exists an adapted Poisson random measure $J\left( ds, dy\right)$ on $\left( \Omega,\mathcal{F},\mathbf{P}\right)$ such that $$\begin{aligned} &&\qquad\mathbf{E}\left[ J\left( ds, dy\right)\right]= \nu\left( dy\right)ds,\\ &&\tilde{J}\left( ds, dy\right)= J\left( ds, dy\right)-\nu\left( dy\right)ds.\end{aligned}$$ Then there is a Lévy process $Z_t^{\nu}$ associated to $\nu$ in the way that $$\begin{aligned} \label{lev} \qquad Z_t^{\nu}=\int_0^t\int_{\mathbf{R}^d_0} \chi_{\alpha}\left( y\right) y \tilde{J}\left( ds, dy\right)+\int_0^t\int_{\mathbf{R}^d_0} \left( 1-\chi_{\alpha}\left( y\right)\right) y J\left( ds, dy\right),\end{aligned}$$ where, as a convention, $$\begin{aligned} \alpha:=\inf\{\sigma\in\left( 0,2\right):\int_{\left\vert y\right\vert\leq 1}\left\vert y\right\vert ^{\sigma}\nu\left( dy\right)<\infty \}\end{aligned}$$ is the order of $\nu$, and $\chi_{\alpha}\left( y\right):=1_{\alpha\in\left(1,2\right)}+1_{\alpha=1}1_{\left\vert y\right\vert\leq 1}$. The aim of this paper is twofold. One is to introduce function spaces of generalized smoothness and reveal the embedding relations among them. 
The other is to study the Cauchy problem for the following parabolic-type Kolmogorov equation within the framework of such generalized smoothness: $$\begin{aligned} \label{eq1} \partial_t u\left( t,x\right)&=&Lu\left( t,x\right)-\lambda u\left( t,x\right)+f\left( t,x\right), \lambda\geq 0,\\ u\left( 0,x\right)&=& 0,\quad\left( t,x\right)\in \left[0,T\right]\times\mathbf{R}^d,\nonumber\end{aligned}$$ where $L$ is the infinitesimal generator of $Z_t^{\nu}$. The study of function spaces of generalized smoothness dates back to the seventies, signified by the work of H. Triebel [@tr], G.A. Kalyabin [@kg], P.I. Lizorkin [@kg2] and others. It is a natural development following the theory of differentiable functions of several variables and has been thriving for decades due to its close relations to interpolation theory, potential theory and the theory of differential operators. What is of most interest to us is the possibility of using the language of generalized smoothness to describe and investigate some special Lévy processes, $\eqref{lev}$ in particular. We know by the Lévy-Khinchine formula that each Lévy process $\left(Z_t\right)_{t\geq 0}$ is determined by a continuous negative definite function called its symbol. Generally speaking, by assuming that the symbol $\tilde{\psi}$ of $\left(Z_t\right)_{t\geq 0}$ behaves, up to a perturbation, like $\psi$, one could expect the scale of spaces associated with $\psi$ to play the same role for $\tilde{\psi}$ as the classical Besov spaces do for elliptic operators. This was illustrated in [@fw] and [@fw2] and was a motivation for defining such spaces. In this paper, we utilize a continuous function $w$ to capture the discrepancy generated by scaling, and we support this viewpoint by investigating $\eqref{eq1}$ in $w$-scaled Besov spaces.
A continuous function $w: \left( 0,\infty\right)\rightarrow \left( 0,\infty\right)$ is called a scaling function if $$\begin{aligned} \lim_{r\rightarrow 0}w\left( r\right)=0, \quad\lim_{R\rightarrow \infty}w\left( R\right)=\infty \end{aligned}$$ and if there is a nondecreasing continuous function $l\left(\varepsilon\right),\varepsilon>0$ such that $\lim_{\varepsilon\rightarrow 0}l\left(\varepsilon\right)=0$ and $$\label{scale} w\left( \varepsilon r\right)\leq l\left(\varepsilon\right)w\left( r\right), \quad\forall r,\varepsilon>0.$$ We call $l$ the scaling factor of $w$. The assumptions used throughout this paper are summarized in Section 2.3 and denoted by **A(w,l)**. Intuitively, a measure satisfying **A(w,l)** is non-degenerate and has a scaling effect on integrability that can be compensated by $w$; this covers a large family of Lévy measures, including $\alpha$-stable measures, $\alpha$-stable-like measures and certain measures given in radial-and-angular form. (See Section 2.3.) We will fix a Lévy measure $\mu$ that meets **A(w,l)** as our **reference measure** and use $w$ to define generalized Besov (resp. Hölder) norms $\left\vert \cdot\right\vert_{\beta,\infty}$ (resp. $\left\vert \cdot\right\vert_{\beta}$) and generalized Besov (resp. Hölder) spaces $\tilde{C}^{\beta}_{\infty,\infty},\beta>0$ (resp. $\tilde{C}^{\beta}$). (See Section 2.2.) Write $H_T=\left[0,T\right]\times\mathbf{R}^d$. One of the main results of this paper is: \[thm2\] Let $\beta\in\left(0,\infty\right), \lambda\geq 0$ and $\nu$ be a Lévy measure satisfying **A(w,l)**. Suppose $f\left( t,x\right)\in \tilde{C}^{\beta}_{\infty,\infty}\left(H_T\right)$.
Then there is a unique solution $u=u\left( t,x\right)\in \tilde{C}^{1+\beta}_{\infty,\infty}\left(H_T\right)$ to $$\begin{aligned} \label{eqq} \partial_t u\left( t,x\right)&=&L^{\nu}u\left( t,x\right)-\lambda u\left( t,x\right)+f\left( t,x\right), \lambda\geq 0,\\ u\left( 0,x\right)&=& 0,\quad\left( t,x\right)\in H_T,\nonumber \end{aligned}$$ where for any function $\varphi\in C_b^{2}\left(\mathbf{R}^d\right)$, $$\begin{aligned} \label{op} L^{\nu}\varphi\left( x\right):=\int\left[ \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right]\nu\left(d y\right). \end{aligned}$$ Moreover, there exists a constant $C$ depending on $\kappa,\beta, d, T,\mu,\nu$ such that $$\begin{aligned} \left\vert u\right\vert_{\beta,\infty}&\leq& C\left( \lambda^{-1}\wedge T\right) \left\vert f\right\vert_{\beta,\infty},\label{est6}\\ \left\vert u\right\vert_{1+\beta,\infty}&\leq& C\left\vert f\right\vert_{\beta,\infty}.\label{est3} \end{aligned}$$ In addition, there is a constant $C$ depending on $\kappa,\beta,d, T,\mu,\nu$ such that for all $0\leq s<t\leq T$, $\kappa\in\left[ 0,1\right]$, $$\label{est4} \left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\kappa+\beta,\infty}\leq C\left\vert t-s\right\vert^{1-\kappa}\left\vert f\right\vert_{\beta,\infty}.$$ As will be seen later, when $\nu$ behaves like an $\alpha$-stable measure, $\eqref{est6}$-$\eqref{est4}$ are ordinary Besov (equiv. Hölder-Zygmund) regularity estimates. By the norm equivalence stated in Section 3, Theorem \[thm2\] immediately implies \[thm3\] Let $\beta\in\left(0,\infty\right), \lambda\geq 0$ and $\nu$ be a Lévy measure satisfying **A(w,l)**. Suppose $f\left( t,x\right)\in \tilde{C}^{\beta}\left(H_T\right)$ and $$\begin{aligned} \int_0^1 l\left( t\right)^{\beta}\frac{dt}{t}+\int_1^{\infty} l\left( t\right)^{\beta}\frac{dt}{t^2}<\infty.
\end{aligned}$$ Then there is a unique solution $u=u\left( t,x\right)\in \tilde{C}^{1+\beta}\left(H_T\right)$ to $\eqref{eqq}$. Moreover, there exists a constant $C$ depending on $\kappa,\beta, d, T,\mu,\nu$ such that $$\begin{aligned} \left\vert u\right\vert_{\beta}&\leq& C\left( \lambda^{-1}\wedge T\right) \left\vert f\right\vert_{\beta},\label{est62}\\ \left\vert u\right\vert_{1+\beta}&\leq& C\left\vert f\right\vert_{\beta}.\label{est32} \end{aligned}$$ In addition, there is a constant $C$ depending on $\kappa,\beta,d, T,\mu,\nu$ such that for all $0\leq s<t\leq T$, $\kappa\in\left[ 0,1\right]$, $$\label{est42} \left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\kappa+\beta}\leq C\left\vert t-s\right\vert^{1-\kappa}\left\vert f\right\vert_{\beta}.$$ In [@mp2], a parabolic-type Kolmogorov equation with an operator $\mathcal{L}=A+B$ was considered in the standard Hölder-Zygmund space, where $B$ is the lower order part and the principal part $A$ takes the form $$\begin{aligned} Au\left(t, x\right):=\int\left[ u\left(t, x+y\right)-u\left( t,x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla u\left( t,x\right)\right]\rho\left( t,x,y\right)\frac{dy}{\left\vert y\right\vert^{d+\alpha}}.\end{aligned}$$ In [@mp], a parabolic integro-differential equation perturbed by Gaussian noise was studied in stochastic Hölder spaces. The operators were introduced as $$\begin{aligned} \mathcal{L}u\left(t, x\right)&:=&\int\left[ u\left(t, x+y\right)-u\left( t,x\right)-1_{\alpha\geq 1}1_{\left\vert y\right\vert\leq 1}y\cdot \nabla u\left( t,x\right)\right]\nu\left( t,x,dy\right)\\ &+&1_{\alpha=2}a^{ij}\left(t, x\right)\partial^2_{ij}u\left(t, x\right)+1_{\alpha\geq 1}\tilde{b}^{i}\left(t, x\right)\partial_i u\left(t, x\right)+l\left(t, x\right) u\left(t, x\right),\end{aligned}$$ and results were expressed in terms of moments. A similar operator was adopted in [@mp3] and the corresponding deterministic model was studied in the little Hölder-Zygmund spaces.
Besides, the Cauchy problem for a linear parabolic SPDE of second order was considered in [@rm] and [@br] in standard Hölder classes. Our note is organized as follows. In Section 2, notation is introduced and the function spaces are defined. There we also collect all the assumptions needed in this paper and provide examples that satisfy them; a few more defining properties of our model are discussed as well. In Section 3, we elaborate on the embedding relations among the function spaces. Probabilistic representations are used to extend the operators to all functions in $C_b^{\infty}\left(\mathbf{R}^d\right)$. After the extension, these operators become bijections on $C_b^{\infty}\left(\mathbf{R}^d\right)$. Norm equivalence then follows from the continuity of the operators. Regularity estimates for smooth inputs are collected in Section 4, while those for Besov (equiv. Hölder) inputs are given in Section 5. Section 6 collects existing results that are used in our proofs. Notation, Spaces and Models =========================== Basic Notation -------------- $\mathbf{N}=\{0,1,2,3,\ldots\}$, $\mathbf{N}_{+}=\mathbf{N}\backslash\{0\}$. $H_T=\left[0,T\right]\times\mathbf{R}^d$. $S^{d-1}$ is the unit sphere in $\mathbf{R}^d$. $\Re$ denotes the real part of a complex-valued quantity. For a function $u=u\left( t,x\right) $ on $H_T$, we denote its partial derivatives by $\partial _{t}u=\partial u/\partial t$, $\partial _{i}u=\partial u/\partial x_{i}$, $\partial _{ij}^{2}u=\partial ^{2}u/\partial x_{i}x_{j}$, and denote its gradient with respect to $x$ by $\nabla u=\left( \partial _{1}u,\ldots ,\partial _{d}u\right) $ and $D^{|\gamma |}u=\partial ^{|\gamma |}u/\partial x_{1}^{\gamma _{1}}\ldots \partial x_{d}^{\gamma _{d}}$, where $\gamma =\left( \gamma _{1},\ldots ,\gamma _{d}\right) \in \mathbf{N}^{d}$ is a multi-index.
We use $C_b^{\infty}\left( \mathbf{R}^d\right)$ to denote the set of infinitely differentiable functions on $\mathbf{R}^d$ whose derivatives of all orders are bounded, and $C^{k }\left( \mathbf{R}^{d}\right),k\in\mathbf{N} $ the class of $k$-times continuously differentiable functions. $\mathcal{S}\left( \mathbf{R}^d\right)$ denotes the Schwartz space on $\mathbf{R}^d$ and $\mathcal{S}'\left( \mathbf{R}^d\right)$ denotes the space of continuous functionals on $\mathcal{S}\left( \mathbf{R}^d\right)$, i.e. the space of tempered distributions. We adopt the normalized definition of the Fourier transform and its inverse for functions in $\mathcal{S}\left( \mathbf{R}^d\right)$, i.e., $$\begin{aligned} \mathcal{F}\varphi\left(\xi\right) &=& \hat{\varphi}\left(\xi\right) :=\int e^{-i2\pi x\cdot \xi}\varphi\left(x\right)dx, \\ \mathcal{F}^{-1}\varphi\left(x\right)&=&\check{\varphi}\left(x\right) := \int e^{i2\pi x\cdot \xi}\varphi\left(\xi\right)d\xi, \enskip \varphi\in\mathcal{S}\left( \mathbf{R}^d\right).\end{aligned}$$ Recall that the Fourier transform can be extended to a bijection on $\mathcal{S}'\left( \mathbf{R}^d\right)$. $\mu$ always refers to our reference measure, and $\alpha$ denotes its order unless otherwise specified. Throughout the sequel, $Z_t^{\nu}$ represents the Lévy process associated with the Lévy measure $\nu$ as in $\eqref{lev}$. For any Lévy measure $\nu$, any $R>0$ and $\forall B\in\mathcal{B}\left( \mathbf{R}^d_0\right)$, $$\begin{aligned} \nu_R \left(B\right)&=&\int 1_B\left( y/R\right)\nu\left( dy\right),\label{mea}\\ \tilde{\nu}_R\left( dy\right) &:=&w\left( R\right) \nu_R\left( dy\right).\label{meas}\end{aligned}$$ Without loss of generality, we normalize $w$ by a constant so that $w\left( 1\right)=1$ and $\tilde{\nu}_1\left( dy\right)=\nu\left( dy\right)$.
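With this $2\pi$-normalization, the Gaussian $e^{-\pi \left\vert x\right\vert^2}$ is its own Fourier transform. A quick one-dimensional numerical check (the grid limits below are illustrative, chosen wide and fine enough that truncation and discretization errors are negligible):

```python
import numpy as np

# With the e^{-i 2 pi x xi} convention, g(x) = e^{-pi x^2} satisfies
# (F g)(xi) = e^{-pi xi^2}.
x = np.linspace(-8.0, 8.0, 4097)
dx = x[1] - x[0]
g = np.exp(-np.pi * x**2)

def fourier(xi):
    """Trapezoid approximation of int e^{-2 pi i x xi} g(x) dx."""
    vals = np.exp(-2j * np.pi * x * xi) * g
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dx
```

Up to quadrature error, `fourier(xi)` matches `np.exp(-np.pi * xi**2)`, confirming the self-duality of the Gaussian under this normalization.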
Meanwhile, we introduce for any Lévy measure $\nu$, $$\begin{aligned} \bar{\nu}\left( dy\right):=\frac{1}{2}\left( \nu\left( dy\right)+\nu\left(- dy\right)\right).\end{aligned}$$ We fix specific values for $\alpha_1,\alpha_2,c_0,c_1, c_2, N_0,N_1$, but we allow $C$ to vary from line to line. In particular, $C\left(\cdots\right)$ represents a constant depending only on the quantities in the parentheses. Function Spaces of Generalized Smoothness ----------------------------------------- By the definition of the scaling factor, there is a constant $N>3$ such that $l\left( N^{-1}\right)<1<l\left( N\right)$. For such an $N$, by Lemma 6.1.7 in [@bl] and appropriate scaling, there exists $\phi\in C_0^{\infty}\left(\mathbf{R}^d\right)$ such that $supp\left(\phi\right)=\{ \xi:\frac{1}{N}\leq \left\vert \xi\right\vert\leq N\}$, $\phi\left(\xi\right)> 0$ in the interior of its support, and $$\sum_{j=-\infty}^{\infty}\phi\left( N^{-j}\xi\right)=1 \mbox{ if } \xi\neq 0.$$ We denote throughout this paper $$\begin{aligned} &&\varphi_j =\mathcal{F}^{-1}\left[\phi\left( N^{-j}\xi\right)\right],\quad j=1,2,\ldots,\xi\in \mathbf{R}^d,\label{j1}\\ &&\varphi_0 =\mathcal{F}^{-1}\left[ 1-\sum_{j=1}^{\infty}\phi\left( N^{-j}\xi\right)\right].\label{j0}\end{aligned}$$ Clearly, $\varphi_j\in \mathcal{S}\left( \mathbf{R}^d\right),j\in\mathbf{N}$.
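A function $\phi$ with the stated properties can be realized concretely by a telescoping construction. The sketch below is one standard choice (not necessarily the construction of [@bl]), radial in $\xi$ and shown for $N=4$: set $\phi(\xi)=\psi(\left\vert\xi\right\vert)-\psi(N\left\vert\xi\right\vert)$ for a smooth cutoff $\psi$, so that $\sum_j\phi(N^{-j}\xi)$ telescopes to $1$ for $\xi\neq 0$:

```python
import numpy as np

N = 4.0  # any N > 3 works for this construction

def smooth_step(t):
    """C-infinity step: 0 for t <= 0, 1 for t >= 1, strictly monotone between."""
    h = lambda s: np.exp(-1.0 / s) if s > 0 else 0.0
    return h(t) / (h(t) + h(1.0 - t))

def psi(r):
    """Smooth radial cutoff: 1 for r <= 1, 0 for r >= N."""
    if r <= 1.0:
        return 1.0
    if r >= N:
        return 0.0
    return smooth_step((N - r) / (N - 1.0))

def phi(r):
    # phi(|xi|) is supported on 1/N <= |xi| <= N, positive in the interior
    return psi(r) - psi(N * r)

def dyadic_sum(r, J=60):
    # the partial sum telescopes to psi(N**(-J) r) - psi(N**(J+1) r) -> 1, r != 0
    return sum(phi(N ** (-j) * r) for j in range(-J, J + 1))
```

For any $r>0$ the truncated sum already equals $1$ once $N^{-J}r\leq 1$ and $N^{J+1}r\geq N$, which is the partition-of-unity identity displayed above.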
If we write $$\begin{aligned} && \tilde{\varphi}_j=\varphi_{j-1}+\varphi_{j}+\varphi_{j+1}, j\geq 2,\label{ssch}\\ && \tilde{\varphi}_1=\check{\phi}+\varphi_{1}+\varphi_{2},\quad \tilde{\varphi}_0=\varphi_{0}+\varphi_{1},\label{sch}\end{aligned}$$ then $$\begin{aligned} \label{schw} \mathcal{F}\tilde{\varphi}_j\left( \xi\right)=\hat{\tilde{\varphi}}_j\left( \xi\right)=\mathcal{F}\tilde{\varphi}\left( N^{-j}\xi\right), \quad \xi\in\mathbf{R}^d, j\geq 1,\end{aligned}$$ where $$\begin{aligned} \label{schwa} \mathcal{F}\tilde{\varphi}\left( \xi\right)=\phi\left( N\xi\right)+\phi\left( \xi\right)+\phi\left( N^{-1}\xi\right).\end{aligned}$$ Note that $\phi$ necessarily vanishes on the boundary of its support. Hence, $$\begin{aligned} \mathcal{F}\varphi_j=\mathcal{F}\varphi_j\mathcal{F}\tilde{\varphi}_j, j\geq 0,\end{aligned}$$ and therefore $$\begin{aligned} \label{convol} \varphi_j=\varphi_j\ast\tilde{\varphi}_j, j\geq 0,\end{aligned}$$ where in particular $$\begin{aligned} \tilde{\varphi}_j\left( x\right) =N^{jd}\tilde{\varphi}\left( N^{j}x\right), j\geq 1.\end{aligned}$$ The functions $\varphi_j,j\geq 0$, are the convolution kernels we use to define **generalized Besov spaces**.
Namely, we write $\tilde{C}^{\beta}_{\infty,\infty}\left( \mathbf{R}^{d}\right) $ as the set of functions in $\mathcal{S}'\left( \mathbf{R}^d\right)$ for which the norm $$\begin{aligned} \left\vert u\right\vert_{\beta,\infty}:=\sup_{j} w\left( N^{-j}\right)^{-\beta}\left\vert u\ast \varphi_j\right\vert _{0}<\infty, \quad \beta\in\left( 0,\infty\right).\end{aligned}$$ For $\kappa\in\left[ 0,1\right],\beta\in\left(0,\infty\right)$, $C^{\mu,\kappa,\beta}\left( \mathbf{R}^{d}\right) $ denotes the collection of functions in $\mathcal{S}'\left( \mathbf{R}^d\right)$ whose norm $$\begin{aligned} \left\vert u\right\vert _{\mu,\kappa,\beta }:=\left\vert u\right\vert _{0}+\left\vert\mathcal{F}^{-1}\left[ \psi^{\mu,\kappa}\mathcal{F}u\right]\right\vert_{\beta,\infty}=\left\vert u\right\vert _{0}+\left\vert L^{\mu,\kappa}u\right\vert_{\beta,\infty}<\infty,\end{aligned}$$ where $$\begin{aligned} \psi^{\mu}\left(\xi\right):=\int \left[ e^{i2\pi \xi\cdot y}-1-i2\pi\chi_{\alpha}\left( y\right)\xi\cdot y\right]\mu\left( dy\right), \xi\in\mathbf{R}^d,\end{aligned}$$ is the Lévy symbol associated to $L^{\mu}$, $$\begin{aligned} \psi^{\mu,\kappa}:=\left\{\begin{array}{ll} \psi^{\mu} & \mbox{ if } \kappa=1,\\ -\left(-\Re\psi^{\mu}\right)^{\kappa} & \mbox{ if } \kappa\in\left( 0,1\right),\\ 1 & \mbox{ if } \kappa=0, \end{array}\right.\end{aligned}$$ and $$\begin{aligned} \label{opp} L^{\mu,\kappa}u:=\mathcal{F}^{-1}\left[ \psi^{\mu,\kappa}\mathcal{F}u\right], u\in \mathcal{S}'\left( \mathbf{R}^d\right).\end{aligned}$$ When $\kappa=0$, $L^{\mu,\kappa}u:=u$, then $C^{\mu,\kappa,\beta}\left( \mathbf{R}^{d}\right) $ is $\tilde{C}^{\beta}_{\infty,\infty}\left( \mathbf{R}^{d}\right) $. When $\kappa=1$, we simply write $L^{\mu,\kappa}=L^{\mu}$ and write $\left\vert u\right\vert _{\mu,\kappa,\beta }$ as $\left\vert u\right\vert _{\mu,\beta }$. 
In this case, if $u\in C_b^{1}\left(\mathbf{R}^d\right)$, then definition $\eqref{opp}$ coincides with $$\begin{aligned} L^{\mu}\varphi\left( x\right):=\int\left[ \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right]\mu\left(d y\right).\end{aligned}$$ For $\kappa\in\left[ 0,1\right],\beta\in\left(0,\infty\right)$, $\tilde{C}^{\mu,\kappa,\beta}\left( \mathbf{R}^{d}\right) $ is the class of functions in $\mathcal{S}'\left( \mathbf{R}^d\right)$ whose norm $$\begin{aligned} \left\Vert u\right\Vert _{\mu,\kappa,\beta }:=\left\vert \left( I-L^{\mu}\right)^{\kappa}u\right\vert_{\beta,\infty}<\infty,\end{aligned}$$ where $$\begin{aligned} \left( I-L^{\mu}\right)^{\kappa}u:=\left\{\begin{array}{ll} \left( I-L^{\mu}\right)u & \mbox{ if } \kappa=1,\\ \mathcal{F}^{-1}\left[ \left(1-\Re\psi^{\mu}\right)^{\kappa}\mathcal{F}u\right] & \mbox{ if } \kappa\in\left[ 0,1\right). \end{array}\right.\end{aligned}$$ When $\kappa=0$, $\left( I-L^{\mu}\right)^{\kappa}u:=u$, then $\tilde{C}^{\mu,\kappa,\beta}\left( \mathbf{R}^{d}\right) $ is again $\tilde{C}^{\beta}_{\infty,\infty}\left( \mathbf{R}^{d}\right)$. When $\kappa=1$, we simply write $\left\Vert u\right\Vert _{\mu,\kappa,\beta }$ as $\left\Vert u\right\Vert _{\mu,\beta }$. We will see in Section 3 that $L^{\mu,\kappa}$ and $\left( I-L^{\mu}\right)^{\kappa}$ could be defined for functions in $C_b^{\infty}\left( \mathbf{R}^{d}\right) $ even if $\kappa\in\left( 1,2\right)$. That is $$\begin{aligned} L^{\mu,\kappa}:=L^{\mu,\kappa/2}\circ L^{\mu,\kappa/2},\quad \left( I-L^{\mu}\right)^{\kappa}:=\left( I-L^{\mu}\right)^{\kappa/2}\circ \left( I-L^{\mu}\right)^{\kappa/2},\end{aligned}$$ where $\circ$ means composition. There will also be **generalized Hölder spaces**. 
Using the scaling function, we write for $\beta \in \left( 0,1/\alpha\right)$ $$\begin{aligned} \left\vert u\right\vert _{0} &=&\sup_{t,x}\left\vert u\left( t,x\right) \right\vert , \\ \left[ u\right] _{\beta } &=&\sup_{t,x,h\neq 0}\frac{\left\vert u\left( t,x+h\right) -u\left( t,x\right) \right\vert }{w\left( \left\vert h\right\vert\right) ^{\beta }}.\end{aligned}$$ $\tilde{C}^{\beta}\left( \mathbf{R}^{d}\right) $ denotes the set of functions such that the norm $$\begin{aligned} \left\vert u\right\vert _{\beta }:=\left\vert u\right\vert _{0}+\left[ u\right] _{\beta }<\infty , \quad\beta \in \left( 0,1/\alpha\right).\end{aligned}$$ And $\tilde{C}^{1+\beta}\left( \mathbf{R}^{d}\right) $ denotes the set of functions such that the norm $$\begin{aligned} \left\vert u\right\vert_{1+\beta }:=\left\vert u\right\vert _{0}+\left\vert L^{\mu}u\right\vert _{0}+\left[L^{\mu} u\right]_{\beta }<\infty,\quad\beta \in \left( 0,1/\alpha\right).\end{aligned}$$ Assumptions and Examples ------------------------ All the assumptions needed in this paper are collected in this section. Because of their dependence on $w,l$, we denote them by **A(w,l)**. Let $\nu$ be a Lévy measure, i.e. $$\begin{aligned} \int_{\mathbf{R}^d_0} \left( 1\wedge \left\vert y\right\vert^2\right) \nu\left( dy\right)<\infty.\end{aligned}$$ Recall definitions $\eqref{mea}$ and $\eqref{meas}$. **A(w,l)**. 
(i) For all $R>0$, $\tilde{\nu}_R\left( dy\right)\geq \mu^0 \left( dy\right)$, where $\mu^0$ is a Lévy measure supported on the unit ball $B\left( 0\right)$ and $$\label{mu0} \int \left\vert y\right\vert^2 \mu^0\left( dy\right) +\int \left\vert\xi\right\vert^4\left[1+\upsilon\left( \xi\right)\right]^{d+3}\exp\{-\zeta^0\left( \xi\right) \}d\xi<\infty,$$ in which $$\begin{aligned} &&\upsilon\left( \xi\right)=\int \chi_{\alpha}\left( y\right) \left\vert y\right\vert \left[ \left( \left\vert \xi\right\vert\left\vert y\right\vert\right)\wedge 1\right]\mu^0\left(dy\right),\\ &&\quad\zeta^0\left( \xi\right) =\int \left[ 1-\cos \left( 2\pi\xi\cdot y\right)\right]\mu^0\left( dy\right).\end{aligned}$$ In addition, for all $\xi\in S^{d-1}=\{\xi\in\mathbf{R}^d:\left\vert\xi\right\vert=1 \}$, there is a constant $c_0>0$ such that $$\int_{\left\vert y\right\vert\leq 1} \left\vert \xi\cdot y\right\vert^2 \mu^0\left( dy\right)\geq c_0.$$ (ii) If $\alpha=1$, then $$\label{alpha1} \int_{r<\left\vert y\right\vert <R} y\nu\left( dy\right)=0 \quad\text{ for all } 0<r<R<\infty.$$ (iii) There exist constants $\alpha_1\geq\alpha_2$ such that $\alpha_1,\alpha_2\in\left( 0,1\right)$ if $\alpha\in\left( 0,1\right)$, $\alpha_1,\alpha_2\in \left( 1,2\right]$ if $\alpha\in\left( 1,2\right)$, $\alpha_1\in\left( 1,2\right]$ and $\alpha_2\in\left[0,1\right)$ if $\alpha=1$, and $$\int_{\left\vert y\right\vert\leq 1} \left\vert y\right\vert^{\alpha_1}\tilde{\nu}_R\left( dy\right)+\int_{\left\vert y\right\vert> 1} \left\vert y\right\vert^{\alpha_2}\tilde{\nu}_R\left( dy\right)\leq N_0$$ for some positive constant $N_0$ that is independent of $R$.\ (iv) $\varsigma\left( r\right):=\nu\left( \left\vert y\right\vert >r\right),r>0$ is continuous in $r$ and $$\begin{aligned} \int_0^{1}s \varsigma\left( rs\right)\varsigma\left( r\right)^{-1} ds\leq C_0,\end{aligned}$$ for some $C_0>0$ independent of $r$. We assume that both the reference measure $\mu$ and the operator measure $\nu$ satisfy **A(w,l)**.
Though it looks heavy, **A(w,l)** embraces various models that have received wide attention. For instance, in [@zh], $\nu$ is confined between two $\alpha$-stable Lévy measures, namely, $$\begin{aligned} \label{as1} &&\int_{S^{d-1}}\int_{0}^{\infty }1_{B}\left( rw\right) \frac{dr}{r^{1+\alpha }}\Sigma _{1}\left( dw\right) \nonumber\\ &\leq &\nu \left( B\right) \leq \int_{S^{d-1}}\int_{0}^{\infty }1_{B}\left( rw\right) \frac{dr}{r^{1+\alpha }}\Sigma _{2}\left( dw\right)\end{aligned}$$ for any Borel measurable set $B$, where $\Sigma_1$ and $\Sigma_2$ are two finite measures defined on the unit sphere and $\Sigma_1$ is nondegenerate. As a result, $\nu$ satisfies **A(w,l)** with $w\left( r\right)=l\left( r\right)=r^{\alpha},r>0$. To see some other examples, let us adopt for now radial-angular coordinates and write $\nu$ as $$\label{mod} \nu\left( B\right)=\int_0^{\infty}\int_{\left\vert w\right\vert=1} 1_B\left( rw\right)a\left( r,w\right)j\left(r\right)r^{d-1}S\left( dw\right)dr, \quad \forall B\in\mathcal{B}\left( \mathbf{R}^d_0\right),$$ where $S\left( dw\right)$ is a finite measure on the unit sphere. Suppose $\Lambda\left( dt\right)$ is a measure on $\left(0,\infty\right)$ such that $\int_0^{\infty} \left( 1\wedge t\right) \Lambda\left( dt\right)<\infty$, and $\phi\left(r\right)=\int_0^{\infty}\left( 1-e^{-rt}\right)\Lambda\left( dt\right), r\geq 0$ is the associated Bernstein function.
Set in $\eqref{mod}$ $S\left(dw\right)$ to be the usual Lebesgue measure, $a\left( r,w\right)=1$, and $$j\left( r\right)=\int_0^{\infty}\left( 4\pi t\right)^{-d/2}\exp\left( -\frac{r^2}{4t}\right)\Lambda\left( dt\right), r>0.$$ Furthermore, assume\ **H.** (i) There is $C>1$ such that $$C^{-1}\phi\left( r^{-2}\right)r^{-d}\leq j\left(r\right)\leq C\phi\left( r^{-2}\right)r^{-d}.$$ (ii) There are $0<\delta_1\leq \delta_2<1$ and $C>0$ such that for all $0<r\leq R$ $$C^{-1}\left( \frac{R}{r}\right)^{\delta_1}\leq \frac{\phi\left( R\right)}{\phi\left( r\right)}\leq C\left( \frac{R}{r}\right)^{\delta_2}.$$ **G.** There is a function $\rho_0\left(w\right)$ defined on the unit sphere such that $\rho_0\left(w\right)\leq a\left( r,w\right)\leq 1, \forall r>0$, and for all $\left\vert \xi\right\vert=1$, $$\int_{S^{d-1}}\left\vert \xi\cdot w\right\vert^2\rho_0\left( w\right) S\left( dw\right)\geq c>0.$$ Possible choices of such $\Lambda$, and thus $\phi$, include\ (1) $\phi\left( r\right) =\Sigma_{i=1}^{n} r^{\alpha_i}, \alpha_i\in\left( 0,1\right), i=1,\ldots,n$;\ (2) $\phi\left( r\right) =\left( r+r^{\alpha}\right)^{\beta}, \alpha,\beta\in\left( 0,1\right)$;\ (3) $\phi\left( r\right) =r^{\alpha}\left(\ln\left(1+r\right)\right)^{\beta}, \alpha\in\left( 0,1\right),\beta\in\left( 0,1-\alpha\right)$;\ (4) $\phi\left( r\right) =\left[ \ln\left( \cosh \sqrt{r}\right)\right]^{\alpha}, \alpha\in\left( 0,1\right)$. It can be shown that Assumptions **H** and **G** provide quite a few examples in the **A(w,l)** class, with the scaling function $w\left( r\right)=j\left( r\right)^{-1}r^{-d}, r>0$ and the scaling factor $$l\left( r\right)=\left\{\begin{array}{ll} Cr^{2\delta_1} & \mbox{ if } r\leq 1,\\ Cr^{2\delta_2} & \mbox{ if } r> 1 \end{array}\right.$$ for some $C>0$. (See [@cr] for details.) In [@cr] and [@cr2], Cauchy problems were considered in the $L^p$-space and the $H^{\mu,s}_p$-space, respectively, which are defined by Lévy measures from the **A(w,l)** class.
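Assumption **H**(ii) is easy to check numerically for option (3). For $\phi\left( r\right)=r^{\alpha}\left(\ln\left(1+r\right)\right)^{\beta}$ with $\beta\in\left(0,1-\alpha\right)$, the choice $\delta_1=\alpha$, $\delta_2=\alpha+\beta$ works even with $C=1$, because $\ln\left(1+x\right)/x$ is decreasing; the particular exponents $\delta_1,\delta_2$ taken here are for illustration only, and the text does not fix them. A sketch:

```python
import numpy as np

# Option (3): phi(r) = r^alpha * (ln(1+r))^beta with beta in (0, 1 - alpha).
# Illustrative choice: delta1 = alpha, delta2 = alpha + beta, C = 1,
# justified by the monotonicity of ln(1+x)/x.
alpha, beta = 0.5, 0.3
delta1, delta2 = alpha, alpha + beta

def phi_bernstein(r):
    return r ** alpha * np.log1p(r) ** beta

# Check  (R/r)^delta1 <= phi(R)/phi(r) <= (R/r)^delta2  for 0 < r <= R.
rs = np.logspace(-3.0, 3.0, 61)
ok = True
for r in rs:
    for R in rs[rs >= r]:
        ratio = phi_bernstein(R) / phi_bernstein(r)
        q = R / r
        lo = q ** delta1 <= ratio * (1.0 + 1e-12)
        hi = ratio <= q ** delta2 * (1.0 + 1e-12)
        ok = ok and lo and hi
```

Note that $0<\delta_1\leq\delta_2<1$ holds here ($0.5$ and $0.8$), as **H**(ii) requires.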
More Discussion about the Model ------------------------------- Some estimates on the magnitude of the scaling function $w$ and the scaling factor $l$ can be extracted directly from their definitions. \[kl\] Let $w: \left( 0,\infty\right)\rightarrow \left( 0,\infty\right)$ be a scaling function and let $l$ be an associated scaling factor which satisfies $l\left( N^{-1}\right)<1<l\left( N\right)$. Define $r_1=\inf \{r>0: N^r\geq l\left( N\right)\}$ and $r_2=\sup \{r>0: N^{-r}\geq l\left( N^{-1}\right)\}$. Then \(i) there exist $c_0, C_0>0$ such that $$c_0\leq w\left( x\right)\leq C_0, \quad \forall x\in\left[ N^{-1},N\right],$$ (ii) $r_1\geq r_2$, and for $c_0, C_0$ above, $$\begin{aligned} c_0\left( x^{r_1}\wedge x^{r_2}\right)\leq w\left( x\right)\leq C_0 \left( x^{r_1}\vee x^{r_2}\right), \quad \forall x\in\mathbf{R}_{+}, \end{aligned}$$ (iii) for the same $c_0$ and $C_0$, $$\begin{aligned} l\left( x\right)\geq \frac{c_0}{C_0} \left( x^{r_1}\wedge x^{r_2}\right), \quad \forall x\in\mathbf{R}_{+}. \end{aligned}$$ (iv) Let $\gamma\left(x\right):=\inf\{s:l\left( s\right)\geq x\}$. Then for the same $c_0$ and $C_0$, $$\begin{aligned} \gamma\left( x\right)\leq \frac{C_0}{c_0} \left( x^{\frac{1}{r_1}}\vee x^{\frac{1}{r_2}}\right), \quad \forall x\in\mathbf{R}_{+}.\end{aligned}$$ \(i) Using $\eqref{scale}$ and the monotonicity of $l$, for $x\in\left[ N^{-1},N\right]$, $$\begin{aligned} l\left(N\right)^{-1}w\left( 1\right)\leq l\left(x^{-1}\right)^{-1}w\left( 1\right)\leq w\left( x\right)\leq l\left( x\right)w\left( 1\right)\leq l\left( N\right)w\left( 1\right). \end{aligned}$$ Set $c_0=l\left(N\right)^{-1}w\left( 1\right)$, $C_0=l\left( N\right)w\left( 1\right)$. \(ii) Apply $\eqref{scale}$ iteratively. For $x\in\left[ N^{j},N^{j+1}\right], \forall j\in\mathbf{N}$, we have $$\begin{aligned} c_0 x^{r_2}\leq l\left(N^{-1}\right)^{-j-1}w\left( N^{-j-1}x\right)\leq w\left( x\right)\leq l\left( N\right)^{j}w\left( N^{-j}x\right)\leq C_0 x^{r_1}.
\end{aligned}$$ Since this is true for arbitrarily large $x$, we can conclude $r_1\geq r_2$. For $x\in\left[ N^{-j-1},N^{-j}\right], \forall j\in\mathbf{N}$, $$\begin{aligned} c_0 x^{r_1}\leq l\left(N\right)^{-j}w\left( N^{-j}x\right)\leq w\left( x\right)\leq l\left( N^{-1}\right)^{j+1}w\left( N^{j+1}x\right)\leq C_0 x^{r_2}. \end{aligned}$$ In summary, $$\begin{aligned} c_0\left( x^{r_1}\wedge x^{r_2}\right)\leq w\left( x\right)\leq C_0 \left( x^{r_1}\vee x^{r_2}\right), \quad \forall x\in\mathbf{R}_{+}. \end{aligned}$$ \(iii) By (ii) and $\eqref{scale}$, $$\begin{aligned} l\left( x\right)\geq w\left( 1\right)^{-1} w\left( x\right)\geq \frac{c_0}{C_0}\left( x^{r_1}\wedge x^{r_2}\right), \quad \forall x\in\mathbf{R}_{+}. \end{aligned}$$ \(iv) is a direct conclusion from (iii). **Remark:** Suggested by the bounds in (ii), we redefine $$\begin{aligned} r_1&=&\inf \left\{ \sigma>0:\limsup_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=0\right\}\vee\sup \left\{ \sigma>0:\liminf_{r\rightarrow \infty}\frac{r^{\sigma }}{w \left( r\right) }=0\right\},\\ r_2&=&\sup \left\{ \sigma>0:\liminf_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=\infty\right\}\wedge\inf \left\{ \sigma>0:\limsup_{r\rightarrow \infty}\frac{r^{\sigma }}{w \left( r\right) }=\infty\right\}.\end{aligned}$$ Namely, we take the smallest possible $r_1$ and the largest possible $r_2$ such that (ii) holds. In Lemma \[order\], we shall show that the order $\alpha\in\left[ r_2,r_1\right]$. In $\alpha$-stable-like examples, $r_1=r_2=\alpha$. For models illustrated by Bernstein functions, $r_1\leq 2\delta_2, r_2\geq 2\delta_1$. (iii) and (iv) still hold with the amended parameters. The next lemma shows that the scaling function is essentially intrinsic to the measure $\nu$ whenever **A(w,l)** holds for $\nu$. \[ess\] Let $\nu$ be a Lévy measure and let $w$ be a scaling function for which $\nu$ satisfies **A(w,l)**.
Then,\ a) there are constants $C_1, C_2>0$ such that $$\begin{aligned} C_1\varsigma\left( r\right)\leq w\left( r\right)^{-1}\leq C_2 \varsigma\left( r\right), \quad\forall r>0. \end{aligned}$$ b) $\int_{\left\vert y\right\vert \leq 1}w\left( \left\vert y\right\vert \right) \nu\left( dy\right)=+\infty$.\ c) For any $\varepsilon>0$, $\int_{\left\vert y\right\vert \leq 1}w \left( \left\vert y\right\vert \right) ^{1+\varepsilon }\nu\left( dy\right) <\infty$.\ d) For any $\varepsilon >0$, $\int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\varepsilon }w\left( \left\vert y\right\vert \right) \nu\left( dy\right) <\infty$. a\) First, $$\begin{aligned} \label{11} w\left( r\right)^{-1}\int_{\left\vert y\right\vert>1}\tilde{\nu}_r\left( dy\right)=\int_{\left\vert y\right\vert>r}\nu\left( dy\right)=\varsigma\left( r\right),\quad \forall r>0,\end{aligned}$$ then by (iii) in **A(w,l)**, $$\begin{aligned} \label{10} \varsigma\left( r\right)\leq C w\left( r\right)^{-1}, \quad\forall r>0.\end{aligned}$$ On the other hand, for all $r>0$, $$\label{12} w\left( r\right)^{-1}\int_{\left\vert y\right\vert\leq 1}\left\vert y\right\vert^2\tilde{\nu}_r\left( dy\right)=r^{-2}\int_{\left\vert y\right\vert\leq r}\left\vert y\right\vert^2\nu\left( dy\right)=-r^{-2}\int_0^{r}s^2 d\varsigma\left( s\right).$$ By integration by parts for Lebesgue-Stieltjes integrals, $$\begin{aligned} &&\int_0^{r}s^2 d\varsigma\left( s\right)=\lim_{\varepsilon\rightarrow 0}\int_{\varepsilon}^{r}s^2 d\varsigma\left( s\right)\nonumber\\ &=&\lim_{\varepsilon\rightarrow 0} \left( s^2 \varsigma\left( s\right)\big\arrowvert_{\varepsilon}^r-2\int_{\varepsilon}^{r}s \varsigma\left( s\right)ds\right)\nonumber\\ &=& r^2 \varsigma\left( r\right)-2\int_{0}^{r}s \varsigma\left( s\right)ds.\label{13}\end{aligned}$$ Note that in the above derivation we used the fact that $\lim_{\varepsilon\rightarrow 0} \varepsilon^2 \varsigma\left( \varepsilon\right)=0$.
This is due to **A(w,l)**(i), $\eqref{10}$ and $\eqref{12}$, which imply $$\begin{aligned} \lim_{\varepsilon\rightarrow 0} \varepsilon^2 \varsigma\left( \varepsilon\right)\leq C\lim_{\varepsilon\rightarrow 0} \varepsilon^2 w\left( \varepsilon\right)^{-1}\leq C\lim_{\varepsilon\rightarrow 0}\int_{\left\vert y\right\vert\leq \varepsilon}\left\vert y\right\vert^2\nu\left( dy\right)=0.\end{aligned}$$ Combining $\eqref{11}$, $\eqref{12}$ and $\eqref{13}$, $$\begin{aligned} \label{14} \qquad\quad w\left( r\right)^{-1}\int\left(\left\vert y\right\vert^2\wedge 1\right)\tilde{\nu}_r\left( dy\right)=2r^{-2}\int_0^{r}s \varsigma\left( s\right) ds=2\int_0^{1}s \varsigma\left( rs\right) ds.\end{aligned}$$ Again by **A(w,l)**(i), $$\begin{aligned} w\left( r\right)^{-1}\leq C\varsigma\left( r\right)\int_0^{1}s \varsigma\left( rs\right)\varsigma\left( r\right)^{-1} ds\leq C\varsigma\left( r\right).\end{aligned}$$ b) Use a) and write the integral in terms of $\varsigma$: $$\begin{aligned} &&\int_{\left\vert y\right\vert \leq 1}w\left( \left\vert y\right\vert \right) \nu\left( dy\right)=-\int_0^1 w\left( r \right)d\varsigma\left(r \right)\\ &\leq& -C\int_0^1 \varsigma\left( r \right)^{-1}d\varsigma\left(r \right)=-C\lim_{\varepsilon\rightarrow 0}\int_{\varepsilon}^1 \varsigma\left( r \right)^{-1}d\varsigma\left(r \right)\\ &=&C\left( \lim_{\varepsilon\rightarrow 0}\ln \varsigma\left( \varepsilon\right)-\ln \varsigma\left( 1\right)\right)=\infty.\end{aligned}$$ c\) For any $\varepsilon >0$, $$\int_{\left\vert y\right\vert \leq 1}w\left( \left\vert y\right\vert \right) ^{1+\varepsilon }\nu\left( dy\right) \leq C\int_{0}^{1}\varsigma \left( r\right) ^{-1-\varepsilon }d\varsigma \left( r\right) =C\varsigma \left( r\right) ^{-\varepsilon }|_{0}^{1}<\infty .$$ d) By c), for any $\varepsilon >0$, $\sigma'>\inf \left\{ \sigma\in\left( 0,2\right):\limsup_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=0\right\}$, $$\begin{aligned} \int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\varepsilon }w
\left( \left\vert y\right\vert \right) \nu\left( dy\right) &=&\int_{\left\vert y\right\vert \leq 1}\left( \frac{\left\vert y\right\vert ^{\sigma ^{\prime }}}{w\left( \left\vert y\right\vert \right) }\right) ^{\frac{\varepsilon }{\sigma ^{\prime }}}w\left( \left\vert y\right\vert \right) ^{1+\frac{\varepsilon }{\sigma ^{\prime }}}\nu\left( dy\right)\\ &\leq &C\int_{\left\vert y\right\vert \leq 1}w \left( \left\vert y\right\vert \right) ^{1+\frac{\varepsilon }{\sigma ^{\prime }}}\nu\left( dy\right) <\infty.\end{aligned}$$ \[order\] Let $\nu$ be a Lévy measure, let $w$ be a scaling function for which $\nu$ satisfies **A(w,l)**, and let $\alpha$ be the order of $\nu$. Then $$\alpha=\inf \left\{ \sigma:\limsup_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=0\right\} .$$ Denote $\alpha'=\inf \left\{ \sigma\in\left( 0,\infty\right):\limsup_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=0\right\}$. We first show that $\alpha'\leq \alpha$. Note if $\sigma\in \left(0,\alpha'\right)$, $\limsup_{r\rightarrow 0}\frac{ r^{\sigma }}{w \left( r\right) }=\infty$. Otherwise $$\begin{aligned} \limsup_{r\rightarrow 0}\frac{ r^{\frac{\sigma+\alpha'}{2}}}{w \left( r\right) }\leq \limsup_{r\rightarrow 0}\frac{ r^{\sigma }}{w \left( r\right) }\lim_{r\rightarrow 0} r^{\frac{\alpha'-\sigma}{2} }=0, \end{aligned}$$ which contradicts the definition of $\alpha'$. Now take $0<r\leq 1$. For any $\sigma\in \left(\alpha,\infty\right)$, $$\begin{aligned} \int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\sigma }d\nu\geq\int_{\left\vert y\right\vert \leq r}\left\vert y\right\vert ^{\sigma}d\nu=\frac{r^{\sigma }}{w \left( r\right) }\int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\sigma}d\tilde{\nu}_{r}\geq \frac{r^{\sigma }}{w\left( r\right) }\int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{2}d\tilde{\nu}_{r}. \end{aligned}$$ By (i) in **A(w,l)**, $\{\tilde{\nu}_{r}:r>0\}$ are non-degenerate.
Hence $$\begin{aligned} c\frac{r^{\sigma }}{w\left( r\right) }\leq\frac{r^{\sigma }}{w\left( r\right) }\int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{2}d\tilde{\nu}_{r}\leq \int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\sigma}d\nu<C\end{aligned}$$ for some $c,C>0$ independent of $r$. Thus $\alpha'\leq \sigma $, and hence $\alpha'\leq\alpha$. For the other direction, assume to the contrary that $\alpha' <\alpha$. Then by Lemma \[ess\], for $\alpha' <\sigma'<\sigma<\alpha$, $$\begin{aligned} \int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\sigma }d\nu &=&\int_{\left\vert y\right\vert \leq 1} \frac{\left\vert y\right\vert ^{\sigma ^{\prime }}}{w\left( \left\vert y\right\vert \right) }\left\vert y\right\vert ^{\sigma-\sigma ^{\prime }}w\left( \left\vert y\right\vert \right) d\nu \\ &\leq &C\int_{\left\vert y\right\vert \leq 1}\left\vert y\right\vert ^{\sigma-\sigma ^{\prime }}w\left( \left\vert y\right\vert \right) d\nu <\infty.\end{aligned}$$ But this contradicts the definition of $\alpha$. Therefore, $\alpha\leq\alpha'$. Combining the two directions, we obtain $\alpha'=\alpha$. By Lemma \[order\], it follows immediately that two Lévy measures which satisfy **A(w,l)** for the same $w,l$ have the same order. The last two lemmas of this section explain why we restricted the Hölder order $\beta \in \left( 0,1/\alpha\right)$ when defining generalized Hölder spaces in section 2.2. If $\beta$ is greater than or equal to $1/\alpha$, the space may degenerate to a trivial one that is no longer of interest.
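For the $\alpha$-stable example $\nu\left( dy\right)=\left\vert y\right\vert^{-1-\alpha}dy$ on $\mathbf{R}\setminus\{0\}$, both Lemma \[ess\] a) and Lemma \[order\] can be checked by hand: $\varsigma\left( r\right)=\left( 2/\alpha\right)r^{-\alpha}$ is comparable to $w\left( r\right)^{-1}=r^{-\alpha}$, and $r^{\sigma}/w\left( r\right)\rightarrow 0$ as $r\rightarrow 0$ exactly when $\sigma>\alpha$. A small Python sketch of this check, with the arbitrary illustrative value $\alpha=0.6$:

```python
alpha = 0.6  # illustrative order of the stable-like example

def w(r):
    # scaling function in the alpha-stable case
    return r ** alpha

def varsigma(r):
    # tail of nu(dy) = |y|^{-1-alpha} dy on R \ {0}:
    # varsigma(r) = 2 * int_r^infty y^{-1-alpha} dy = (2/alpha) r^{-alpha}
    return 2.0 / alpha * r ** (-alpha)

# Lemma [ess] a): varsigma(r) is comparable to w(r)^{-1} (here exactly, constant 2/alpha)
for r in (1e-3, 0.1, 1.0, 10.0, 1e3):
    assert abs(varsigma(r) * w(r) - 2.0 / alpha) < 1e-9

# Lemma [order]: r^sigma / w(r) = r^{sigma - alpha} -> 0 as r -> 0 iff sigma > alpha
small = 1e-40
for sigma, vanishes in ((alpha + 0.1, True), (alpha - 0.1, False)):
    ratio = small ** (sigma - alpha)   # = r^sigma / w(r) at r = small
    assert (ratio < 1e-3) == vanishes
```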
\[es1\] Set $\alpha'=\inf \left\{ \sigma\in\left( 0,2\right):\limsup_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=0\right\}$.\ a) If $0<\frac{1}{\beta }<\alpha'$ and $\sup_{x,y}\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w \left( \left\vert y\right\vert \right) ^{\beta }}<\infty $, then $f$ is a constant.\ b) If $\frac{1}{\beta }>\alpha'$ and $f$ is a bounded Lipschitz function, then for each $\varepsilon \in \left( 0,1\right) $, there is a positive constant $C_{\varepsilon}$ depending on $\varepsilon$ but independent of $f$ so that $$\sup_{x,y}\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w\left( \left\vert y\right\vert \right)^{\beta }}\leq \varepsilon \sup_{x,y}\frac{\left\vert f\left( x+y\right) -f\left( x\right) \right\vert }{\left\vert y\right\vert }+C_{\varepsilon}\left\vert f\right\vert_{0}.$$ Namely, the space $\tilde{C}^{\beta }$ contains all bounded Lipschitz functions. a\) Let $\varepsilon\in\left(0,\beta\alpha'-1\right)$ and set $\beta'\left( 1+\varepsilon \right) =\beta$. Then $\frac{1}{\beta }\leq \frac{1}{\beta'}<\alpha'$ and we can find a sequence $y_{n}\rightarrow 0$ so that $\frac{\left\vert y_{n}\right\vert ^{1/\beta ^{\prime }}}{w \left( \left\vert y_{n}\right\vert \right) }\geq C>0$. Let $f_{\varepsilon }=f\ast w_{\varepsilon}$ where $w_{\varepsilon }$ is the standard mollifier.
We then have $$\begin{aligned} \sup_{x,y}\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w \left( \left\vert y\right\vert \right) ^{\beta }} &\geq&C\frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_{n}\right) \right\vert }{w\left( \left\vert y_{n}\right\vert \right) ^{\beta }} \\ &=&C\frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_{n}\right) \right\vert }{\left\vert y_{n}\right\vert ^{1+\varepsilon }}\left( \frac{\left\vert y_{n}\right\vert ^{\frac{1}{\beta ^{\prime }}}}{w\left( \left\vert y_{n}\right\vert \right) }\right) ^{\beta ^{\prime }(1+\varepsilon )} \\ &\geq &C\frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_{n}\right) \right\vert }{\left\vert y_{n}\right\vert ^{1+\varepsilon }},\quad x\in \mathbf{R}^{d}. \end{aligned}$$ Hence $\nabla f_{\varepsilon }\left( x\right) =0,x\in \mathbf{R}^{d},\forall \varepsilon \in \left(0,\beta\alpha'-1\right)$, which implies $ f_{\varepsilon }\left( x\right) =C_{\varepsilon}, \forall \varepsilon \in \left(0,\beta\alpha'-1\right)$. Since $f$ is continuous, $f_{\varepsilon }\rightarrow f$ uniformly on compact subsets, and thus $f$ is a constant.
b\) Since $\limsup_{r\rightarrow 0}\frac{r^{1/\beta }}{w\left( r\right) }=0$, for each $\varepsilon \in \left( 0,1\right) $ there is $\delta >0$ so that $\frac{\left\vert y\right\vert ^{\frac{1}{\beta }}}{w\left( \left\vert y\right\vert \right) }\leq \varepsilon^{\frac{1}{\beta }} $ if $\left\vert y\right\vert \leq \delta .$ Hence, $$\begin{aligned} \frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w \left( \left\vert y\right\vert \right) ^{\beta }} &=&\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{\left\vert y\right\vert }\left( \frac{\left\vert y\right\vert ^{\frac{1}{\beta }}}{w\left( \left\vert y\right\vert \right) }\right) ^{\beta } \\ &\leq &\varepsilon \frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{\left\vert y\right\vert } \end{aligned}$$ if $\left\vert y\right\vert \leq \delta$, and $$\begin{aligned} \frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w \left( \left\vert y\right\vert \right) ^{\beta }} \leq 2w\left( \delta\right)^{-\beta}l\left( 1\right)^{\beta}\left\vert f\right\vert_0\leq C_{\varepsilon}\left\vert f\right\vert_0 \end{aligned}$$ if $\left\vert y\right\vert > \delta$.
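The mechanism of part b) can be seen concretely in the $\alpha$-stable case $w\left( r\right)=r^{\alpha}$ with $\alpha\beta<1$: for a bounded Lipschitz $f$, the ratio $\left\vert f\left( x\right)-f\left( x+y\right)\right\vert/w\left( \left\vert y\right\vert\right)^{\beta}$ is controlled by $\mathrm{Lip}\left( f\right)\left\vert y\right\vert^{1-\alpha\beta}$ for small $\left\vert y\right\vert$ and by $2\left\vert f\right\vert_0 w\left( \left\vert y\right\vert\right)^{-\beta}$ for large $\left\vert y\right\vert$. A hedged Python sketch with the illustrative choices $f=\sin$, $\alpha=0.8$, $\beta=1$ (ours, not from the text):

```python
import math

alpha, beta = 0.8, 1.0   # w(r) = r^alpha, so alpha' = alpha; note beta < 1/alpha
f = math.sin             # bounded Lipschitz: |f|_0 <= 1 and Lip(f) <= 1

def holder_ratio(x, y):
    # |f(x) - f(x+y)| / w(|y|)^beta with w(r) = r^alpha
    return abs(f(x) - f(x + y)) / abs(y) ** (alpha * beta)

# Split as in the proof: for |y| <= 1 the Lipschitz bound gives
# ratio <= |y|^{1 - alpha*beta} <= 1, while for |y| >= 1 boundedness gives
# ratio <= 2 |y|^{-alpha*beta} <= 2.  So the ratio is uniformly bounded by 2.
ratios = [holder_ratio(x, y)
          for x in (0.0, 0.3, 1.7, -2.5)
          for y in (1e-6, 1e-3, 0.1, 1.0, 5.0, -5.0, 100.0)]
assert max(ratios) <= 2.0
```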
\[es2\] Let $\alpha'=\inf \left\{ \sigma\in\left( 0,2\right):\limsup_{r\rightarrow 0}\frac{r^{\sigma }}{w \left( r\right) }=0\right\}$.\ a) If $\limsup_{r\rightarrow 0}\frac{r^{\alpha'}}{w\left( r\right) }=0$ and $f$ is a bounded Lipschitz function, then for each $\varepsilon \in \left( 0,1\right) $ there is a positive constant $C_{\varepsilon}$ depending on $\varepsilon$ but independent of $f$ so that $$\sup_{x,y}\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w\left( \left\vert y\right\vert \right) ^{1/\alpha'}}\leq \varepsilon \sup_{x,y}\frac{\left\vert f\left( x+y\right) -f\left( x\right) \right\vert }{\left\vert y\right\vert }+C_{\varepsilon}\left\vert f\right\vert _{0}.$$ Namely, the space $\tilde{C}^{1/\alpha'}$ contains all bounded Lipschitz functions.\ b) If $\limsup_{r\rightarrow 0}\frac{r^{\alpha'}}{w \left( r\right) }\in \left( 0,\infty \right) $, then $\tilde{C}^{1/\alpha'}$ is the space of bounded Lipschitz functions.\ c) If $\limsup_{r\rightarrow 0}\frac{r^{\alpha'}}{w\left( r\right) }=\infty $, then $\tilde{C}^{1/\alpha'}$ consists of constants only. a\) The proof is identical to part b) of Lemma \[es1\].\ b) Let $w_{\varepsilon }$ be a standard mollifier and $f_{\varepsilon }=f\ast w_{\varepsilon}$. For any $f\in \tilde{C}^{1/\alpha'}$, there is a sequence $y_{n}\rightarrow 0$ so that $\frac{\left\vert y_{n}\right\vert ^{\alpha'}}{w \left( \left\vert y_{n}\right\vert \right) }\geq c>0$.
Then $$\begin{aligned} \frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_{n}\right) \right\vert }{w\left( \left\vert y_{n}\right\vert \right) ^{1/\alpha'}}&=&\frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_n\right) \right\vert }{\left\vert y_n\right\vert }\left( \frac{\left\vert y_n\right\vert ^{\alpha'}}{w\left( \left\vert y_n\right\vert \right) }\right) ^{1/\alpha'} \\ &\geq& c\frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_{n}\right) \right\vert }{\left\vert y_{n}\right\vert }. \end{aligned}$$ Thus, $\left\vert \nabla f_{\varepsilon }\right\vert _{0}\leq C\left\vert f\right\vert _{1/\alpha'}$, and thus $\left\vert \nabla f\right\vert _{0}\leq C\left\vert f\right\vert _{1/\alpha'}$. On the other hand, $\frac{\left\vert y\right\vert ^{\alpha'}}{w \left( \left\vert y\right\vert \right) }\leq C$ if $\left\vert y\right\vert \leq 1$, then $$\begin{aligned} \frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w\left( \left\vert y\right\vert \right) ^{1/\alpha'}} &=&\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{\left\vert y\right\vert }\left( \frac{\left\vert y\right\vert ^{\alpha'}}{w\left( \left\vert y\right\vert \right) }\right) ^{1/\alpha'} \\ &\leq &C\frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{\left\vert y\right\vert } \end{aligned}$$ if $\left\vert y\right\vert \leq 1$, and for $\left\vert y\right\vert> 1$, $$\begin{aligned} \frac{\left\vert f\left( x\right) -f\left( x+y\right) \right\vert }{w \left( \left\vert y\right\vert \right)^{1/\alpha'}} \leq 2w\left( 1\right)^{-1/\alpha'}l\left( 1\right)^{1/\alpha'}\left\vert f\right\vert_0\leq C\left\vert f\right\vert_0.
\end{aligned}$$ Hence $f\in \tilde{C}^{1/\alpha'}$ if $f$ is a bounded Lipschitz function.\ c) There is a sequence $\{y_{n}:n\in\mathbf{N}\}$ so that $y_{n}\rightarrow 0$ and for any $n\in\mathbf{N}$, $\frac{\left\vert y_k\right\vert ^{\alpha'}}{w \left( \left\vert y_k\right\vert \right) }\geq n$ if $k\geq n$. Then for $k\geq n$, $$n^{\frac{1}{\alpha'}}\frac{\left\vert f_{\varepsilon }\left( x\right) -f_{\varepsilon }\left( x+y_k\right) \right\vert }{\left\vert y_k\right\vert }\leq \frac{\left\vert f_{\varepsilon }\left( x\right)-f_{\varepsilon }\left( x+y_k\right) \right\vert }{w\left( \left\vert y_k\right\vert \right) ^{1/\alpha'}}\leq \left\vert f_{\varepsilon }\right\vert_{1/\alpha'}\leq C\left\vert f\right\vert_{1/\alpha'}.$$ Thus $\nabla f_{\varepsilon }=0$, $\forall x\in \mathbf{R}^{d}$, $\forall\varepsilon \in \left( 0,1\right)$, and thus $f$ is a constant. Characterization of Spaces and Norm Equivalence =============================================== Our target spaces of general smoothness are $\tilde{C}^{\beta}$, $\tilde{C}^{\beta}_{\infty,\infty}$, $C^{\mu,\kappa,\beta}$, $\tilde{C}^{\mu,\kappa,\beta}$ endowed with norms $\left\vert \cdot\right\vert_{\beta}$, $\left\vert \cdot\right\vert_{\beta,\infty}$, $\left\vert\cdot\right\vert_{\mu,\kappa,\beta}$, $\left\Vert\cdot\right\Vert_{\mu,\kappa,\beta}$ respectively, and our goal in this section is to establish norm equivalence among them. \[equiv\] Let $\beta\in\left( 0,\infty\right)$. If $u\in \tilde{C}^{\beta}_{\infty,\infty}\left( \mathbf{R}^d\right)$, then $u\in C\left( \mathbf{R}^d\right)$ and $u\left( x\right)=\sum_{j=0}^{\infty}\left( u\ast\varphi_j\right)\left( x\right)$. Moreover, $$\label{sup} \left\vert u\right\vert_0\leq \sum_{j=0}^{\infty}\left\vert u\ast\varphi_j\right\vert_0\leq C\left( \beta\right)\left\vert u\right\vert_{\beta,\infty}.$$ Note that $u\ast\varphi_j\in C\left(\mathbf{R}^d\right), \forall j\in \mathbf{N}$, and so is $\sum_{j=0}^{n} u\ast\varphi_j,\forall n\in \mathbf{N}_{+} $.
Since $$\begin{aligned} \sum_{j=0}^{\infty}\left\vert u\ast\varphi_j\right\vert_0 &=& \sum_{j=0}^{\infty}w\left( N^{-j}\right)^{\beta}w\left( N^{-j}\right)^{-\beta}\left\vert u\ast\varphi_j\right\vert_0\\ &\leq& \sup_{j\geq 0}w\left( N^{-j}\right)^{-\beta}\left\vert u\ast\varphi_j\right\vert_0\sum_{j=0}^{\infty}w\left( N^{-j}\right)^{\beta}\\ &\leq& C\left\vert u\right\vert_{\beta,\infty}\sum_{j=0}^{\infty}l\left( N^{-1}\right)^{j\beta}<\infty, \end{aligned}$$ we have $\sum_{j=0}^{n} u\ast\varphi_j\rightarrow \sum_{j=0}^{\infty}u\ast\varphi_j$ uniformly in $\mathbf{R}^d$ as $n\to\infty$. Therefore, $\sum_{j=0}^{\infty}u\ast\varphi_j\in C\left(\mathbf{R}^d\right)$, and $\sum_{j=0}^{n} u\ast\varphi_j\xrightarrow{n\to\infty} \sum_{j=0}^{\infty}u\ast\varphi_j$ in the topology of $\mathcal{S}'\left(\mathbf{R}^d\right)$. By continuity of the Fourier transform, $$\mathcal{F}\left(\sum_{j=0}^{\infty}u\ast\varphi_j\right)=\lim_{n\rightarrow \infty}\mathcal{F}\left(\sum_{j=0}^{n} u\ast\varphi_j\right)= \lim_{n\rightarrow \infty}\sum_{j=0}^{n} \hat{u}\hat{\varphi_j}=\sum_{j=0}^{\infty} \hat{u}\hat{\varphi_j}=\hat{u}.$$ Therefore, $u=\sum_{j=0}^{\infty}u\ast\varphi_j\in C\left(\mathbf{R}^d\right)$. \[pr2\] Let $\beta\in\left( 0,1\right)$ and $$\begin{aligned} \label{asl} \int_0^1 l\left( t\right)^{\beta}\frac{dt}{t}+\int_1^{\infty} l\left( t\right)^{\beta}\frac{dt}{t^2}<\infty. \end{aligned}$$ Then the norms $\left\vert u\right\vert_{\beta}$ and $\left\vert u\right\vert_{\beta,\infty}$ are equivalent. Namely, there is a positive constant $C$ depending only on $d,\beta, N$ such that $$\begin{aligned} C^{-1}\left\vert u\right\vert_{\beta}\leq \left\vert u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\beta}, \forall u\in C\left( \mathbf{R}^d\right). \end{aligned}$$ Suppose $\left\vert u\right\vert_{\beta}<\infty$.
If $j=0$, then $$w\left(1\right)^{-\beta}\left\vert u\ast\varphi_0\right\vert_0\leq w\left( 1\right)^{-\beta}\left\vert u\right\vert_0\int\left\vert\varphi_0\left(y\right)\right\vert dy\leq C\left\vert u\right\vert_{\beta}.$$ If $j\neq 0$, then by the construction of $\varphi_j$, $\int \varphi_j\left(y\right)dy=\hat{\varphi}_j\left( 0\right)=0$. Therefore, $$\begin{aligned} &&w\left( N^{-j}\right)^{-\beta}\left\vert u\ast\varphi_j\right\vert_0\\ &=& w\left( N^{-j}\right)^{-\beta}\left\vert \int \left[u\left(y\right)-u\left(x\right)\right]\varphi_j\left(x-y\right)dy\right\vert_0\\ &\leq& w\left( N^{-j}\right)^{-\beta}\left[ u\right]_{\beta}\int w\left(\left\vert y-x\right\vert\right)^{\beta}N^{jd}\left\vert \check{ \phi}\left( N^j\left(x-y\right)\right)\right\vert dy\\ &=&w\left( N^{-j}\right)^{-\beta}\left[ u\right]_{\beta}\int w\left(N^{-j}\left\vert y\right\vert\right)^{\beta}\left\vert\check{ \phi}\left( y\right)\right\vert dy\\ &\leq& \left[ u\right]_{\beta}\int l\left(\left\vert y\right\vert\right)^{\beta}\left\vert\check{ \phi}\left( y\right)\right\vert dy\leq C\left\vert u\right\vert_{\beta}. \end{aligned}$$ That is to say $\left\vert u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\beta}$ for some constant $C\left( \beta,d\right)>0$. For the other direction, by Lemma \[equiv\], $\left\vert u\right\vert_0\leq C\left\vert u\right\vert_{\beta,\infty}$. Meanwhile, we can write $$\left[ u\right]_{\beta}=\sup_{x,y}\frac{\left\vert u\left( x+y\right)-u\left( x\right)\right\vert}{w\left(\left\vert y\right\vert\right)^{\beta}}\leq\sup_{t>0}\frac{\sup_{\left\vert y\right\vert\leq t}\left\vert u\left( x+y\right)-u\left( x\right)\right\vert_0}{w\left(t\right)^{\beta}}:=\sup_{t>0}\frac{\varpi\left( t,u\right)}{w\left(t\right)^{\beta}},$$ where $\varpi\left( t,u\right):=\sup_{\left\vert y\right\vert\leq t}\left\vert u\left( x+y\right)-u\left( x\right)\right\vert_0$ is increasing in $t$. 
Then, for $k\geq 0$, $$\varpi\left( N^{-k-1},u\right)\leq \varpi\left( t,u\right)\leq \varpi\left( N^{-k},u\right) \mbox{ if } N^{-k-1}\leq t< N^{-k},$$ and then by monotonicity of $l$, for $N^{-k-1}\leq t< N^{-k}$, $$\begin{aligned} l\left( N \right)^{-\beta}\frac{\varpi\left( N^{-k-1},u\right)}{w\left( N^{-k-1}\right)^{\beta}} \leq \frac{\varpi\left( t,u\right)}{w\left(t\right)^{\beta}}\leq l\left( N \right)^{\beta}\frac{\varpi\left( N^{-k},u\right)}{w\left( N^{-k}\right)^{\beta}} . \end{aligned}$$ Hence, $$\begin{aligned} \left[ u\right]_{\beta} &=& \sup_{t\geq 1}\frac{\sup_{\left\vert y\right\vert\leq t}\left\vert u\left( x+y\right)-u\left( x\right)\right\vert_0}{w\left(t\right)^{\beta}}\vee \sup_{0<t<1}\frac{\varpi\left( t,u\right)}{w\left(t\right)^{\beta}}\\ &\leq& C\left[\left\vert u\right\vert_0\vee \sup_{k\geq 0}w\left( N^{-k}\right)^{-\beta} \varpi\left( N^{-k},u\right)\right]\\ &\leq& C\left[\left\vert u\right\vert^{\beta}_{\infty,\infty}\vee \sup_{k\geq 0}w\left( N^{-k}\right)^{-\beta} \varpi\left( N^{-k},u\right)\right]. \end{aligned}$$ It suffices to show that $w\left( N^{-k}\right)^{-\beta} \varpi\left( N^{-k},u\right)\leq C\left\vert u\right\vert_{\beta,\infty}$ for any $k\in\mathbf{N}$. Use the convolution functions introduced in section 2.2. 
Note $$\begin{aligned} &&\left\vert u\ast \varphi _{0}\left( x+y\right) -u\ast \varphi _{0}\left( x\right) \right\vert \\ &\leq &\int \left\vert \tilde{\varphi}_{0}\left( x+y-z\right) -\tilde{\varphi% }_{0}\left( x-z\right) \right\vert \left\vert u\ast \varphi _{0}\left( z\right) \right\vert dz \\ &\leq &C\left( \left\vert y\right\vert \wedge 1\right) \left\vert u\ast \varphi _{0}\right\vert _{0}, \end{aligned}$$ and $$\begin{aligned} &&\left\vert u\ast \varphi _{j}\left( x+y\right) -u\ast \varphi _{j}\left( x\right) \right\vert \\ &\leq &N^{jd}\int \left\vert \tilde{\varphi}\left( N^{j}\left( x+y-z\right) \right) -\tilde{\varphi}\left( N^{j}\left( x-z\right) \right) \right\vert \left\vert u\ast \varphi _{j}\left( z\right) \right\vert dz \\ &\leq &C\left( \left\vert N^{j}y\right\vert \wedge 1\right) \left\vert u\ast \varphi _{j}\right\vert _{0},\quad j\geq 1. \end{aligned}$$ Therefore by Lemma \[equiv\], for each $k\in \mathbf{N}$, $$\begin{aligned} \varpi\left( N^{-k},u\right) &=&\sup_{\left\vert y\right\vert \leq N^{-k}}\left\vert u\left( x+y\right) -u\left( x\right) \right\vert _{0} \\ &\leq &\sup_{\left\vert y\right\vert \leq N^{-k}}\sum_{j=0}^{\infty }\left\vert u\ast \varphi _{j}\left( x+y\right) -u\ast \varphi _{j}\left( x\right) \right\vert _{0} \\ &\leq &C\sup_{\left\vert y\right\vert \leq N^{-k}}\sum_{j=0}^{\infty }\left( N^{j}\left\vert y\right\vert \wedge 1\right) \left\vert u\ast \varphi _{j}\right\vert _{0}, \end{aligned}$$ and therefore, $$\begin{aligned} \varpi\left( N^{-k},u\right) &\leq& C\left\vert u\right\vert _{\beta ,\infty }\sup_{\left\vert y\right\vert \leq N^{-k}}\sum_{j=0}^{\infty }\left( N^{j}\left\vert y\right\vert\wedge 1\right) w \left( N^{-j}\right) ^{\beta } \\ &\leq &C\left\vert u\right\vert _{\beta ,\infty }\left[ \sum_{j=0}^{k}N^{j-k}w\left( N^{-j}\right) ^{\beta }+\sum_{j=k+1}^{\infty }w\left( N^{-j}\right) ^{\beta }\right] . 
\end{aligned}$$ Clearly, for all $j\in\mathbf{N}$, $j\leq x\leq j+1$, $$\begin{aligned} l\left( 1\right)^{-\beta} w\left( N^{-x}\right)^{\beta}&\leq& w\left( N^{-j}\right)^{\beta}\leq l\left( N\right)^{\beta}w\left( N^{-x}\right)^{\beta},\\ \frac{l\left( 1\right)^{-\beta}}{N} N^x w\left( N^{-x}\right)^{\beta} &\leq& N^j w\left( N^{-j}\right)^{\beta}\leq l\left( N\right)^{\beta}N^x w\left( N^{-x}\right)^{\beta}. \end{aligned}$$ Then for all $k\in\mathbf{N}$, $$\begin{aligned} C_{1}\int_{0}^{k+1}N^{x}w \left( N^{-x}\right) ^{\beta }dx\leq \sum_{j=0}^{k}N^{j}w \left( N^{-j}\right) ^{\beta }\leq C_{2}\int_{0}^{k+1}N^{x}w \left( N^{-x}\right) ^{\beta }dx \end{aligned}$$ for some positive constants $C_1,C_2$ that do not depend on $k,j$. Hence, $$\begin{aligned} &&\sum_{j=0}^{k}N^{j-k}w\left( N^{-j}\right) ^{\beta}= N^{-k}\sum_{j=0}^{k}N^{j}w \left( N^{-j}\right) ^{\beta }\\ &\leq& C N^{-k}\int_{0}^{k+1}N^{x}w \left( N^{-x}\right) ^{\beta }dx=CN^{-k}\int_{1}^{N^{k+1}}w \left( t^{-1}\right) ^{\beta}dt\\ &\leq&CN^{-k}w \left( N^{-k}\right) ^{\beta }\int_{1}^{N^{k+1}}l\left( \frac{N^{k}}{t}\right) ^{\beta }dt\\ &=& Cw \left( N^{-k}\right) ^{\beta}\int_{N^{-1}}^{N^{k}}l\left( r\right) ^{\beta }r^{-2}dr. \end{aligned}$$ Meanwhile, $$\begin{aligned} \sum_{j=k+1}^{\infty }w\left( N^{-j}\right) ^{\beta } &\leq &C\int_{k+1}^{\infty }w \left( N^{-x}\right) ^{\beta }dx\leq Cw \left( N^{-k}\right) ^{\beta }\int_{k+1}^{\infty }l\left( N^{k}N^{-x}\right) ^{\beta }dx \\ &=&Cw \left( N^{-k}\right) ^{\beta }\int_{0}^{N^{-1}}l\left( r\right) ^{\beta }\frac{dr}{r}.
\end{aligned}$$ Therefore, under the assumption $\eqref{asl}$, $$\begin{aligned} &&w\left( N^{-k}\right)^{-\beta} \varpi\left( N^{-k},u\right)\\ &\leq &Cw\left( N^{-k}\right)^{-\beta}\left\vert u\right\vert _{\beta ,\infty }\left[ \sum_{j=0}^{k}N^{j-k}w\left( N^{-j}\right) ^{\beta }+\sum_{j=k+1}^{\infty }w\left( N^{-j}\right) ^{\beta }\right]\\ &\leq &C\left\vert u\right\vert _{\beta ,\infty }\left[ \int_{N^{-1}}^{\infty}l\left( r\right) ^{\beta }r^{-2}dr+\int_{0}^{N^{-1}}l\left( r\right) ^{\beta }\frac{dr}{r}\right]\leq C\left\vert u\right\vert_{\beta,\infty}. \end{aligned}$$ That ends the proof. **Remark:** When $\nu\left( dy\right)=\frac{dy}{\left\vert y\right\vert^{d+\alpha}}$, one of the most intensively studied Lévy measures, or, as in [@zh], when $w\left( t\right)=l\left( t\right)=t^{\alpha}$, $\eqref{asl}$ reduces to $\beta<1/\alpha$, which corresponds to the classical equivalence of the Hölder-Zygmund norm and the Besov norm. The next lemma is fundamental to this paper. \[Lop\] Let $\nu$ be a Lévy measure satisfying (iii) in **A(w,l)**. For any function $\varphi\in C^{\infty}_b\left(\mathbf{R}^d\right)$, define $$\begin{aligned} L^{\tilde{\nu}_R}\varphi\left( x\right):=\int\left[ \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right]\tilde{\nu}_R\left(d y\right), R>0. \end{aligned}$$ Then, $$\label{fubi} \int\left\vert \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right\vert\tilde{\nu}_R\left(dy\right)<C\left( \alpha,d,\varphi,\alpha_1,\alpha_2\right).$$ Moreover, $L^{\tilde{\nu}_R}\varphi\in C^{\infty}_b\left(\mathbf{R}^d\right)$ and $D^{\gamma}L^{\tilde{\nu}_R}\varphi=L^{\tilde{\nu}_R} D^{\gamma}\varphi$, where $\gamma\in\mathbf{N}^d$ is a multi-index.
If $\varphi\left(x\right)\in\mathcal{S}\left(\mathbf{R}^d\right)$, then $L^{\tilde{\nu}_R}\varphi\left( x\right)\in L^1\left(\mathbf{R}^d\right)$ and $\left\vert L^{\tilde{\nu}_R}\varphi\right\vert_{L^1\left(\mathbf{R}^d\right)}\leq C$ for some positive $C$ that is uniform with respect to $R$. Obviously, $$\begin{aligned} \label{r1} &&\int\left\vert \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right\vert\tilde{\nu}_R\left(dy\right)\\ &\leq& 1_{\alpha\in\left(0,1\right)}\int_{\left\vert y\right\vert\leq 1}\int_0^1\left\vert \nabla\varphi\left(x+\theta y\right)\right\vert \left\vert y\right\vert d\theta\tilde{\nu}_R\left(dy\right)\nonumber\\ &&+ 1_{\alpha\in\left[1,2\right)}\int_{\left\vert y\right\vert\leq 1}\int_0^1\int_0^1\left\vert \nabla^2\varphi\left(x+\theta_1\theta_2 y\right)\right\vert \left\vert y\right\vert^2 d\theta_1 d\theta_2\tilde{\nu}_R\left(dy\right)\nonumber\\ &&+ \int_{\left\vert y\right\vert> 1}\left(\left\vert \varphi\left(x+y\right)\right\vert+\left\vert\varphi\left( x\right)\right\vert+\chi_{\alpha}\left( y\right)\left\vert y\right\vert\left\vert \nabla \varphi\left( x\right)\right\vert\right) \tilde{\nu}_R\left(dy\right)\nonumber. 
\end{aligned}$$ If $\varphi\left(x\right)\in C^{\infty}_b\left(\mathbf{R}^d\right)$, by (iii) in **A(w,l)**, $$\begin{aligned} &&\int\left\vert \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right\vert\tilde{\nu}_R\left(dy\right)\nonumber\\ &\leq& 1_{\alpha\in\left(0,1\right)}\int_{\left\vert y\right\vert\leq 1}\left\vert \nabla\varphi\right\vert_0\left\vert y\right\vert \tilde{\nu}_R\left(dy\right)+1_{\alpha\in\left[1,2\right)}\int_{\left\vert y\right\vert\leq 1}\left\vert \nabla^2\varphi\right\vert_0 \left\vert y\right\vert^2 \tilde{\nu}_R\left(dy\right)\nonumber\\ && +\int_{\left\vert y\right\vert> 1}\left(2\left\vert \varphi\right\vert_0+\chi_{\alpha}\left( y\right)\left\vert y\right\vert\left\vert \nabla \varphi\right\vert_0\right) \tilde{\nu}_R\left(dy\right)\nonumber\\ \quad &<&C\left( \int_{\left\vert y\right\vert\leq 1}\left\vert y\right\vert^{\alpha_1}\tilde{\nu}_R\left(dy\right)+\int_{\left\vert y\right\vert> 1}\left\vert y\right\vert^{\alpha_2}\tilde{\nu}_R\left(dy\right)\right)<C.\label{r2} \end{aligned}$$ Since $\partial_i \varphi\in C_b^{\infty}\left(\mathbf{R}^d\right), i=1,2,\ldots,d$, the same steps can be applied to $\partial_i \varphi$. Then $\eqref{r2}$ indicates that $L^{\tilde{\nu}_R} \partial_i \varphi\in C_b\left(\mathbf{R}^d\right)$ and $\partial_i L^{\tilde{\nu}_R}\varphi=L^{\tilde{\nu}_R} \partial_i\varphi$ by the dominated convergence theorem. Then, $L^{\tilde{\nu}_R}\varphi\in C^{\infty}_b\left(\mathbf{R}^d\right)$ and $D^{\gamma}L^{\tilde{\nu}_R}\varphi=L^{\tilde{\nu}_R} D^{\gamma}\varphi, \gamma\in \mathbf{N}^d$ is a consequence of induction. 
If $\varphi\left(x\right)\in \mathcal{S}\left(\mathbf{R}^d\right)$, then by $\eqref{r1}$, $$\begin{aligned} \left\vert L^{\tilde{\nu}_R}\varphi\right\vert_{L^1\left(\mathbf{R}^d\right)} &\leq&\int\int\left\vert \varphi\left( x+y\right)-\varphi\left( x\right)-\chi_{\alpha}\left( y\right)y\cdot \nabla \varphi\left( x\right)\right\vert\tilde{\nu}_R\left(dy\right)dx\\ &\leq& 1_{\alpha\in\left(0,1\right)}\int_{\left\vert y\right\vert\leq 1}\int_0^1\int\left\vert \nabla\varphi\left(x\right)\right\vert dx\left\vert y\right\vert d\theta\tilde{\nu}_R\left(dy\right)\\ &&+ 1_{\alpha\in\left[1,2\right)}\int_{\left\vert y\right\vert\leq 1}\int_0^1\int_0^1\int\left\vert \nabla^2\varphi\left(x\right)\right\vert dx \left\vert y\right\vert^2 d\theta_1 d\theta_2\tilde{\nu}_R\left(dy\right)\\ &&+\int_{\left\vert y\right\vert> 1}\int\left(2\left\vert\varphi\left( x\right)\right\vert+\chi_{\alpha}\left( y\right)\left\vert y\right\vert\left\vert \nabla \varphi\left( x\right)\right\vert\right) dx\tilde{\nu}_R\left(dy\right), \end{aligned}$$ again by (iii) in **A(w,l)**, $$\begin{aligned} \left\vert L^{\tilde{\nu}_R}\varphi\right\vert_{L^1\left(\mathbf{R}^d\right)}&\leq& C\left( \int_{\left\vert y\right\vert\leq 1}\left\vert y\right\vert^{\alpha_1}\tilde{\nu}_R\left(dy\right)+\int_{\left\vert y\right\vert> 1}\left\vert y\right\vert^{\alpha_2}\tilde{\nu}_R\left(dy\right)\right)\\ &\leq&C\left( \alpha,d,\varphi,\alpha_1,\alpha_2\right). \end{aligned}$$ The lemma below concerns the integrability of $L^{\nu,\kappa}\varphi$, $\kappa\in\left( 0,1\right)$, and its probabilistic representation, which we shall use repeatedly. \[rep\] Let $\nu$ be a Lévy measure satisfying (iii) in **A(w,l)**, let $\kappa\in\left( 0,1\right)$, and let $L^{\tilde{\nu}_R,\kappa}$, $R>0$, be the associated operator defined in $\eqref{opp}$.
Then for any $\varphi\left(x\right)\in C^{\infty}_b\left(\mathbf{R}^d\right)$, $$\begin{aligned} L^{\tilde{\nu}_R,\kappa}\varphi\left( x\right)=C\int_0^{\infty}t^{-1-\kappa}\mathbf{E}\left[\varphi\left( x+Z_t^{\overline{\tilde{\nu}_R}}\right)-\varphi\left( x\right)\right]dt,R>0,\label{kap} \end{aligned}$$ where $C^{-1}=\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt$ and $$\begin{aligned} \overline{\tilde{\nu}_R}\left( dy\right)=\frac{1}{2}\left( \tilde{\nu}_R\left( dy\right)+\tilde{\nu}_R\left(- dy\right)\right),R>0. \end{aligned}$$ Moreover, $L^{\tilde{\nu}_R,\kappa}\varphi\in C^{\infty}_b\left(\mathbf{R}^d\right)$, and if $\varphi\left(x\right)\in \mathcal{S}\left(\mathbf{R}^d\right)$, then $\left\vert L^{\tilde{\nu}_R,\kappa}\varphi\right\vert_{L^1\left( \mathbf{R}^d\right)}<C'$ for some $C'>0$ independent of $R$. Clearly, for all $R>0,\xi\in\mathbf{R}^d$, $\Re\psi^{\tilde{\nu}_R}\left( \xi\right)\leq 0$. Then for any $\kappa\in\left( 0,1\right)$, $$\begin{aligned} \int_0^{\infty}t^{-\kappa-1}\left( 1-\exp\{\Re\psi^{\tilde{\nu}_R}\left( \xi\right)t\}\right)dt=\left( -\Re\psi^{\tilde{\nu}_R}\left( \xi\right)\right)^{\kappa}\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt. \end{aligned}$$ Thus, $$\begin{aligned} L^{\tilde{\nu}_R,\kappa}\varphi\left( x\right) &=& C\mathcal{F}^{-1}\left[ \int_0^{\infty}t^{-\kappa-1}\left(\exp\{\Re\psi^{\tilde{\nu}_R}\left( \xi\right)t\}-1\right)\mathcal{F}\varphi dt \right]\left( x\right), \end{aligned}$$ where $C^{-1}=\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt$.
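For the reader's convenience, we note that this normalizing constant is explicit: integrating by parts (the boundary terms vanish since $t^{-\kappa}\left(1- e^{-t}\right)\sim t^{1-\kappa}$ as $t\downarrow 0$ and tends to $0$ as $t\to\infty$), $$\begin{aligned} C^{-1}=\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt=\frac{1}{\kappa}\int_0^{\infty}t^{-\kappa}e^{-t}dt=\frac{\Gamma\left( 1-\kappa\right)}{\kappa}, \end{aligned}$$ so $C=\kappa/\Gamma\left( 1-\kappa\right)$.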
Since $\Re\psi^{\tilde{\nu}_R}\left( \xi\right)=\psi^{\overline{\tilde{\nu}_R}}\left( \xi\right)$, by the Lévy-Khintchine formula, $$\begin{aligned} L^{\tilde{\nu}_R,\kappa}\varphi\left( x\right) &=& C\mathcal{F}^{-1}\left[ \int_0^{\infty}t^{-\kappa-1}\left(\exp\{\psi^{\overline{\tilde{\nu}_R}}\left( \xi\right)t\}-1\right)\mathcal{F}\varphi dt \right]\left( x\right)\\ &=& C\mathcal{F}^{-1}\left[ \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\mathcal{F}\left(\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right) dt \right]\left( x\right)\\ &=&C\mathcal{F}^{-1}\left[ \int_0^{\infty}t^{-\kappa-1}\mathcal{F}\mathbf{E}\left(\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right) dt \right]\left( x\right)\end{aligned}$$ if $\varphi\left(x\right)\in \mathcal{S}\left(\mathbf{R}^d\right)$. Note $$\begin{aligned} && \int_0^{\infty}t^{-\kappa-1}\left\vert\mathbf{E}\left[\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right]\right\vert dt \\ &\leq& \int_0^{1}t^{-\kappa-1}\int_0^t\left\vert L^{\overline{\tilde{\nu}_R}}\varphi\left(x+Z^{\overline{\tilde{\nu}_R}}_{r-}\right)\right\vert dr dt+\int_1^{\infty}t^{-\kappa-1}\mathbf{E}\left\vert\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right\vert dt, \end{aligned}$$ and note $\overline{\tilde{\nu}_R}=\widetilde{\bar{\nu}}_R$. If $\nu$ satisfies (iii) in **A(w,l)**, so does $\bar{\nu}$. Then by Lemma \[Lop\], $$\begin{aligned} &&\int_0^{\infty}t^{-\kappa-1}\int\left\vert\mathbf{E}\left[\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right]\right\vert dxdt \nonumber\\ &\leq& \int_0^{1}t^{-\kappa-1}\int_0^t\int\left\vert L^{\overline{\tilde{\nu}_R}}\varphi\left(x\right)\right\vert dx dr dt+2\int_1^{\infty}t^{-\kappa-1}\int\left\vert\varphi\left(x\right)\right\vert dx dt\nonumber\\ &\leq& C'\label{int} \end{aligned}$$ for some $C'>0$ independent of $R$. 
Thus Fubini’s theorem applies, and $$\begin{aligned} L^{\tilde{\nu}_R,\kappa}\varphi\left( x\right) =C \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right]dt. \end{aligned}$$ The $L^1$-integrability of $L^{\tilde{\nu}_R,\kappa}\varphi$ was shown in $\eqref{int}$. For $\varphi\in C^{\infty}_b\left(\mathbf{R}^d\right)$, we introduce $\{\zeta_n:n\in\mathbf{N}\}\subseteq C_0^{\infty}\left( \mathbf{R}^d\right)$ such that $0\leq \zeta_n\left( x\right)\leq 1,\forall n\in\mathbf{N},\forall x\in\mathbf{R}^d$ and $\zeta_n\left( x\right)= 1,\forall x\in\{x\in\mathbf{R}^d: \left\vert x\right\vert\leq n\}$. Then $\varphi\zeta_n\xrightarrow{n\to\infty} \varphi$ pointwise, which by the dominated convergence theorem implies that $\varphi\zeta_n\xrightarrow{n\to\infty} \varphi$ in the weak topology on $\mathcal{S}'\left(\mathbf{R}^d\right)$. Clearly, $\eqref{kap}$ holds for $\varphi\zeta_n$. Hence, $$\begin{aligned} &&<L^{\tilde{\nu}_R,\kappa}\varphi\zeta_n,\eta>\\ &=&C\int \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\varphi\zeta_n\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\zeta_n\left(x\right)\right]dt\eta\left( x\right)dx\\ &=&C\int \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\eta\left( x-Z^{\overline{\tilde{\nu}_R}}_t\right)-\eta\left( x\right)\right]dt\varphi\zeta_n\left(x\right)dx,\forall \eta\in \mathcal{S}\left( \mathbf{R}^d\right). \end{aligned}$$ Letting $n\rightarrow \infty$, $$\begin{aligned} &&\lim_{n\rightarrow \infty}<L^{\tilde{\nu}_R,\kappa}\varphi\zeta_n,\eta>\\ &=&C\int \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\eta\left( x-Z^{\overline{\tilde{\nu}_R}}_t\right)-\eta\left( x\right)\right]dt\varphi\left(x\right)dx\\ &=&C\int \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right]dt\eta\left( x\right)dx,\forall\eta\in \mathcal{S}\left( \mathbf{R}^d\right). 
\end{aligned}$$ That is, $L^{\tilde{\nu}_R,\kappa}\varphi\zeta_n\xrightarrow{n\to\infty} C\int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right]dt$ in the topology of $\mathcal{S}'\left(\mathbf{R}^d\right)$. By continuity of the Fourier transform, $$\begin{aligned} &&C\mathcal{F}\int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[\varphi\left( x+Z^{\overline{\tilde{\nu}_R}}_t\right)-\varphi\left(x\right)\right]dt\\ &=& \lim_{n\to\infty}\mathcal{F}\left[ L^{\tilde{\nu}_R,\kappa}\varphi\zeta_n\right]=-\left(-\Re\psi^{\tilde{\nu}_R}\right)^{\kappa}\lim_{n\to\infty}\mathcal{F}\left[ \varphi\zeta_n\right]\\ &=&-\left(-\Re\psi^{\tilde{\nu}_R}\right)^{\kappa}\mathcal{F}\varphi. \end{aligned}$$ Therefore, $\eqref{opp}$ is well-defined for all functions in $C^{\infty}_b\left(\mathbf{R}^d\right)$ and $\eqref{kap}$ applies. That $L^{\tilde{\nu}_R,\kappa}\varphi\in C^{\infty}_b\left(\mathbf{R}^d\right)$ follows from the dominated convergence theorem and induction. **Remark:** Lemma \[rep\] shows that $L^{\tilde{\nu}_R,\kappa}$, $R>0$, maps $C^{\infty}_b\left(\mathbf{R}^d\right)$ into itself. Because of that, we can set $L^{\nu,\kappa}=L^{\nu,\kappa/2}\circ L^{\nu,\kappa/2}$ if $\kappa\in\left( 1,2\right)$, where $\circ$ denotes composition of operators. Clearly, $\eqref{opp}$ is well-defined for all $\kappa\in\left( 1,2\right)$. \[co1\] Let $\nu$ be a Lévy measure satisfying **A(w,l)** and $\kappa\in\left(0,1\right]$. Denote $g_j=\mathcal{F}^{-1}\left[\mathcal{F} g \left(N^{-j}\cdot\right)\right],\forall g\left(x\right)\in \mathcal{S}\left(\mathbf{R}^d\right)$. Then there exists a constant $C>0$ independent of $j$ such that $$\begin{aligned} \left\vert L^{\nu,\kappa}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)}< C w\left( N^{-j}\right)^{-\kappa}.\end{aligned}$$ If $j=0$, this is a straightforward consequence of Lemmas \[Lop\] and \[rep\]. Now consider $j\neq 0$. 
By $\eqref{alpha1}$ in **A(w,l)**, $$\label{symbol} \psi^{\nu}\left( \xi\right) = w\left( N^{-j}\right)^{-1}\psi^{\tilde{\nu}_{N^{-j}}}\left( N^{-j}\xi\right), \xi\in\mathbf{R}^d, \forall j\in\mathbf{N}_{+},$$ therefore, by Lemma \[Lop\], $$\begin{aligned} \left\vert L^{\nu}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)} &=& \int \left\vert \mathcal{F}^{-1}\left[\psi^{\nu}\left( \xi\right)\mathcal{F}g\left( N^{-j}\xi\right)\right]\left( x\right)\right\vert dx\\ &=& \int \left\vert \mathcal{F}^{-1}\left[w\left( N^{-j}\right)^{-1}\psi^{\tilde{\nu}_{N^{-j}}}\left( N^{-j}\xi\right)\mathcal{F}g\left( N^{-j}\xi\right)\right]\left( x\right)\right\vert dx\\ &=& w\left( N^{-j}\right)^{-1}\int \left\vert \mathcal{F}^{-1}\left[\psi^{\tilde{\nu}_{N^{-j}}}\left( \xi\right)\mathcal{F}g\left( \xi\right)\right]\left( x\right)\right\vert dx\\ &=& w\left( N^{-j}\right)^{-1}\left\vert L^{\tilde{\nu}_{N^{-j}}}g\right\vert_{L^1\left( \mathbf{R}^d\right)}\\ &<&C\left( \alpha,d,\alpha_1,\alpha_2\right)w\left( N^{-j}\right)^{-1},\end{aligned}$$ and by Lemma \[rep\], $$\begin{aligned} \left\vert L^{\nu,\kappa}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)}&=& \int \left\vert \mathcal{F}^{-1}\left[-\left(-\Re\psi^{\nu}\left( \xi\right)\right)^{\kappa}\mathcal{F}g\left( N^{-j}\xi\right)\right]\left( x\right)\right\vert dx\\ &=& \int \left\vert \mathcal{F}^{-1}\left[-\left(-w\left( N^{-j}\right)^{-1}\Re\psi^{\tilde{\nu}_{N^{-j}}}\left( N^{-j}\xi\right)\right)^{\kappa}\mathcal{F}g\left( N^{-j}\xi\right)\right]\left( x\right)\right\vert dx\\ &=& w\left( N^{-j}\right)^{-\kappa}\int \left\vert \mathcal{F}^{-1}\left[-\left(- \Re\psi^{\tilde{\nu}_{N^{-j}}}\left( \xi\right)\right)^{\kappa}\mathcal{F}g\left( \xi\right)\right]\left( x\right)\right\vert dx\\ &=& w\left( N^{-j}\right)^{-\kappa}\left\vert L^{\tilde{\nu}_{N^{-j}},\kappa}g\right\vert_{L^1\left( \mathbf{R}^d\right)}\\ &<&C\left( \alpha,d,\alpha_1,\alpha_2\right)w\left( N^{-j}\right)^{-\kappa}.\end{aligned}$$ The following two lemmas are 
crucial for the proofs of the norm equivalences. \[bij\] Let $a\in\left( 0,\infty\right)$ and $\nu$ be a Lévy measure satisfying (iii) in **A(w,l)**. Then the operator $aI-L^{\nu}$ defines a bijection on $C_b^{\infty}\left( \mathbf{R}^d\right)$. Moreover, for any function $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$, $$\begin{aligned} &&\varphi\left( x\right) =\int_{0}^{\infty }e^{-a t}\mathbf{E}\left( aI -L^{\nu}\right) \varphi\left( x+Z_{t}^{\nu}\right) dt,\label{e1} \\ &&\left( aI -L^{\nu}\right)^{-1}\varphi\left( x\right) =\int_{0}^{\infty }e^{-a t}\mathbf{E} \varphi\left( x+Z_{t}^{\nu}\right) dt,\quad x\in \mathbf{R}^{d},\label{e2} \end{aligned}$$ where $Z_{t}^{\nu}$ is the Lévy process associated to $\nu$. By Lemma \[Lop\], $aI -L^{\nu}$ maps $C_b^{\infty}\left( \mathbf{R}^d\right)$ into $C_b^{\infty}\left( \mathbf{R}^d\right)$. Applying the Itô formula to $e^{-at}\varphi\left( x+Z^{\nu}_t\right)$ on $\left[0,S\right]$ with respect to $t$ and then taking expectations, we get $$\begin{aligned} && e^{-aS}\mathbf{E}\varphi\left( x+Z^{\nu}_S\right)-\varphi\left(x\right)\nonumber\\ &=& \int_0^S -ae^{-at}\mathbf{E}\varphi\left( x+Z^{\nu}_t\right)dt+\int_0^S e^{- at}\mathbf{E}L^{\nu}\varphi\left( x+Z^{\nu}_t\right)dt.\end{aligned}$$ Note that both $\varphi$ and $L^{\nu}\varphi$ are bounded. Letting $S\rightarrow \infty$, we obtain $\eqref{e1}$, which by Fubini’s theorem can also be written as $$\begin{aligned} \varphi\left( x\right) =\left( aI -L^{\nu}\right)\int_{0}^{\infty }e^{-a t}\mathbf{E} \varphi\left( x+Z_{t}^{\nu}\right) dt,\end{aligned}$$ namely, $aI-L^{\nu}$ is a surjection. Meanwhile, if $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$ satisfies $\left(aI-L^{\nu}\right)\varphi=0$, then the same procedure again yields $\eqref{e1}$, which forces $\varphi=0$; hence $aI-L^{\nu}$ is also injective, and therefore bijective. 
It follows immediately that $$\begin{aligned} \left( aI -L^{\nu}\right)^{-1}\varphi\left( x\right) =\int_{0}^{\infty }e^{-a t}\mathbf{E} \varphi\left( x+Z_{t}^{\nu}\right) dt,\quad x\in \mathbf{R}^{d}.\nonumber\end{aligned}$$ Similar results for $\left( aI-L^{\nu}\right)^{\kappa}$, $\kappa\in\left( 0,1\right)$, are stated in the next lemma. Denote $$\begin{aligned} \mathcal{A}\left( \mathbf{R}^d\right)=\{\varphi\in\mathcal{S}'\left( \mathbf{R}^d\right): \left( a-\Re \psi^{\nu}\right)^{\kappa}\mathcal{F}\varphi, \left( a-\Re \psi^{\nu}\right)^{-\kappa}\mathcal{F}\varphi\in\mathcal{S}'\left( \mathbf{R}^d\right) \}.\end{aligned}$$ Define for all $ \varphi\in\mathcal{A}\left( \mathbf{R}^d\right)$, $$\begin{aligned} \left( aI-L^{\nu}\right)^{\kappa}\varphi&=&\mathcal{F}^{-1}\left[ \left( a-\Re \psi^{\nu}\right)^{\kappa}\mathcal{F}\varphi\right],\label{oopp}\\ \left( aI-L^{\nu}\right)^{-\kappa}\varphi&=&\mathcal{F}^{-1}\left[ \left( a-\Re \psi^{\nu}\right)^{-\kappa}\mathcal{F}\varphi\right].\label{oppp}\end{aligned}$$ Obviously, $\eqref{oopp}$ and $\eqref{oppp}$ define a bijection on $\mathcal{A}\left( \mathbf{R}^d\right)$. \[rep2\] Let $\kappa\in\left( 0,1\right)$, $a\in\left( 0,\infty\right)$ and $\nu$ be a Lévy measure satisfying (iii) in **A(w,l)**. Then, $C_b^{\infty}\left( \mathbf{R}^d\right)\subset\mathcal{A}\left( \mathbf{R}^d\right)$ and thus $ \left( aI-L^{\nu}\right)^{\kappa}$ is a bijection on it. 
Moreover, for any function $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$, $$\begin{aligned} \qquad\left( aI-L^{\nu}\right)^{\kappa}\varphi\left(x\right)&=&C\int_0^{\infty}t^{-\kappa-1}\left[ \varphi\left(x\right)-e^{-at}\mathbf{E}\varphi\left( x+Z_t^{\bar{\nu}}\right)\right]dt,\quad\label{rp1}\\ \qquad\quad\left( aI-L^{\nu}\right)^{-\kappa}\varphi\left(x\right)&=&C'\int_0^{\infty}t^{\kappa-1}e^{-at}\mathbf{E}\varphi\left( x+Z_t^{\bar{\nu}}\right)dt,\label{rp2}\end{aligned}$$ where $C^{-1}=\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt$, $C'^{-1}=\int_0^{\infty}t^{\kappa-1}e^{-t}dt$, $Z_t^{\bar{\nu}}$ is the Lévy process associated to $\bar{\nu}$ and $\bar{\nu}\left( dy\right):=\frac{1}{2}\left( \nu\left( dy\right)+\nu\left(- dy\right)\right)$. Since $a-\Re\psi^{\nu}\left( \xi\right)> 0,\forall \xi\in\mathbf{R}^d$ and $$\begin{aligned} \int_0^{\infty}t^{-\kappa-1}\left( 1-\exp\{\Re\psi^{\nu}\left( \xi\right)t-at\}\right)dt=\left( a-\Re\psi^{\nu}\left( \xi\right)\right)^{\kappa}\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt, \end{aligned}$$ then for all $\varphi\in \mathcal{S}\left( \mathbf{R}^d\right)$, $$\begin{aligned} \left( a-\Re\psi^{\nu}\left( \xi\right)\right)^{\kappa}\mathcal{F}\varphi&=&C\int_0^{\infty}t^{-\kappa-1}\left( 1-\exp\{\Re\psi^{\nu}\left( \xi\right)t-at\}\right)\mathcal{F}\varphi dt,\nonumber\\ &=& C\int_0^{\infty}t^{-\kappa-1}\mathcal{F}\mathbf{E}\left[ \varphi\left( x\right)-e^{-at}\varphi\left(x+Z_t^{\bar{\nu}}\right)\right]dt,\qquad\quad\label{fub}\end{aligned}$$ where $C^{-1}=\int_0^{\infty}t^{-\kappa-1}\left(1- e^{-t}\right)dt$. 
Note for $t\in\left( 0,1\right)$, we have $$\begin{aligned} &&\left\vert\mathbf{E}\left[ \varphi\left( x\right)-e^{-at}\varphi\left(x+Z_t^{\bar{\nu}}\right)\right]\right\vert\nonumber\\ &\leq& \left\vert 1-e^{-at}\right\vert\left\vert \varphi\left( x\right)\right\vert+\mathbf{E}\int_0^t\left\vert L^{\bar{\nu}} \varphi\left(x+Z_{r-}^{\bar{\nu}}\right)\right\vert dr.\label{dec}\end{aligned}$$ By Lemma \[Lop\], Fubini’s theorem applies to $\eqref{fub}$, which implies $$\begin{aligned} \int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[ \varphi\left( x\right)-e^{-at}\varphi\left(x+Z_t^{\bar{\nu}}\right)\right]dt\in C_b^{\infty}\left( \mathbf{R}^d\right),\end{aligned}$$ and $$\begin{aligned} \left( a-\Re\psi^{\nu}\left( \xi\right)\right)^{\kappa}\mathcal{F}\varphi=C\mathcal{F}\int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[ \varphi\left( x\right)-e^{-at}\varphi\left(x+Z_t^{\bar{\nu}}\right)\right]dt\in \mathcal{S}'\left( \mathbf{R}^d\right).\end{aligned}$$ Thus $\eqref{oopp}$ is well-defined. As a result, $$\begin{aligned} \quad&&\left( aI-L^{\nu}\right)^{\kappa}\varphi\left( x\right)= C\int_0^{\infty}t^{-\kappa-1}\mathbf{E}\left[ \varphi\left( x\right)-e^{-at}\varphi\left(x+Z_t^{\bar{\nu}}\right)\right]dt.\label{map}\end{aligned}$$ Similarly, $$\begin{aligned} \int_0^{\infty}t^{\kappa-1}\exp\{\Re\psi^{\nu}\left( \xi\right)t-at\}dt=\left( a-\Re\psi^{\nu}\left( \xi\right)\right)^{-\kappa}\int_0^{\infty}t^{\kappa-1}e^{-t}dt,\end{aligned}$$ then for all $\varphi\in \mathcal{S}\left( \mathbf{R}^d\right)$, $$\begin{aligned} \left( a-\Re\psi^{\nu}\left( \xi\right)\right)^{-\kappa}\mathcal{F}\varphi=C'\mathcal{F}\int_0^{\infty}t^{\kappa-1}e^{-at}\mathbf{E}\varphi\left(x+Z_t^{\bar{\nu}}\right)dt\in \mathcal{S}'\left( \mathbf{R}^d\right),\end{aligned}$$ where $C'^{-1}=\int_0^{\infty}t^{\kappa-1}e^{-t}dt$. 
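For later use, we record that this constant, too, is explicit: by the definition of the Gamma function, $$\begin{aligned} C'^{-1}=\int_0^{\infty}t^{\kappa-1}e^{-t}dt=\Gamma\left( \kappa\right), \end{aligned}$$ so $C'=1/\Gamma\left( \kappa\right)$.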
Hence, $$\begin{aligned} \left( aI-L^{\nu}\right)^{-\kappa}\varphi\left( x\right)= C'\int_0^{\infty}t^{\kappa-1}e^{-at}\mathbf{E}\varphi\left(x+Z_t^{\bar{\nu}}\right)dt.\label{inverse}\end{aligned}$$ To extend $\eqref{map},\eqref{inverse}$ to all $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$, we repeat the argument of Lemma \[rep\] and introduce $\{\zeta_n:n\in\mathbf{N}\}\subseteq C_0^{\infty}\left( \mathbf{R}^d\right)$ such that $0\leq \zeta_n\left( x\right)\leq 1,\forall n\in\mathbf{N},\forall x\in\mathbf{R}^d$ and $\zeta_n\left( x\right)= 1,\forall x\in\{x\in\mathbf{R}^d: \left\vert x\right\vert\leq n\}$. Then, $\varphi\zeta_n\xrightarrow{n\to\infty} \varphi$ and $$\begin{aligned} \left( aI-L^{\nu}\right)^{-\kappa}\varphi\zeta_n&\xrightarrow{n\to\infty}&C' \int_0^{\infty}t^{\kappa-1}e^{-at}\mathbf{E}\varphi\left(x+Z_t^{\bar{\nu}}\right)dt,\\ \left( aI-L^{\nu}\right)^{\kappa}\varphi\zeta_n&\xrightarrow{n\to\infty}&C\int_0^{\infty}t^{-\kappa-1}\left[ \varphi\left(x\right)-e^{-at}\mathbf{E}\varphi\left( x+Z_t^{\bar{\nu}}\right)\right]dt\end{aligned}$$ all in the topology of $\mathcal{S}'\left( \mathbf{R}^d\right)$. By continuity of the Fourier transform, it follows that $C_b^{\infty}\left( \mathbf{R}^d\right)\subset\mathcal{A}\left( \mathbf{R}^d\right)$ and that $\eqref{rp1},\eqref{rp2}$ hold on it. **Remark:** Lemmas \[bij\] and \[rep2\] have shown that $\left( aI-L^{\nu}\right)^{\kappa}$, $\kappa\in\left( 0,1\right]$, maps $C_b^{\infty}\left( \mathbf{R}^d\right)$ into itself. 
Naturally, we may define $\left( aI-L^{\nu}\right)^{\kappa}$ for all $\kappa\in\left(1,2\right)$ and all $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$ as follows: $$\begin{aligned} \left( aI-L^{\nu}\right)^{\kappa}\varphi&=&\left( aI-L^{\nu}\right)^{\kappa/2}\circ\left( aI-L^{\nu}\right)^{\kappa/2}\varphi, \\ \left( aI-L^{\nu}\right)^{-\kappa}\varphi&=&\left( aI-L^{\nu}\right)^{-\kappa/2}\circ\left( aI-L^{\nu}\right)^{-\kappa/2}\varphi.\end{aligned}$$ Here $\circ$ denotes composition of operators. This definition is compatible with $\eqref{oopp},\eqref{oppp}$ when $\kappa\in\left(1,2\right)$. The corollary below shows that the probabilistic representation of $\left( aI-L^{\nu}\right)^{-\kappa}$ for $\kappa\in\left( 0,1\right)$ extends to $ \kappa\in\left(1,2\right)$. \[exbij\] Let $\kappa\in\left( 0,2\right)$, $a\in\left( 0,\infty\right)$ and $\nu$ be a Lévy measure satisfying (iii) in **A(w,l)**. Then $\left( aI-L^{\nu}\right)^{\kappa}$ is a bijection on $ C_b^{\infty}\left( \mathbf{R}^d\right)$. For any function $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$, $$\begin{aligned} \left( aI-L^{\nu}\right)^{-\kappa}\varphi\left(x\right)=C\int_0^{\infty}t^{\kappa-1}e^{-at}\mathbf{E}\varphi\left( x+Z_t\right)dt,\label{rp3} \end{aligned}$$ where $C$ is a constant only depending on $\kappa$, and $Z_t=Z_t^{\nu}$ if $\kappa=1$, $Z_t=Z_t^{\bar{\nu}}$ otherwise. That $\left( aI-L^{\nu}\right)^{\kappa}$, $\kappa\in\left( 0,2\right)$, is a bijection follows from the definition. Suppose $\kappa\in\left( 1,2\right)$ and $\varphi\in C_b^{\infty}\left( \mathbf{R}^d\right)$. By $\eqref{rp2}$, $$\begin{aligned} &&\left(aI-L^{\nu}\right)^{-\kappa}\varphi\left(x\right)\\ &=&C\int_0^{\infty}t^{\kappa/2-1}e^{-at}\mathbf{E}\left(aI-L^{\nu}\right)^{-\kappa/2}\varphi\left( x+Z_t^{\bar{\nu}}\right)dt\\ &=& C\int_0^{\infty}t^{\kappa/2-1}e^{-at}\mathbf{E}\int_0^{\infty}s^{\kappa/2-1}e^{-as}\mathbf{E}\varphi\left( x+Z_t^{\bar{\nu}}+Z_s^{\bar{\nu}}\right)dsdt. 
\end{aligned}$$ Here $Z_t^{\bar{\nu}}$ and $Z_s^{\bar{\nu}}$ denote two independent copies of the Lévy process associated to $\bar{\nu}$. Therefore, $$\begin{aligned} &&\left(aI-L^{\nu}\right)^{-\kappa}\varphi\left(x\right)\\ &=& C\int_0^{\infty}t^{\kappa/2-1}e^{-at}\int_0^{\infty}s^{\kappa/2-1}e^{-as}\mathbf{E}\varphi\left( x+Z_{t+s}^{\bar{\nu}}\right)dsdt\\ &=& C\int_0^{\infty}t^{\kappa/2-1}\int_0^{\infty}s^{\kappa/2-1}e^{-at-as}\int\varphi\left( x+z\right)p\left(t+s, z\right)dzdsdt,\end{aligned}$$ where $p\left(t, z\right)$ is the probability density of $Z_{t}^{\bar{\nu}}$. Then, changing variables (first $s=tr$, then $t\left( 1+r\right)\mapsto t$) and applying Fubini’s theorem, we obtain $$\begin{aligned} &&\left(aI-L^{\nu}\right)^{-\kappa}\varphi\left(x\right)\\ &=& C\int_0^{\infty}t^{\kappa-1}\int_0^{\infty}r^{\kappa/2-1}e^{-at-atr}\int\varphi\left( x+z\right)p\left(t+tr, z\right)dzdrdt\\ &=& C\int_0^{\infty}r^{\kappa/2-1}\int\int_0^{\infty}t^{\kappa-1}e^{-at-atr}\varphi\left( x+z\right)p\left(t+tr, z\right)dtdzdr\\ &=&C\int_0^{\infty}\frac{r^{\kappa/2-1}}{\left(1+r\right)^{\kappa}}dr\int_0^{\infty}\int t^{\kappa-1}e^{-at}\varphi\left( x+z\right)p\left(t, z\right)dzdt\\ &=& C\int_0^{\infty}t^{\kappa-1}e^{-at}\mathbf{E}\varphi\left( x+Z_{t}^{\bar{\nu}}\right)dt.\end{aligned}$$ Here the $r$-integral is finite, since $\int_0^{\infty}\frac{r^{\kappa/2-1}}{\left(1+r\right)^{\kappa}}dr=B\left( \kappa/2,\kappa/2\right)$. \[pr3\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left( 0,\infty\right),\kappa\in\left( 0,1\right]$. Then the norms $\left\vert u\right\vert _{\nu,\kappa,\beta }$ and $\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ are equivalent on $C_b^{\infty}\left( \mathbf{R}^d\right)$. For clarity, we split the proof into parts.\ **Part 1:** Show $\left\vert u\right\vert _{\nu,\kappa,\beta }\leq C\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ for all $\kappa\in\left( 0,1\right]$. 
By $\eqref{e2}$, $\eqref{rp2}$ and $\eqref{sup}$, for all $\kappa\in\left(0,1\right]$, $$\begin{aligned} \left\vert \left( I -L^{\nu}\right)^{-\kappa}u\right\vert_0\leq C\left\vert u\right\vert_0\leq C \left\vert u\right\vert_{\beta,\infty}, \forall u\in C_b^{\infty}\left( \mathbf{R}^d\right). \end{aligned}$$ Since $\left( I-L^{\nu}\right)^{\kappa}$ is a bijection, this implies $$\begin{aligned} \left\vert u\right\vert_0\leq C \left\vert \left( I -L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}, \forall u\in C_b^{\infty}\left( \mathbf{R}^d\right).\label{ssup} \end{aligned}$$ Meanwhile, by $\eqref{e2}$ and $\eqref{rp2}$, for all $\kappa\in\left(0,1\right]$, $j\in\mathbf{N}, u\in C_b^{\infty}\left( \mathbf{R}^d\right)$, $$\begin{aligned} \left\vert\left[\left( I -L^{\nu}\right)^{-\kappa}u\right]\ast\varphi_j\right\vert_0 =\left\vert\int_{0}^{\infty }t^{\kappa-1}e^{- t}\mathbf{E}\left[ u\ast\varphi_j\left( x+Z_{t}\right)\right] dt\right\vert_0\leq C\left\vert u\ast\varphi_j\right\vert_0,\nonumber \end{aligned}$$ where $Z_t=Z_{t}^{\nu}$ if $\kappa=1$ and $Z_t=Z_{t}^{\bar{\nu}}$ otherwise. Since $\left( I-L^{\nu}\right)^{\kappa}$ is bijective, this leads to $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_0\leq C\left\vert \left[\left( I -L^{\nu}\right)^{\kappa}u\right]\ast\varphi_j\right\vert_0, \forall j\in\mathbf{N}, \end{aligned}$$ namely, for all $\kappa\in\left(0,1\right]$, $$\begin{aligned} \left\vert u\right\vert_{\beta,\infty}\leq C \left\vert \left( I -L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}, \forall u\in C_b^{\infty}\left( \mathbf{R}^d\right).\label{low} \end{aligned}$$ Therefore, $$\begin{aligned} \label{kap1} \left\vert L^{\nu}u\right\vert_{\beta,\infty}\leq \left\vert \left( I -L^{\nu}\right)u\right\vert_{\beta,\infty}+\left\vert u\right\vert_{\beta,\infty}\leq C \left\vert \left( I-L^{\nu}\right)u\right\vert_{\beta,\infty}. 
\end{aligned}$$ Similarly, for $\kappa\in\left( 0,1\right)$, by $\eqref{kap}$ and $\eqref{rp1}$, $$\begin{aligned} \left\vert L^{\nu,\kappa}u\ast \varphi_j\right\vert_0&\leq& \left\vert \left( I-L^{\nu}\right)^{\kappa}u\ast\varphi_j\right\vert_0+ C\left\vert\int_0^{\infty}t^{-\kappa-1}\left(1-e^{-t}\right) \mathbf{E}u\ast\varphi_j\left( x+Z_t^{\bar{\nu}}\right)dt\right\vert_0\\ &\leq& \left\vert \left( I-L^{\nu}\right)^{\kappa}u\ast\varphi_j\right\vert_0+ C\left\vert u\ast\varphi_j\right\vert_0, \forall j\in\mathbf{N},\end{aligned}$$ which together with $\eqref{low}$ indicates $$\begin{aligned} \label{kap2} \qquad \left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq \left\vert \left( I-L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}+ C\left\vert u\right\vert_{\beta,\infty}\leq C\left\vert \left( I-L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}.\end{aligned}$$ Combining $\eqref{ssup}$, $\eqref{kap1}$ and $\eqref{kap2}$, we obtain $\left\vert u\right\vert _{\nu,\kappa,\beta }\leq C\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ for all $\kappa\in\left( 0,1\right]$. **Part 2:** Show $\left\Vert u\right\Vert _{\nu,\kappa,\beta }\leq C\left\vert u\right\vert _{\nu,\kappa,\beta }$ for all $\kappa\in\left( 0,1\right)$. By $\eqref{kap}$ and $\eqref{rp1}$ again, $$\begin{aligned} \left\vert \left( I- L^{\nu}\right)^{\kappa}u\ast\varphi_j\right\vert_0&\leq&\left\vert L^{\nu,\kappa}u\ast \varphi_j\right\vert_0+ C\left\vert\int_0^{\infty}t^{-\kappa-1}\left(1-e^{-t}\right) \mathbf{E}u\ast\varphi_j\left( x+Z_t^{\bar{\nu}}\right)dt\right\vert_0\\ &\leq&\left\vert L^{\nu,\kappa}u\ast \varphi_j\right\vert_0+ C\left\vert u\ast\varphi_j\right\vert_0,\forall j\in\mathbf{N}.\end{aligned}$$ That is, $$\begin{aligned} \label{lower} \left\vert \left( I- L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}\leq \left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}+ C\left\vert u\right\vert_{\beta,\infty}. 
\end{aligned}$$ It then suffices to prove $\left\vert u\right\vert_{\beta,\infty}\leq C\left( \left\vert u\right\vert_{0}+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\right)$ for $\kappa\in\left(0,1\right)$. Note, $$\begin{aligned} \label{var0} \left\vert u\ast\varphi_0\right\vert_{0}\leq \left\vert\varphi_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert u\right\vert_0\leq C\left\vert u\right\vert_0. \end{aligned}$$ For $j\neq 0$, $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_0 &=&\left\vert \mathcal{F}^{-1}\left[\left( -\Re \psi^{\nu}\left( \xi\right)\right)^{-\kappa}\widehat{\tilde{\varphi}_j}\left( \xi\right)\hat{\varphi_j}\left( \xi\right)\left( -\Re \psi^{\nu}\left( \xi\right)\right)^{\kappa}\mathcal{F}u\right]\right\vert_0\\ &=&\left\vert \left( \mathcal{F}^{-1}g_j\right)\ast \left(L^{\nu,\kappa}u\ast\varphi_j\right)\right\vert_0,\end{aligned}$$ where $$\begin{aligned} g_j\left(\xi\right)=-\left( -\Re \psi^{\nu}\left( \xi\right)\right)^{-\kappa}\mathcal{F}\tilde{\varphi}\left(N^{-j} \xi\right).\end{aligned}$$ We claim that $\left\vert \mathcal{F}^{-1}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)}<C$ for some $C$ independent of $j$. In that case, $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_0\leq \left\vert \mathcal{F}^{-1}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)}\left\vert L^{\nu,\kappa}u\ast\varphi_j\right\vert_0\leq C\left\vert L^{\nu,\kappa}u\ast\varphi_j\right\vert_0, j\in\mathbf{N}_{+},\end{aligned}$$ which together with $\eqref{var0}$ leads to $\left\vert u\right\vert_{\beta,\infty}\leq C\left( \left\vert u\right\vert_{0}+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\right)$. 
Indeed, $$\begin{aligned} &&\int \left\vert \mathcal{F}^{-1}g_j\left( x\right)\right\vert dx\\ &\leq& C\left\vert \mathcal{F}^{-1}\int_0^{\infty}t^{\kappa-1}\exp\{\Re\psi^{\nu}\left( \xi\right)t\}\mathcal{F}\tilde{\varphi}\left(N^{-j} \xi\right)dt\right\vert_{L^1\left( \mathbf{R}^d\right)}\\ &=& C\left\vert \mathcal{F}^{-1}\int_0^{\infty}t^{\kappa-1}\exp\{w\left( N^{-j}\right)^{-1}\Re\psi^{\tilde{\nu}_{N^{-j}}}\left( N^{-j}\xi\right)t\}\mathcal{F}\tilde{\varphi}\left(N^{-j} \xi\right)dt\right\vert_{L^1\left( \mathbf{R}^d\right)}\\ &=& C\left\vert \mathcal{F}^{-1}\int_0^{\infty}t^{\kappa-1}\exp\{w\left( N^{-j}\right)^{-1}\Re\psi^{\tilde{\nu}_{N^{-j}}}\left( \xi\right)t\}\mathcal{F}\tilde{\varphi}\left( \xi\right)dt\right\vert_{L^1\left( \mathbf{R}^d\right)}.\end{aligned}$$ Note $\Re\psi^{\tilde{\nu}_{N^{-j}}}\left( \xi\right)=\psi^{\overline{\tilde{\nu}_{N^{-j}}}}\left( \xi\right)$, where $\overline{\tilde{\nu}_{N^{-j}}}=\widetilde{\bar{\nu}}_{N^{-j}}$. Then, $$\begin{aligned} \int \left\vert \mathcal{F}^{-1}g_j\left( x\right)\right\vert dx\leq C\left\vert \mathcal{F}^{-1}\int_0^{\infty}t^{\kappa-1}\mathcal{F}\mathbf{E}\tilde{\varphi}\left( x+Z_{w\left( N^{-j}\right)^{-1}t}^{\widetilde{\bar{\nu}}_{N^{-j}}}\right) dt\right\vert_{L^1\left( \mathbf{R}^d\right)}.\end{aligned}$$ Recall that $\operatorname{supp} \mathcal{F}\tilde{\varphi}=\{\xi: N^{-2}\leq \left\vert\xi\right\vert\leq N^2 \}$. 
By Lemma \[lemma2\] in the Appendix, there are positive constants $C_1,C_2$ independent of $j$ ($j\neq 0$), such that $$\begin{aligned} \int_{\mathbf{R}^d}\left\vert\mathbf{E}\tilde{\varphi}\left(x+Z_t^{\widetilde{\bar{\nu}}_{N^{-j}}}\right)\right\vert dx \leq C_1 e^{-C_2 t}.\end{aligned}$$ Therefore, $$\begin{aligned} \int \left\vert \mathcal{F}^{-1}g_j\left( x\right)\right\vert dx &\leq& C\left\vert \int_0^{\infty}t^{\kappa-1}\mathbf{E}\tilde{\varphi}\left( x+Z_{w\left( N^{-j}\right)^{-1}t}^{\widetilde{\bar{\nu}}_{N^{-j}}}\right) dt\right\vert_{L^1\left( \mathbf{R}^d\right)}\\ &\leq& C\int_0^{\infty}t^{\kappa-1}\exp\{-C_2w\left( N^{-j}\right)^{-1}t\}dt\\ &\leq& Cw\left( N^{-j}\right)^{\kappa}\leq C\end{aligned}$$ for some $C$ independent of $j$. **Part 3:** Show $\left\Vert u\right\Vert _{\nu,\kappa,\beta }\leq C\left\vert u\right\vert _{\nu,\kappa,\beta }$ for $\kappa=1$. Since $\left\vert \left( I- L^{\nu}\right)u\right\vert_{\beta,\infty}\leq \left\vert L^{\nu}u\right\vert_{\beta,\infty}+\left\vert u\right\vert_{\beta,\infty}$, as in Part 2 we just need to show $\left\vert u\right\vert_{\beta,\infty}\leq C\left( \left\vert u\right\vert_{0}+\left\vert L^{\nu}u\right\vert_{\beta,\infty}\right)$. Again, $$\begin{aligned} \left\vert u\ast\varphi_0\right\vert_{0}\leq \left\vert\varphi_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert u\right\vert_0\leq C\left\vert u\right\vert_0. 
\end{aligned}$$ For $j\neq 0$, $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_0 &=&\left\vert \mathcal{F}^{-1}\left[\left( \psi^{\nu}\left( \xi\right)\right)^{-1}\widehat{\tilde{\varphi}_j}\left( \xi\right)\hat{\varphi_j}\left( \xi\right) \psi^{\nu}\left( \xi\right)\mathcal{F}u\right]\right\vert_0\\ &=&\left\vert \left( \mathcal{F}^{-1}g_j\right)\ast \left(L^{\nu}u\ast\varphi_j\right)\right\vert_0\\ &\leq& \left\vert \mathcal{F}^{-1}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)}\left\vert L^{\nu}u\ast\varphi_j\right\vert_0, \end{aligned}$$ where $$\begin{aligned} g_j\left(\xi\right)=\left( \psi^{\nu}\left( \xi\right)\right)^{-1}\mathcal{F}\tilde{\varphi}\left(N^{-j} \xi\right). \end{aligned}$$ By Lemma \[lemma1\] in the Appendix, $\Re\psi^{\nu}<0$ on the support of $\mathcal{F}\tilde{\varphi}\left(N^{-j}\cdot\right)$ for each $j\in\mathbf{N}_{+}$, thus $g_j\left(\xi\right)$ is well-defined. The rest of the proof is devoted to finding an upper bound for $\left\vert \mathcal{F}^{-1}g_j\right\vert_{L^1\left( \mathbf{R}^d\right)}$ that is uniform in $j$. As before, applying Lemma \[lemma2\] in the Appendix, $$\begin{aligned} \int \left\vert \mathcal{F}^{-1}g_j\left( x\right)\right\vert dx &=& \left\vert \mathcal{F}^{-1}\left[\int_0^{\infty}\exp\{\psi^{\nu}\left( \xi\right)t\} \mathcal{F}\tilde{\varphi}\left(N^{-j} \xi\right)dt\right]\right\vert_{L^1\left( \mathbf{R}^d\right)}\\ &\leq& \int_{\mathbf{R}^d} \int_0^{\infty}\left\vert\mathbf{E}\tilde{\varphi}\left( x+Z^{\tilde{\nu}_{N^{-j}}}_{w\left( N^{-j}\right)^{-1}t}\right)\right\vert dt dx\\ &\leq& Cw\left( N^{-j}\right)\leq C.\end{aligned}$$ This concludes the proof. \[pr4\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left( 0,\infty\right),\kappa\in\left( 0,1\right]$. Then the norms $\left\vert u\right\vert _{\nu,\kappa,\beta }$ and $\left\vert u\right\vert_{\kappa+\beta,\infty}$ are equivalent on $C_b^{\infty}\left( \mathbf{R}^d\right)$. We first assume that $\left\vert u\right\vert_{\kappa+\beta,\infty}$ is finite. 
It was shown in Lemma \[equiv\] that $\left\vert u\right\vert_0\leq C\left\vert u\right\vert_{\kappa+\beta,\infty}$. To prove $\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\kappa+\beta,\infty}$ for some $C>0$, it suffices to show that for each $j\in \mathbf{N}$, $$\begin{aligned} \left\vert \left(L^{\nu,\kappa}u\right)\ast \varphi_j\right\vert_0\leq C w\left( N^{-j}\right)^{-\kappa}\left\vert u\ast \varphi_j\right\vert_0, \quad \kappa\in\left( 0,1\right].\end{aligned}$$ In fact, by Corollary \[co1\], $$\begin{aligned} &&\left\vert \left(L^{\nu,\kappa}u\right)\ast \varphi_j\right\vert_0=\left\vert L^{\nu,\kappa}\left(u\ast \varphi_j\ast \tilde{\varphi}_j\right)\right\vert_0=\left\vert \left(L^{\nu,\kappa}\tilde{\varphi}_j\right)\ast\left(u\ast \varphi_j\right) \right\vert_0\\ &\leq& \left\vert L^{\nu,\kappa}\tilde{\varphi}_j\right\vert_{L^1\left( \mathbf{R}^d\right)}\left\vert u\ast \varphi_j\right\vert_0\leq C w\left( N^{-j}\right)^{-\kappa}\left\vert u\ast \varphi_j\right\vert_0.\end{aligned}$$ That is to say, for all $\kappa\in\left( 0,1\right]$, $$\begin{aligned} \label{lmu} \left\vert u\right\vert _{\nu,\kappa,\beta }=\left\vert u\right\vert_0+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}<C\left\vert u\right\vert_{\kappa+\beta,\infty}.\end{aligned}$$ To show $\left\vert u\right\vert_{\kappa+\beta,\infty}<C\left\vert u\right\vert _{\nu,\kappa,\beta }$, according to Proposition \[pr3\], we just need to prove $\left\vert u\right\vert_{\kappa+\beta,\infty}<C\left\Vert u\right\Vert _{\nu,\kappa,\beta }$. 
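Several of the bounds above and below rest on the elementary identity $\int_0^{\infty}t^{\kappa-1}e^{-ct}\,dt=\Gamma\left(\kappa\right)c^{-\kappa}$ for $c>0$, applied with $c=C_2w\left( N^{-j}\right)^{-1}$. It can be checked numerically; the following is a hedged sketch (the quadrature parameters are illustrative and not part of the proof):

```python
import math

def mellin_integral(kappa, c, n=200_000):
    # Computes \int_0^\infty t^{kappa-1} e^{-c t} dt via the substitution
    # u = t^kappa, which removes the singularity at t = 0:
    #   (1/kappa) * \int_0^\infty exp(-c * u**(1/kappa)) du.
    U = (60.0 / c) ** kappa          # beyond U the integrand is negligible
    h = U / n
    total = sum(math.exp(-c * ((i + 0.5) * h) ** (1.0 / kappa))
                for i in range(n))
    return total * h / kappa

# Check against Gamma(kappa) * c**(-kappa) for a few illustrative values.
for kappa in (0.3, 0.7, 1.0):
    for c in (0.5, 2.0):
        exact = math.gamma(kappa) * c ** (-kappa)
        assert abs(mellin_integral(kappa, c) - exact) < 1e-3 * exact
print("int_0^inf t^(kappa-1) e^(-c t) dt = Gamma(kappa) c^(-kappa): ok")
```

With $c=C_2w\left( N^{-j}\right)^{-1}$ this identity produces exactly the factor $w\left( N^{-j}\right)^{\kappa}$ appearing in the estimates.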
By $\eqref{e2}$, $$\begin{aligned} &&\left\vert\left( I -L^{\nu}\right)^{-1}\left(u\ast\varphi_j\right)\right\vert_0\\ &=&\left\vert\int_{0}^{\infty }e^{- t}\mathcal{F}^{-1}\left[\exp\{\psi^{\nu}\left( \xi\right)t\}\widehat{\tilde{\varphi}}_j\left( \xi\right)\hat{\varphi_j}\left( \xi\right)\hat{ u}\left( \xi\right)\right] dt\right\vert_0,\forall j\in\mathbf{N}.\end{aligned}$$ By $\eqref{rp2}$, for all $\kappa\in\left( 0,1\right)$, $$\begin{aligned} &&\left\vert\left( I -L^{\nu}\right)^{-\kappa}\left(u\ast\varphi_j\right)\right\vert_0\\ &=&C\left\vert\int_{0}^{\infty }t^{\kappa-1}e^{- t}\mathcal{F}^{-1}\left[\exp\{\psi^{\bar{\nu}}\left( \xi\right)t\}\widehat{\tilde{\varphi}}_j\left( \xi\right)\hat{\varphi_j}\left( \xi\right)\hat{ u}\left( \xi\right)\right] dt\right\vert_0,\forall j\in\mathbf{N}.\end{aligned}$$ First we consider $j=0$. Set $Z_t=Z_{t}^{\nu}$ if $\kappa=1$ and $Z_t=Z_{t}^{\bar{\nu}}$ otherwise. For all $\kappa\in\left( 0,1\right]$, $$\begin{aligned} \left\vert\left( I -L^{\nu}\right)^{-\kappa}\left(u\ast\varphi_0\right)\right\vert_0 &\leq& \left\vert u\ast\varphi_0\right\vert_0\int_{0}^{\infty }t^{\kappa-1}e^{-t}\left\vert\mathbf{E}\tilde{\varphi}_0\left(\cdot+Z_t\right)\right\vert_{L^1\left( \mathbf{R}^d\right)}dt\nonumber\\ &\leq& C\left\vert u\ast\varphi_0\right\vert_0.\label{q2}\end{aligned}$$ For $j\neq 0$, use $\eqref{symbol}$. 
$$\begin{aligned} &&\left\vert\left( I -L^{\nu}\right)^{-1}\left(u\ast\varphi_j\right)\right\vert_0\\ &=&\left\vert\int_{0}^{\infty }e^{-t}\mathcal{F}^{-1}\left[\exp\{w\left( N^{-j}\right)^{-1}\psi^{\tilde{\nu}_{N^{-j}}}\left( N^{-j}\xi\right)t\}\mathcal{F}\tilde{\varphi}\left(N^{-j} \xi\right)\right]\ast \left(u\ast\varphi_j\right) dt\right\vert_0\\ &\leq& \left\vert u\ast\varphi_j\right\vert_0\int_{0}^{\infty }e^{-t}\left\vert\mathcal{F}^{-1}\left[\exp\{\psi^{\tilde{\nu}_{N^{-j}}}\left( \xi\right)w\left( N^{-j}\right)^{-1}t\}\mathcal{F}\tilde{\varphi}\left( \xi\right)\right]\right\vert_{L^1\left( \mathbf{R}^d\right)}dt\\ &\leq& \left\vert u\ast\varphi_j\right\vert_0\int_{0}^{\infty }e^{-t}\left\vert\mathbf{E}\tilde{\varphi}\left(\cdot+Z_{w\left( N^{-j}\right)^{-1}t}^{\tilde{\nu}_{N^{-j}}}\right)\right\vert_{L^1\left( \mathbf{R}^d\right)}dt,\end{aligned}$$ which, by Lemma \[lemma2\] in Appendix, leads to $$\begin{aligned} &&\left\vert\left( I -L^{\nu}\right)^{-1}\left(u\ast\varphi_j\right)\right\vert_0\label{q3}\\ &\leq& C\left\vert u\ast\varphi_j\right\vert_0\int_{0}^{\infty }e^{-C_2w\left( N^{-j}\right)^{-1} t}dt\leq Cw\left( N^{-j}\right)\left\vert u\ast\varphi_j\right\vert_0.\nonumber\end{aligned}$$ Similarly, for $\kappa\in\left( 0,1\right)$, $$\begin{aligned} &&\left\vert\left( I -L^{\nu}\right)^{-\kappa}\left(u\ast\varphi_j\right)\right\vert_0\nonumber\\ &\leq& C\left\vert u\ast\varphi_j\right\vert_0\int_{0}^{\infty }t^{\kappa-1}e^{-t}\left\vert\mathbf{E}\tilde{\varphi}\left(\cdot+Z_{w\left( N^{-j}\right)^{-1}t}^{\widetilde{\bar{\nu}}_{N^{-j}}}\right)\right\vert_{L^1\left( \mathbf{R}^d\right)}dt\nonumber\\ &\leq& Cw\left( N^{-j}\right)^{\kappa}\left\vert u\ast\varphi_j\right\vert_0.\label{q4}\end{aligned}$$ Combine $\eqref{q2}-\eqref{q4}$. 
$$\begin{aligned} \left\vert\left( I -L^{\nu}\right)^{-\kappa}u\right\vert_{\kappa+\beta,\infty}\leq C\left\vert u\right\vert_{\beta,\infty},\forall\kappa\in\left(0,1\right],\forall u\in C_b^{\infty}\left( \mathbf{R}^d\right).\end{aligned}$$ By Lemmas \[bij\] and \[rep2\], this means $$\begin{aligned} \left\vert u\right\vert_{\kappa+\beta,\infty}\leq C\left\vert \left( I -L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty},\forall u\in C_b^{\infty}\left( \mathbf{R}^d\right).\end{aligned}$$ Therefore, $\left\vert u\right\vert_{\kappa+\beta,\infty}<C\left\Vert u\right\Vert _{\nu,\kappa,\beta }<C\left\vert u\right\vert _{\nu,\kappa,\beta }$. \[pr44\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left( 0,\infty\right),\kappa\in\left( 0,1\right]$. Then the norms $\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ and $\left\vert u\right\vert_{\kappa+\beta,\infty}$ are equivalent in $C_b^{\infty}\left( \mathbf{R}^d\right)$. This is a consequence of Propositions \[pr3\] and \[pr4\]. \[co2\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\kappa\in\left(0,2\right)$ and $\beta\in\left(0,\infty\right)$, and let $u\in C_b^{\infty}\left(\mathbf{R}^d\right)\cap \tilde{C}^{\kappa+\beta}_{\infty,\infty}\left(\mathbf{R}^d\right)$. Then there exists a constant $C>0$ independent of $u$ such that $$\begin{aligned} \left\vert L^{\nu,\kappa}u\right\vert_{0}\leq \left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq C \left\vert u\right\vert_{\beta+\kappa,\infty}. \end{aligned}$$ By Proposition \[pr4\], if $\kappa\in\left(0,1\right]$, $$\begin{aligned} \left\vert L^{\nu,\kappa}u\right\vert_{0}\leq \left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\kappa+\beta,\infty}. \end{aligned}$$ Now suppose $\kappa\in\left(1,2\right)$, so that $L^{\nu,\kappa}u:=L^{\nu,\kappa/2}\left( L^{\nu,\kappa/2}u\right)$. 
Then by Corollary \[co1\], $$\begin{aligned} &&\left\vert L^{\nu,\kappa}u\ast \varphi_j\right\vert_{0}=\left\vert u\ast \varphi_j\ast L^{\nu,\kappa/2}\tilde{\varphi}_j\ast L^{\nu,\kappa/2}\tilde{\varphi}_j\right\vert_{0}\\ &\leq& \left\vert u\ast \varphi_j\right\vert_{0}\left\vert L^{\nu,\kappa/2}\tilde{\varphi}_j\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert L^{\nu,\kappa/2}\tilde{\varphi}_j\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& Cw\left( N^{-j}\right)^{-\kappa}\left\vert u\ast \varphi_j\right\vert_{0},\forall j\in\mathbf{N}. \end{aligned}$$ Therefore, $L^{\nu,\kappa} u\in\tilde{C}^{\beta}_{\infty,\infty}\left(\mathbf{R}^d\right)$ and $\left\vert L^{\nu,\kappa}u\right\vert_{0}\leq \left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\kappa+\beta,\infty}$. \[pr5\] Let $0<\beta'<\beta$. Then for any $ \varepsilon\in\left(0,1\right)$ and any bounded function $u$ in $\mathbf{R}^d$, $$\begin{aligned} \left\vert u\right\vert_{\beta',\infty}\leq\varepsilon\left\vert u\right\vert_{\beta,\infty}+C_{\varepsilon}\left\vert u\right\vert_{0}, \end{aligned}$$ where $C_{\varepsilon}$ is independent of $u$. It is sufficient to show that for all $j\in\mathbf{N}$, $$\begin{aligned} w\left( N^{-j}\right)^{-\beta'}\left\vert u\ast\varphi_j\right\vert_0\leq \varepsilon\left\vert u\right\vert_{\beta,\infty}+C_{\varepsilon}\left\vert u\right\vert_{0}. \end{aligned}$$ Apply Young’s inequality. 
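The inequality invoked here is the scalar Young inequality $ab\leq \frac{a^p}{p}+\frac{b^q}{q}$ for $a,b\geq 0$ and conjugate exponents $\frac{1}{p}+\frac{1}{q}=1$; the factors $\epsilon$ and $\epsilon^{-1}$ are inserted so that the first term can be made small. A brief numerical sanity check (illustrative values only):

```python
import random

def young_gap(a, b, p):
    # Gap in Young's inequality: a*b <= a**p/p + b**q/q with 1/p + 1/q = 1.
    q = p / (p - 1.0)
    return a ** p / p + b ** q / q - a * b

random.seed(0)
for _ in range(10_000):
    a = random.uniform(0.0, 10.0)
    b = random.uniform(0.0, 10.0)
    p = random.uniform(1.01, 10.0)
    assert young_gap(a, b, p) >= -1e-9, (a, b, p)

# Equality holds exactly when a**p == b**q (here a**3 == b**1.5 == 2).
assert abs(young_gap(2.0 ** (1.0 / 3.0), 2.0 ** (2.0 / 3.0), 3.0)) < 1e-12
print("Young's inequality a*b <= a**p/p + b**q/q: ok")
```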
For any $\epsilon\in\left(0,1\right)$ and any pair of $p,q$ such that $\frac{1}{p}+\frac{1}{q}=1$, $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_0 &=& \left( \epsilon w\left( N^{-j}\right)^{\frac{\beta'-\beta}{p}}\left\vert u\ast\varphi_j\right\vert_0^{1/p}\right)\left(\epsilon^{-1} w\left( N^{-j}\right)^{\frac{\beta-\beta'}{p}}\left\vert u\ast\varphi_j\right\vert_0^{1/q}\right)\\ &\leq& \frac{\epsilon^p w\left( N^{-j}\right)^{\beta'-\beta}}{p}\left\vert u\ast\varphi_j\right\vert_0+\frac{ w\left( N^{-j}\right)^{\frac{\left(\beta-\beta'\right)q}{p}}}{q\epsilon^q}\left\vert u\ast\varphi_j\right\vert_0,\forall j\in\mathbf{N}, \end{aligned}$$ thus, $$\begin{aligned} &&w\left( N^{-j}\right)^{-\beta'}\left\vert u\ast\varphi_j\right\vert_0\\ &\leq& \frac{\epsilon^p w\left( N^{-j}\right)^{-\beta}}{p}\left\vert u\ast\varphi_j\right\vert_0+\frac{ w\left( N^{-j}\right)^{\frac{\left(\beta-\beta'\right)q}{p}-\beta'}}{q\epsilon^q}\left\vert u\ast\varphi_j\right\vert_0\\ &\leq& \frac{\epsilon^p }{p}\left\vert u\right\vert_{\beta,\infty}+\frac{1}{q\epsilon^q}w\left( N^{-j}\right)^{\frac{\left(\beta-\beta'\right)q}{p}-\beta'}\left\vert u\ast\varphi_j\right\vert_0,\forall j\in\mathbf{N}. \end{aligned}$$ Choose $p,q$ such that $\frac{\left(\beta-\beta'\right)q}{p}-\beta'\geq 0$; then for some $C>0$, $$\begin{aligned} \frac{1}{q\epsilon^q}w\left( N^{-j}\right)^{\frac{\left(\beta-\beta'\right)q}{p}-\beta'}\left\vert u\ast\varphi_j\right\vert_0\leq \frac{C}{q\epsilon^q}\left\vert u\ast\varphi_j\right\vert_0\leq \frac{C}{q\epsilon^q}\left\vert u\right\vert_0,\forall j\in\mathbf{N}. \end{aligned}$$ Take $\epsilon$ such that $\frac{\epsilon^p }{p}=\varepsilon$, which completes the proof. \[app\] Let $\beta\in\left(0,\infty\right)$, $u\in\tilde{C}^{\beta}_{\infty,\infty}\left(\mathbf{R}^d\right)$. 
Then there exists a sequence $ u_n\in C_b^{\infty}\left(\mathbf{R}^d\right)$ such that $$\begin{aligned} \left\vert u\right\vert_{\beta,\infty}\leq \liminf_n\left\vert u_n\right\vert_{\beta,\infty},\quad\left\vert u_n\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\beta,\infty} \end{aligned}$$ for some $C>0$ that only depends on $d,N$, and for any $0<\beta'<\beta$, $$\begin{aligned} \left\vert u_n-u\right\vert_{\beta',\infty}\rightarrow 0 \mbox{ as }n\rightarrow \infty.\end{aligned}$$ Set $u_n\left( x\right)=\sum_{j=0}^{n+2}\left( u\ast\varphi_j\right)\left( x\right),n\in\mathbf{N}$. Then $$\begin{aligned} \left\vert u_n\right\vert_{\beta,\infty}=\sup_j\left\vert \sum_{k=0}^{n+2}u\ast \varphi_k\ast\varphi_j\right\vert_0 w\left( N^{-j}\right)^{-\beta}.\end{aligned}$$ By construction of $\varphi_j,j\in\mathbf{N}$ in this note, if $j\geq 1,n\geq j-1$, $$\begin{aligned} \left\vert \sum_{k=0}^{n+2}u\ast \varphi_k\ast\varphi_j\right\vert_0=\left\vert\sum_{k=j-1}^{j+1}u\ast \varphi_k\ast\varphi_j\right\vert_0=\left\vert u\ast \varphi_j\right\vert_0.\end{aligned}$$ If $j\geq 2,n< j-1$, $$\begin{aligned} \left\vert \sum_{k=0}^{n+2}u\ast \varphi_k\ast\varphi_j\right\vert_0&\leq& \left\vert u\ast \varphi_j\ast\varphi_{j-1}\right\vert_0+\left\vert u\ast \varphi_{j}\ast\varphi_j\right\vert_0\\ &\leq&2\left\vert \mathcal{F}^{-1}\phi\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert u\ast \varphi_j\right\vert_0.\end{aligned}$$ Besides, $$\begin{aligned} \left\vert \sum_{k=0}^{n+2}u\ast \varphi_k\ast\varphi_0\right\vert_0&\leq& \left\vert u\ast \varphi_0\ast\varphi_{0}\right\vert_0+\left\vert u\ast \varphi_{0}\ast\varphi_1\right\vert_0\\ &\leq&\left(\left\vert \varphi_0\right\vert_{L^1\left(\mathbf{R}^d\right)}+\left\vert \varphi_1\right\vert_{L^1\left(\mathbf{R}^d\right)}\right)\left\vert u\ast \varphi_0\right\vert_0.\end{aligned}$$ Therefore, for all $n\in\mathbf{N}$, $$\begin{aligned} \left\vert u_n\right\vert_{\beta,\infty}\leq C\sup_j\left\vert u\ast 
\varphi_j\right\vert_0 w\left( N^{-j}\right)^{-\beta}\leq C\left\vert u\right\vert_{\beta,\infty}.\end{aligned}$$ On the other hand, by Lemma \[equiv\], $u\left( x\right)=\sum_{k=0}^{\infty}\left( u\ast\varphi_k\right)\left( x\right)$. Then in the same vein as above, $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_{0}&=& \left\vert\sum_{k=0}^{n+2} u\ast\varphi_k\ast \varphi_j+\sum_{k=n+3}^{\infty} u\ast\varphi_k\ast \varphi_j\right\vert_0\\ &=&\left\vert u_n\ast\varphi_j\right\vert_{0}, \quad\forall n\geq j-1,\forall j\in\mathbf{N},\end{aligned}$$ thus, $$\begin{aligned} \left\vert u\ast\varphi_j\right\vert_{0}w\left( N^{-j}\right)^{-\beta}\leq\left\vert u_n\right\vert_{\beta,\infty}, \quad\forall n\geq j-1,\forall j\in\mathbf{N},\end{aligned}$$ and thus $\left\vert u\right\vert_{\beta,\infty}\leq \liminf_n\left\vert u_n\right\vert_{\beta,\infty}$. At last, $$\begin{aligned} \left\vert u-u_n\right\vert_{\beta',\infty}&=&\sup_j \left\vert\sum_{k=n+3}^{\infty} u\ast\varphi_k\ast \varphi_j\right\vert_0w\left( N^{-j}\right)^{-\beta'}\\ &=&\sup_{j\geq n+2} \left\vert\sum_{k=n+3}^{\infty} u\ast\varphi_k\ast \varphi_j\right\vert_0 w\left( N^{-j}\right)^{-\beta}w\left( N^{-j}\right)^{\beta-\beta'}\\ &\leq&C\sup_{j\geq n+2} \left\vert u\ast \varphi_j\right\vert_0 w\left( N^{-j}\right)^{-\beta}w\left( N^{n-j}\right)^{\beta-\beta'}l\left( N^{-n}\right)^{\beta-\beta'}\\ &\leq& C\left\vert u\right\vert_{\beta,\infty}l\left( N^{-n}\right)^{\beta-\beta'}\rightarrow 0 \mbox{ as }n\rightarrow \infty.\end{aligned}$$ Using the approximating sequence introduced in the lemma above, we can extend $L^{\nu,\kappa}u,\kappa\in\left( 0,2\right)$ to all $u\in\tilde{C}^{\kappa+\beta}_{\infty,\infty}\left(\mathbf{R}^d\right),\beta>0$ as follows: $$\begin{aligned} L^{\nu,\kappa}u\left(x\right)=\lim_{n\rightarrow \infty}L^{\nu,\kappa}u_n\left(x\right),x\in\mathbf{R}^d.\end{aligned}$$ The next proposition justifies this definition and addresses continuity of the operator defined in this 
sense. \[cont\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left(0,\infty\right)$ and $\kappa\in\left(0,2\right)$. Then $\eqref{opp}$ is well-defined for all $\kappa$ and all $u\in\tilde{C}^{\kappa+\beta}_{\infty,\infty}\left(\mathbf{R}^d\right)$, $$\begin{aligned} \label{ext} L^{\nu,\kappa}u\left(x\right)=\lim_{n\rightarrow \infty}L^{\nu,\kappa}u_n\left(x\right),x\in\mathbf{R}^d, \end{aligned}$$ and this convergence is uniform with respect to $x$. Moreover, $$\begin{aligned} \left\vert L^{\nu,\kappa}u\right\vert_{0}\leq\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}&\leq& C\left\vert u\right\vert_{\kappa+\beta,\infty} \end{aligned}$$ for some $C>0$ independent of $u$. Since $u\in\tilde{C}^{\kappa+\beta}_{\infty,\infty}\left(\mathbf{R}^d\right)$, by Proposition \[app\], there is a sequence $ u_n\in C_b^{\infty}\left(\mathbf{R}^d\right)$ such that $$\begin{aligned} \left\vert u\right\vert_{\kappa+\beta,\infty}\leq \liminf_n\left\vert u_n\right\vert_{\kappa+\beta,\infty},\quad\left\vert u_n\right\vert_{\kappa+\beta,\infty}\leq C\left\vert u\right\vert_{\kappa+\beta,\infty} \end{aligned}$$ for some $C>0$ independent of $u$, and for any $0<\beta'<\beta$, $$\begin{aligned} \left\vert u_n-u\right\vert_{\kappa+\beta',\infty}\rightarrow 0 \mbox{ as }n\rightarrow \infty, \end{aligned}$$ which, according to Lemma \[equiv\] and $\eqref{sup}$, implies $ u\in C\left(\mathbf{R}^d\right)$. Meanwhile, $\left\vert u_n-u\right\vert_0\to 0$ as $n\rightarrow \infty$ and thus $u_n\xrightarrow{n\to\infty}u$ in the weak topology of $ \mathcal{S}'\left(\mathbf{R}^d\right)$. For such a sequence, by Corollary \[co2\], $$\begin{aligned} \left\vert L^{\nu,\kappa}u_n\right\vert_{0}&\leq& \left\vert L^{\nu,\kappa}u_n\right\vert_{\beta,\infty}\leq C \left\vert u_n\right\vert_{\beta+\kappa,\infty},\\ \left\vert L^{\nu,\kappa}u_n-L^{\nu,\kappa}u_m\right\vert_{0}&\leq& C \left\vert u_n-u_m\right\vert_{\beta'+\kappa,\infty}\xrightarrow{n,m\to\infty}0. 
\end{aligned}$$ Therefore $L^{\nu,\kappa}u_n$, for every $n\in\mathbf{N}$, and $\lim_{n\to\infty}L^{\nu,\kappa}u_n$ are continuous functions, and consequently, $$\begin{aligned} \mathcal{F}\left[\lim_{n\rightarrow \infty}L^{\nu}u_n\right]=\lim_{n\rightarrow \infty}\psi^{\nu}\mathcal{F}u_n&=&\psi^{\nu}\mathcal{F}u\in\mathcal{S}'\left(\mathbf{R}^d\right),\\ \mathcal{F}\left[\lim_{n\rightarrow \infty}L^{\nu,\kappa}u_n\right]=\lim_{n\rightarrow \infty}-\left(-\Re\psi^{\nu}\right)^{\kappa} \mathcal{F}u_n&=&-\left(-\Re\psi^{\nu}\right)^{\kappa} \mathcal{F}u\in\mathcal{S}'\left(\mathbf{R}^d\right),\kappa\neq 1.\end{aligned}$$ Namely, $$\begin{aligned} L^{\nu,\kappa}u\left(x\right)=\lim_{n\rightarrow \infty}L^{\nu,\kappa}u_n\left(x\right),x\in\mathbf{R}^d.\end{aligned}$$ Clearly, this convergence is uniform in $x$. Now given any $\beta\in\left(0,\infty\right)$, $$\begin{aligned} &&w\left( N^{-j}\right)^{-\beta}\left\vert L^{\nu,\kappa}u\ast \varphi_j\right\vert_0= \lim_{n\rightarrow\infty}w\left( N^{-j}\right)^{-\beta}\left\vert L^{\nu,\kappa}u_n\ast \varphi_j\right\vert_0\\ &\leq&\limsup_{n\rightarrow\infty}\left\vert u_n\right\vert_{\beta+\kappa,\infty}\leq C \left\vert u\right\vert_{\beta+\kappa,\infty},\forall j\in\mathbf{N}.\end{aligned}$$ That is, $\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\beta+\kappa,\infty}$. \[thm4\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left( 0,\infty\right),\kappa\in\left( 0,1\right]$. Then the norms $\left\vert u\right\vert _{\nu,\kappa,\beta }$ and $\left\vert u\right\vert_{\kappa+\beta,\infty}$ are equivalent. As a consequence of $\eqref{sup}$ and Proposition \[cont\], there exists a positive constant $C$ independent of $u$ such that $$\begin{aligned} \left\vert u\right\vert_0+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\leq C\left\vert u\right\vert_{\kappa+\beta,\infty}. 
\end{aligned}$$ Now suppose $\left\vert u\right\vert_0+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}<\infty$. In particular, $u$ is a bounded function. Then $u_n=\sum_{i=0}^{n+2}u\ast \varphi_i\in C_b^{\infty}\left( \mathbf{R}^d\right),\forall n\in\mathbf{N}$. Meanwhile, recall that $$\begin{aligned} \left(L^{\nu,\kappa}u\right)_n:= \sum_{i=0}^{n+2}\left(L^{\nu,\kappa}u\right)\ast \varphi_i=L^{\nu,\kappa}u_n\in \tilde{C}^{\kappa+\beta}_{\infty,\infty}\left(\mathbf{R}^d\right) \end{aligned}$$ approximates $L^{\nu,\kappa}u$ and $\left\vert L^{\nu,\kappa}u_n\right\vert_{\beta,\infty}\leq C\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}$. Therefore, by Proposition \[pr4\], $$\begin{aligned} \left\vert u_n\right\vert_{\kappa+\beta,\infty}\leq C\left(\left\vert u_n\right\vert_0+ \left\vert L^{\nu,\kappa}u_n\right\vert_{\beta,\infty}\right)\leq C\left(\left\vert u\right\vert_0+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\right).\end{aligned}$$ That is to say, for any $j\in\mathbf{N}$, $$\begin{aligned} w\left(N^{-j}\right)^{-\kappa-\beta}\left\vert u_n\ast\varphi_j\right\vert_{0}\leq C\left(\left\vert u\right\vert_0+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\right).\end{aligned}$$ It suffices to observe that for $j\geq 2, n\geq j-1$, $$\begin{aligned} \left\vert u_n\ast\varphi_j\right\vert_{0}=\left\vert u\ast\tilde{\varphi}_j\ast\varphi_j\right\vert_{0}=\left\vert u\ast\varphi_j\right\vert_{0}\leq Cw\left(N^{-j}\right)^{\kappa+\beta}\left(\left\vert u\right\vert_0+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\right),\end{aligned}$$ and $\left\vert u_n\ast\varphi_j\right\vert_{0}=\left\vert u\ast\varphi_j\right\vert_{0}\leq C\left\vert u\right\vert_{0}$, $j=0$ or $1$, $n\geq j-1$. Therefore, $\left\vert u\right\vert_{\kappa+\beta,\infty}\leq C\left(\left\vert u\right\vert_{0}+\left\vert L^{\nu,\kappa}u\right\vert_{\beta,\infty}\right)$. \[thm5\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left( 0,\infty\right),\kappa\in\left( 0,1\right]$. 
Then the norms $\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ and $\left\vert u\right\vert_{\kappa+\beta,\infty}$ are equivalent. The case $\kappa=1$ has been covered by Theorem \[thm4\] and Proposition \[pr5\]. Let us consider $\kappa\in\left( 0,1\right)$. First assume the finiteness of $\left\vert u\right\vert_{\kappa+\beta,\infty}$. Then by Lemma \[equiv\], $u$ is a bounded and continuous function. Set $u_n=\sum_{i=0}^{n+2}u\ast \varphi_i\in C_b^{\infty}\left( \mathbf{R}^d\right),n\in\mathbf{N}$. We have seen that $\left\vert u_n-u\right\vert_{0}\leq C\left\vert u_n-u\right\vert_{\kappa+\beta',\infty}\xrightarrow{n\to\infty}0,\forall \beta'\in\left( 0,\beta\right)$. Hence, $$\begin{aligned} \mathcal{F}\left[\lim_{n\rightarrow \infty}\left(I-L^{\nu}\right)^{\kappa}u_n\right]=\lim_{n\rightarrow \infty}\left(1-\Re\psi^{\nu}\right)^{\kappa} \mathcal{F}u_n=\left(1-\Re\psi^{\nu}\right)^{\kappa} \mathcal{F}u\in\mathcal{S}'\left(\mathbf{R}^d\right),\end{aligned}$$ namely, $\left(I-L^{\nu}\right)^{\kappa}u$ is well-defined and $\left(I-L^{\nu}\right)^{\kappa}u=\lim_{n\rightarrow \infty}\left(I-L^{\nu}\right)^{\kappa}u_n$. By Lemma \[equiv\] and Corollary \[pr44\], $$\begin{aligned} &&\lim_{n,m\rightarrow \infty}\left\vert\left(I-L^{\nu}\right)^{\kappa}u_n-\left(I-L^{\nu}\right)^{\kappa}u_m\right\vert_0\\ &\leq& C\lim_{n,m\rightarrow \infty}\left\vert\left(I-L^{\nu}\right)^{\kappa}u_n-\left(I-L^{\nu}\right)^{\kappa}u_m\right\vert_{\beta',\infty}\leq C\lim_{n,m\rightarrow \infty}\left\vert u_n-u_m\right\vert_{\kappa+\beta',\infty}=0.\end{aligned}$$ Then the convergence is uniform on $\mathbf{R}^d$. 
Hence, for any $j\in\mathbf{N}$, $$\begin{aligned} &&w\left(N^{-j}\right)^{-\beta}\left\vert\left(I-L^{\nu}\right)^{\kappa}u\ast\varphi_j\right\vert_0=w\left(N^{-j}\right)^{-\beta}\lim_{n\rightarrow \infty}\left\vert\left(I-L^{\nu}\right)^{\kappa}u_n\ast\varphi_j\right\vert_{0}\\ &\leq& \lim_{n\rightarrow \infty}\left\vert\left(I-L^{\nu}\right)^{\kappa}u_n\right\vert_{\beta,\infty}\leq C\lim_{n\rightarrow \infty}\left\vert u_n\right\vert_{\kappa+\beta,\infty}\leq C\left\vert u\right\vert_{\kappa+\beta,\infty},\end{aligned}$$ i.e. $\left\Vert u\right\Vert _{\nu,\kappa,\beta }\leq C\left\vert u\right\vert_{\kappa+\beta,\infty}$. If $\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ is finite, then the approximating functions of $\left(I-L^{\nu}\right)^{\kappa}u$ satisfy $$\begin{aligned} \left(\left(I-L^{\nu}\right)^{\kappa}u\right)_n=\left(I-L^{\nu}\right)^{\kappa}u_n\in C_b^{\infty}\left( \mathbf{R}^d\right)\cap \tilde{C}^{\beta}_{\infty,\infty}\left( \mathbf{R}^d\right).\end{aligned}$$ Because $\left(I-L^{\nu}\right)^{\kappa}$ is a bijection on $ C_b^{\infty}\left( \mathbf{R}^d\right)$, $u_n\in C_b^{\infty}\left( \mathbf{R}^d\right)$. 
Then Corollary \[pr44\] immediately implies $$\begin{aligned} \left\vert u_n\right\vert_{0}\leq C\left\vert u_n\right\vert_{\kappa+\beta,\infty}\leq C\left\vert\left(I-L^{\nu}\right)^{\kappa}u_n\right\vert_{\beta,\infty}\leq C\left\vert\left(I-L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}.\end{aligned}$$ For $j\geq 2, n\geq j-1$, $$\begin{aligned} \left\vert u_n\ast\varphi_j\right\vert_{0}=\left\vert u\ast\tilde{\varphi}_j\ast\varphi_j\right\vert_{0}=\left\vert u\ast\varphi_j\right\vert_{0}\leq Cw\left(N^{-j}\right)^{\kappa+\beta}\left\vert \left(I-L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty},\end{aligned}$$ and for $j=0$ or $1$, $n\geq j-1$, $$\begin{aligned} \left\vert u_n\ast\varphi_j\right\vert_{0}=\left\vert u\ast\varphi_j\right\vert_{0}\leq C\sup_{n}\left\vert u_n\right\vert_{0}\leq C\left\vert\left(I-L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}.\end{aligned}$$ Therefore, $\left\vert u\right\vert_{\kappa+\beta,\infty}\leq C\left\vert \left(I-L^{\nu}\right)^{\kappa}u\right\vert_{\beta,\infty}$. \[thm6\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**, $\beta\in\left( 0,\infty\right),\kappa\in\left( 0,1\right]$. Then the norms $\left\Vert u\right\Vert _{\nu,\kappa,\beta }$ and $\left\vert u\right\vert _{\nu,\kappa,\beta }$ are equivalent. This is an immediate consequence of Theorems \[thm4\] and \[thm5\].

Solution Estimates for Smooth Inputs
====================================

Existence and Uniqueness
------------------------

\[thm1\] Let $\nu$ be a Lévy measure, $\alpha\in\left( 0,2\right), \beta\in\left(0,1\right), \lambda\geq 0$. Assume that $f\left( t,x\right)\in C_b^{\infty}\left(H_T\right)\cap \tilde{C}^{\beta}_{\infty,\infty}\left(H_T\right)$. 
Then there is a unique solution $u\left( t,x\right)\in C_b^{\infty}\left(H_T\right)$ to $$\begin{aligned} \label{eeq1} \partial_t u\left( t,x\right)&=&L^{\nu}u\left( t,x\right)-\lambda u\left( t,x\right)+f\left( t,x\right), \\ u\left( 0,x\right)&=& 0,\qquad\left( t,x\right)\in \left[0,T\right]\times\mathbf{R}^d.\nonumber \end{aligned}$$ <span style="font-variant:small-caps;">Existence.</span> Denote $F\left(r,Z^{\nu}_r\right)=e^{-\lambda\left( r-s\right)}f\left(s, x+Z^{\nu}_r-Z^{\nu}_s\right), s\leq r\leq t,$ and apply the Itô formula to $F\left(r,Z^{\nu}_r\right)$ on $\left[s,t\right]$. $$\begin{aligned} && e^{-\lambda\left( t-s\right)}f\left(s, x+Z^{\nu}_t-Z^{\nu}_s\right)-f\left( s,x\right)\\ &=& -\lambda\int_s^t F\left(r,Z^{\nu}_{r}\right)dr+\int_s^t\int \chi_{\alpha}\left( y\right)y\cdot\nabla F\left(r, Z^{\nu}_{r-}\right)\tilde{J}\left(dr,dy\right)\\ &&+ \int_s^t \int \left[ F\left(r,Z^{\nu}_{r-}+y\right)-F\left(r,Z^{\nu}_{r-}\right)-\chi_{\alpha}\left(y\right) y\cdot \nabla F\left(r,Z^{\nu}_{r-}\right)\right]J\left( dr,dy\right). \end{aligned}$$ Take expectations on both sides and use the stochastic Fubini theorem to obtain $$\begin{aligned} && e^{-\lambda\left( t-s\right)}\mathbf{E}f\left(s, x+Z^{\nu}_t-Z^{\nu}_s\right)-f\left( s,x\right)\\ &=& -\lambda\int_s^t e^{-\lambda\left( r-s\right)}\mathbf{E}f\left(s, x+Z^{\nu}_r-Z^{\nu}_s\right)dr+\int_s^t L^{\nu} e^{-\lambda\left( r-s\right)}\mathbf{E}f\left(s, x+Z^{\nu}_r-Z^{\nu}_s\right)dr. 
\end{aligned}$$ Integrate both sides over $\left[0,t\right]$ with respect to $s$ and obtain $$\begin{aligned} && \int_0^t e^{-\lambda\left( t-s\right)}\mathbf{E}f\left(s, x+Z^{\nu}_t-Z^{\nu}_s\right)ds-\int_0^t f\left( s,x\right)ds\\ &=& -\lambda\int_0^t\int_0^r e^{-\lambda\left( r-s\right)}\mathbf{E}f\left(s, x+Z^{\nu}_r-Z^{\nu}_s\right)dsdr\\ &&+\int_0^t L^{\nu}\int_0^r e^{-\lambda\left( r-s\right)}\mathbf{E}f\left(s, x+Z^{\nu}_r-Z^{\nu}_s\right)dsdr, \end{aligned}$$ which shows $u\left(t,x\right)=\int_0^t e^{-\lambda\left( t-s\right)}\mathbf{E}f\left(s, x+Z_{t-s}^{\nu}\right)ds$ solves $\eqref{eeq1}$ in the integral sense. Obviously, as a result of the dominated convergence theorem and Fubini’s theorem, $u\in C^{\infty}_b\left( H_T\right)$. By the equation itself, $u$ is continuously differentiable in $t$. <span style="font-variant:small-caps;">Uniqueness.</span> Suppose there are two solutions $u_1,u_2$ solving the equation; then $u:=u_1-u_2$ solves $$\begin{aligned} \label{uni} \partial_t u\left( t,x\right)&=&L^{\nu} u\left(t,x\right)-\lambda u\left(t,x\right),\\ u\left( 0,x\right)&=& 0.\nonumber \end{aligned}$$ Fix any $t\in\left[0,T\right]$. Apply the Itô formula to $v\left( t-s,Z^{\nu}_s\right):=e^{-\lambda s}u\left(t-s,x+Z^{\nu}_s\right)$, $0\leq s\leq t,$ over $\left[0,t\right]$ and take expectations on both sides of the resulting identity; then $$\begin{aligned} u\left( t,x\right)=-\mathbf{E}\int_0^t e^{-\lambda s}\left[ \left(-\partial_t u-\lambda u+L^{\nu}u\right)\left(t-s,x+Z^{\nu}_{s-}\right)\right]ds=0. 
\end{aligned}$$ Hölder-Zygmund Estimates of the Solution ---------------------------------------- Since $f\left(t,x\right)\in C_b^{\infty}\left( H_T\right)\cap \tilde{C}^{\beta}_{\infty,\infty}\left(H_T\right)$, by Lemma \[equiv\], $$\begin{aligned} f\left(t,x\right)&=&\left(f\left(t,\cdot\right)\ast \varphi_0\left(\cdot\right)\right)\left(x\right)+\sum_{j=1}^{\infty}\left(f\left(t,\cdot\right)\ast \varphi_j\left(\cdot\right)\right)\left(x\right)\\ &:=&f_0\left(t,x\right)+\sum_{j=1}^{\infty}f_j\left(t,x\right).\end{aligned}$$ Accordingly, $u_j\left(t,x\right)=u\left(t,x\right)\ast\varphi_j\left(x\right)=\int_0^t e^{-\lambda\left( t-s\right)}\mathbf{E}f_j\left(s, x+Z_{t-s}^{\nu}\right)ds, j=0,1\ldots$ is the solution to $\eqref{eeq1}$ with input $f_j=f\ast \varphi_j$. Then by Lemmas \[rep\] and \[Lop\], for $\kappa\in\left(0,1\right]$, $$\begin{aligned} &&L^{\mu,\kappa}u_j\left(t,x\right)=u_j\ast L^{\mu,\kappa}\tilde{\varphi}_j\\ &=&\int_0^t e^{-\lambda\left( t-s\right)}\mathbf{E}\int f_j\left(s, x-z+Z_{t-s}^{\nu}\right)L^{\mu,\kappa}\tilde{\varphi}_j\left( z\right)dz ds\\ &=&\int_0^t e^{-\lambda\left( t-s\right)}\int f_j\left(s, z\right)\mathbf{E}L^{\mu,\kappa}\tilde{\varphi}_j\left( x-z+Z_{t-s}^{\nu}\right)dz ds,\end{aligned}$$ and then $$\begin{aligned} L^{\mu,\kappa}u_j\left(t,x\right) &=& \int_0^t\int \mathcal{F}^{-1}\left[e^{\left(\psi ^{\nu }\left( \xi \right)-\lambda\right)\left(t-s\right)}\widehat{L^{\mu,\kappa}\tilde{\varphi}_j}\right]\left( x-z\right)f_j\left(s, z\right)dz ds \nonumber\\ &=& C \int_0^t \int \mathcal{F}^{-1}\left[\tilde{F}_{t-s}^{j,\kappa}\left( \xi \right)\right]\left( z\right)f_j\left(s, x-z\right)dzds, \label{rp}\end{aligned}$$ where for $j\in\mathbf{N}$, $$\begin{aligned} \quad&&\tilde{F}_{t}^{j,\kappa}\left( \xi \right) := \left\{\begin{array}{ll}\label{rr} -e^{\left(\psi ^{\nu }\left( \xi \right)-\lambda\right)t} \left( -\Re\psi^{\mu}\left( \xi\right)\right)^{\kappa} \hat{\tilde{\varphi}}_j\left(\xi\right) ,& \xi \in 
\mathbf{R}^{d},\kappa\in\left( 0,1\right),\\ e^{\left(\psi ^{\nu }\left( \xi \right)-\lambda\right)t} \psi^{\mu}\left( \xi\right) \hat{\tilde{\varphi}}_j\left(\xi\right), & \xi \in \mathbf{R}^{d},\kappa=1,\\ e^{\left(\psi ^{\nu }\left( \xi \right)-\lambda\right)t} \hat{\tilde{\varphi}}_j\left(\xi\right), & \xi \in \mathbf{R}^{d},\kappa=0. \end{array}\right. \end{aligned}$$ In particular, when $j\in\mathbf{N}_{+}$, $$\begin{aligned} \mathcal{F}^{-1}\left[\tilde{F}_{t}^{j,\kappa}\left( \xi \right)\right]\left( x\right)=e^{-\lambda t}w\left( N^{-j}\right)^{-\kappa}N^{jd}H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}t}\left( N^j x\right),\label{rrp}\end{aligned}$$ where for $j\in\mathbf{N}_{+}$, $$\begin{aligned} \label{rrr} H_{t}^{j,\kappa}:=\left\{\begin{array}{ll} \mathcal{F}^{-1}\left[-\exp\{\psi ^{\tilde{\nu}_{N^{-j}} }\left( \xi \right)t\} \left( -\Re\psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right)\right)^{\kappa} \hat{\tilde{\varphi}}\left(\xi\right)\right] ,& \kappa\in\left( 0,1\right),\\ \mathcal{F}^{-1}\left[\exp\{\psi ^{\tilde{\nu}_{N^{-j}} }\left( \xi \right)t\} \psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right) \hat{\tilde{\varphi}}\left(\xi\right)\right] ,& \kappa=1,\\ \mathcal{F}^{-1}\left[\exp\{\psi ^{\tilde{\nu}_{N^{-j}} }\left( \xi \right)t\} \hat{\tilde{\varphi}}\left(\xi\right)\right] ,& \kappa=0. \end{array}\right. \end{aligned}$$ \[lem1\] Let $\kappa\in\left[ 0,1\right]$. For all $t\in\left[0,T\right]$ and $j\in\mathbf{N}_{+}$, there are $C_1,C_2>0$ depending only on $\alpha,d,N,\alpha_1,\alpha_2,\kappa$ such that $$\begin{aligned} \int \left\vert H^{j,\kappa}_{t}\left( x\right)\right\vert dx\leq C_1e^{-C_2 t}.\end{aligned}$$ Recall that $\hat{\tilde{\varphi}}\left(\xi\right)=\phi\left( N\xi\right)+\phi\left( \xi\right)+\phi\left( N^{-1}\xi\right)$. 
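For a concrete illustration, take the dyadic case $N=2$ and build $\phi$ from a smooth cutoff $\chi$ with $\chi=1$ on $\left\{\left\vert\xi\right\vert\leq 1\right\}$ and $\chi=0$ on $\left\{\left\vert\xi\right\vert\geq 2\right\}$, via $\phi\left(\xi\right)=\chi\left(\xi\right)-\chi\left(2\xi\right)$. The auxiliary function $\tilde{\phi}$ introduced next then satisfies, by telescoping, $\hat{\tilde{\phi}}\left(\xi\right)=\chi\left(\xi/4\right)-\chi\left(8\xi\right)=1$ on the support of $\hat{\tilde{\varphi}}$, which is the mechanism behind the identity $\hat{\tilde{\varphi}}=\hat{\tilde{\varphi}}\hat{\tilde{\phi}}$. A one-dimensional numerical sketch (the particular cutoff $\chi$ is an illustrative assumption, not the one fixed in this note):

```python
import math

def chi(x):
    # Smooth cutoff: 1 for |x| <= 1, 0 for |x| >= 2 (illustrative choice).
    t = abs(x)
    if t <= 1.0:
        return 1.0
    if t >= 2.0:
        return 0.0
    f = lambda s: math.exp(-1.0 / s) if s > 0.0 else 0.0
    return f(2.0 - t) / (f(2.0 - t) + f(t - 1.0))

def phi(x):
    # One dyadic Littlewood-Paley block (N = 2), supported in 1/2 <= |x| <= 2.
    return chi(x) - chi(2.0 * x)

def hat_varphi_tilde(x):
    # \hat{\tilde{\varphi}}(x) = phi(N x) + phi(x) + phi(x / N)
    return phi(2.0 * x) + phi(x) + phi(x / 2.0)

def hat_phi_tilde(x):
    # Five consecutive blocks; telescopes to chi(x/4) - chi(8 x).
    return sum(phi(2.0 ** k * x) for k in (2, 1, 0, -1, -2))

# hat_phi_tilde == 1 on the support of hat_varphi_tilde, hence the pointwise
# identity hat_varphi_tilde * hat_phi_tilde == hat_varphi_tilde.
for i in range(1, 2001):
    x = 0.005 * i
    assert abs(hat_varphi_tilde(x) * hat_phi_tilde(x) - hat_varphi_tilde(x)) < 1e-12
print("hat(varphi~) * hat(phi~) == hat(varphi~): ok")
```

The same telescoping argument works for any $N$ and any admissible $\phi$.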
If we introduce $\tilde{\phi}$ such that $\hat{\tilde{\phi}}\left(\xi\right)=\phi\left( N^2\xi\right)+\phi\left( N\xi\right)+\phi\left( \xi\right)+\phi\left( N^{-1}\xi\right)+\phi\left( N^{-2}\xi\right)$, then $\hat{\tilde{\varphi}}=\hat{\tilde{\varphi}}\hat{\tilde{\phi}}$ and $\hat{\tilde{\varphi}},\hat{\tilde{\phi}}\in C_0^{\infty}\left(\mathbf{R}^d\right)$. We write $$\begin{aligned} H_{t}^{j,0} &=& \mathcal{F}^{-1}[\hat{\tilde{\phi}}\left( \xi \right)\exp \left\{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) t\right\}\hat{\tilde{\varphi}}\left( \xi \right) ],\\ H_{t}^{j,1} &=& \mathcal{F}^{-1}[\psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right) \hat{\tilde{\phi}}\left( \xi \right)\exp \left\{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) t\right\}\hat{\tilde{\varphi}}\left( \xi \right) ],\\ H_{t}^{j,\kappa} &=& \mathcal{F}^{-1}[-\left( -\Re\psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right)\right)^{\kappa} \hat{\tilde{\phi}}\left( \xi \right)\exp \left\{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) t\right\}\hat{\tilde{\varphi}}\left( \xi \right) ],\quad \kappa\in\left( 0,1\right). 
\end{aligned}$$ Thus, for all $\kappa\in\left[ 0,1\right]$, $$\begin{aligned} H_{t}^{j,\kappa}\left( x\right) = \int\left[L^{\tilde{\mu}_{N^{-j}},\kappa}\tilde{\phi}\left(x-z\right)\right]\cdot\left[\mathbf{E}\tilde{\varphi}\left( z+Z_t^{\tilde{\nu}_{N^{-j}}} \right)\right]dz,x\in\mathbf{R}^d, \end{aligned}$$ and thus $$\begin{aligned} \left\vert H_{t}^{j,\kappa}\right\vert_{L^1\left(\mathbf{R}^d\right)} \leq \left\vert L^{\tilde{\mu}_{N^{-j}},\kappa}\tilde{\phi}\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert \mathbf{E}\tilde{\varphi}\left( \cdot+Z_t^{\tilde{\nu}_{N^{-j}}} \right)\right\vert_{L^1\left(\mathbf{R}^d\right)}.\end{aligned}$$ Since $\nu$ verifies **A(w,l)**, by Lemma \[lemma2\], there exist positive constants $C_1,C_2$ depending only on $\alpha,d,N,\alpha_1,\alpha_2$, such that $$\begin{aligned} \left\vert\mathbf{E}\tilde{\varphi}\left( \cdot+Z_t^{\tilde{\nu}_{N^{-j}}} \right)\right\vert_{L^1\left(\mathbf{R}^d\right)} <C_1e^{-C_2 t}. \end{aligned}$$ Combining Lemmas \[Lop\] and \[rep\], we then arrive at the conclusion. \[lem2\] Let $\kappa\in\left[ 0,1\right]$. For all $t\in\left[0,T\right]$ and $j\in\mathbf{N}$, there is $C>0$ depending only on $\alpha,d,N,\alpha_1,\alpha_2$ such that $$\begin{aligned} \int \left\vert\mathcal{F}^{-1}\left[\tilde{F}_{t}^{0,\kappa}\left( \xi \right)\right]\left( x\right)\right\vert dx&<&C,\\ \int_0^t\int \left\vert\mathcal{F}^{-1}\left[\tilde{F}_{r}^{j,\kappa}\left( \xi \right)\right]\left( x\right)\right\vert dxdr&<&C,j\in\mathbf{N}_{+}.\end{aligned}$$ First by Lemmas \[Lop\] and \[rep\], $$\begin{aligned} \int \left\vert\mathcal{F}^{-1}\left[\tilde{F}_{t}^{0,\kappa}\left( \xi \right)\right]\left( x\right)\right\vert dx\leq \int \left\vert \mathbf{E}L^{\mu,\kappa}\tilde{\varphi}_0\left(x+Z^{\nu}_t\right)\right\vert dx\leq \int \left\vert L^{\mu,\kappa}\tilde{\varphi}_0\left(x\right)\right\vert dx<C. \end{aligned}$$ For $j\in\mathbf{N}_{+}$, use Lemma \[lem1\]. 
$$\begin{aligned} &&\int_0^t\int \left\vert\mathcal{F}^{-1}\left[\tilde{F}_{r}^{j,\kappa}\left( \xi \right)\right]\left( x\right)\right\vert dxdr\\ &\leq& \int_0^t\int \left\vert w\left( N^{-j}\right)^{-\kappa}H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}r}\left(x\right)\right\vert dxdr\\ &\leq& w\left( N^{-j}\right)^{1-\kappa}\int_0^{\infty}\int \left\vert H^{j,\kappa}_{r}\left(x\right)\right\vert dxdr<C.\end{aligned}$$ \[col1\] Let $\kappa\in\left[ 0,1\right]$, let $u$ be the solution to $\eqref{eeq1}$, and let $\mu$ be the reference measure. Then there exists $C>0$ depending only on $\alpha,d,N,\kappa,\alpha_1,\alpha_2, T$ such that $$\begin{aligned} \left\vert L^{\mu,\kappa}u_j\right\vert_0\leq C\left\vert f_j\right\vert_0, j\in\mathbf{N}. \end{aligned}$$ Recall that $$\begin{aligned} L^{\mu,\kappa}u_j\left(t,x\right)=C \int_0^t \int \mathcal{F}^{-1}\left[\tilde{F}_{t-s}^{j,\kappa}\left( \xi \right)\right]\left( z\right)f_j\left(s, x-z\right)dzds, j\in\mathbf{N}. \end{aligned}$$ Therefore, by Lemma \[lem2\], for all $t\in\left[0,T\right]$, $$\begin{aligned} \left\vert L^{\mu,\kappa}u_j\right\vert_0\leq C\left\vert f_j\right\vert_0 \int_0^T \int \left\vert \mathcal{F}^{-1}\left[\tilde{F}_{t-s}^{j,\kappa}\left( \xi \right)\right]\left( z\right)\right\vert dzds \leq C\left\vert f_j\right\vert_0, j\in\mathbf{N}. \end{aligned}$$ \[lem3\] Let $\kappa\in\left[ 0,1\right]$ and $\mu$ be the reference measure. Assume both $\mu$ and $\nu$ satisfy **A(w,l)**. Then there is $C>0$ depending only on $\alpha,d,N,\alpha_1,\alpha_2,\kappa$, such that for all $0\leq s<t$, $$\begin{aligned} \left\vert\mathbf{E}\left[ L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t}\right)-L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{s}\right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\leq C\left( t-s\right).
\end{aligned}$$ Denote $\bar{\varphi}_0=\mathcal{F}^{-1}\left[ \mathcal{F}\tilde{\varphi}_0\left( \xi\right)+\phi\left( N^{-2}\xi\right)\right]$, then $\bar{\varphi}_0\in\mathcal{S}\left( \mathbf{R}^d\right)$ and $\mathcal{F}\tilde{\varphi}_0\mathcal{F}\bar{\varphi}_0=\mathcal{F}\tilde{\varphi}_0$. And then, $$\begin{aligned} &&\left\vert\mathbf{E}\left[ L^{\mu}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t}\right)-L^{\mu}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{s}\right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &=&\left\vert\mathcal{F}^{-1}[\psi^{\mu}\left( \xi\right) \hat{\tilde{\varphi}}_0\left( \xi \right)\left(e^{ \psi ^{\nu}\left( \xi \right) t}-e^{ \psi ^{\nu}\left( \xi \right) s}\right)\hat{\bar{\varphi}}_0\left( \xi \right) ]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq&\left\vert L^{\mu}\tilde{\varphi}_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert \mathbf{E}\left[\bar{\varphi}_0\left( \cdot+Z_{t}^{\nu} \right)-\bar{\varphi}_0\left( \cdot+Z_{s}^{\nu} \right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& \left\vert L^{\mu}\tilde{\varphi}_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert \mathbf{E}\int_s^t L^{\nu}\bar{\varphi}_0\left( \cdot+Z_{r-}^{\nu} \right)dr\right\vert_{L^1\left(\mathbf{R}^d\right)}, \end{aligned}$$ thus, by Lemmas \[Lop\], \[rep\], $$\begin{aligned} \left\vert\mathbf{E}\left[ L^{\mu}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t}\right)-L^{\mu}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{s}\right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\leq C\left( t-s\right). 
\end{aligned}$$ Similarly, for $\kappa\in\left(0,1\right)$, $$\begin{aligned} &&\left\vert\mathbf{E}\left[ L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t}\right)-L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{s}\right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &=&\left\vert\mathcal{F}^{-1}[-\left(-\Re\psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right)\right)^{\kappa} \hat{\tilde{\varphi}}_0\left( \xi \right)\left(e^{ \psi ^{\nu}\left( \xi \right) t}-e^{ \psi ^{\nu}\left( \xi \right) s}\right)\hat{\bar{\varphi}}_0\left( \xi \right) ]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& \left\vert L^{\mu,\kappa}\tilde{\varphi}_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert \mathbf{E}\int_s^t L^{\nu}\bar{\varphi}_0\left( \cdot+Z_{r-}^{\nu} \right)dr\right\vert_{L^1\left(\mathbf{R}^d\right)}\leq C\left( t-s\right), \end{aligned}$$ and for $\kappa=0$, $$\begin{aligned} &&\left\vert\mathbf{E}\left[ \tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t}\right)-\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{s}\right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& \left\vert \tilde{\varphi}_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert \mathbf{E}\int_s^t L^{\nu}\bar{\varphi}_0\left( \cdot+Z_{r-}^{\nu} \right)dr\right\vert_{L^1\left(\mathbf{R}^d\right)}\leq C\left( t-s\right).\end{aligned}$$ The next Lemma is a stronger version of Lemma \[lem3\] because the Fourier transform of the underlying Schwartz function has a compact support that is away from 0. \[lem4\] Let $\kappa\in\left[ 0,1\right]$ and $j\in\mathbf{N}_{+}$. Then there are $C_1, C_2>0$ depending only on $\alpha,d,N,\alpha_1,\alpha_2,\kappa$, such that for all $0\leq s<t$, $$\begin{aligned} \int \left\vert H^{j,\kappa}_{t}\left( x\right)-H^{j,\kappa}_{s}\left( x\right)\right\vert dx\leq C_1e^{-C_2 s}\left( t-s\right). 
\end{aligned}$$ Similarly as what we did in Lemma \[lem1\], we introduce $\tilde{\phi}$ such that $\hat{\tilde{\phi}}\left(\xi\right)=\phi\left( N^2\xi\right)+\hat{\tilde{\varphi}}\left(\xi\right)+\phi\left( N^{-2}\xi\right)$. As a consequence, $\hat{\tilde{\varphi}}=\hat{\tilde{\varphi}}\hat{\tilde{\phi}}\hat{\tilde{\phi}}$, and $\hat{\tilde{\varphi}},\hat{\tilde{\phi}},\hat{\tilde{\phi}}\in C_0^{\infty}\left(\mathbf{R}^d\right)$. Then $$\begin{aligned} H_{t}^{j,0}-H_{s}^{j,0}=\mathcal{F}^{-1}[\hat{\tilde{\phi}}\left( \xi \right)e^{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) s}\hat{\tilde{\phi}}\left( \xi \right)\left(e^{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) \left( t-s\right)}-1\right)\hat{\tilde{\varphi}}\left( \xi \right) ], \end{aligned}$$ and $$\begin{aligned} &&H_{t}^{j,1}-H_{s}^{j,1}\\ &=&\mathcal{F}^{-1}[\psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right) \hat{\tilde{\phi}}\left( \xi \right)e^{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) s}\hat{\tilde{\phi}}\left( \xi \right)\left(e^{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) \left( t-s\right)}-1\right)\hat{\tilde{\varphi}}\left( \xi \right) ], \end{aligned}$$ and for $\kappa\in\left( 0,1\right)$, $$\begin{aligned} &&H_{t}^{j,\kappa}-H_{s}^{j,\kappa}\\ &=&\mathcal{F}^{-1}[-\left( -\Re\psi^{\tilde{\mu}_{N^{-j}}}\left( \xi\right)\right)^{\kappa} \hat{\tilde{\phi}}\left( \xi \right)e^{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) s}\hat{\tilde{\phi}}\left( \xi \right)\left(e^{ \psi ^{\tilde{\nu}_{N^{-j}}}\left( \xi \right) \left( t-s\right)}-1\right)\hat{\tilde{\varphi}}\left( \xi \right) ]. 
\end{aligned}$$ Thus, for all $\kappa\in\left[ 0,1\right]$, $$\begin{aligned} &&H_{t}^{j,\kappa}-H_{s}^{j,\kappa }\\ &=&L^{\tilde{\mu}_{N^{-j}},\kappa}\tilde{\phi}\left(\cdot\right)\ast\mathbf{E}\tilde{\phi}\left( \cdot+Z_s^{\tilde{\nu}_{N^{-j}}} \right)\ast\mathbf{E}\left[\tilde{\varphi}\left( \cdot+Z_{t-s}^{\tilde{\nu}_{N^{-j}}} \right)-\tilde{\varphi}\left( \cdot \right)\right]\\ &=&L^{\tilde{\mu}_{N^{-j}},\kappa}\tilde{\phi}\left(\cdot\right)\ast\mathbf{E}\tilde{\phi}\left( \cdot+Z_s^{\tilde{\nu}_{N^{-j}}} \right)\ast\mathbf{E}\int_0^{t-s} L^{\tilde{\nu}_{N^{-j}}}\tilde{\varphi}\left( \cdot+Z_{r-}^{\tilde{\nu}_{N^{-j}}} \right)dr, \end{aligned}$$ and thus by Lemmas \[Lop\], \[rep\] and \[lemma2\], $$\begin{aligned} &&\int \left\vert H^{j,\kappa}_{t}\left( x\right)-H^{j,\kappa}_{s}\left( x\right)\right\vert dx\\ &\leq& \int_0^{t-s}\left\vert L^{\tilde{\mu}_{N^{-j}},\kappa}\tilde{\phi}\right\vert_{L^1\left(\mathbf{R}^d\right)}dr\left\vert \mathbf{E}\tilde{\phi}\left( \cdot+Z_s^{\tilde{\nu}_{N^{-j}}} \right)\right\vert_{L^1\left(\mathbf{R}^d\right)}\left\vert L^{\tilde{\nu}_{N^{-j}}}\tilde{\varphi}\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& C_1e^{-C_2 s}\left( t-s\right). \end{aligned}$$ \[lem5\] Let $u$ be the solution to $\eqref{eeq1}$, $\mu$ be the reference measure and $\kappa\in\left[ 0,1\right]$. Then there exists $C>0$ depending only on $\alpha,d,N,\alpha_1,\alpha_2,\kappa, T$ such that for all $0\leq s<t\leq T$, $$\begin{aligned} \left\vert L^{\mu,\kappa}u_0\left( t,x\right)-L^{\mu,\kappa}u_0\left( s,x\right)\right\vert&\leq& C\left( t-s\right)\left\vert f_0\right\vert_0,\forall x\in\mathbf{R}^d,\\ \left\vert L^{\mu,\kappa}u_j\left( t,x\right)-L^{\mu,\kappa}u_j\left( s,x\right)\right\vert&\leq& C\left( t-s\right)^{1-\kappa}\left\vert f_j\right\vert_0, \forall x\in\mathbf{R}^d,j\in\mathbf{N}_{+}. 
\end{aligned}$$ According to $\eqref{rp}$, $$\begin{aligned} &&\left\vert L^{\mu,\kappa}u_j\left(t,x\right)-L^{\mu,\kappa}u_j\left(s,x\right)\right\vert\\ &\leq& C\left\vert f_j\right\vert_0 \int_s^t \int\left\vert \mathcal{F}^{-1}\left[\tilde{F}_{t-r}^{j,\kappa}\left( \xi \right)\right]\left( z\right)\right\vert dzdr \\ &&+ C\left\vert f_j\right\vert_0\int_0^s \int\left\vert \mathcal{F}^{-1}\left[\tilde{F}_{t-r}^{j,\kappa}\left( \xi \right)-\tilde{F}_{s-r}^{j,\kappa}\left( \xi \right)\right]\left( z\right)\right\vert dzdr \\ &:=& C\left\vert f_j\right\vert_0\left( I_1+I_2\right), \quad j\in\mathbf{N}. \end{aligned}$$ When $j=0$, Lemma \[lem2\] implies $$\begin{aligned} I_1=\int_s^t \int\left\vert \mathcal{F}^{-1}\left[\tilde{F}_{t-r}^{j,\kappa}\left( \xi \right)\right]\left( z\right)\right\vert dzdr \leq C\left( t-s\right),\forall \kappa\in\left[0,1\right]. \end{aligned}$$ When $j\neq 0$, recall $\eqref{rrp}$. $$\begin{aligned} I_1 &\leq& \int_s^t \int\left\vert w\left( N^{-j}\right)^{-\kappa}H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}\left(t-r\right)}\left( z\right)\right\vert dzdr \\ &=& w\left( N^{-j}\right)^{1-\kappa}\int_0^{w\left( N^{-j}\right)^{-1}\left(t-s\right)} \int\left\vert H^{j,\kappa}_{r}\left( z\right)\right\vert dzdr . \end{aligned}$$ If $w\left( N^{-j}\right)^{-1}\left(t-s\right)\leq 1$, since $\int\left\vert H^{j,\kappa}_{r}\left( z\right)\right\vert dz<C$ by Lemma \[lem1\], $$\begin{aligned} I_1\leq Cw\left( N^{-j}\right)^{1-\kappa}w\left( N^{-j}\right)^{-1}\left(t-s\right)\leq C\left(t-s\right)^{1-\kappa}.\end{aligned}$$ If $w\left( N^{-j}\right)^{-1}\left(t-s\right)> 1$, again use Lemma \[lem1\]. $$\begin{aligned} I_1\leq w\left( N^{-j}\right)^{1-\kappa}\int_0^{\infty} \int\left\vert H^{j,\kappa}_{r}\left( z\right)\right\vert dzdr\leq Cw\left( N^{-j}\right)^{1-\kappa}<C\left(t-s\right)^{1-\kappa}.\end{aligned}$$ Next we investigate $I_2$. Recall definitions $\eqref{rr}-\eqref{rrr}$. 
When $j=0$ and $\kappa\in\left[0,1\right]$, $$\begin{aligned} I_2&=&\int_0^s \int\left\vert \mathcal{F}^{-1}\left[\tilde{F}_{t-r}^{0,\kappa}\left( \xi \right)-\tilde{F}_{s-r}^{0,\kappa}\left( \xi \right)\right]\left( z\right)\right\vert dzdr\\ &\leq& \left\vert e^{-\lambda\left( t-s\right)}-1\right\vert\int_0^s e^{-\lambda\left( s-r\right)}\int\left\vert \mathcal{F}^{-1}\left[-e^{\psi ^{\nu }\left( \xi \right)\left(t-r\right)} \mathcal{F}L^{\mu,\kappa}\tilde{\varphi}_0\left(\xi\right)\right]\left( z\right)\right\vert dzdr\\ &+&\int_0^s \int\left\vert \mathcal{F}^{-1}\left[-\left(e^{\psi ^{\nu }\left( \xi \right)\left(t-r\right)}-e^{\psi ^{\nu }\left( \xi \right)\left(s-r\right)}\right) \mathcal{F}L^{\mu,\kappa}\tilde{\varphi}_0\left(\xi\right)\right]\left( z\right)\right\vert dzdr\\ &:=& I_{21}+I_{22}.\end{aligned}$$ Therefore, by Lemmas \[Lop\] and \[rep\], $$\begin{aligned} I_{21}&\leq& \frac{2T}{\lambda}\left\vert e^{-\lambda\left( t-s\right)}-1\right\vert \left\vert\mathbf{E} L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t-r}\right)\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& \frac{2T}{\lambda}\left\vert e^{-\lambda\left( t-s\right)}-1\right\vert \left\vert L^{\mu,\kappa}\tilde{\varphi}_0\right\vert_{L^1\left(\mathbf{R}^d\right)}\leq C\left( t-s\right),\quad\kappa\in\left[0,1\right].\end{aligned}$$ Meanwhile, by Lemma \[lem3\], $$\begin{aligned} I_{22}&\leq& T \left\vert\mathbf{E}\left[ L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{t-r}\right)-L^{\mu,\kappa}\tilde{\varphi}_0\left(\cdot+Z^{\nu}_{s-r}\right)\right]\right\vert_{L^1\left(\mathbf{R}^d\right)}\\ &\leq& C\left( t-s\right),\quad\kappa\in\left[0,1\right].\end{aligned}$$ When $j\neq 0$, $$\begin{aligned} I_2 &\leq& \left\vert e^{-\lambda\left( t-s\right)}-1\right\vert w\left( N^{-j}\right)^{-\kappa}\int_0^s e^{-\lambda\left( s-r\right)} \int\left\vert H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}\left(s-r\right)}\left( z\right)\right\vert dzdr\\ &+&w\left( N^{-j}\right)^{-\kappa}\int_0^s 
\int\left\vert H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}\left(t-r\right)}\left( z\right)-H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}\left(s-r\right)}\left( z\right)\right\vert dzdr\\ &:=& I'_{21}+I'_{22}.\end{aligned}$$ By Lemma \[lem1\], $$\begin{aligned} I'_{21} &\leq& \left\vert e^{-\lambda\left( t-s\right)}-1\right\vert w\left( N^{-j}\right)^{-\kappa}\int_0^s e^{-\lambda r} \int\left\vert H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}r}\left( z\right)\right\vert dzdr\\ &\leq& C\left\vert e^{-\lambda\left( t-s\right)}-1\right\vert w\left( N^{-j}\right)^{-\kappa}\int_0^s e^{-\lambda r} e^{-C'w\left( N^{-j}\right)^{-1}r}dr.\end{aligned}$$ If $w\left( N^{-j}\right)^{-1}\left(t-s\right)\leq 1$, $$\begin{aligned} I'_{21}&\leq& C\left\vert e^{-\lambda\left( t-s\right)}-1\right\vert w\left( N^{-j}\right)^{-\kappa}\int_0^s e^{-\lambda r} dr\\ &\leq& C\left(t-s\right)w\left( N^{-j}\right)^{-\kappa}\leq C \left(t-s\right)^{1-\kappa}.\end{aligned}$$ If $w\left( N^{-j}\right)^{-1}\left(t-s\right)> 1$, use Lemma \[lem1\]. $$\begin{aligned} I'_{21} &\leq& C w\left( N^{-j}\right)^{-\kappa}\int_0^{s} e^{-C'w\left( N^{-j}\right)^{-1}r}dr\leq C w\left( N^{-j}\right)^{1-\kappa}\leq C \left(t-s\right)^{1-\kappa}.\end{aligned}$$ On the other hand, $$\begin{aligned} I'_{22}&=&w\left( N^{-j}\right)^{1-\kappa}\int_0^{w\left( N^{-j}\right)^{-1}s} \int\left\vert H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}\left(t-s\right)+r}\left( z\right)-H^{j,\kappa}_{r}\left( z\right)\right\vert dzdr\\ &\leq& w\left( N^{-j}\right)^{1-\kappa}\int_0^{\infty} \int\left\vert H^{j,\kappa}_{w\left( N^{-j}\right)^{-1}\left(t-s\right)+r}\left( z\right)-H^{j,\kappa}_{r}\left( z\right)\right\vert dzdr.\end{aligned}$$ If $w\left( N^{-j}\right)^{-1}\left(t-s\right)\leq 1$, use Lemma \[lem4\]. $$\begin{aligned} I'_{22} \leq C w\left( N^{-j}\right)^{1-\kappa}w\left( N^{-j}\right)^{-1}\left(t-s\right)\leq C \left(t-s\right)^{1-\kappa}.\end{aligned}$$ If $w\left( N^{-j}\right)^{-1}\left(t-s\right)> 1$, use Lemma \[lem1\]. 
$$\begin{aligned} I'_{22} \leq 2 w\left( N^{-j}\right)^{1-\kappa}\int_0^{\infty} \int\left\vert H^{j,\kappa}_{r}\left( z\right)\right\vert dzdr\leq C w\left( N^{-j}\right)^{1-\kappa}\leq C \left(t-s\right)^{1-\kappa}.\end{aligned}$$ This completes the proof. \[thm11\] Let $\nu$ be a Lévy measure satisfying **A(w,l)** and $\beta\in\left( 0,\infty\right)$. Then the unique solution $u=u\left( t,x\right)$ to $\eqref{eeq1}$ satisfies $$\begin{aligned} \left\vert u\right\vert_{\beta,\infty}&\leq& C\left( \lambda^{-1}\wedge T\right) \left\vert f\right\vert_{\beta,\infty},\label{est5}\\ \left\vert u\right\vert_{1+\beta,\infty}&\leq& C\left\vert f\right\vert_{\beta,\infty}\label{est1} \end{aligned}$$ for some $C$ depending on $\alpha,\alpha_1,\alpha_2,N, \beta, d,T$. Meanwhile, for all $\kappa\in\left[ 0,1\right]$, there exists a constant $C$ depending on $\alpha,\kappa,\beta,\alpha_1,\alpha_2,N, d, T,\nu$ such that for all $0\leq s<t\leq T$, $$\label{est2} \left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\kappa+\beta,\infty}\leq C\left\vert t-s\right\vert^{1-\kappa}\left\vert f\right\vert_{\beta,\infty}.$$ Denote as before $u_j=u\ast \varphi_j, j\in\mathbf{N}$. Recall that $$\begin{aligned} u_j\left(t,x\right)=\int_0^t e^{-\lambda\left( t-s\right)}\mathbf{E}f_j\left(s, x+Z_{t-s}^{\nu}\right)ds,\forall j\in\mathbf{N}. \end{aligned}$$ Obviously, $$\begin{aligned} \left\vert u_j\right\vert_0\leq \left\vert f_j\right\vert_0\int_0^t e^{-\lambda s}ds\leq C\left( \lambda^{-1}\wedge T\right) \left\vert f_j\right\vert_0,\forall j\in\mathbf{N}, \end{aligned}$$ which implies $\left\vert u\right\vert_{\beta,\infty}\leq C\left( \lambda^{-1}\wedge T\right) \left\vert f\right\vert_{\beta,\infty}$, $\forall \beta\in\left( 0,\infty\right)$. Recall $\eqref{sup}$. $$\begin{aligned} \left\vert u\right\vert_{0}\leq\left\vert u\right\vert_{\beta,\infty}\leq C\left\vert f\right\vert_{\beta,\infty}.
\end{aligned}$$ Meanwhile, note $L^{\mu}u\ast \varphi_j= L^{\mu}u_j$, and by taking $\kappa=1$ in Corollary \[col1\], $\left\vert L^{\mu}u_j\right\vert_0\leq C\left\vert f_j\right\vert_0$. That is, $\left\vert L^{\mu}u\right\vert_{\beta,\infty}\leq C\left\vert f\right\vert_{\beta,\infty}$. By Proposition \[pr4\], $$\begin{aligned} \left\vert u\right\vert_{1+\beta,\infty}\leq C\left(\left\vert u\right\vert_{0}+\left\vert L^{\mu}u\right\vert_{\beta,\infty}\right)\leq C\left\vert f\right\vert_{\beta,\infty}.\end{aligned}$$ Similarly, by Lemma \[lem5\], we know that for all $j\in\mathbf{N}$, $$\begin{aligned} \left\vert L^{\mu,\kappa}u_j\left( t,x\right)-L^{\mu,\kappa}u_j\left( s,x\right)\right\vert\leq C\left( t-s\right)^{1-\kappa}\left\vert f_j\right\vert_0,\forall x\in\mathbf{R}^d,\kappa\in\left[ 0,1\right],\end{aligned}$$ namely, for all $\beta\in\left( 0,\infty\right)$, $$\begin{aligned} \left\vert L^{\mu,\kappa}u\left( t,\cdot\right)-L^{\mu,\kappa}u\left( s,\cdot\right)\right\vert_{\beta,\infty}\leq C\left( t-s\right)^{1-\kappa}\left\vert f\right\vert_{\beta,\infty},\kappa\in\left[ 0,1\right].\end{aligned}$$ Therefore, for all $\kappa\in\left[0,1\right]$ and all $0\leq s<t\leq T$, $$\begin{aligned} &&\left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\mu,\kappa,\beta}\\ &\leq&\left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\beta,\infty}+\left\vert L^{\mu,\kappa}u\left(t,\cdot\right)-L^{\mu,\kappa}u\left(s,\cdot\right)\right\vert_{\beta,\infty}\\ &\leq& C\left\vert t-s\right\vert^{1-\kappa}\left\vert f\right\vert_{\beta,\infty}.\end{aligned}$$ By Proposition \[pr4\], this is equivalent to $$\begin{aligned} \left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\kappa+\beta,\infty}\leq C\left\vert t-s\right\vert^{1-\kappa}\left\vert f\right\vert_{\beta,\infty}.\end{aligned}$$ Proof of Theorem 1.1: Generalized Hölder-Zygmund inputs ======================================================= <span
style="font-variant:small-caps;">Existence and Estimates.</span> Given $f\in \tilde{C}^{\beta}_{\infty,\infty}\left( H_T\right)$, by Proposition \[app\], we can find a sequence of functions $f_n$ in $C_b^{\infty}\left( H_T\right)$ such that $$\left\vert f_n\right\vert_{\beta,\infty}\leq C\left\vert f\right\vert_{\beta,\infty},\quad\left\vert f\right\vert_{\beta,\infty}\leq\liminf_{n}\left\vert f_n\right\vert_{\beta,\infty},$$ and for any $0<\beta'<\beta$, $$\left\vert f_n-f\right\vert_{\beta',\infty}\rightarrow 0 \mbox{ as }n\rightarrow\infty.$$ According to Theorems \[thm1\] and \[thm11\], for each pair of functions $f_m,f_n$, there are corresponding solutions $u_m, u_n\in C_b^{\infty}\left( H_T\right)$ verifying $$\begin{aligned} \left\vert u_m- u_n\right\vert_{1+\beta',\infty}\leq C\left\vert f_m-f_n\right\vert_{\beta',\infty}\rightarrow 0, \mbox{ as }m,n\rightarrow\infty \end{aligned}$$ for all $\beta'\in\left( 0,\beta\right)$, which by Proposition \[pr4\] implies $$\begin{aligned} \left\vert u_n-u_m\right\vert_0\rightarrow 0 \mbox{ as } m,n\rightarrow\infty. \end{aligned}$$ Clearly, $\{u_n:n\geq 0\}$ has a limit in the space of continuous functions. We denote it by $u$. $\lim_{n\rightarrow\infty}\left\vert u_n-u\right\vert_0=0$. Therefore, for any given $j\in\mathbf{N}$, $$\begin{aligned} &&w\left( N^{-j}\right)^{-1-\beta}\left\vert u\ast \varphi_j\right\vert_0= \lim_{n\rightarrow\infty}w\left( N^{-j}\right)^{-1-\beta}\left\vert u_n\ast \varphi_j\right\vert_0\\ &\leq&\limsup_{n\rightarrow\infty}\left\vert u_n\right\vert_{1+\beta,\infty}\leq C \limsup_{n\rightarrow\infty}\left\vert f_n\right\vert_{\beta,\infty}\leq C\left\vert f\right\vert_{\beta,\infty},\end{aligned}$$ which indicates $u\in \tilde{C}^{1+\beta}_{\infty,\infty}\left( H_T\right)$ and $\left\vert u\right\vert_{1+\beta,\infty}\leq C\left\vert f\right\vert_{\beta,\infty}$. 
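The interchange of limit and convolution used above is elementary but worth making explicit: since $\left\vert u_n-u\right\vert_0\rightarrow 0$ and each kernel $\varphi_j$ is integrable, Young's inequality gives, for every fixed $j$, $$\begin{aligned} \left\vert \left( u_n-u\right)\ast \varphi_j\right\vert_0\leq \left\vert u_n-u\right\vert_0\left\vert \varphi_j\right\vert_{L^1\left(\mathbf{R}^d\right)}\rightarrow 0 \mbox{ as }n\rightarrow\infty, \end{aligned}$$ which justifies $\left\vert u\ast \varphi_j\right\vert_0= \lim_{n\rightarrow\infty}\left\vert u_n\ast \varphi_j\right\vert_0$.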
Meanwhile, for any given $j\in\mathbf{N}$ and any $\beta'\in\left( 0,\beta\right)$, $$\begin{aligned} &&\lim_{n\rightarrow\infty}w\left( N^{-j}\right)^{-1-\beta'}\left\vert \left( u_n-u\right)\ast \varphi_j\right\vert_0\\ &=&\lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty}w\left( N^{-j}\right)^{-1-\beta'}\left\vert \left( u_n-u_m\right)\ast \varphi_j\right\vert_0\\ &\leq&\lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty}\left\vert \left( u_n-u_m\right)\right\vert_{1+\beta',\infty}\\ &\leq& C \lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty}\left\vert f_n-f_m\right\vert_{\beta',\infty}= 0.\end{aligned}$$ Namely, for all $\beta'\in\left( 0,\beta\right)$, $$\begin{aligned} \label{supp} \lim_{n\rightarrow\infty}\left\vert u_n-u\right\vert_{1+\beta',\infty}= 0.\end{aligned}$$ Analogously, for any given $j\in\mathbf{N}$, $$\begin{aligned} &&w\left( N^{-j}\right)^{-\beta}\left\vert u\ast \varphi_j\right\vert_0= \lim_{n\rightarrow\infty}w\left( N^{-j}\right)^{-\beta}\left\vert u_n\ast \varphi_j\right\vert_0\\ &\leq&\limsup_{n\rightarrow\infty}\left\vert u_n\right\vert_{\beta,\infty}\leq C\left( \lambda^{-1}\wedge T\right) \limsup_{n\rightarrow\infty}\left\vert f_n\right\vert_{\beta,\infty}\leq C\left( \lambda^{-1}\wedge T\right) \left\vert f\right\vert_{\beta,\infty}.\end{aligned}$$ This implies $\left\vert u\right\vert_{\beta,\infty}\leq C\left( \lambda^{-1}\wedge T\right) \left\vert f\right\vert_{\beta,\infty}$.
Using Theorems \[thm1\] and \[thm11\], we can show in the same vein that for all $0\leq s\leq t\leq T$, $\kappa\in\left[ 0,1\right]$, $$\begin{aligned} u\left(t,\cdot\right)-u\left(s,\cdot\right)&=&\lim_{n\rightarrow \infty}\left( u_n\left(t,\cdot\right)-u_n\left(s,\cdot\right)\right)\in \tilde{C}^{\kappa+\beta}_{\infty,\infty}\left( \mathbf{R}^d\right),\\ \left\vert u\left(t,\cdot\right)-u\left(s,\cdot\right)\right\vert_{\kappa+\beta,\infty}&\leq& C\limsup_{n\rightarrow\infty}\left\vert u_n\left(t,\cdot\right)-u_n\left(s,\cdot\right)\right\vert_{\kappa+\beta,\infty}\\ &\leq& C\left\vert t-s\right\vert^{1-\kappa}\left\vert f\right\vert_{\beta,\infty}. \end{aligned}$$ Now we claim that such a function $u$ solves $\eqref{eeq1}$, i.e., $$\begin{aligned} \label{eq2} \quad &&u\left( t,x\right)=\int_0^t \left[ L^{\nu}u\left( r,x\right)-\lambda u\left( r,x\right)+f\left( r,x\right)\right]dr, \quad\left( t,x\right)\in \left[0,T\right]\times\mathbf{R}^d. \end{aligned}$$ Indeed, according to $\eqref{supp}$ and Proposition \[cont\], $L^{\nu}u=\lim_{n\rightarrow\infty}L^{\nu}u_n$ and $\lim_{n\rightarrow\infty}\left\vert L^{\nu}u_n-L^{\nu}u\right\vert_0=0$. Passing the limit on both sides of $$\begin{aligned} u_n\left( t,x\right)=\int_0^t \left[ L^{\nu}u_n\left( r,x\right)-\lambda u_n\left( r,x\right)+f_n\left( r,x\right)\right]dr, \end{aligned}$$ we obtain $\eqref{eq2}$. <span style="font-variant:small-caps;">Uniqueness.</span> Suppose there are two solutions $u_1,u_2\in \tilde{C}^{1+\beta}_{\infty,\infty}\left( H_T\right)$ to $\eqref{eq2}$, then $u:=u_1-u_2$ solves $$\begin{aligned} u\left( t,x\right)=\int_0^t \left[ L^{\nu}u\left( r,x\right)-\lambda u\left( r,x\right)\right]dr, \quad\left( t,x\right)\in \left[0,T\right]\times\mathbf{R}^d. 
\end{aligned}$$ By Proposition \[app\], there is a sequence of functions $u_n\in C_b^{\infty}\left( H_T\right)$ such that for any $0<\beta'<\beta$, $$\label{mmm} \left\vert u_n-u\right\vert_{1+\beta',\infty}\rightarrow 0 \mbox{ as } n\rightarrow\infty.$$ Clearly, $\tilde{u}_n\left( t,x\right):=\int_0^t u_n\left( s,x\right)ds$ solves $$\begin{aligned} \int_0^t u_n\left( s,x\right)ds&=&\int_0^t \lbrack L^{\nu}\int_0^s u_n\left( r,x\right)dr-\lambda \int_0^s u_n\left( r,x\right)dr+\Big( u_n\left( s,x\right)\\ &&-L^{\nu}\int_0^s u_n\left( r,x\right)dr+\lambda\int_0^s u_n\left( r,x\right)dr\Big)\rbrack ds. \end{aligned}$$ By Lemma \[Lop\], $ u_n\left( t,x\right)-L^{\nu}\int_0^t u_n\left( s,x\right)ds+\lambda\int_0^t u_n\left( s,x\right)ds\in C_b^{\infty}\left( H_T\right)$. Then according to Theorem \[thm11\], $$\begin{aligned} \left\vert \tilde{u}_n\right\vert_{1+\beta',\infty}\leq\left\vert u_n\left( t,x\right)-L^{\nu}\int_0^t u_n\left( s,x\right)ds+\lambda\int_0^t u_n\left( s,x\right)ds\right\vert_{\beta',\infty}. \end{aligned}$$ Use $\eqref{mmm}$, $\eqref{sup}$ and Proposition \[cont\]. $$\begin{aligned} &&\left\vert \int_0^t u\left( s,x\right)ds\right\vert_{1+\beta',\infty}= \lim_{n\rightarrow\infty}\left\vert \int_0^t u_n\left( s,x\right)ds\right\vert_{1+\beta',\infty}\\ &\leq&\liminf_{n\rightarrow\infty}\left\vert u_n\left( t,x\right)-L^{\nu}\int_0^t u_n\left( s,x\right)ds+\lambda\int_0^t u_n\left( s,x\right)ds\right\vert_{\beta',\infty}\\ &\leq& \left\vert u\left( t,x\right)-L^{\nu}\int_0^t u\left( s,x\right)ds+\lambda\int_0^t u\left( s,x\right)ds\right\vert_{\beta',\infty}=0. \end{aligned}$$ By $\eqref{sup}$ again, $\int_0^t u\left( s,x\right)ds=0$ for all $t\in\left[ 0,T\right]$ and $x\in\mathbf{R}^d$, and thus $u=0$ $\left(t,x\right)$-a.e. Appendix ======== We state, without proof, a few results used in this paper; proofs can be found in the cited references. Recall the parameters introduced in assumption **A(w,l)**.
[@cr Lemma 7]\[lemma1\] Let $\nu$ be a Lévy measure of order $\alpha$ and $w$ be a scaling function.\ (i) Suppose there exists $N_2>0$ such that for all $R>0$, $$\begin{aligned} \int \left(\left\vert y\right\vert \wedge 1\right)\tilde{\nu}_{R}\left( dy\right)&\leq&N_2 \mbox{ if } \alpha\in\left( 0,1\right),\\ \int \left(\left\vert y\right\vert^2 \wedge 1\right)\tilde{\nu}_{R}\left( dy\right)&\leq&N_2 \mbox{ if } \alpha=1,\\ \int \left(\left\vert y\right\vert^2 \wedge \left\vert y\right\vert\right)\tilde{\nu}_{R}\left( dy\right)&\leq&N_2 \mbox{ if } \alpha\in\left(1,2\right). \end{aligned}$$ Then there is a constant $C>0$ depending only on $c_1,N_0,N_1,N_2$ such that for all $\xi\in\mathbf{R}^d$, $$\begin{aligned} \int\left[ 1-\cos\left( 2\pi\xi\cdot y\right)\right]\nu\left(dy\right)&\leq& C w\left( \left\vert \xi\right\vert^{-1}\right)^{-1},\\ \int\left[ \sin\left( 2\pi\xi\cdot y\right)-2\pi\chi_{\alpha}\left( y\right) \xi\cdot y\right]\nu\left(dy\right)&\leq& C w\left( \left\vert \xi\right\vert^{-1}\right)^{-1}, \end{aligned}$$ where $w\left( \left\vert \xi\right\vert^{-1}\right)^{-1}:=0$ if $\xi=0$.\ (ii) Suppose there is $n_1>0$ such that for all $R>0$ and all $\xi\in\{\xi\in\mathbf{R}^d:\left\vert \xi\right\vert=1\}$, $$\begin{aligned} \int_{\left\vert y\right\vert \leq 1}\left\vert \xi\cdot y\right\vert^2 \tilde{\nu}_R\left(dy\right)\geq n_1.\end{aligned}$$ Then there is a constant $c>0$ depending only on $c_1,N_0,N_1,N_2,n_1$ such that for all $\xi\in\mathbf{R}^d$, $$\begin{aligned} \int\left[ 1-\cos\left( 2\pi\xi\cdot y\right)\right]\nu\left(dy\right)\geq c w\left( \left\vert \xi\right\vert^{-1}\right)^{-1},\end{aligned}$$ where $w\left( \left\vert \xi\right\vert^{-1}\right)^{-1}:=0$ if $\xi=0$. [@cr Lemma 5]\[lemma3\] Let $\nu$ be a Lévy measure satisfying **A(w,l)**. $Z^{\tilde{\nu}_R}_t$ is the Lévy process associated to $\tilde{\nu}_R,R>0$. 
For each $t, R$, $Z^{\tilde{\nu}_R}_t$ has a bounded and continuous density function $p^R\left( t,x\right),t\in\left( 0,\infty\right), x\in\mathbf{R}^d$. Moreover, $p^R\left( t,x\right)$ has bounded and continuous derivatives up to order $4$. Meanwhile, for any multi-index $\vartheta$ with $\left\vert\vartheta\right\vert\leq 4$, $$\begin{aligned} \int\left\vert \partial^{\vartheta}p^R\left( t,x\right)\right\vert dx &\leq& C\gamma\left( t\right)^{-\left\vert\vartheta\right\vert},\\ \sup_{x\in\mathbf{R}^d}\left\vert \partial^{\vartheta}p^R\left( t,x\right)\right\vert &\leq& C\gamma\left( t\right)^{-d-\left\vert\vartheta\right\vert},\end{aligned}$$ where $C>0$ is independent of $t,R$. For any $\beta\in\left(0,1\right)$ such that $\left\vert\vartheta\right\vert+\beta<4$, $$\begin{aligned} \int\left\vert \partial^{\beta}\partial^{\vartheta}p^R\left( t,x\right)\right\vert dx &\leq& C\gamma\left( t\right)^{-\left\vert\vartheta\right\vert-\beta}.\end{aligned}$$ For any $a>0$, there is a constant $C>0$ independent of $t,R$, so that $$\begin{aligned} \int_{\left\vert x\right\vert>a}\left\vert \partial^{\vartheta}p^R\left( t,x\right)\right\vert dx &\leq& C\left( \gamma\left( t\right)^{2-\left\vert\vartheta\right\vert}+t\gamma\left( t\right)^{-\left\vert\vartheta\right\vert}\right). \end{aligned}$$ [@cr2 Lemma 2]\[lemma2\] Let $\nu$ be a Lévy measure satisfying **A(w,l)** and $Z^{\tilde{\nu}_R}_t$ be the Lévy process associated to $\tilde{\nu}_R$. Suppose $\varphi,\varphi_0\in \mathcal{S}\left( \mathbf{R}^d\right)$ are such that $\mathcal{F}\varphi_0\in C_0^{\infty}\left( \mathbf{R}^d\right)$, $\operatorname{supp} \left(\mathcal{F}\varphi\right)\subseteq \{\xi: 0<R_1\leq \left\vert \xi\right\vert\leq R_2\}$, and $$\begin{aligned} \max_{\left\vert\gamma\right\vert\leq \left[d/2\right]+3}\left\vert D^{\gamma}\hat{\varphi}\left(\xi\right)\right\vert \leq N_2, R_1\leq \left\vert\xi\right\vert\leq R_2.
\end{aligned}$$ Then there are constants $C_1,C_2>0$ depending only on $c_1,N_0,N_1,N_2,R_1,R_2,d$ such that $$\begin{aligned} \int\left( 1+\left\vert x\right\vert^{\alpha_2}\right)\left\vert \mathbf{E}\varphi\left( x+Z^{\tilde{\nu}_R}_t\right)\right\vert dx&\leq& C_1 e^{-C_2 t}, \quad t\geq 0,\\ \int\left\vert x\right\vert^{\alpha_2}\left\vert \mathbf{E}\varphi_0\left( x+Z^{\tilde{\nu}_R}_t\right)\right\vert dx &\leq& C_1 \left( 1+t\right), \quad t\geq 0,\\ \int \left\vert \mathbf{E}\varphi_0\left( x+Z^{\tilde{\nu}_R}_t\right)\right\vert dx &\leq& C_1, \quad t\geq 0. \end{aligned}$$ [9]{} Bergh, J. and Löfström, J., Interpolation Spaces. An Introduction, Springer-Verlag, 1976. Farkas, W., Jacob, N. and Schilling, R.L., Function spaces related to continuous negative definite functions: $\psi$-Bessel potential spaces, Dissertationes Mathematicae, 1-60, 2001. Farkas, W. and Leopold, H.G., Characterization of function spaces of generalized smoothness, Annali di Matematica, 185, 1-62, 2006. Kalyabin, G.A., Description of functions in classes of Besov-Lizorkin-Triebel type, Trudy Mat. Inst. Steklov, 156, 160-173, 1980. Kalyabin, G.A. and Lizorkin, P.I., Spaces of functions of generalized smoothness, Math. Nachr., 133, 7-32, 1987. Mikulevičius, R. and Phonsom, C., On $L^p$ theory for parabolic and elliptic integro-differential equations with scalable operators in the whole space, Stoch PDE: Anal Comp, DOI 10.1007/s40072-017-0095-4, 2017. Mikulevičius, R. and Phonsom, C., On the Cauchy problem for integro-differential equations in the scale of spaces of generalized smoothness, arXiv:1705.09256v1, 2017. Mikulevičius, R., On the Cauchy problem for parabolic SPDEs in Hölder classes, Ann. Probab. 28, 74-108, 2000. Mikulevičius, R. and Pragarauskas, H., On the Cauchy problem for certain integro-differential operators in Sobolev and Hölder spaces, Lithuanian Math. J. 32, 238-264, 1992. Mikulevičius, R.
and Pragarauskas, H., On Hölder solutions of the integro-differential Zakai equation, Stoch. Process. Appl. 119, 3319-3355, 2009. Mikulevičius, R. and Pragarauskas, H., On the Cauchy problem for integro-differential operators in Hölder classes and the uniqueness of the Martingale problem, Potential Anal, DOI 10.1007/s11118-013-9359-4, 2013. Rozovskii, B.L., On stochastic partial differential equations, Mat. Sb. 96, 314-341, 1975. Triebel, H., Interpolation Theory, Function Spaces, Differential Operators, North-Holland Pub. Co., 1978. Zhang, X., $L^p$ maximal regularity of nonlocal parabolic equations and applications, Ann. I. H. Poincaré – AN 30, 573-614, 2013.
--- abstract: 'The classical theorem of Weitzenböck states that the algebra of invariants $K[X]^g$ of a single unipotent transformation $g\in GL_m(K)$ acting on the polynomial algebra $K[X]=K[x_1,\ldots,x_m]$ over a field $K$ of characteristic 0 is finitely generated. This algebra coincides with the algebra of constants $K[X]^{\delta}$ of a linear locally nilpotent derivation $\delta$ of $K[X]$. Recently the author and C. K. Gupta have started the study of the algebra of invariants $F_m({\mathfrak V})^g$ where $F_m({\mathfrak V})$ is the relatively free algebra of rank $m$ in a variety $\mathfrak V$ of associative algebras. They have shown that $F_m({\mathfrak V})^g$ is not finitely generated if $\mathfrak V$ contains the algebra $UT_2(K)$ of $2\times 2$ upper triangular matrices. The main result of the present paper is that the algebra $F_m({\mathfrak V})^g$ is finitely generated if and only if the variety ${\mathfrak V}$ does not contain the algebra $UT_2(K)$. As a by-product of the proof we have established also the finite generation of the algebra of invariants $T_{nm}^g$ where $T_{nm}$ is the mixed trace algebra generated by $m$ generic $n\times n$ matrices and the traces of their products.' address: 'Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria' author: - Vesselin Drensky title: | Invariants of unipotent transformations\ acting on noetherian relatively free algebras --- [^1] Introduction {#introduction .unnumbered} ============ Let $K$ be any field of characteristic 0 and let $X=\{x_1,\ldots,x_m\}$, where $m>1$. Let $g\in GL_m=GL_m(K)$ be a unipotent linear operator acting on the vector space $KX=Kx_1\oplus\cdots\oplus Kx_m$. By the classical theorem of Weitzenböck [@W], the algebra of invariants $$K[X]^g=\{f\in K[X]\mid f(g(x_1),\ldots,g(x_m))=f(x_1,\ldots,x_m)\}$$ is finitely generated. A proof in modern language was given by Seshadri [@S]. 
An elementary proof based on the ideas of [@S] was presented by Tyc [@T]. Since $g-1$ is a nilpotent linear operator of $KX$, we may consider the linear locally nilpotent derivation $$\delta=\log g=\sum_{i\geq 1}(-1)^{i-1}\frac{(g-1)^i}{i}$$ called a Weitzenböck derivation. (The $K$-linear operator $\delta$ acting on an algebra $A$ is called a derivation if $\delta(uv)=\delta(u)v+u\delta(v)$ for all $u,v\in A$.) The algebra of invariants $K[X]^g$ coincides with the algebra of constants $K[X]^{\delta}$ $(=\ker(\delta))$. See the book by Nowicki [@N] for a background on the properties of the algebras of constants of Weitzenböck derivations. Looking for noncommutative generalizations of invariant theory, see e. g. the survey by Formanek [@F1], let $K\langle X\rangle=K\langle x_1,\ldots,x_m\rangle$ be the free unitary associative algebra freely generated by $X$. The action of $GL_m$ is extended diagonally on $K\langle X\rangle$ by the rule $$h(x_{j_1}\cdots x_{j_n})=h(x_{j_1})\cdots h(x_{j_n}),\ h\in GL_m,\ x_{j_1},\ldots,x_{j_n}\in X.$$ For any PI-algebra $R$, let $T(R)\subset K\langle X\rangle$ be the T-ideal of all polynomial identities in $m$ variables satisfied by $R$. The class $\mathfrak V=\text{var}(R)$ of all algebras satisfying the identities of $R$ is called the variety of algebras generated by $R$ (or determined by the polynomial identities of $R$). The factor algebra $F_m({\mathfrak V})=K\langle X\rangle/T(R)$ is called the relatively free algebra of rank $m$ in $\mathfrak V$. We shall use the same symbols $x_j$ and $X$ for the generators of $F_m({\mathfrak V})$. Since $T(R)$ is $GL_m$-invariant, the action of $GL_m$ on $K\langle X\rangle$ is inherited by $F_m({\mathfrak V})$ and one can consider the algebra of invariants $F_m({\mathfrak V})^G$ for any linear group $G$.
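Because $g-1$ is nilpotent, the logarithm series above terminates after finitely many terms, and so does the exponential series recovering $g=\exp(\delta)$. The following short Python sketch (an illustration with exact rational arithmetic, not part of the paper) verifies this for a single $3\times 3$ unipotent Jordan block.

```python
from fractions import Fraction
from math import factorial

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def scale(c, A):
    return [[c * x for x in r] for r in A]

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

# A single unipotent Jordan block g = I + N, where N^3 = 0.
g = [[Fraction(v) for v in row] for row in ([1, 1, 0], [0, 1, 1], [0, 0, 1])]
n = len(g)
N = add(g, scale(Fraction(-1), eye(n)))  # N = g - I is nilpotent

# delta = log g = sum_{i >= 1} (-1)^(i-1) N^i / i; the series stops at i = n - 1.
delta = [[Fraction(0)] * n for _ in range(n)]
P = eye(n)
for i in range(1, n):
    P = mul(P, N)
    delta = add(delta, scale(Fraction((-1) ** (i - 1), i), P))

# delta is a nilpotent linear operator: here delta^3 = 0.
d3 = mul(mul(delta, delta), delta)
assert all(x == 0 for row in d3 for x in row)

# The (terminating) exponential series recovers g, i.e. exp(log g) = g.
expd = eye(n)
P = eye(n)
for i in range(1, n):
    P = mul(P, delta)
    expd = add(expd, scale(Fraction(1, factorial(i)), P))
assert expd == g
```

In particular, for this block $\delta=N-N^2/2$, and the identity $\exp(\log g)=g$ reduces to a finite computation because all powers $N^i$ with $i\geq 3$ vanish.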
As in the commutative case, if $g\in GL_m$ is unipotent, then $F_m({\mathfrak V})^g$ coincides with the algebra $F_m({\mathfrak V})^{\delta}$ of the constants of the derivation $\delta=\log g$. Till the end of the paper we fix the integer $m>1$, the variety $\mathfrak V$, the unipotent linear operator $g\in GL_m$ and the derivation $\delta=\log g$. The author and C. K. Gupta [@DG] have started the study of the algebra of invariants $F_m({\mathfrak V})^g$. They have shown that if $\mathfrak V$ contains the algebra $UT_2(K)$ of $2\times 2$ upper triangular matrices and $g$ is different from the identity of $GL_m$, then $F_m({\mathfrak V})^g$ is not finitely generated for any $m>1$. They have also established that, if $UT_2(K)$ does not belong to $\mathfrak V$, then, for $m=2$, the algebra $F_2({\mathfrak V})^g$ is finitely generated. In the present paper we settle the problem of determining for which varieties $\mathfrak V$ and which $m$ the algebra $F_m({\mathfrak V})^g$ is finitely generated. Our main result is that this holds, for all $m>1$, if and only if the variety ${\mathfrak V}$ does not contain the algebra $UT_2(K)$. It is natural to expect such a result for two reasons. First, it follows from the proof of Tyc [@T], see also the earlier paper by Onoda [@O], that the algebra $K[X]^g$ is isomorphic to the algebra of invariants of a certain $SL_2$-action on the polynomial algebra in $m+2$ variables. One can prove a similar fact for $F_m({\mathfrak V})^g$ and $(K[y_1,y_2]\otimes_KF_m({\mathfrak V}))^{SL_2}$. Second, the results of Vonessen [@V] and of Domokos and the author [@DD] give that $F_m({\mathfrak V})^G$ is finitely generated for all reductive $G$ if and only if the finitely generated algebras in $\mathfrak V$ are one-sided noetherian. For unitary algebras this means that $\mathfrak V$ does not contain $UT_2(K)$ or, equivalently, $\mathfrak V$ satisfies the Engel identity $[x_2,x_1,\ldots,x_1]=0$.
In our proof we use the so-called proper polynomial identities introduced by Specht [@Sp] and the fact that the Engel identity implies that the vector space of proper polynomials in $F_m({\mathfrak V})$ is finite dimensional, and hence that $F_m({\mathfrak V})$ has a series of ideals such that the factors are finitely generated $K[X]$-modules. As a by-product of the proof we have established also the finite generation of the algebra of invariants $T_{nm}^g$, where $T_{nm}$ is the mixed trace algebra generated by $m$ generic $n\times n$ matrices $x_1,\ldots,x_m$ and the traces of their products $\text{\rm tr}(x_{i_1}\cdots x_{i_k})$, $k\geq 1$. Preliminaries ============= We fix two finite dimensional vector spaces $U$ and $V$, $\dim U=p$, $\dim V=q$, and representations of the infinite cyclic group $G=\langle g\rangle$: $$\rho_U:G\to GL(U)=GL_p,\quad \rho_V:G\to GL(V)=GL_q,$$ where $\rho_U(g)$ and $\rho_V(g)$ are unipotent linear operators. Fixing bases $Y=\{y_1,\ldots,y_p\}$ and $Z=\{z_1,\ldots,z_q\}$ of $U$ and $V$, respectively, we consider the free left $K[Y]$-module $M(Y,Z)$ with basis $Z$. Then $g$ acts diagonally on $M(Y,Z)$ by the rule $$g:\sum_{j=1}^qf_j(y_1,\ldots,y_p)z_j\to \sum_{j=1}^qf_j(g(y_1),\ldots,g(y_p))g(z_j),\quad f_j\in K[Y],$$ where, by definition, $g(y_i)=\rho_U(g)(y_i)$ and $g(z_j)=\rho_V(g)(z_j)$. Let $M(Y,Z)^g$ be the set of fixed points in $M(Y,Z)$ under the action of $g$. Since $\rho_U(g)$ and $\rho_V(g)$ are unipotent operators, the operators $\delta_U=\log\rho_U(g)$ and $\delta_V=\log\rho_V(g)$ are well defined. Denote by $\delta$ the induced derivation of $K[Y]$. We extend $\delta$ to a derivation of $M(Y,Z)$, denoted also by $\delta$, i. e. $\delta$ is the linear operator of $M(Y,Z)$ defined by $$\delta:\sum_{j=1}^qf_j(Y)z_j\to \sum_{j=1}^q\delta(f_j(Y))z_j +\sum_{j=1}^qf_j(Y)\delta(z_j).$$ It is easy to see that $\delta=\log g$ on $M(Y,Z)$ and $M(Y,Z)^g$ coincides with the kernel of $\delta$, i. e. the set of constants $M(Y,Z)^{\delta}$.
Changing the bases of $U$ and $V$, we may assume that $\delta_U$ and $\delta_V$ have the form $$\delta_U=\begin{pmatrix} J_{p_1}&0&\cdots&0&0\\ 0&J_{p_2}&\cdots&0&0\\ \vdots&\vdots&\cdots&\vdots&\vdots\\ 0&0&\cdots&J_{p_{k-1}}&0\\ 0&0&\cdots&0&J_{p_k}\\ \end{pmatrix},\quad\ \delta_V=\begin{pmatrix} J_{q_1}&0&\cdots&0&0\\ 0&J_{q_2}&\cdots&0&0\\ \vdots&\vdots&\cdots&\vdots&\vdots\\ 0&0&\cdots&J_{q_{l-1}}&0\\ 0&0&\cdots&0&J_{q_l}\\ \end{pmatrix},$$ where $J_r$ is the $(r+1)\times(r+1)$ Jordan cell $$\label{Jordan cell} J_r=\begin{pmatrix} 0&1&0&\cdots&0&0\\ 0&0&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\cdots&\vdots&\vdots\\ 0&0&0&\cdots&1&0\\ 0&0&0&\cdots&0&1\\ 0&0&0&\cdots&0&0\\ \end{pmatrix}$$ with zero diagonal. We denote by $W_r$ the irreducible $(r+1)$-dimensional $SL_2$-module. It is isomorphic to the $SL_2$-module of the forms of degree $r$ in two variables $x,y$. This is the unique structure of an $SL_2$-module on the $(r+1)$-dimensional vector space which agrees with the action of $\delta$ (and hence of $g$) as the Jordan cell (\[Jordan cell\]): We can think of $\delta$ as the derivation of $K[x,y]$ defined by $\delta(x)=0$, $\delta(y)=x$. We fix the “canonical” basis of $W_r$ $$\label{module of forms in two variables} u^{(0)}=x^r,u^{(1)}=\frac{x^{r-1}y}{1!},u^{(2)}=\frac{x^{r-2}y^2}{2!},\ldots, u^{(r-1)}=\frac{xy^{r-1}}{(r-1)!},u^{(r)}=\frac{y^r}{r!}.$$ We give $U$ and $V$ the structure of $SL_2$-modules $$\label{U and V are SL2-modules} U=W_{p_1}\oplus\cdots\oplus W_{p_k},\quad V=W_{q_1}\oplus\cdots\oplus W_{q_l},$$ and extend it to $K[Y]$ and $M(Y,Z)$ via the diagonal action of $SL_2$. Again, this agrees with the action of $g$ and $\delta$. Then $K[Y]$ and $M(Y,Z)$ are direct sums of irreducible $SL_2$-modules $U_{ri}\subset K[Y]$ and $W_{rj}\subset M(Y,Z)$ isomorphic to $W_r$, $i,j=1,2,\ldots$, $r=0,1,2,\ldots$, with canonical bases $\{u_{ri}^{(0)},u_{ri}^{(1)},\ldots,u_{ri}^{(r)}\}$ and $\{w_{rj}^{(0)},w_{rj}^{(1)},\ldots,w_{rj}^{(r)}\}$, respectively.
\[lemma 1\] The elements $u\in K[Y]$ and $w\in M(Y,Z)$ belong to $K[Y]^{\delta}$ and $M(Y,Z)^{\delta}$, respectively, if and only if they have the form $$\label{canonical form of constants} u=\sum_{r,i}\alpha_{ri}u_{ri}^{(0)},\quad w=\sum_{r,j}\beta_{rj}w_{rj}^{(0)},\quad \alpha_{ri},\beta_{rj}\in K.$$ Almkvist, Dicks and Formanek [@ADF] translated results of De Concini, Eisenbud and Procesi [@DEP] into the language of $g$-invariants and proved that, in our notation, $g(u)=u$ and $g(w)=w$ if and only if $u$ and $w$ have the form (\[canonical form of constants\]). Since $g(u)=u$ if and only if $\delta(u)=0$, and similarly for $w$, we obtain that (\[canonical form of constants\]) holds if and only if $u$ and $w$ are $\delta$-constants. (The same fact is contained in the paper by Tyc [@T] but in the language of representations of the Lie algebra $sl_2(K)$.) In each component $W_r$ of $U$ in (\[U and V are SL2-modules\]), using the basis (\[module of forms in two variables\]), we define a linear operator $d$ by $$d(u^{(k)})=(k+1)(r-k)u^{(k+1)},\quad k=0,1,2,\ldots,r,$$ i. e., up to multiplicative constants, $d$ acts by $u^{(0)}\to u^{(1)}\to u^{(2)}\to\cdots\to u^{(r)}\to 0$. We extend $d$ to a derivation of $K[Y]$. As in the case of $\delta$, again we can think of $d$ as the derivation of $K[x,y]$ defined by $d(x)=y$, $d(y)=0$.
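These two derivations are easy to check computationally. The following Python sketch (illustrative only, not part of the proof; monomials $x^ay^b$ are stored as exponent pairs) verifies on $W_4$ that $\delta$ sends $u^{(k)}\to u^{(k-1)}$, i.e. acts as the Jordan cell, while $d(u^{(k)})=(k+1)(r-k)u^{(k+1)}$, as claimed.

```python
from fractions import Fraction
from math import factorial

def derive(f, dx, dy):
    """Apply the derivation sending x -> dx, y -> dy to the polynomial f.

    Polynomials are dicts mapping exponent pairs (a, b), i.e. x^a y^b,
    to rational coefficients; the Leibniz rule is applied monomial-wise.
    """
    out = {}
    for (a, b), c in f.items():
        if a:  # contribution a * x^(a-1) y^b * dx
            for (p, q), e in dx.items():
                key = (a - 1 + p, b + q)
                out[key] = out.get(key, Fraction(0)) + c * a * e
        if b:  # contribution b * x^a y^(b-1) * dy
            for (p, q), e in dy.items():
                key = (a + p, b - 1 + q)
                out[key] = out.get(key, Fraction(0)) + c * b * e
    return {k: v for k, v in out.items() if v}

X = {(1, 0): Fraction(1)}
Y = {(0, 1): Fraction(1)}
ZERO = {}

def u(r, k):  # canonical basis vector u^(k) = x^(r-k) y^k / k!
    return {(r - k, k): Fraction(1, factorial(k))}

delta = lambda f: derive(f, ZERO, X)  # delta(x) = 0, delta(y) = x
d = lambda f: derive(f, Y, ZERO)      # d(x) = y,     d(y) = 0

r = 4
assert delta(u(r, 0)) == {}           # u^(0) is a constant of delta
for k in range(1, r + 1):
    assert delta(u(r, k)) == u(r, k - 1)
assert d(u(r, r)) == {}
for k in range(r):
    expected = {key: (k + 1) * (r - k) * v for key, v in u(r, k + 1).items()}
    assert d(u(r, k)) == expected
```

The factorial normalization in the basis is exactly what makes $\delta$ act with coefficient $1$, while the multiplicative constants $(k+1)(r-k)$ reappear in the action of $d$.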
\[lemma 2\] [(i)]{} The derivation $d$ acts on each irreducible component $U_{ri}$ of $K[Y]$ by $$d(u_{ri}^{(k)})=(k+1)(r-k)u_{ri}^{(k+1)},\quad k=0,1,\ldots,r.$$ [(ii)]{} If $f=f(Y)\in K[Y]$, then $\delta^{s+1}(f)=0$ if and only if $f$ belongs to the vector space $$\label{constants of level s} K[Y]_s=\sum_{t=0}^sd^t(K[Y]^{\delta}).$$ Part (i) follows from the fact that the $SL_2$-action on $U$ is the only action which agrees with the action of $\delta$ as well as with the action of $d$ (as the derivations of $K[x,y]$ defined by $\delta(x)=0$, $\delta(y)=x$ and $d(x)=y$, $d(y)=0$, respectively), and the extension of this $SL_2$-action to $K[Y]$ also agrees with the action of $\delta$ and $d$ on $K[Y]$. \(ii) Since the irreducible $SL_2$-submodules of $K[Y]$ are $\delta$- and $d$-invariant, it is sufficient to prove the statement only for $f\in W_r\subset K[Y]$. Considering the basis (\[module of forms in two variables\]) of $W_r$, we have that $\delta^{s+1}(f)=0$ if and only if $$f=\alpha_0u^{(0)}+\alpha_1u^{(1)}+\cdots+\alpha_su^{(s)},\quad \alpha_k\in K.$$ Since $W_r^{\delta}=Ku^{(0)}$ and $d^t(u^{(0)})\in Ku^{(t)}$, we obtain that $W_r\cap K[Y]_s$ is spanned by $u^{(0)},u^{(1)},\ldots,u^{(s)}$ and coincides with the kernel of $\delta^{s+1}$ in $W_r$. In principle, the proof of the following proposition can be obtained by following the main steps of Tyc's proof [@T] of the Weitzenböck theorem. The proof of the three main lemmas in [@T] uses only the fact that the ideals of the algebra $K[Y]$ are finitely generated $K[Y]$-modules. Instead, we shall give a direct proof, using the idea of the proof of Lemma 3 in [@T]. \[proposition 3\] The set of constants $M(Y,Z)^{\delta}$ is a finitely generated $K[Y]^{\delta}$-module. Let $N_i$ be the $K[Y]$-submodule of $M(Y,Z)$ generated by the basis elements $z_j$ of $V=Kz_1\oplus\cdots\oplus Kz_q$ corresponding to the $i$-th Jordan cell $J_{q_i}$.
Since $M(Y,Z)=N_1\oplus\cdots\oplus N_l$ and $M(Y,Z)^{\delta}=N_1^{\delta}\oplus\cdots\oplus N_l^{\delta}$, it is sufficient to show that each $N_i^{\delta}$ is a finitely generated $K[Y]^{\delta}$-module. Hence, without loss of generality we may assume that $q=r+1$ and $\delta(z_0)=0$, $\delta(z_j)=z_{j-1}$, $j=1,2,\ldots,r$. Let $$\label{expression of constants} f=f_0(Y)z_0+f_1(Y)z_1+\cdots+f_r(Y)z_r\in M(Y,Z)^{\delta},\quad f_j(Y)\in K[Y].$$ Then $$\delta(f)=(\delta(f_0)+f_1)z_0+(\delta(f_1)+f_2)z_1+\cdots +(\delta(f_{r-1})+f_r)z_{r-1}+\delta(f_r)z_r$$ and this implies that $$\delta(f_j)=-f_{j+1},\quad j=0,1,\ldots,r-1,$$ $$\delta(f_r)=\delta^2(f_{r-1})=\cdots=\delta^r(f_1)=\delta^{r+1}(f_0)=0.$$ Hence, fixing any element $f_0(Y)$ from $K[Y]_r$, we determine all the coefficients $f_1,\ldots,f_r$ from (\[expression of constants\]). By Lemma \[lemma 2\] it is sufficient to show that the $K[Y]^{\delta}$-module generated by $d^t(K[Y]^{\delta})$ is finitely generated. By the theorem of Weitzenböck, $K[Y]^{\delta}$ is a finitely generated algebra. Let $\{h_1,\ldots,h_n\}$ be a generating set of $K[Y]^{\delta}$. Then $d^t(K[Y]^{\delta})$ is spanned by the elements $d^t(h_1^{a_1}\cdots h_n^{a_n})$. Since $d$ is a derivation, $d^t(K[Y]^{\delta})$ is spanned by elements of the form $$h_1^{c_1}\cdots h_n^{c_n}\left(\prod d^{t_{i_1}}(h_1)\right) \cdots \left(\prod d^{t_{i_n}}(h_n)\right),\quad \sum t_{i_1}+\cdots+\sum t_{i_n}=t.$$ There is only a finite number of possibilities for $t_{i_1},\ldots,t_{i_n}$, and we obtain that $d^t(K[Y]^{\delta})$ generates a finitely generated $K[Y]^{\delta}$-module. \[corollary of proposition 3\] Let, in the notation of this section, $U$ and $V$ be polynomial $GL_m$-modules, let $g\in GL_m$ be a unipotent matrix and let $M(Y,Z)$ be equipped with the diagonal action of $GL_m$. Then, for every $GL_m$-submodule $M_0$ of $M(Y,Z)$, the natural homomorphism $M(Y,Z)\to M(Y,Z)/M_0$ induces an epimorphism $M(Y,Z)^g\to (M(Y,Z)/M_0)^g$, i. e. 
we can lift the $g$-invariants of $M(Y,Z)/M_0$ to $g$-invariants of $M(Y,Z)$. The lifting of the constants was established in [@DG] in the case of relatively free algebras and the same proof works in our situation. Since $U$ and $V$ are polynomial $GL_m$-modules, the module $M(Y,Z)$ is completely reducible. Hence $M(Y,Z)=M_0\oplus M'$ for some $GL_m$-submodule $M'$ of $M(Y,Z)$ and $M(Y,Z)/M_0\cong M'$. If $w+M_0=\bar w\in (M(Y,Z)/M_0)^g$, then $w=w_0+w'$, $w_0\in M_0$, $w'\in M'$, and $g(w)=g(w_0)+g(w')$. Since $g(\bar w)=\bar w$, we obtain that $g(w')=w'$ and the $g$-invariant $\bar w$ is lifted to the $g$-invariant $w'$. The proof of Proposition \[proposition 3\] gives also an algorithm to find the generators of $M(Y,Z)^{\delta}$ in terms of the generators of $K[Y]^{\delta}$. The latter problem is solved by van den Essen [@E] and his algorithm uses Gröbner bases techniques. The Main Results ================ The following theorem is the main result of our paper. For $m=2$ it was established in [@DG] using the description of the $g$-invariants of $K\langle x,y\rangle$. \[main theorem\] For any variety $\mathfrak V$ of associative algebras which does not contain the algebra $UT_2(K)$ of $2\times 2$ upper triangular matrices, the algebra of invariants $F_m({\mathfrak V})^g$ of any unipotent $g\in GL_m$ is finitely generated. We shall work with the linear locally nilpotent derivation $\delta=\log g$ instead of $g$. It is well known that any variety $\mathfrak V$ which does not contain $UT_2(K)$ satisfies some Engel identity $[x_2,x_1,\ldots,x_1]=0$. By the theorem of Zelmanov [@Z] any Lie algebra over a field of characteristic zero satisfying the Engel identity is nilpotent. Hence we may assume that $\mathfrak V$ satisfies the polynomial identity of Lie nilpotency $[x_1,\ldots,x_{c+1}]=0$. (Actually, this follows from much easier and much earlier results on PI-algebras.)
Let us consider the vector space $B_m({\mathfrak V})$ of so-called proper polynomials in $F_m({\mathfrak V})$. It is spanned by all products $[x_{i_1},\ldots,x_{i_k}]\cdots[x_{j_1},\ldots,x_{j_l}]$ of commutators of length $\geq 2$. One of the main results of the paper by the author [@D] states that if $\{f_1,f_2,\ldots\}$ is a basis of $B_m({\mathfrak V})$, then $F_m({\mathfrak V})$ has a basis $$\{x_1^{p_1}\cdots x_m^{p_m}f_i\mid p_j\geq 0, i=1,2,\ldots\}.$$ Let $B_m^{(k)}({\mathfrak V})$ be the homogeneous component of degree $k$ of $B_m({\mathfrak V})$. It follows from the proof of Theorem 5.5 in [@D] that for any Lie nilpotent variety $\mathfrak V$, and for a fixed positive integer $m$, the vector space $B_m({\mathfrak V})$ is finite dimensional. Hence $B_m^{(n)}({\mathfrak V})=0$ for $n$ sufficiently large, e. g. for $n>n_0$. Let $I_k$ be the ideal of $F_m({\mathfrak V})$ generated by $B_m^{(k+1)}({\mathfrak V})+B_m^{(k+2)}({\mathfrak V})+\cdots+B_m^{(n_0)}({\mathfrak V})$. Since $wx_i=x_iw+[w,x_i]$, $w\in F_m({\mathfrak V})$, applying Lemma 2.4 of [@D], we obtain that $I_k/I_{k+1}$ is a free left $K[X]$-module with any basis of the vector space $B_m^{(k)}({\mathfrak V})$ as a set of free generators. Since $\delta$ is a nilpotent linear operator of $U=KX=Kx_1\oplus \cdots\oplus Kx_m$, it acts also as a nilpotent linear operator of $V_k=B_m^{(k)}({\mathfrak V})$. Proposition \[proposition 3\] gives that $(I_k/I_{k+1})^{\delta}$ is a finitely generated $K[X]^{\delta}$-module. Clearly, $B_m^{(0)}({\mathfrak V})=K$, $B_m^{(1)}({\mathfrak V})=0$, $B_m^{(2)}({\mathfrak V})$ is spanned by the commutators $[x_{i_1},x_{i_2}]$, etc. Hence $I_0/I_1\cong K[X]$ and by the theorem of Weitzenböck $(I_0/I_1)^{\delta}$ is a finitely generated algebra.
We fix a system of generators $\bar f_1,\ldots,\bar f_a$ of the algebra $(I_0/I_1)^{\delta}$ and finite sets of generators $\{\bar f_{k1},\ldots,\bar f_{kb_k}\}$ of the $K[X]^{\delta}$-modules $(I_k/I_{k+1})^{\delta}$, $k=2,3,\ldots,n_0$. The vector space $U$ is a $GL_m$-module and its $GL_m$-action makes $V_k$ a polynomial $GL_m$-module. We apply Corollary \[corollary of proposition 3\] and lift all $\bar f_i$ and $\bar f_{kj}$ to some $\delta$-constants $f_i,f_{kj}\in F_m({\mathfrak V})^{\delta}$. The algebra $S$ generated by $f_1,\ldots,f_a$ maps onto $(I_0/I_1)^{\delta}$ and hence $(I_k/I_{k+1})^{\delta}$ is a left $S$-module generated by $\bar f_{k1},\ldots,\bar f_{kb_k}$. The condition $I_{n_0+1}=0$ together with Corollary \[corollary of proposition 3\] gives that the $f_i$ and $f_{kj}$ generate $F_m({\mathfrak V})^{\delta}$. Together with the results of [@DG] Theorem \[main theorem\] gives immediately: For $m\geq 2$ and for any fixed unipotent operator $g\in GL_m$, $g\not=1$, the algebra of $g$-invariants $F_m({\mathfrak V})^g$ is finitely generated if and only if $\mathfrak V$ does not contain the algebra $UT_2(K)$. We refer to the books [@F2] and [@DF] for a background on the theory of matrix invariants. We fix an integer $n>1$ and consider the generic $n\times n$ matrices $x_1,\ldots,x_m$. Let $C_{nm}$ be the pure trace algebra, i. e. the algebra generated by the traces of products $\text{tr}(x_{i_1}\cdots x_{i_k})$, $k=1,2,\ldots$, and let $T_{nm}$ be the mixed trace algebra generated by $x_1,\ldots,x_m$ and $C_{nm}$. It is well known that $C_{nm}$ is finitely generated. (The Nagata-Higman theorem states that the nil polynomial identity $x^n=0$ implies the identity of nilpotency $x_1\cdots x_d=0$. If $d(n)$ is the minimal $d$ with this property, one may take as generators $\text{tr}(x_{i_1}\cdots x_{i_k})$ with $k\leq d(n)$.) Also, $T_{nm}$ is a finitely generated $C_{nm}$-module. 
For any unipotent operator $g\in GL_m$, the algebra $T_{nm}^g$ is finitely generated. Consider the vector space $U$ of all formal traces $y_i=\text{tr}(x_{i_1}\cdots x_{i_k})$, $i_j=1,\ldots,m$, $1\leq k\leq d(n)$. Let $Y$ be the set of all $y_i$. It has a natural structure of a $GL_m$-module and hence $\delta=\log g$ acts as a nilpotent linear operator on $U$. Also, consider a finite system of generators $f_1,\ldots,f_a$ of the $C_{nm}$-module $T_{nm}$. We may assume that the $f_j$ do not depend on the traces and fix some elements $h_j\in K\langle X\rangle$ such that $h_j\to f_j$ under the natural homomorphism $K\langle X\rangle\to T_{nm}$ extending the mapping $x_i\to x_i$, $i=1,\ldots,m$. Let $V$ be the $GL_m$-submodule of $K\langle X\rangle$ generated by the $h_j$. Again, $\delta$ acts as a nilpotent linear operator on $V$. We fix a basis $Z=\{z_1,\ldots,z_q\}$ of $V$. Consider the free $K[Y]$-module $M(Y,Z)$ with basis $Z$. Proposition \[proposition 3\] gives that $M(Y,Z)^{\delta}$ is a finitely generated $K[Y]^{\delta}$-module and the theorem of Weitzenböck implies that $K[Y]^{\delta}$ is a finitely generated algebra. Since the algebra $C_{nm}$ and the $C_{nm}$-module $T_{nm}$ are homomorphic images of the algebra $K[Y]$ and the $K[Y]$-module $M(Y,Z)$, Corollary \[corollary of proposition 3\] gives that $K[Y]^{\delta}$ and $M(Y,Z)^{\delta}$ map onto $C_{nm}^{\delta}$ and $T_{nm}^{\delta}$, respectively. Hence $T_{nm}^{\delta}$ is a finitely generated module over the finitely generated algebra $C_{nm}^{\delta}$ and, therefore, the algebra $T_{nm}^{\delta}$ is finitely generated. [99]{} G. Almkvist, W. Dicks, E. Formanek, Hilbert series of fixed free algebras and noncommutative classical invariant theory, J. Algebra [**93**]{} (1985), 189-214. C. De Concini, D. Eisenbud, C. Procesi, Young diagrams and determinantal varieties, Invent. Math. [**56**]{} (1980), 129-165. M. Domokos, V. Drensky, A Hilbert-Nagata theorem in noncommutative invariant theory, Trans. Amer.
Math. Soc. [**350**]{} (1998), 2797-2811. V. Drensky, Codimensions of T-ideals and Hilbert series of relatively free algebras, J. Algebra [**91**]{} (1984), 1-17. V. Drensky, E. Formanek, Polynomial Identity Rings, Advanced Courses in Mathematics, CRM Barcelona, Birkhäuser, Basel-Boston, 2004 (to appear). V. Drensky, C. K. Gupta, Constants of Weitzenböck derivations and invariants of unipotent transformations acting on relatively free algebras, preprint. A. van den Essen, An algorithm to compute the invariant ring of a $G_a$-action on an affine variety, J. Symbolic Computation [**16**]{} (1993), 551-555. E. Formanek, Noncommutative invariant theory, Contemp. Math. [**43**]{} (1985), 87-119. E. Formanek, The Polynomial Identities and Invariants of $n \times n$ Matrices, CBMS Regional Conf. Series in Math. [**78**]{}, Published for the Confer. Board of the Math. Sci. Washington DC, AMS, Providence RI, 1991. A. Nowicki, Polynomial Derivations and Their Rings of Constants, Uniwersytet Mikolaja Kopernika, Torun, 1994. N. Onoda, Linear actions of $G_a$ on polynomial rings, Proceedings of the 25th Symposium on Ring Theory (Matsumoto, 1992), 11-16, Okayama Univ., Okayama, 1992. C. S. Seshadri, On a theorem of Weitzenböck in invariant theory, J. Math. Kyoto Univ. [**1**]{} (1962), 403-409. W. Specht, Gesetze in Ringen. I, Math. Z. [**52**]{} (1950), 557-589. A. Tyc, An elementary proof of the Weitzenböck theorem, Colloq. Math. [**78**]{} (1998), 123-132. N. Vonessen, Actions of Linearly Reductive Groups on Affine PI-Algebras, Mem. Amer. Math. Soc. [**414**]{}, 1989. R. Weitzenböck, Über die Invarianten von linearen Gruppen, Acta Math. [**58**]{} (1932), 231-293. E.I. Zelmanov, On Engel Lie algebras (Russian), Sibirsk. Mat. Zh. [**29**]{} (1988), No. 5, 112-117. Translation: Sib. Math. J. [**29**]{} (1988), 777-781. [^1]: Partially supported by Grant MM-1106/2001 of the Bulgarian National Science Fund.
--- abstract: 'We show that for all $d\in \{3,\ldots,n-1\}$ the size of the largest component of a random $d$-regular graph on $n$ vertices around the percolation threshold $p=1/(d-1)$ is $\Theta(n^{2/3})$, with high probability. This extends known results for fixed $d\geq 3$ and for $d=n-1$, confirming a prediction of Nachmias and Peres on a question of Benjamini. As a corollary, for the largest component of the percolated random $d$-regular graph, we also determine the diameter and the mixing time of the lazy random walk. In contrast to previous approaches, our proof is based on a simple application of the switching method.' author: - Felix Joos - Guillem Perarnau bibliography: - 'crit\_ref.bib' title: Critical percolation on random regular graphs --- Introduction ============ For every $d\in \{3,\ldots,n-1\}$, let ${\mathcal{G}}_{n,d}$ be the set of all simple and vertex-labelled $d$-regular graphs on $n$ vertices and let $G_{n,d}$ be a graph chosen uniformly at random from ${\mathcal{G}}_{n,d}$. For $p\in[0,1]$, let $G_{n,d,p}$ be a graph obtained from $G_{n,d}$ by retaining each edge independently with probability $p$. The goal of this paper is to study the order of the largest component of $G_{n,d,p}$, denoted by $L_1(G_{n,d,p})$, in terms of $n,d$ and $p$. Most of the literature in the area focuses either on fixed $d\geq 3$ or on $d=n-1$. Goerdt [@goerdt2001giant] showed the existence of a critical probability, $p_{{crit}}:=1/(d-1)$, such that for every fixed $d\geq 3$ and every $\epsilon>0$ the following holds with probability $1-o(1)$: if $p\leq (1-\epsilon)p_{{crit}}$, then $L_1(G_{n,d,p})=O(\log{n})$, while if $p\geq (1+\epsilon)p_{{crit}}$, then $L_1(G_{n,d,p})=\Theta(n)$. Similar results were also obtained in a more general setting by Alon, Benjamini and Stacey [@alon2004percolation]. For $d=n-1$, the random graph $G_{n,d,p}$ corresponds to the classic Erdős-Rényi random graph $G_{n,p}$. 
In their seminal paper [@erd6s1960evolution], Erdős and Rényi proved that for every $\epsilon>0$, the following holds with probability $1-o(1)$: if $p\leq (1-\epsilon)/n$, then the largest component of $G_{n,p}$ has order $O(\log n)$, if $p= 1/n$ (critical probability), then it has order $\Theta(n^{2/3})$, while if $p\geq (1+ \epsilon)/n$, then it has linear order. Both for fixed $d\geq 3$ and for $d=n-1$, the behaviour around the critical probability has attracted a lot of interest. It is well established that the critical window in $G_{n,p}$ around $p=1/n$ is of order $n^{-1/3}$ (see e.g. [@nachmias2010critical2]). More precise estimates can be found in [@LPW94]. Benjamini posed the problem of determining the width of the critical window in $G_{n,d,p}$ around $p_{{crit}}=1/(d-1)$ (see [@nachmias2010critical; @pittel2008edge]). Nachmias and Peres [@nachmias2010critical] and Pittel [@pittel2008edge], independently showed that the critical window exhibits mean-field behaviour for fixed $d\geq 3$, namely, the following holds with probability $1-o(1)$: for every fixed $\lambda\in{\mathbb{R}}$, if $p=\frac{1+\lambda n^{-1/3}}{d-1}$, then $L_1(G_{n,d,p})=\Theta(n^{2/3})$. See also Riordan [@riordan2012phase] for more precise results on $L_1(G_{n,d,p})$ in the critical window. The case when $d$ is an arbitrary function of $n$ is much less understood. It follows from existing results in the literature[^1] that for every $d\in \{3,\ldots,n-1\}$, the critical probability for the existence of a linear order component in $G_{n,d,p}$ is $1/(d-1)$. Results inside the critical window for given $d$-regular graphs have also been obtained in the context of transitive graphs under the finite triangle condition [@borgs2005random] or under certain expansion conditions [@nachmias2009mean]. 
Finally, similar results have been obtained for irregular degree sequences whenever the average degree is bounded by a constant [@bollobas2015old; @fountoulakis2007percolation; @fountoulakis2016percolation; @janson2008percolation]. Since both the sparse regime (fixed $d\geq 3$) and the densest one ($d=n-1$) exhibit similar properties, Nachmias and Peres [@nachmias2010critical] suggested that the mean-field behaviour extends to every $d\in \{3,\ldots,n-1\}$. In this paper we confirm this prediction in the critical window and thus answer the question posed by Benjamini for all $d\in \{3,\ldots,n-1\}$. \[thm:main\] Suppose $\lambda\in \mathbb{R}$ and $d,n\in {\mathbb{N}}$ such that $3\leq d \leq n-1$ and $n$ is sufficiently large. Let $p=\frac{1+\lambda n^{-1/3}}{d-1}$. Then for every sufficiently large $A=A(\lambda)$, we have $${\mathbb{P}}[L_1(G_{n,d,p}) \notin [A^{-1}n^{2/3}, An^{2/3}]]\leq 20A^{-1/2}\;.$$ The upper bound in Theorem \[thm:main\] directly follows from the upper bound for $d$-regular graphs in Proposition 1 in [@nachmias2010critical]. The proof of the lower bound is more intricate and we devote the rest of the paper to it. Most of the previous work on the component structure of $G_{n,d,p}$ uses the configuration model introduced by Bollobás in [@bollobas1980probabilistic]. The configuration model, denoted by $G_{n,d}^*$, is a model of random $d$-regular multigraphs on $n$ vertices. Conditional on $G_{n,d}^*$ being simple, one obtains the uniform distribution on ${\mathcal{G}}_{n,d}$. It is well-known (see for example [@wormald1999models]) that $$\begin{aligned} \label{eq:simple} {\mathbb{P}}[G_{n,d}^* \text{ simple}] = e^{-\Omega({d^2})}\;.\end{aligned}$$ While ${\mathbb{P}}[G_{n,d}^* \text{ simple}]$ is constant for fixed $d\geq 3$, it quickly tends to $0$ if $d$ grows with $n$, and new ideas are needed to study $G_{n,d}$.
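The model itself is easy to experiment with. The sketch below (illustrative Python, not part of the proof; the sample is a multigraph, i.e. $G^*_{n,d}$ rather than $G_{n,d}$) draws a random pairing from the configuration model, percolates each edge with probability $p_{crit}=1/(d-1)$, and reports the order of the largest component, which at criticality is typically of order $n^{2/3}$ ($30000^{2/3}\approx 965$).

```python
import random
from collections import Counter

def config_model(n, d, rng):
    # Random perfect matching on the n*d half-edges (n*d must be even);
    # this may create loops and multiple edges, as in G*_{n,d}.
    stubs = [v for v in range(n) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

def largest_component(n, edges):
    # Union-find with path halving; returns L_1, the largest component order.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return max(Counter(find(v) for v in range(n)).values())

rng = random.Random(0)
n, d = 30000, 4
p = 1 / (d - 1)  # critical probability p_crit
percolated = [e for e in config_model(n, d, rng) if rng.random() < p]
L1 = largest_component(n, percolated)
assert 1 <= L1 <= n
```

Repeating the experiment for several values of $\lambda$ in $p=\frac{1+\lambda n^{-1/3}}{d-1}$ illustrates the transition through the critical window numerically.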
A standard tool to estimate probabilities for $G_{n,d}$ when $d$ grows with $n$ is the switching method, introduced by McKay in [@mckay1985asymptotics]. For instance, this method has been used to estimate (\[eq:simple\]) for $d=o(\sqrt{n})$ [@mckay1991asymptotic] or to determine several combinatorial properties of $G_{n,d}$ when $d$ grows with $n$ [@krivelevich2001random]. The proof of the lower bound in Theorem \[thm:main\] is based on the analysis of an exploration process in $G_{n,d,p}$ using the switching method. The central quantity that we track through the process is the number of edges between the explored and unexplored parts of the graph, denoted by $X_t$. Our proof relies on sharp estimations of the first and second moments of $X_t$. This approach is inspired by recent developments of the switching method for the study of the component structure of random graphs with a given degree sequence [@fountoulakis2016percolation; @joos2016how]. We take this opportunity to illustrate the use of our method with a simple proof that makes no assumptions on $d$. **The critical window.** Theorem \[thm:main\] shows that the critical window has width $\Omega(n^{-1/3})$. Proposition 1 in [@nachmias2010critical] implies that, as $\lambda\to -\infty$, the typical order of the largest component is $o(n^{2/3})$. Following analogous ideas as the ones used in the proof of Theorem \[thm:main\], one obtains that, as $\lambda\to \infty$, the typical order of the largest component is $\omega(n^{2/3})$. More precisely, there exist constants $c,C>0$ such that for every $3\leq d\leq n-1$ and $\lambda >0$, if $p=\frac{1+\lambda n^{-1/3}}{d-1}$, then $${\mathbb{P}}\left[L_1(G_{n,d,p}) \leq c\cdot \lambda n^{2/3}\right]\leq C\lambda^{-1}\;.$$ The proof of this statement is simpler than the proof of our main theorem, since the assumption $\lambda>0$ implies that $X_t$ has positive drift.
In particular, the first part of the exploration process can be analysed using a first moment argument only and for the entire process it suffices to control the variance of $X_t$ from above. It follows that the width of the critical window is $\Theta(n^{-1/3})$. In its current form, our method does not give sharp estimates for $L_1(G_{n,d,p})$ in the barely subcritical and barely supercritical regimes. However, we believe that similar estimates as the ones in Lemma \[lem:exp\] hold in general and may be used to extend the results of Nachmias and Peres in [@nachmias2010critical] to all $d\in \{3,\ldots,n-1\}$. **Diameter and Mixing Time.** We present a consequence of Theorem \[thm:main\]. For a component ${\mathcal{C}}$, let ${\text{\rm diam}}({\mathcal{C}})$ denote its diameter and let $T_{\text{\rm mix}}({\mathcal{C}})$ denote the mixing time of the lazy random walk on ${\mathcal{C}}$. Theorem 1.2 in [@nachmias2008critical] implies the following corollary. \[cor:diam\] Suppose $\lambda\in {\mathbb{R}}$ and $d,n\in {\mathbb{N}}$ such that $3\leq d \leq n-1$ and $n$ is sufficiently large. Let $p=\frac{1+\lambda n^{-1/3}}{d-1}$. Let $\mathcal{C}$ be the largest component of $G_{n,d,p}$. Then, for every $\epsilon>0$, there exists $A= A(\lambda,\epsilon)$ such that $${\mathbb{P}}[{\text{\rm diam}}(\mathcal{C}) \notin [A^{-1}n^{1/3}, An^{1/3}]]< \epsilon\;$$ and $${\mathbb{P}}[T_{{\text{\rm mix}}}(\mathcal{C}) \notin [A^{-1}n, An]]< \epsilon\;.$$ **Organisation of the paper.** The paper is organized as follows. In Section \[sec:explo\], we describe our exploration process of $G_{n,d,p}$ and introduce different quantities we will track during the process. In Section \[sec:swi\], we present our main combinatorial tool (switching method) and prove two technical lemmas. In Section \[sec:analy\], we use these lemmas to study a single step of the exploration process.
Finally, in Section \[sec:proof\], we conclude with the proof of the lower bound in Theorem \[thm:main\]. The exploration process {#sec:explo} ======================= Before describing the exploration process, we briefly introduce some notation. For a graph $G$, a subset of vertices $X$ of $G$, and a vertex $u$ of $G$, we write $d_G(u)$ for the number of neighbours of $u$ in $G$ and $d_{G,X}(u)$ for the number of neighbours of $u$ in $G$ that belong to $X$. We also write $\Delta(G)$ for the maximum degree of $G$. Finally, for $p\in [0,1]$, we write $G_p$ for the graph where each edge in $G$ is independently retained with probability $p$. We will use an exploration process to reveal the component structure of $G_{n,d,p}$. Let us denote the vertex set by $V$, which we equip with a linear order (from now on $V$ is always a vertex set of size $n$). For technical reasons, we perform our exploration process not on $G_{n,d,p}$, but on what we call an input. An *input* is a tuple $(G,{\mathfrak{S}})$, where $G\in \mathcal{G}_{n,d}$ and ${\mathfrak{S}}=\{\sigma_v\}_{v\in V}$ is a collection of $n$ permutations of length $d$. For each vertex of $G$, arbitrarily label the edges incident to it with distinct elements from $\{1,\ldots, d\}$. Thus every edge receives two labels. In fact, we may think about this as a labelling of the semi-edges of $G$. Let ${\mathcal{I}}$ be the set of all inputs $(G,{\mathfrak{S}})$ where $G\in {\mathcal{G}}_{n,d}$ and ${\mathfrak{S}}$ is a collection of $n$ permutations of length $d$. Observe that every graph in $G\in {\mathcal{G}}_{n,d}$ gives rise to exactly $(d!)^n$ inputs. Thus, choosing an input uniformly at random from ${\mathcal{I}}$ and ignoring the edge-labels is equivalent to choosing $G_{n,d}$. Let ${\mathfrak{S}}_{n,d}$ be a collection of $n$ permutations of length $d$ each chosen independently and uniformly at random. 
Hence, if an input is chosen uniformly at random from ${\mathcal{I}}$, then this input is distributed as $(G_{n,d},{\mathfrak{S}}_{n,d})$. Next, we describe our exploration process on an input $(G,{\mathfrak{S}})$. First, for every $uv\in E(G)$, we denote by $I(uv)$ the indicator random variable that is $1$ if $uv$ belongs to $G_p$ (it percolates) and $0$ otherwise. If $I(uv)$ is revealed, we say that the edge $uv$ has been exposed. For each integer $t\geq 0$, the set $S_t$ consists of the vertices explored up to time $t$ (with $S_0=\emptyset$); the bipartite graph $F_t$, with bipartition $(S_t,V{\setminus}S_t)$, consists of all edges in $G$ between $S_t$ and $V {\setminus}S_t$ that have been exposed and have failed to percolate; and the graph $H_t$, with vertex set $S_t$, consists of all edges in $G$ within $S_t$, that is, $H_{t}:=G[S_{t}]$. Let ${\mathcal{H}}_t$ be the history of all random choices we make until time $t$ (which we will treat as an event). We now describe how to obtain ${\mathcal{H}}_{t+1}$, given ${\mathcal{H}}_{t}$. Suppose there exists at least one vertex $u\in S_t$ such that $d_{H_t}(u)+d_{F_t}(u)<d$. Among all such vertices $u$, let $v_{t+1}$ be the vertex which comes first in the linear order of $V$. Let $w_{t+1}$ be the vertex $w\in V {\setminus}{S_t}$ with $v_{t+1}w\in E(G){\setminus}E(F_t)$ that minimizes $\sigma_{v_{t+1}}(\ell(w))$, where $\ell(w)$ is the label of the semi-edge incident to $v_{t+1}$ that corresponds to $v_{t+1}w$. Thereafter, we expose $v_{t+1}w_{t+1}$. If $I(v_{t+1}w_{t+1})=0$, then we set $S_{t+1}:=S_t$, [$Y_{t+1}:=0$, $Z_{t+1}:=0$]{} and we let $F_{t+1}$ be the graph obtained from $F_t$ by adding $v_{t+1}w_{t+1}$. 
If $I(v_{t+1}w_{t+1})=1$, then we set $$\begin{aligned} S_{t+1}:=S_t\cup \{w_{t+1}\},\enspace Y_{t+1}:=d_{F_t}(w_{t+1}) ,\enspace Z_{t+1}:=d_{G,S_t}(w_{t+1})-Y_{t+1}-1,\enspace \end{aligned}$$ and we let $F_{t+1}$ be the graph obtained from $F_t$ by deleting all edges incident to $w_{t+1}$ and moving $w_{t+1}$ to the other side of the bipartition. [Since $H_{t+1}=G[S_{t+1}]$, we also reveal all the edges between $w_{t+1}$ and $S_t$.]{} Observe that $Z_{t+1}$ counts the number of neighbours of $w_{t+1}$ in $S_t{\setminus}\{v_{t+1}\}$ whose corresponding edge has not yet been exposed. If $d_{H_t}(u)+d_{F_t}(u)=d$ for all $u\in S_t$, that is, every edge incident to a vertex in $S_t$ has been exposed, then we pick a vertex $x\in V{\setminus}S_t$ that minimises $d_{F_t}(x)$ and set $w_{t+1}:=x$, $S_{t+1}:=S_t\cup \{w_{t+1}\}$, [$Y_{t+1}:=d_{F_t}(w_{t+1})$, $Z_{t+1}:=0$]{} and we let $F_{t+1}$ be the graph obtained from $F_t$ by deleting all edges incident to $w_{t+1}$ and by moving $w_{t+1}$ to the other side of the bipartition. Observe that, in any of the above-mentioned cases, $|E(F_{t+1})|\leq |E(F_{t})|+1$ and hence $|E(F_{t})|\leq t$. A crucial parameter of our exploration process is the number of edges between $S_t$ and $V{\setminus}S_t$ which have not yet been exposed: $$\begin{aligned} X_t:=\sum_{u\in S_t}(d -d_{H_t}(u)-d_{F_t}(u))\;.\end{aligned}$$ For the sake of simplicity, we define $\eta_{t+1}:=X_{t+1}-X_t$. 
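For a concrete picture of the process just described, the following self-contained Python sketch (all names are ours, not from the paper) samples a near-uniform $d$-regular graph with the configuration model — loops and parallel edges are simply discarded, so uniformity on ${\mathcal{G}}_{n,d}$ is only approximate and a few vertices may end up with degree below $d$ — replaces the permutations $\sigma_v$ by uniform random choices, and runs the exploration while recording $X_t$.

```python
import random

def config_model(n, d, rng):
    """Near-uniform d-regular graph via the configuration model.
    Loops and parallel edges are discarded (sketch-level simplification)."""
    stubs = [v for v in range(n) for _ in range(d)]  # d semi-edges per vertex
    rng.shuffle(stubs)
    adj = [set() for _ in range(n)]
    for u, v in zip(stubs[::2], stubs[1::2]):        # random perfect matching
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return [sorted(a) for a in adj]

def explore(adj, p, rng):
    """Exploration of G_p.  `budget[u]` plays the role of
    d - d_{H_t}(u) - d_{F_t}(u), so X_t = sum of budget[u] over u in S_t."""
    n = len(adj)
    in_S = [False] * n
    exposed = set()                        # exposed edges, as frozensets
    budget = [0] * n
    S, X_trace = [], []

    def pending(u):                        # neighbours via unexposed edges
        return [w for w in adj[u] if frozenset((u, w)) not in exposed]

    def add_to_S(w):                       # w joins S: reveal its edges into S
        in_S[w] = True
        S.append(w)
        for u in pending(w):
            if in_S[u]:                    # such an edge now belongs to H
                exposed.add(frozenset((u, w)))
                budget[u] -= 1
        budget[w] = len(pending(w))        # remaining edges lead outside S

    while True:
        active = [u for u in S if budget[u] > 0]
        if active:                         # X_t > 0: expose one more edge
            v = min(active)                # first active vertex in the order
            w = rng.choice(pending(v))     # stand-in for sigma_v
            exposed.add(frozenset((v, w)))
            budget[v] -= 1
            if rng.random() < p:           # the edge percolates
                add_to_S(w)
        else:                              # X_t = 0: start a new component
            fresh = [u for u in range(n) if not in_S[u]]
            if not fresh:
                return S, X_trace
            add_to_S(fresh[0])
        X_trace.append(sum(budget[u] for u in S))

rng = random.Random(0)
n, d = 200, 3
adj = config_model(n, d, rng)
S, X_trace = explore(adj, p=1.0 / (d - 1), rng=rng)  # critical density
```

On a run at the critical density $p=1/(d-1)$, the trace of $X_t$ stays small and repeatedly returns to $0$, each return marking the start of a new component; this is the quantity whose drift and variance are controlled below.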
If $X_t>0$, then $$\begin{aligned} \label{eq:change} \eta_{t+1}&= -(1-I(v_{t+1}w_{t+1}))+I(v_{t+1}w_{t+1})(d-2- Y_{t+1} - 2Z_{t+1}) \;,\end{aligned}$$ and if $X_t=0$, then $$\begin{aligned} \label{eq:change2} \eta_{t+1}&= d-Y_{t+1}\;.\end{aligned}$$ [Note that $Y_{t+1}$ and $Z_{t+1}$ are measurable random variables given ${\mathcal{H}}_t$ and thus $\eta_{t+1}$ is a predictable sequence with respect to ${\mathcal{H}}_t$.]{} The switching method and some applications {#sec:swi} ========================================== In this section we explain the switching method and we present two simple applications. In Lemma \[lem:UB\_prob\] we use the switching method to bound from above the probability that two vertices are adjacent. In Lemma \[lem: back edges\] we provide an upper bound on the expectation of the number of neighbours of a vertex in a specified set of vertices. Let $G$ be a graph and let $x_1,x_2,x_3,x_4$ be distinct vertices of $G$. Suppose $x_1x_2,x_3x_4\in E(G)$ and $x_1x_4,x_2x_3\notin E(G)$. A *switching* on the $4$-cycle $x_1x_2x_3x_4$ transforms $G$ into a graph $G'$ by deleting $x_1x_2,x_3x_4$ and adding $x_1x_4,x_2x_3$. Observe that the degree sequence of $G$ is preserved by the switching. In particular, if $G$ is $d$-regular, then so is $G'$. Moreover, the switching operation is reversible: if $G$ can be transformed into $G'$ by a switching, then $G$ can also be obtained from $G'$ by a switching on the same $4$-cycle. Finally, there is a natural way to extend the notion of a switching from graphs to inputs by simply preserving the labels on each semi-edge. Switchings can be used to obtain bounds on the probability that $G_{n,d}$ satisfies a certain property. Suppose ${\mathcal{A}},{\mathcal{B}}$ are disjoint subsets of ${\mathcal{G}}_{n,d}$. 
Suppose that for every graph $G\in {\mathcal{A}}$, there are at least $a$ switchings that transform $G$ into a graph in ${\mathcal{B}}$ and for every graph $G'\in {\mathcal{B}}$, there are at most $b$ switchings that transform $G'$ into a graph in ${\mathcal{A}}$. By double-counting the number of switchings between ${\mathcal{A}}$ and ${\mathcal{B}}$, we obtain $a|{\mathcal{A}}| \leq b |{\mathcal{B}}|$. Thus $a{\mathbb{P}}[{\mathcal{A}}]\leq b{\mathbb{P}}[{\mathcal{B}}]$, where we define ${\mathbb{P}}[{\mathcal{S}}]:=|{\mathcal{S}}|/{|\mathcal{G}_{n,d}|}$ for every ${\mathcal{S}}\subseteq {\mathcal{G}}_{n,d}$. \[lem:UB\_prob\] Suppose $d,n\in {\mathbb{N}}$ such that $3\leq d \leq n/4$ and $S\subseteq V$ such that $|S|\leq n/6$. Let $H$ be a graph with vertex set $S$ and let $F$ be a bipartite graph with vertex partition $(S,V{\setminus}S)$ with $\Delta(F\cup H)\leq d$. Let $u\in S$ and $v\in V{\setminus}S$ such that $uv\notin E(F)$. Then $$\begin{aligned} {\mathbb{P}}[uv\in E(G_{n,d})\mid G_{n,d}[S]=H,\, F\subseteq G_{n,d}]\leq \frac{6(d-d_H(u)-d_F(u))}{n}\;.\end{aligned}$$ Let ${\mathcal{F}}^+$ be the set of graphs $G\in {\mathcal{G}}_{n,d}$ such that $G[S]=H$, $F\subseteq G$ and $uv\in E(G)$, and let ${\mathcal{F}}^-$ be the set of graphs $G\in {\mathcal{G}}_{n,d}$ such that $G[S]=H$, $F\subseteq G$ but $uv\notin E(G)$. We will only perform switchings that involve edges and non-edges that are not contained in $E(H)\cup E(F)$. This ensures that the graph $G'$ obtained from a switching also satisfies $G'[S]=H$ and $F\subseteq G'$. Suppose $G\in {\mathcal{F}}^+$. In order to bound the number of switchings from below it suffices to switch on a cycle $uvxy$ that satisfies $xy\in E(G)$, $uy,vx\notin E(G)$, and $x,y\in V{\setminus}S$. There are at least $dn-2d|S|$ ordered edges $xy$ with both endpoints in $V{\setminus}S$. 
There are at most $d^2$ edges $xy$ such that $x$ is at distance at most $1$ from $v$ and at most $d^2$ edges $xy$ such that $y$ is at distance at most $1$ from $u$. Thus, there are at least $dn-2d|S|-2d^2 \geq dn/6$ switchings that transform $G$ into a graph in ${\mathcal{F}}^-$. Suppose now $G\in {\mathcal{F}}^-$. Then there are clearly at most $d\cdot(d-d_H(u)-d_F(u))$ switchings that transform $G$ into a graph in ${\mathcal{F}}^+$. It follows that $$\begin{aligned} {\mathbb{P}}[uv\in E(G_{n,d}) &\mid G_{n,d}[S]=H,\, F\subseteq G_{n,d}]\\ &\leq \frac{d(d-d_H(u)-d_F(u))}{dn/6}\cdot {\mathbb{P}}[uv\notin E(G_{n,d})\mid G_{n,d}[S]=H,\, F\subseteq G_{n,d}]\\ &\leq \frac{6(d-d_H(u)-d_F(u))}{n}\;. \qedhere\end{aligned}$$ \[lem: back edges\] Suppose $d,n\in {\mathbb{N}}$ such that $3\leq d \leq n/4$ and $S\subseteq V$ such that $|S|\leq n/6$. Let $H$ be a graph with vertex set $S$ and let $F$ be a bipartite graph with vertex partition $(S,V{\setminus}S)$ with $\Delta(F\cup H)\leq d$. Let $v\in V{\setminus}S$. Then $$\begin{aligned} {\mathbb{E}}[d_{G,S}(v)-d_{F}(v) \mid G_{n,d}[S]=H,\; F\subseteq G_{n,d}]\leq 6d|S|/n.\end{aligned}$$ For every $k\geq 0$, let ${\mathcal{F}}_k$ be the set of graphs $G\in {\mathcal{G}}_{n,d}$ such that $G[S]=H$, $F\subseteq G$, and $d_{G,S}(v)-d_{F}(v)=k$. As in Lemma \[lem:UB\_prob\], we will only perform switchings using edges and non-edges that are not contained in $E(H)\cup E(F)$. Consider a graph in ${\mathcal{F}}_k$. There are at most $(d-d_{F}(v))\cdot d|S|\leq d^2|S|$ switchings that lead to a graph in ${\mathcal{F}}_{k+1}$. For every graph in ${\mathcal{F}}_{k+1}$, we can use a switching on a cycle $uvxy$ that satisfies $uv,xy\in E(G){\setminus}E(F)$, $uy,vx\notin E(G)$ and $u\in S$, and $v,x,y\in V{\setminus}S$. There are $k+1$ choices for $uv$ and, for any particular choice of $uv$, there are at least $dn-2d|S|-2d^2\geq dn/6$ choices for the (ordered) edge $xy$. 
Hence, there are at least $(k+1)dn/6$ switchings that lead to a graph in ${\mathcal{F}}_k$. Thus, for every $k\geq 0$, we obtain $$\begin{aligned} \label{eq:bound} {\mathbb{P}}[{\mathcal{F}}_{k+1}] \leq \frac{6d|S|/n}{(k+1)}\cdot {\mathbb{P}}[{\mathcal{F}}_{k}]\;.\end{aligned}$$ Let $X$ be a Poisson distributed random variable with mean $6d|S|/n$. Lemma 3.4 in [@Mrandom2012] together with \[eq:bound\] implies that for every $m\geq 0$ $${\mathbb{P}}[d_{G,S}(v)-d_{F}(v)\geq m \mid G_{n,d}[S]=H,\; F\subseteq G_{n,d}] \leq {\mathbb{P}}[X\geq m]\;,$$ which implies the statement of the lemma. Analysis of the exploration process {#sec:analy} =================================== In this section we show how to control the expectation of $\eta_t$ and $\eta_t^2$. We first use Lemmas \[lem:UB\_prob\] and \[lem: back edges\] to bound the expectation of $Y_{t+1}$ and $Z_{t+1}$ from above. \[lem:exp2\] Suppose $d,n\in {\mathbb{N}}$ such that $3\leq d\leq n-1$ and $n$ is sufficiently large. Fix $p\in [0,1]$. Consider the exploration process described above on $(G_{n,d},{\mathfrak{S}}_{n,d})$ with percolation probability $p$ and suppose $t\leq d n^{2/3}$. Conditional on ${\mathcal{H}}_t$ satisfying $|S_t|\leq 5n^{2/3}$, we have $$\begin{aligned} {\mathbb{E}}[Y_{t+1}|{\mathcal{H}}_t]\leq 20dn^{-1/3} \text{\enspace and \enspace } {\mathbb{E}}[Z_{t+1}|{\mathcal{H}}_t]\leq 180dn^{-1/3}\;.\end{aligned}$$ If ${\mathcal{H}}_t$ satisfies $X_t=0$, then $Y_{t+1}\leq t/(n-|S_t|)\leq 2dn^{-1/3}$ by our choice of $w_{t+1}$ (we always choose the vertex $x$ that minimises $d_{F_t}(x)$) and $|E(F_t)|\leq t$. Note that $Z_{t+1}=0$ by definition. Hence we may assume from now on that $X_t>0$. Note that if $d\geq n/4$, then the lemma follows directly from the fact that $Y_{t+1}\leq |S_t|\leq 5n^{2/3} \leq 20dn^{-1/3}$, and similarly for $Z_{t+1}$. Thus, in the following we assume that $d\leq n/4$. 
Given $w\in V{\setminus}S_t$ such that $v_{t+1}w\notin E(F_t)$, we apply Lemma \[lem:UB\_prob\] with $S=S_t$, $F=F_t$, $H=H_t$, $u=v_{t+1}$ and $v=w$ to obtain $${\mathbb{P}}[v_{t+1}w\in E(G_{n,d}) \mid v_{t+1}w\notin E(F_t),{\mathcal{H}}_t]\leq \frac{6(d-d_{H_t}(v_{t+1})-d_{F_t}(v_{t+1}))}{n}\;.$$ Observe that we run our exploration process on inputs. In order to apply Lemma \[lem:UB\_prob\], we fix the semi-edge labellings and perform switchings on the graphs. Since $\sigma_{v_{t+1}}$ is a random permutation, each edge incident to $v_{t+1}$ that is not contained in $E(F_t)\cup E(H_t)$ is chosen with the same probability to continue the exploration process. Hence, given that $v_{t+1}w\in E(G_{n,d}){\setminus}E(F_t)$, the probability that $w_{t+1}=w$ is precisely $(d-d_{H_t}(v_{t+1})-d_{F_t}(v_{t+1}))^{-1}$. Therefore, $$\begin{aligned} &\quad\,\, {\mathbb{P}}[w_{t+1}=w\mid v_{t+1}w\notin E(F_t),{\mathcal{H}}_t]\nonumber \\ &={\mathbb{P}}[w_{t+1}=w\mid v_{t+1}w\in E(G_{n,d}){\setminus}E(F_t),{\mathcal{H}}_t]\cdot{\mathbb{P}}[v_{t+1}w\in E(G_{n,d}) \mid v_{t+1}w\notin E(F_t),{\mathcal{H}}_t]\nonumber \\ &\leq \frac{6}{n}\;.\end{aligned}$$ Since ${\mathbb{P}}[w_{t+1}=w\mid v_{t+1}w\in E(F_t) , {\mathcal{H}}_t]=0$, it follows that for every $w\in V{\setminus}S_t$ $$\begin{aligned} \label{eq:prob} {\mathbb{P}}[w_{t+1}=w\mid {\mathcal{H}}_t]&\leq \frac{6}{n}\;.\end{aligned}$$ Using that $|E(F_t)|\leq t$, we conclude $$\begin{aligned} {\mathbb{E}}[Y_{t+1}|{\mathcal{H}}_t] &= \sum_{w\in V{\setminus}S_t} d_{F_t}(w){\mathbb{P}}[w_{t+1}=w|{\mathcal{H}}_t] \stackrel{\eqref{eq:prob}}{\leq} \frac{6}{n}\sum_{w\in V{\setminus}S_t} d_{F_t}(w) \leq \frac{6}{n}\cdot t\leq 6d n^{-1/3}\;.\end{aligned}$$ We now prove the second statement. 
Given $w\in V{\setminus}S_t$ with ${\mathbb{P}}[w_{t+1}=w\mid {\mathcal{H}}_t]>0$ (that is, $v_{t+1}w\notin E(F_t)$), we apply Lemma \[lem: back edges\] with $S=S_t$, $F$ obtained from $F_t$ by adding $v_{t+1}w$, $H=H_t$, and $v=w$, to obtain $$\begin{aligned} {\mathbb{E}}[Z_{t+1}|{\mathcal{H}}_t] &= \sum_{w\in V{\setminus}S_t} {\mathbb{E}}[Z_{t+1}|w_{t+1}=w,v_{t+1}w\notin E(F_t),{\mathcal{H}}_t]{\mathbb{P}}[w_{t+1}=w\mid v_{t+1}w\notin E(F_t), {\mathcal{H}}_t] \\ &\stackrel{\eqref{eq:prob}}{\leq} \sum_{w\in V{\setminus}S_t} \frac{6d|S_t|}{n} \cdot \frac{6}{n} \leq 180d n^{-1/3}\;.\qedhere\end{aligned}$$ \[lem:exp\] Suppose $\mu\geq 0$ and $d,n\in {\mathbb{N}}$ such that $3 \leq d\leq n-1$ and $n$ is sufficiently large. Consider the exploration process described above on $(G_{n,d},{\mathfrak{S}}_{n,d})$ with $p=\frac{1-\mu n^{-1/3}}{d-1}$ and suppose $t\leq d n^{2/3}$. Conditional on ${\mathcal{H}}_t$ satisfying $|S_t|\leq 5n^{2/3}$, we have $$\begin{aligned} {\mathbb{E}}[\eta_{t+1}|{\mathcal{H}}_t]\geq -(570+\mu)n^{-1/3} \enspace \text{ and }\enspace {\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t] \geq d/4\;.\end{aligned}$$ Moreover, if $X_t>0$, then ${\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t] \leq d$. First assume that $X_t>0$. Recall that for any ${\mathcal{H}}_t$ [and for any edge $uv$ that has not been exposed yet]{}, we have ${\mathbb{E}}[I(uv)\mid {\mathcal{H}}_t]={p=}(1-\mu n^{-1/3})/(d-1)$. [Recall that $Y_{t+1}$ and $Z_{t+1}$ are measurable with respect to ${\mathcal{H}}_t$.]{} Taking conditional expectations on \[eq:change\] and using Lemma \[lem:exp2\], we obtain $$\begin{aligned} {\mathbb{E}}[\eta_{t+1}|{\mathcal{H}}_t] &=-\left(1-\frac{1-\mu n^{-1/3}}{d-1}\right) + \frac{1-\mu n^{-1/3}}{d-1}(d-2 -{\mathbb{E}}[Y_{t+1}|{\mathcal{H}}_t]-2{\mathbb{E}}[Z_{t+1}|{\mathcal{H}}_t])\\ &\geq -\frac{{\mathbb{E}}[Y_{t+1}|{\mathcal{H}}_t]+2{\mathbb{E}}[Z_{t+1}|{\mathcal{H}}_t]}{d-1} -\mu n^{-1/3}\\ &\geq -\frac{380 dn^{-1/3}}{d-1} -\mu n^{-1/3} \geq -(570+\mu)n^{-1/3}\;,\end{aligned}$$ since $d\geq 3$. 
Again, by Lemma \[lem:exp2\] and \[eq:change\], we obtain $$\begin{aligned} {\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t] &=\left(1-\frac{1-\mu n^{-1/3}}{d-1}\right)(-1)^2+ \frac{1-\mu n^{-1/3}}{d-1}{\mathbb{E}}[(d-2 -Y_{t+1}-2Z_{t+1})^2\mid{\mathcal{H}}_t]\\ &\geq \frac{d-2}{d-1}+ \frac{(1-\mu n^{-1/3})(d-2)^2}{d-1} -\frac{2(d-2)({\mathbb{E}}[Y_{t+1}|{\mathcal{H}}_t]+2{\mathbb{E}}[Z_{t+1}|{\mathcal{H}}_t])}{d-1}\\ &\geq (1-\mu n^{-1/3})(d-2) -2({\mathbb{E}}[Y_{t+1}|{\mathcal{H}}_t]+2{\mathbb{E}}[Z_{t+1}|{\mathcal{H}}_t])\\ &\geq (1-\mu n^{-1/3})(d-2) -760dn^{-1/3}\\ &\geq d/4\;,\end{aligned}$$ where the last inequality holds since $d\geq 3$ and $n$ is sufficiently large. Observe that ${\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t]\leq d$ follows from a similar argument, since $(d-2 -Y_{t+1}-2Z_{t+1})^2\leq (d-2)^2$. If $X_t=0$, then clearly ${\mathbb{E}}[\eta_{t+1}|{\mathcal{H}}_t]\geq 0$ and, since ${\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t]={\mathbb{E}}[(d-Y_{t+1})^2|{\mathcal{H}}_t]$, similarly as before, one can prove that ${\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t]\geq d/4$. \[lem:S\_t\] Suppose $\mu\geq 0$ and $d,n\in {\mathbb{N}}$ such that $3\leq d\leq n-1$ and $n$ is sufficiently large. Consider the exploration process described above on $(G_{n,d},{\mathfrak{S}}_{n,d})$ with $p=\frac{1-\mu n^{-1/3}}{d-1}$. Then, for every fixed $\delta>0$ and all $0\leq t_1\leq t_2 \leq 5dn^{2/3}$, we have $$\begin{aligned} &{\mathbb{P}}\left[ |S_{t_2}{\setminus}S_{t_1}| - \frac{t_2-t_1}{d-1} \geq -\delta n^{2/3} \right]= {1-o(n^{-2})}\quad\text{ and}\\ &{\mathbb{P}}\left[ |S_{t_2}{\setminus}S_{t_1}| - \frac{t_2-t_1}{d-1} - \left\lceil\frac{{t_2}}{5d/6}\right\rceil\leq \delta n^{2/3} \right]= {1-o(n^{-2})}\;.\end{aligned}$$ We add a vertex to $S_t$ either if $I(v_{t+1}w_{t+1})=1$ or if we start exploring a new component of $G_{n,d,p}$ at time $t+1$. 
Thus, [$|S_{t_2}{\setminus}S_{t_1}|$]{} stochastically dominates a binomial random variable with parameters [$t_2-t_1$]{} and $(1-\mu n^{-1/3})/(d-1)$. A standard application of Chernoff’s inequality implies the first statement. Let $A_t\subseteq S_t$ be the set of vertices that start a new component in $G_{n,d,p}$. [For every $0\leq t\leq 5dn^{2/3}$, let $a_t:=|A_t|$, let $c_t:=|S_t{\setminus}A_t|$ and let $b_t:=|S_t{\setminus}(S_{t_1}\cup A_t)|$]{}. Observe that [$c_t$]{} is stochastically dominated by a binomial random variable with parameters $t$ and $1/(d-1)$. Using Chernoff’s inequality, [we have $ c_t\leq 8 n^{2/3}$]{} with probability [$1-o(n^{-2})$]{} for any $t\leq 5dn^{2/3}$. We claim that for every [$0\leq t\leq 5dn^{2/3}$ and conditional on $c_t\leq 8n^{2/3}$]{}, we have $a_t\leq \lceil\frac{t}{5d/6}\rceil$. Indeed, the claim is true for $t\in \{0,1\}$. Assume that $t\geq 2$ and that the claim holds for every $t'\in \{0,\ldots,t-1\}$. If $X_{t-1}>0$, then $a_t=a_{t-1}$ and we are done. Thus, assume that $X_{t-1}=0$. Let $s$ be the largest integer $s'\in \{0,\ldots,t-2\}$ such that $X_{s'}=0$ (it exists since $X_0=0$ and $t\geq 2$). Recall that $w_{s+1}$ is a vertex $x\in V{\setminus}S_s$ that minimises $d_{F_s}(x)$. It follows that $$d_{F_{s}}(w_{s+1})\leq \frac{|E(F_{s})|}{n-(a_s+{c_s})}\leq \frac{s}{n- \lceil s/(5d/6)\rceil- 8n^{2/3}}\leq \frac{d}{6}\;,$$ provided that $n$ is large enough. Hence, $X_{s+1}\geq 5d/6$ and the process will not start a new component for the next $5d/6$ steps. In particular, $s+ 5d/6\leq t$. This implies $a_{t}= a_s+1\leq \lceil\frac{s}{5d/6}\rceil+1\leq \lceil\frac{t}{5d/6}\rceil$. 
[ Since $|S_{t_2}{\setminus}S_{t_1}|\leq a_{t_2}+b_{t_2}$, the second part of the lemma now follows from the upper bound on $a_{t_2}$ (which holds as we assume $c_t\leq 8n^{2/3}$) and an upper bound on $b_{t_2}$ obtained by Chernoff’s inequality.]{} Proof of Theorem \[thm:main\] {#sec:proof} ============================= As we mentioned in the introduction, due to the result of Nachmias and Peres, we only need to prove a lower bound. Since it suffices to prove the lower bound of the statement for $\lambda\leq 0$, we use the definition $\mu:=-\lambda$. We now present a brief overview of the proof. In the first phase, we show that with probability at least $1-A^{-1/2}$, the process $X_t$ exceeds $A^{-1/4} dn^{1/3}$ in the first $dn^{2/3}$ steps. In the second phase and conditional on the success of the first phase, we show that $X_t$ stays positive for at least $2A^{-1}dn^{2/3}$ steps with probability at least $1-A^{-1/2}$. From standard concentration inequalities, this gives the existence of a component of order at least $A^{-1}n^{2/3}$, concluding the proof. This proof strategy was introduced by Nachmias and Peres to prove the same statement for fixed $d\geq 3$ [@nachmias2010critical] and for $d=n-1$ [@nachmias2010critical2]. We remark that, in comparison to [@nachmias2010critical], our analysis of the exploration process is simpler, as we do not need to track the number of vertices $x\in V{\setminus}S_t$ which satisfy $d_{F_t}(x)=k$ for $k\in \{0,1,\ldots,d\}$. If $d\geq 3$ is fixed, as in [@nachmias2010critical], almost every vertex $x$ satisfies $d_{F_t}(x)\in \{0,1\}$. However, this is no longer true if $d$ is an arbitrary function of $n$. We avoid the technicalities involved with this issue by averaging over the values of $d_{F_t}(x)$. **First phase:** We start with the definition of a few parameters. Let $h:=A^{-1/4} dn^{1/3}$, $T_1:= 5dn^{2/3}/6$ and $T_2:=2A^{-1} d n^{2/3}$. 
In addition, we define the following stopping times: $$\begin{aligned} \tau_h&:=\min\{t:\, X_t\geq h\}\wedge T_1\\ \tau_S^1&:=\min\{t:\, |S_t|\geq 3n^{2/3}\}\\ \tau_1&:= \tau_h\wedge \tau_S^1\;.\end{aligned}$$ Recall that $X_{t+1}=\eta_{t+1}+X_t$. Note also that for every $t< \tau_1$, we have $X_t\leq h$ and $|S_t|\leq 5n^{2/3}$. Hence, Lemma \[lem:exp\] implies that $$\begin{aligned} {\mathbb{E}}[X_{t+1}^2-X_t^2|{\mathcal{H}}_t] &\geq {\mathbb{E}}[\eta_{t+1}^2|{\mathcal{H}}_t] +2{\mathbb{E}}[\eta_{t+1} X_{t}|{\mathcal{H}}_t] \geq d/4- 2\cdot(570+\mu) n^{-1/3} h\geq {d/5}\;,\end{aligned}$$ provided that $A$ is large enough with respect to $\mu$ (and thus, with respect to $\lambda$). Hence $X_{t\wedge \tau_1}^2 - (t\wedge \tau_1)d/5$ is a submartingale. By the Optional Stopping theorem for submartingales (see for example [@GrimStirz] p.491), ${\mathbb{E}}[X_{\tau_1}^2 - \frac{d}{5}\tau_1]\geq {\mathbb{E}}[X_{0}^2] = 0$, which implies that ${\mathbb{E}}[\tau_1]\leq \frac{5}{d}{\mathbb{E}}[X_{\tau_1}^2]$. Since $X^2_{\tau_1}\leq (h+d)^2\leq 2 h^2$, we obtain $${\mathbb{P}}[\tau_1=T_1]\leq \frac{{\mathbb{E}}[\tau_1]}{T_1} \leq \frac{5{\mathbb{E}}[X_{\tau_1}^2]}{d T_1}\leq\frac{10 h^2}{dT_1} = 12A^{-1/2}\;.$$ By Lemma \[lem:S\_t\] [with $t_1=0$ and $t_2=T_1$]{}, we have ${\mathbb{P}}[\tau_S^1\leq T_1]=o(1)$. Thus $$\begin{aligned} \label{eq:T1} {\mathbb{P}}[\{\tau_h=T_1\} \cup \{\tau_S^1\leq \tau_h\}] \leq {\mathbb{P}}[\tau_1=T_1]+{\mathbb{P}}[\tau_S^1\leq T_1] \leq 12A^{-1/2}+o(1) \leq 13A^{-1/2}\;.\end{aligned}$$ We conclude that [the event]{} ${\mathcal{E}:=}\{\tau_h<T_1,\tau_h< \tau^1_S\}$ holds with probability at least $1-13A^{-1/2}$. In particular, with probability at least $1-13A^{-1/2}$, the random process $X_t$ exceeds $h$ before time $T_1$. **Second phase:** Write ${\mathbb{P}}_*$ and ${\mathbb{E}}_*$ for the probability and the expectation conditional on ${\mathcal{E}}$. 
We define $$\begin{aligned} \tau_0:&=\min\{t: X_{\tau_h+t}=0\}\wedge T_2\\ \tau^2_S:&=\min\{t: |S_{\tau_h+t}{\setminus}S_{\tau_h}|\geq {2}n^{2/3}\}\\ \tau_2:&=\tau_0\wedge \tau_S^2\;.\end{aligned}$$ Consider the random variable $$W_t:=h- \min \{h, X_{\tau_h+t}\} \;.$$ Hence $$\begin{aligned} W_{t+1}^2-W_{t}^2 &\leq (h- \min\{h,X_{\tau_h+t}\}-\eta_{\tau_h+t+1})^2-(h- \min\{h,X_{\tau_h+t}\})^2\\ &= \eta_{\tau_h+t+1}^2-2 \eta_{\tau_h+t+1}(h- \min\{h,X_{\tau_h+t}\})\\ &\leq \eta_{\tau_h+t+1}^2 -2 \eta_{\tau_h+t+1}h \;.\end{aligned}$$ If $t<\tau_2$ and $n$ is sufficiently large, we can apply Lemma \[lem:exp\] and this leads to (provided $A$ is sufficiently large with respect to $\mu$) $$\begin{aligned} {\mathbb{E}}_*[W_{t+1}^2-W_{t}^2\mid {\mathcal{H}}_{\tau_h+t}] \leq d+2 \cdot (570+\mu)n^{-1/3} \cdot h \leq 2d\;.\end{aligned}$$ Thus, $W_{t\wedge \tau_2}^2-2d(t\wedge\tau_2)$ is a supermartingale. As before, we use the Optional Stopping theorem to conclude that $$\begin{aligned} {\mathbb{E}}_*[W_{\tau_2}^2] \leq 2d{\mathbb{E}}_*[\tau_2] \leq 2dT_2 \;.\end{aligned}$$ Thus $$\begin{aligned} {\mathbb{P}}_*[\tau_2<T_2]\notag &= {\mathbb{P}}_*[\tau_0<T_2,\tau^2_S>T_2]+{\mathbb{P}}_*[\tau^2_S\leq T_2]\\ &\leq {\mathbb{P}}_*[W_{\tau_2}\geq h]+{\mathbb{P}}_*[|S_{\tau_h +T_2}{\setminus}S_{\tau_h}|\geq {2}n^{2/3}]\\ &\leq {\mathbb{P}}_*[W^2_{\tau_2}\geq h^2]+o(1)\\ &\leq \frac{{\mathbb{E}}_*[W_{\tau_2}^2]}{h^2}+o(1)\leq 5A^{-1/2}\;,\end{aligned}$$ where we used Lemma \[lem:S\_t\] [with $t_1=\tau_h$ and $t_2=\tau_h+T_2$]{} for the second inequality. [(Observe that we cannot apply Lemma \[lem:S\_t\] directly, because we assume $\mathcal{E}$ holds and $\tau_h$ is a random time. 
However, as $\tau_h\leq T_1$, a simple union bound with $t_1=k$ and $t_2=k+T_2$ for all $k\leq T_1$ together with the fact that ${\mathbb{P}}[\mathcal{E}]\geq 1-13A^{-1/2}\geq 1/2$, yields the desired result.)]{} It follows that $$\begin{aligned} {\mathbb{P}}[\{\tau_2<T_2\}\cup \{\tau_h=T_1\} \cup \{\tau_S^1\leq \tau_h\}] &\leq {\mathbb{P}}[\{\tau_h=T_1\} \cup \{\tau_S^1\leq \tau_h\}]+ {\mathbb{P}}_*[\tau_2<T_2]\\ &\stackrel{\eqref{eq:T1}}{\leq} 13A^{-1/2} +5A^{-1/2}=18A^{-1/2}\;.\end{aligned}$$ Since all the vertices explored from time $\tau_h$ to $\tau_h+\tau_2$ belong to the same component of $G_{n,d,p}$, there exists a component of size at least $|S_{\tau_h+\tau_2}{\setminus}S_{\tau_h}|$. As $\tau_2=T_2= 2A^{-1} dn^{2/3}$ with probability at least $1-18A^{-1/2}$, by Lemma \[lem:S\_t\] [with $t_1=\tau_h$ and $t_2=\tau_h+T_2$ (as above, strictly speaking, we apply Lemma \[lem:S\_t\] with $t_1=k$ and $t_2=k+T_2$ for all $k\leq T_1$ and use the fact that ${\mathbb{P}}[\mathcal{E}]\geq 1/2$]{}) with probability at least $1-18A^{-1/2}-o(1)\geq 1-19A^{-1/2}$, there exists a component of size at least $A^{-1} n^{2/3}$. **Acknowledgements:** The authors want to thank Nikolaos Fountoulakis, Michael Krivelevich, and Asaf Nachmias for fruitful discussions on the topic [and the anonymous referees for their valuable comments.]{} Felix Joos\ \ Guillem Perarnau\ \ School of Mathematics, University of Birmingham, Birmingham\ United Kingdom [^1]: The non-existence of a component of linear order when $p\leq (1-\epsilon)p_{{crit}}$ follows from Proposition 1 in [@nachmias2010critical]. The existence of a component of linear order when $p\geq (1+\epsilon)p_{{crit}}$ follows from the expansion properties of $G_{n,d}$ (see Corollary 2.8 in [@krivelevich2001random]) and the results on $(n,d,\lambda)$-graphs in [@krivelevich2013phase].
--- abstract: 'Three-dimensional topological insulators harbour metallic surface states with exotic properties. In transport or optics, these properties are typically masked by defect-induced bulk carriers. Compensation of donors and acceptors reduces the carrier density, but the bulk resistivity remains disappointingly small. We show that measurements of the optical conductivity in BiSbTeSe$_2$ pinpoint the presence of electron-hole puddles in the bulk at low temperatures, which is essential for understanding DC bulk transport. The puddles arise from large fluctuations of the Coulomb potential of donors and acceptors, even in the case of full compensation. Surprisingly, the number of carriers appearing within puddles drops rapidly with increasing temperature and almost vanishes around 40K. Monte Carlo simulations show that a highly non-linear screening effect arising from thermally activated carriers destroys the puddles at a temperature scale set by the Coulomb interaction between neighbouring dopants, explaining the experimental observation semi-quantitatively. This mechanism remains valid if donors and acceptors do not compensate perfectly.' author: - 'N. Borgwardt' - 'J. Lux' - 'I. Vergara' - Zhiwei Wang - 'A.A. Taskin' - Kouji Segawa - 'P.H.M. van Loosdrecht' - Yoichi Ando - 'A. Rosch' - 'M. Grüninger[^1]' date: 'August 13, 2015' title: Revealing puddles of electrons and holes in compensated topological insulators --- Introduction {#sec:intro} ============ Three-dimensional topological insulators attract significant attention mostly because they feature two-dimensional Dirac fermions on the surface that possess the peculiar characteristics of spin-momentum locking and topological protection [@Hasan10; @Qi11; @Hasan11; @Ando13]. 
The existence of such Dirac fermions has been confirmed by surface-sensitive techniques such as angle-resolved photoelectron spectroscopy or scanning-tunneling microscopy [@Xia09; @Chen09; @Hsieh09; @Sanchez14; @Alpichshev10; @Beidenkopf11]. However, the exotic phenomena expected in the electromagnetic response of these systems largely remain unexplored to date. Prominent examples are a topological magnetoelectric effect related to the quantum Hall effect of the surface states yielding magnetic monopoles as mirror charges of electric charges [@Hasan10; @Qi11], a universal Faraday rotation angle given by the vacuum fine-structure constant $\alpha$ [@Qi08; @Tse10], and a universal surface conductance $G(\omega)$=$\pi e^2/8h$ for energies $\hbar\omega$ larger than twice the Fermi energy $E_F$ [@Schmeltzer13; @Li13]. These effects are blurred by a dominant bulk conductivity in real specimens of topological insulators such as the prototypical binary tetradymites Bi$_2$Te$_3$ and Bi$_2$Se$_3$, which can be categorized as degenerate semiconductors. Typically, single crystals of these compounds show defect-induced charge carriers with densities above a few $10^{18}$cm$^{-3}$ [@Stordeur92; @Analytis10; @Ren10; @LaForge10; @Butch10; @Eto10; @Post13]. Understanding the defect chemistry allowed for a dramatic reduction of the carrier density [@Ando13; @Cava13]. Near-stoichiometric Bi$_2$Se$_3$ exhibits $n$-type conductivity originating from Se vacancies acting as donors, whereas $p$-type conductivity predominates in Bi$_2$Te$_3$ due to antisite defects. The most successful route to reduced bulk conductivity aims at two goals in parallel: reduction of the defect density and compensation of the remaining defects, i.e., $K$=$N_A/N_D$=1, where $N_D$ and $N_A$ denote the densities of donors and acceptors, respectively. In Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$, a reduced defect density is achieved by chalcogen order [@Taskin11; @Ren11] (see *Methods*, Sec. 
\[subsec:samples\]) while variation of $x$ and $y$ allows for optimized compensation in combination with the possibility to tune the energy of the Dirac point with respect to the Fermi energy $E_F$ [@Arakane12; @Neupane12]. In BiSbTeSe$_2$, the Dirac point nearly coincides with $E_F$; it thus may serve as a benchmark for the bulk carrier dynamics at very low carrier concentrations. For a sample thickness $d \! \lesssim \! 10$$\mu$m, the bulk conductance of BiSbTeSe$_2$ is low enough at low temperatures to be outweighed by the surface conductance [@Taskin11; @Xu14; @Pan14]. This allows one to observe a hallmark of topological transport, the half-integer quantum Hall effect, at temperatures up to 35K [@Xu14]. The bulk resistivity $\rho_b(T)$ nevertheless raises several questions. At low temperatures, $\rho_b(T)$ of thick samples of Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ and also of Bi$_2$Te$_2$Se does not exceed 10–20$\Omega$cm [@Ren10; @Ren12; @Xiong12; @Xiong12a; @Jia12; @Shekhar14; @Akrap14; @Kushawa14; @Ren11; @Pan14], even when the shunting effect of the surface is taken into account [@Pan14]. Above about 100K, $\rho_b(T)$ shows activated behavior $\propto \! \exp(E_A/k_B T)$, but the activation energy $E_A$ appears to be substantially smaller than the intrinsic value given by half the gap size, ${ \Delta}/2$ [@Ren11]. The bulk conduction mechanism of this important class of materials should be better understood and controlled for future investigations of novel topological phenomena. A theoretical explanation of the small activation energy has recently been suggested by Skinner, Chen, and Shklovskii [@Skinner12; @Skinner13; @Chen13] building on previous work [@Shklovskii72]. They considered a perfectly compensated semiconductor ($N_D$=$N_A$=$N_{\rm def}$) with shallow donor and acceptor levels. In such a system, donors give electrons to acceptors, resulting in positively charged donors and negatively charged acceptors. 
In this situation, however, the long-range Coulomb interactions necessarily enforce the formation of large puddles, i.e., regions in the bulk which are either $p$- or $n$-doped. The reason is that in a volume of size $R^3$, random fluctuations of the donor and acceptor densities $N_D$ and $N_A$ lead to a typical charge of order $e \sqrt{N_{\rm def} R^3}$ and therefore to a Coulomb potential of order $e^2 \sqrt{N_{\rm def} R^3}/(4 \pi \varepsilon_0 \varepsilon R)$, where $\varepsilon$ denotes the dielectric constant and $e$ the elementary charge. The potential fluctuations grow proportional to $\sqrt{R}$ and become as large as $\Delta/2$ at a length scale $ R_g=(\Delta /E_c)^2 \, d_{\rm def}/ 8\pi$ [@Skinner12] which is much larger than the average defect distance $d_{\rm def}$=$N_{\rm def}^{-1/3}$ in the experimentally relevant case $\Delta \! \gg \! E_c$, where $E_c$=$e^2/(4 \pi \varepsilon_0 \varepsilon d_{\rm def})$ denotes the Coulomb interaction between neighbouring dopants. On this length scale $R_g$, the valence and conduction bands are deformed so strongly that they touch and cross the chemical potential, giving rise to electrically conducting puddles, see Fig. \[fig:puddles\]. In the case of electron puddles, for example, some of the donor states become occupied resulting in neutral donors in this region. Based on this puddle scenario, Shklovskii and coworkers [@Skinner12] find activated behavior of the resistivity, $\rho_b(T) \propto \! \exp(E_A/k_B T)$, above roughly 40K with a *small* activation energy $E_A \approx 0.15\, { \Delta}$, consistent with experimental values [@Ren11]. At lower temperatures one expects to observe the famous Efros-Shklovskii law [@Efros75] for variable-range hopping, $\rho_b(T) \propto \! \exp[(T_{\rm ES}/T)^{1/2}]$. This, however, does not describe the experimentally observed small bulk resistivity at the lowest temperatures. 
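To make the separation of scales concrete, the following sketch evaluates the formulas above, $d_{\rm def}=N_{\rm def}^{-1/3}$, $E_c=e^2/(4 \pi \varepsilon_0 \varepsilon d_{\rm def})$, and $R_g=(\Delta /E_c)^2 \, d_{\rm def}/ 8\pi$, for illustrative input values of $N_{\rm def}$, $\varepsilon$, and $\Delta$ that we choose ourselves and that are not taken from the paper.

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19    # elementary charge, C
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m

def puddle_scale(N_def_cm3, eps_r, gap_eV):
    """Evaluate d_def = N_def^{-1/3}, E_c = e^2/(4 pi eps0 eps_r d_def),
    and R_g = (gap/E_c)^2 * d_def / (8 pi).
    Returns (d_def in m, E_c in eV, R_g in m)."""
    d_def = (N_def_cm3 * 1e6) ** (-1.0 / 3.0)   # convert cm^-3 -> m^-3
    # dividing e^2/(...) by e gives the energy directly in eV:
    E_c = E_CHARGE / (4.0 * math.pi * EPS0 * eps_r * d_def)
    R_g = (gap_eV / E_c) ** 2 * d_def / (8.0 * math.pi)
    return d_def, E_c, R_g

# Hypothetical illustrative numbers: N_def = 1e19 cm^-3, eps ~ 200,
# gap ~ 0.3 eV -- chosen only to expose the separation of scales.
d_def, E_c, R_g = puddle_scale(1e19, 200.0, 0.3)
```

With these (hypothetical) inputs one finds $E_c$ of order a few meV and $R_g$ in the micrometre range, i.e. $R_g$ exceeds $d_{\rm def}$ by roughly three orders of magnitude, illustrating why $R_g \gg d_{\rm def}$ in the regime $\Delta \gg E_c$ invoked in the text.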
Nevertheless, the physics of puddle formation is a prime candidate to explain why it is so difficult to reach high bulk resistivities in compensated topological insulators. A direct experimental detection of electrically conducting puddles in the bulk of topological insulators is therefore highly desirable. Surface-sensitive techniques are not ideally suited, as puddles are strongly suppressed close to the metallic surface, which provides an extra screening channel [@Skinner13]. Even so, the size of potential fluctuations observed in scanning tunneling microscopy [@Beidenkopf11] appears to be consistent with puddle formation [@Skinner13]. Optical spectroscopy is a bulk-sensitive method ideally suited to detect large conducting regions. The optical properties of Bi$_2$Te$_3$, Bi$_2$Se$_3$, and of solid solutions thereof were investigated intensively as early as half a century ago [@Black57; @Austin58; @Greenaway65; @Gobrecht66; @Koehler74ph] in view of their favorable thermoelectric properties [@Poudel08; @Eibl15]. Recently, optical data were reported for single crystals of Bi$_2$Te$_2$Se and Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ showing reduced carrier density [@Akrap12; @DiPietro12; @Reijnders14; @Aleshchenko14; @Post15]. However, the bulk carrier dynamics at very low densities were not addressed in detail. In particular, these data do not allow one to draw conclusions on the presence of puddles. Here, we give a detailed account of the optical properties of the approximately fully compensated topological insulator BiSbTeSe$_2$ in the infrared range. We reveal clear signatures of conducting puddles, making use of the recent achievement [@Ren11] of very low carrier densities in BiSbTeSe$_2$ and of the sensitivity of transmittance measurements to weak absorption features. The corresponding spectral weight is strongly temperature dependent at low temperatures.
Based on numerical simulations, we will argue that this temperature dependence is indeed characteristic of the mechanism of puddle formation by fluctuations of the Coulomb potential.

![Illustration of puddle formation. The left panel depicts the spatial variation of the energies $E_{\pm}({{\mathbf r}}) = V({{\mathbf r}}) \pm { \Delta}/2 $ of conduction and valence bands (upper and lower lines) caused by the long-ranged Coulomb potential $V({{\mathbf r}})$ arising from randomly placed donors and acceptors. At $T$=0, the bands fluctuate so strongly that the chemical potential $\mu$ is crossed (shaded areas; data for ${ \Delta}/E_c$=$5$). This leads to the formation of metallic puddles, i.e., extended regions which are either $n$- or $p$-doped. An example is shown on the right ($T$=0, ${ \Delta}/E_c$=10, green/brown: $n/p$-doped). With increasing temperature, the fluctuations of the potential decrease (dashed lines in left panel) due to screening by thermally activated carriers, thereby suppressing puddle formation. []{data-label="fig:puddles"}](Figure-1a-Borgwardt.pdf "fig:"){width="0.65\columnwidth"} ![
[]{data-label="fig:puddles"}](Figure-1b-Borgwardt.pdf "fig:"){width="0.33\columnwidth"}

Experimental Results {#sec:results}
====================

Optical spectroscopy {#subsec:optics}
--------------------

The complex optical conductivity $\sigma_1(\omega) + i \sigma_2(\omega)$ of single-crystalline BiSbTeSe$_2$ was determined from infrared transmittance and reflectance data, which were complemented by ellipsometric measurements at higher energies (see *Methods*, Sec. \[subsec:opticalmeas\]). An overview of $\sigma_1(\omega)$ in the infrared range is plotted in Fig. \[fig:siglog\] on a logarithmic scale. The spectra reveal the steep increase of $\sigma_1(\omega)$ caused by the onset of excitations across the gap $\Delta$. At 5K, we find $\Delta$=0.26eV (2100cm$^{-1}$). At 300K, $\Delta$ is reduced by about 40%; it decreases with a slope of roughly 3.6cm$^{-1}$/K. Similar results for the temperature dependence were reported for related topological insulators. For more details, see *Supplemental Material* [@Suppl]. The main focus of the present study is, however, on the electronic contribution to the optical conductivity below the gap and its peculiar temperature dependence. In the temperature window from 40 to 60K, $\sigma_1(\omega)$ reaches values as low as 0.3$(\Omega$cm$)^{-1}$. Most remarkably, the temperature dependence of $\sigma_1(\omega)$ is highly non-monotonic. In the frequency range of about 300–1100cm$^{-1}$, $\sigma_1(\omega)$ is more than three times *larger* at 5K than at 50K. The rise of $\sigma_1(\omega)$ upon heating above 50K agrees with the DC conductivity $\sigma_1(\omega=0)$ measured in transport [@Ren11], but the increase of $\sigma_1(\omega)$ upon cooling below 50K strongly deviates from the transport results. This discrepancy between the DC and optical conductivities is the smoking gun for the puddles, as it is a natural consequence of the localization of carriers within puddles.
Note that for all temperatures the measured values of $\sigma_1(\omega)$ below the gap are by far the lowest reported thus far for the entire family of Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$. In Bi$_2$Te$_3$ and Bi$_2$Se$_3$, the Drude contribution of extrinsic carriers with typical densities $N \! \approx \! 10^{19}$cm$^{-3}$ extrapolates to DC values of $\sigma_1(0) \! \approx \! 1000$($\Omega$cm)$^{-1}$ [@Thomas92; @LaForge10; @Segura12; @DiPietro12; @Dordevic13; @Post13; @Reijnders14; @Chapler14]. In compounds with smaller $N$ such as Bi$_2$Te$_2$Se, impurity absorption bands with peak values of 50–100($\Omega$cm)$^{-1}$ were reported [@Akrap12; @DiPietro12; @Reijnders14], one to two orders of magnitude larger than the conductivity observed by us. Such pronounced impurity bands are apparently absent in BiSbTeSe$_2$, in agreement with recent reflectivity data [@Post15], which were, however, not sensitive enough to reveal the comparatively weak absorption features with $\sigma_1(\omega) \! < \! 10$($\Omega$cm)$^{-1}$ observed by us in transmittance.

![Optical conductivity of BiSbTeSe$_2$ on a logarithmic scale. Weak absorption features below the gap with $\sigma_1(\omega)\! < \! 10$$(\Omega$cm$)^{-1}$ were obtained from the transmittance for a sample thickness of $d$=102$\mu$m, while data with $\sigma_1(\omega) \! > \! 20$$(\Omega$cm$)^{-1}$ in the opaque range were derived via a Kramers-Kronig analysis of the reflectivity. In combination, these data sets give an excellent account of $\sigma_1(\omega)$. []{data-label="fig:siglog"}](Figure-2-Borgwardt.pdf){width="1.0\columnwidth"}

Absence of surface contributions {#subsec:surface}
--------------------------------

An important question is whether the spectral weight observed below the gap can be related to the surface states of the topological insulator. This can, however, be excluded by comparing data for different thicknesses $d$ obtained successively on the same sample (see *Methods*, Sec. \[sec:exp\]).
At each temperature, results for $\sigma_1(\omega)$ for $d$=102, 130, and 183$\mu$m agree very well with each other within the experimental uncertainty (see Fig. S3 in *Supplemental Material* [@Suppl]). This proves the bulk character of the excitations in the investigated frequency range. Theoretically, one may expect two contributions from the surface state: a Drude peak arising from surface conduction and *interband* excitations within the Dirac bands. In BiSbTeSe$_2$, the Fermi level is close to the Dirac point [@Arakane12], giving rise to a small density of surface states. Moreover, Dirac fermions show a large mobility. The corresponding narrow Drude peak is located below the frequency range addressed in our data, in agreement with terahertz data on thin films of Bi$_2$Se$_3$ and Bi$_{1.5}$Sb$_{0.5}$Te$_{1.8}$Se$_{1.2}$ [@Valdes12; @Tang13]. *Interband* excitations within the Dirac bands contribute at higher frequencies. For $\hbar \omega \! \geq \! 2\,E_F$, a universal conductance $G_0$=$\pi e^2/(8h) \approx 1.5\cdot 10^{-5}/\Omega$ has been predicted [@Schmeltzer13; @Li13]. For $d$=100$\mu$m, this is equivalent to a bulk conductivity of 0.0015($\Omega$cm)$^{-1}$, which is two orders of magnitude smaller than the lowest values observed in BiSbTeSe$_2$, see Fig. \[fig:siglog\]. We therefore conclude that all of our observations reflect bulk properties.

Electronic contribution to $\sigma_1(\omega)$ {#sec:sigma}
---------------------------------------------

Figure \[fig:sig1\] shows $\sigma_1(\omega)$ on a linear scale for frequencies below the gap. Several contributions can be identified in this frequency range, see inset of Fig. \[fig:sig1\]. Below about 150cm$^{-1}$, $\sigma_1(\omega)$ is dominated by a phonon contribution with a peak value of the order of $10^3\,(\Omega$cm$)^{-1}$ [@Reijnders14; @Post15] which can be identified in the reflectivity data (see Fig. S4 in *Supplemental Material* [@Suppl]).
Above 150cm$^{-1}$, we find a tiny absorption band extending up to about 350cm$^{-1}$ with a peak value of about 1($\Omega$cm)$^{-1}$. Based on the frequency range and the tiny spectral weight, this can be attributed to a multi-phonon contribution, i.e., two- and three-phonon excitations. We fit the remaining contributions of electronic origin with a tiny, temperature-independent constant term of about $0.2$($\Omega$cm)$^{-1}$ and a strongly temperature-dependent Drude peak. Well above 50K, the interpretation of this feature as a Drude peak of thermally activated carriers is supported by the absolute value of $\sigma_1(\omega)$, by the peak width, and by the temperature dependence of the spectral weight, as shown below. The main focus of our study is, however, on the reappearance of spectral weight at low temperatures, which can be attributed to locally $n$- or $p$-doped puddles. The optical conductivity of such puddles is also expected to be of Drude form for frequencies above a cut-off $\omega_c$ given by the Thouless energy, determined by the time scale needed to diffuse through a puddle. Due to the large size of the puddles, the cut-off $\omega_c$ is orders of magnitude smaller than the frequency range investigated by us. Accordingly, we fit the data using the Drude model also at low temperatures. In the Drude model, $\sigma_1(\omega)$ depends on the scattering rate $1/\tau$ and the effective carrier density $N_{\rm eff}$=$N \, m_e/m^*$, $$\sigma_1(\omega) = \frac{\sigma_1(0)}{1+\omega^2 \tau^2} = \frac{N_{\rm eff}\, e^2 \, \tau /m_e}{1+\omega^2 \tau^2} \, ,$$ where $e$ and $m_e$ denote charge and mass of a free electron, respectively, and $m^*$ is the effective band mass. Well above 50K, the fit results for $\sigma_1(0)$ are consistent with DC resistivity data of samples with the same stoichiometry [@Ren11]. Comparing our result for $N_{\rm eff}$ at room temperature with Hall-effect data [@Ren11], we find $m^*/m_e$=0.2, see *Supplemental Material* [@Suppl].
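As a rough cross-check of this Drude analysis, the sketch below evaluates the DC limit and the mobility $\mu = e\tau/m^*$ from rounded parameter values of the kind discussed in this section; the carrier density of $4\cdot 10^{16}$cm$^{-3}$ and the scattering rate $1.4\cdot 10^{14}$s$^{-1}$ entered here are assumptions for illustration:

```python
# Consistency check of the Drude parametrization quoted in the text:
# sigma_1(omega) = (N_eff e^2 tau / m_e) / (1 + omega^2 tau^2),  mu = e tau / m*.
# Input numbers are rounded, assumed values; the arithmetic is the point here.
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
M_E = 9.1093837015e-31       # free electron mass (kg)

def drude_sigma1(omega, n_eff_m3, tau):
    """Real part of the Drude conductivity (S/m); n_eff = N * m_e/m*."""
    return n_eff_m3 * E_CHARGE**2 * tau / M_E / (1.0 + (omega * tau) ** 2)

tau = 1.0 / 1.4e14            # scattering time (s), assumed from the peak width
m_ratio = 0.2                 # assumed effective mass m*/m_e
n = 4e22                      # assumed density ~4e16 cm^-3, in m^-3
n_eff = n / m_ratio           # effective density N * m_e/m*

mu_cm2 = E_CHARGE * tau / (m_ratio * M_E) * 1e4   # mobility in cm^2/Vs
sigma_dc = drude_sigma1(0.0, n_eff, tau)          # DC limit in S/m
print(f"mu ~ {mu_cm2:.0f} cm^2/Vs, sigma_1(0) ~ {sigma_dc / 100:.2f} (Ohm cm)^-1")
```

With these rounded inputs one lands at a mobility of order $60$–$70$cm$^2$/Vs and $\sigma_1(0)$ of order 0.4($\Omega$cm)$^{-1}$, consistent with the low-temperature conductivity values quoted in this section.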
This agrees with results for Bi$_2$Se$_3$, where values between 0.14 and 0.24 were derived from the cyclotron mass of the bulk conduction band, depending on the orientation of the cyclotron orbit [@Eto10]. Using $m^*/m_e$=0.2, we deduce a carrier density as low as $N \! \approx \! 4\cdot 10^{16}$cm$^{-3}$ between 40K and 60K. The temperature-driven increase of $N_{\rm eff}$ above about 50K can be described as activated behavior with an activation energy $E_A$=26meV $\approx 0.1\,\Delta$, see inset of Fig. \[fig:omegap2\]. This agrees with $E_A$=22–30meV derived from transport measurements on BiSbTeSe$_2$ for temperatures above 100K [@Ren11]. The small activation energy has been proposed to be a clear signature of strong Coulomb fluctuations [@Skinner12; @Skinner13]. From the peak width we obtain $1/\tau \! \approx \! 1.4 \cdot 10^{14}$s$^{-1}$, roughly independent of temperature, as expected for a scattering mechanism arising from the random position of defects. With $m^*/m_e = 0.2$, this corresponds to a mobility $\mu = e\tau/m^*\! \approx \! 70$cm$^2$/Vs, in excellent agreement with the value of 73cm$^2$/Vs from Hall data [@Ren11] on Bi$_{1.5}$Sb$_{0.5}$Te$_{1.7}$Se$_{1.3}$. Scattering rates of different compounds are compared in Table \[tab:omegap\]. Compensated BiSbTeSe$_2$ shows the smallest carrier density and by far the largest value of $1/\tau$, which supports the conclusion that defect scattering is dominant.

![Optical conductivity below the gap. At 40–50K, $\sigma_1(\omega)$ is tiny below the gap. With increasing temperature, we identify a Drude peak of activated carriers with a strongly temperature-dependent spectral weight (see bottom panel and Fig. \[fig:omegap2\]) and a large and approximately temperature-independent scattering rate, $1/\tau \! \approx \! 1.4\cdot 10^{14}$s$^{-1}$. Most remarkable is the reappearance of low-frequency spectral weight below about 50K, which reveals the formation of puddles, see top panel.
The inset shows a fit of the 40K data with four contributions: a phonon at 70cm$^{-1}$, a multi-phonon band at 275cm$^{-1}$, a constant background of 0.23$(\Omega$cm$)^{-1}$, and a broad low-frequency band for carriers localized within puddles (red). The phonon and the background are kept constant in the fits of other temperatures (dashed lines in both panels). []{data-label="fig:sig1"}](Figure-3a-Borgwardt.pdf "fig:"){width="0.95\columnwidth"} ![](Figure-3b-Borgwardt.pdf "fig:"){width=".95\columnwidth"}

![Effective carrier density $N_{\rm eff}$. Symbols depict fitting results for the low-frequency absorption band obtained for different sample thicknesses $d$. Below about 50K, the carriers can be attributed to puddles. Solid line: activated behavior with an activation energy $E_A$=26meV. Inset: same data on a log scale vs. $1/T$. []{data-label="fig:omegap2"}](Figure-4-Borgwardt.pdf){width=".95\columnwidth"}

  compound         $N_{\rm eff}$ \[$10^{19}$/cm$^{3}$\]   $1/\tau$ \[$10^{12}$/s\]   $T$ \[K\]   Ref.
  ---------------- -- ------------------------------------ -- ------------------------ -- ----------- -- ----------------------------
  BiSbTeSe$_2$        0.02/0.6                                140                          50/300         this work
  Bi$_2$Te$_2$Se      1.9                                     40                           300            [@Akrap12]
  Bi$_2$Se$_3$        2.9; 18                                 4; 23                        6; 300         [@LaForge10]; [@Segura12]
  Bi$_2$Te$_3$        33; 46                                  4.7; 5.6                     10; 10         [@Thomas92]; [@Dordevic13]
  ---------------- -- ------------------------------------ -- ------------------------ -- ----------- -- ----------------------------

  : Effective carrier densities $N_{\rm eff} = N m_e/m^*$ and scattering rates for different compounds. Carrier densities from Refs. [@Akrap12; @LaForge10] were calculated from the unscreened plasma frequencies given there. In Ref. [@Segura12], the screened plasma frequency $\omega_p/\sqrt{\varepsilon_\infty}$ is given together with $\varepsilon_\infty$=29.5 for Bi$_2$Se$_3$. []{data-label="tab:omegap"}

Puddles {#sec:puddles}
-------

Our main result is the dramatic reappearance of low-frequency spectral weight at temperatures below 50K, see Fig. \[fig:omegap2\]. The charge carriers responsible for this do not, however, contribute to the DC conductivity, and $\sigma_1(\omega)$ at 5K is about an order of magnitude larger than $\sigma_1(\omega=0)$ [@Ren11]. This is consistent with a picture of well separated metallic puddles contributing to $\sigma_1(\omega)$ but not directly to DC transport. The effective carrier density amounts to $N_{\rm eff,p} \! \approx \! 1.2\cdot10^{18}$cm$^{-3}$ at 5K. Using the value of the effective mass determined at 300K, this corresponds to an *average* carrier density $N_{\rm p} \! \approx \! 2\cdot10^{17}$cm$^{-3}$, which is, however, expected to be distributed in a highly non-uniform way due to puddle formation. With increasing temperature, the carrier density shows a rapid drop by a factor of 4–6 at a temperature scale of the order of 30–40K, see Fig. \[fig:omegap2\]. Below we will show that this temperature scale has to be identified with the energy scale $E_c$, which agrees quantitatively with theoretical expectations.
The average carrier density $N_{\rm p}$ can also be explained by our numerical simulations. Note that an unconventional – but much weaker – temperature dependence of the carrier density has been observed before in this family of topological insulators. In Bi$_2$Te$_3$, a decrease of $N_{\rm eff}$ by up to a factor of 2 has been observed between 5K and 300K [@Thomas92; @Dordevic13; @Chapler14]. For compensated Bi$_2$Te$_2$Se with $N_{\rm eff} \! \approx \! 10^{19}$cm$^{-3}$, a non-monotonic behavior of $N_{\rm eff}$ with a minimum in the range of 50K to 150K was reported [@Aleshchenko14]. However, $N_{\rm eff}$ in Bi$_2$Te$_2$Se changes by less than 10% between 5K and 50K and by about 20% between 50K and 300K, whereas we find a drastic change by more than a factor of 10 in BiSbTeSe$_2$, see Fig. \[fig:omegap2\]. We emphasize that our results are based on samples with very low carrier density in combination with the enhanced sensitivity for weak absorption features offered by transmittance measurements.

Modelling the formation and destruction of puddles {#sec:model}
==================================================

Following Skinner, Chen, and Shklovskii [@Skinner12], we use a simple classical electrostatic model to describe the formation of puddles in a compensated semiconductor. The model assumes that donors and acceptors are located at random positions ${{\mathbf r}}_i$ in space. Their average densities are given by $N_D$ and $N_A$, respectively, with $N_{\rm def}$=$(N_A+N_D)/2$. We are mainly interested in the experimentally relevant limit of almost perfect compensation, where $K$=$N_A/N_D$ is close to $1$. The binding energy of charges to defects is small due to the large dielectric constant (see Fig. S4 in *Supplemental Material* [@Suppl]), thus donors and acceptors are shallow, with energy levels very close to $\pm { \Delta}/2$.
This situation is described by the Hamiltonian $$\begin{aligned} H=\sum_i \frac{ { \Delta}}{2} f_i n_i +\frac{1}{2} \sum_{i,j} V_{{{\mathbf r}}_i -{{\mathbf r}}_j} q_i q_j \label{model}\end{aligned}$$ where $n_i$=0,1 denotes the number of electrons on a donor ($f_i$=1) or acceptor ($f_i$=-1) site. The charge of a donor (acceptor) amounts to $q_i$=1 ($q_i$=-1) if it has donated (accepted) an electron to (from) another defect; otherwise defects are charge neutral, $q_i$=0. The Coulomb potential is supplemented by a short-distance cutoff $a_B$, $V_{{{\mathbf r}}_i -{{\mathbf r}}_j}$=$e^2/\{4 \pi \varepsilon_0 \varepsilon (|{{\mathbf r}}_i -{{\mathbf r}}_j|^2+a_B^2)^{1/2}\}$, which effectively takes into account the finite extent of the wave functions of the shallow impurity states [@Skinner12]. The value of $a_B$ turns out to have little influence [@Skinner12] and is set to $a_B$=$(2/N_{\rm def})^{1/3}$ for all of our simulations. Expressing all distances in units of the average distance of dopants, $d_{\rm def}=1/N_{\rm def}^{1/3}$, and all energies in units of the Coulomb interaction between neighbouring dopants, $E_c=e^2/(4 \pi \varepsilon_0 \varepsilon d_{\rm def})$, all properties of the model depend only on ${ \Delta}/E_c$, $K$, and $T/E_c$. The strength of the Coulomb interactions, and therefore $E_c$, depends strongly on the dielectric constant $\varepsilon$, which is itself strongly frequency dependent in BiSbTeSe$_2$ and related compounds. Below the gap but above the phonons, we find $\varepsilon \! \approx \! 35$, which increases to $\varepsilon \approx 200$ for $\omega \to 0$ due to a huge phonon contribution, see Fig. S4 in *Supplemental Material* [@Suppl]. As puddles are static objects around which the highly polarizable ions will adjust their positions, the $\omega \to 0$ value $\varepsilon \approx 200$ should be most relevant for our model and is therefore used in the following.
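A minimal sketch of how the model energy of Eq. (\[model\]) can be evaluated in reduced units ($d_{\rm def}$=1, $E_c$=1) is given below. This is our own toy implementation, not the code used for the actual simulations; it merely illustrates that ionizing a donor-acceptor pair lowers the total energy, so a compensated system charges up its dopants:

```python
# Toy evaluation of the classical dopant Hamiltonian (assumed implementation):
# H = sum_i (Delta/2) f_i n_i + (1/2) sum_{i!=j} V(r_i - r_j) q_i q_j,
# with V(r) = 1/sqrt(r^2 + a_B^2) in reduced units d_def = 1, E_c = 1.
import math

def total_energy(pos, f, q, n, delta, a_b):
    """Energy of a dopant configuration in units of E_c.

    pos : list of 3-tuples, positions in units of d_def
    f   : +1 for donors, -1 for acceptors
    q   : dopant charges (+1, -1, or 0)
    n   : electron occupations (0 or 1)
    """
    e = sum(0.5 * delta * fi * ni for fi, ni in zip(f, n))
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):          # implements (1/2) sum_{i!=j}
            d2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
            e += q[i] * q[j] / math.sqrt(d2 + a_b**2)
    return e

# Smallest compensated configuration: one donor, one acceptor, one d_def apart.
pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
f = [1, -1]
delta = 10.0                    # Delta/E_c, an illustrative choice
a_b = 2.0 ** (1.0 / 3.0)        # a_B = (2/N_def)^{1/3} in reduced units

# Ionized pair: the donor gave its electron to the acceptor (q = +1/-1, n = 0/1).
e_ionized = total_energy(pos, f, [1, -1], [0, 1], delta, a_b)
# Neutral pair: the electron stays on the donor (q = 0/0, n = 1/0).
e_neutral = total_energy(pos, f, [0, 0], [1, 0], delta, a_b)
```

Here `e_ionized` gains both the band energy $-\Delta$ (relative to the neutral pair) and the attractive Coulomb term, which is why donors and acceptors ionize; at $T$=0 the simulation then minimizes such energies by pairwise exchange of charges, as described below.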
Besides the donor and acceptor states, no further conduction or valence electron states are taken into account in Eq. (\[model\]). For a gap of $\Delta/k_B \! \sim \! 3000$K, the contribution of intrinsic carriers thermally activated across the gap can be neglected at low temperatures. The intrinsic carrier density within a puddle can also be neglected. This is due to the small effective mass $m^*$ in combination with the small value of the Fermi energy $E_F \! \sim \! E_c$ within the puddles (see below). Using $m^*/m_e$=0.2 and $E_F$=50K for a single spherical band, one obtains an electron density of $10^{17}$cm$^{-3}$, more than an order of magnitude smaller than the typical density of defects. While the classical model of Eq. (\[model\]) is strongly simplified, it is a powerful tool [@Shklovskii72; @Skinner12; @Skinner13; @Chen13] to obtain a semi-quantitative understanding of puddle formation at $T$=0. We will show below that it also describes the destruction of puddles with increasing temperature. Most importantly, the model is sufficiently simple to allow for quantitative numerical simulations both at $T$=0 and at finite $T$ (see *Methods*, Sec. \[subsec:simulations\]). We are able to obtain results with only small finite-size effects for values of $ { \Delta}/E_c$ up to 15, see Fig. S5 in *Supplemental Material* [@Suppl]. Scaling arguments then allow us to address the experimentally relevant regime of $ { \Delta}/E_c \lesssim 100$ (see Sec. \[sec:disc\]).

![Identification of puddles. Top panel: Schematic picture of an electron puddle in a region where the density of donors (green circles) is larger than the density of acceptors (yellow circles). The symbols $+,-$, and 0 indicate the dopant charge. To identify a puddle, we consider a sphere of radius $r_0=1.42 \, d_{\rm def}$ around each neutral dopant.
Then, we count the number $n$ of neutral dopants of the same kind within each sphere to obtain the distribution function $p_0(n)$ (normalized by the total number of dopants). This is shown in the lower panel for ${ \Delta}/E_c$=15 (left: perfect compensation, $K$=1; right: $K$=0.95) and various temperatures. We identify the fraction $p_p$ of dopants located well within a puddle with $p_p=\sum_{n\ge 4} p_0(n)$, i.e., considering all neutral dopants with at least four neutral neighbours (cf. vertical dashed lines). At $T$=0 (solid blue line), most neutral dopants are organized in puddles, having a substantial number of neutral neighbours. With increasing $T$, the number of neutral dopants with many neutral neighbours drops for $T\lesssim E_c$, which is a clear sign for the destruction of puddles (see Fig. \[fig:scaling2\]). The number of isolated neutral dopants with no neutral neighbours rises instead. []{data-label="fig:puddle"}](Figure-5b-Borgwardt.pdf){width="1.\columnwidth"}

![Destruction of puddles with increasing temperature. The fraction $p_p$ (see Fig. \[fig:puddle\]) of dopants organized in puddles drops rapidly as a function of temperature at a temperature scale set by the Coulomb interaction $E_c$ between neighbouring dopants. Numerical results are given for ${ \Delta}/E_c$=9, 12, and 15 (left: perfect compensation, $K$=1; right: $K$=0.95). The scaling plots ($p_p \, \Delta/E_c$ for $K$=1 and $p_p/(1-K)$ for $1-K \gtrsim 0.3\, E_c/\Delta$, see Fig. \[fig:scaling1\], as a function of $T/E_c$) allow us to extrapolate the results to larger values of $\Delta/E_c$. Scaling demonstrates that the destruction of puddles always occurs at approximately the same value of $T/E_c$. []{data-label="fig:scaling2"}](Figure-7-Borgwardt.pdf){width="\columnwidth"}

At $T$=0, we reproduce the results of Ref. [@Skinner12], minimizing the energy by a pairwise exchange of charges. We extend, however, the simulation to finite temperatures using a Monte Carlo approach (see, e.g., Ref.
[@Sarvestani95] for finite $T$ simulations for other Coulomb systems). Puddles are formed from occupied donor states or empty acceptor states, see Fig. \[fig:puddles\]. These correspond to *neutral* dopants, where, e.g., an electron compensates a positively charged donor ion. To detect puddles in our simulation, we distinguish extended regions of neutral dopants from isolated sites. Around each neutral dopant, we consider a sphere of radius $r_0$ and count the number $n$ of other neutral dopants of the same type, see the sketch in Fig. \[fig:puddle\]. The radius $r_0=1.42\,d_{\rm def}$ is chosen such that on average there are $12$ dopants (the number of nearest neighbours for close-packed spheres) of the same type within the sphere. With $p_0(n)$ we denote the fraction of dopants which are neutral and have $n$ neutral neighbours. This fraction is plotted in Fig. \[fig:puddle\] for perfect compensation $K$=1 (left panel) as well as for $K$=0.95 (right). For $T$=0, most neutral dopants are organized in puddles, i.e., have many neutral neighbours. Note, however, that $p_0(n)$ at $T$=0 is peaked at a value of $n$ substantially smaller than 12, which implies that an impurity band formed by donor or acceptor states within a puddle is only partially filled. This is related to the fact that the energy scale governing puddle formation (the depth of the potential $E_+({{\mathbf r}})-\mu$ in Fig. \[fig:puddles\]) is given by $E_c$ and is of the same size as the energy fluctuations between neighbouring dopants. Upon increasing $T$, the total density of neutral dopants $(N_A+N_D)\,\sum_{n=0}^\infty p_0(n)$ increases. For low $T$ this increase is due to the thermal activation of carriers *outside* of the puddles, as $p_0(n)$ rises only for small numbers $n$ of neighbours, see Fig. \[fig:puddle\]. At the same time, the number of neutral dopants with large $n$ decreases; thus the number of carriers *inside* the puddles also decreases.
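The neighbour-counting criterion can be illustrated on uncorrelated random points. The sketch below is our own re-implementation of the counting machinery only (no puddle physics); it checks the stated value of $r_0$ and that, at unit density, each point indeed sees about 12 neighbours within $r_0$:

```python
# Sketch of the puddle-detection bookkeeping: around each point, count the
# points within r0, where r0 is fixed so that a sphere of radius r0 contains
# on average 12 points at unit density. Our toy check on random points; the
# threshold n0 = 4 is the one used in the text.
import math, random

R0 = (9.0 / math.pi) ** (1.0 / 3.0)   # (4*pi/3) * r0^3 * 1 = 12  =>  r0 ~ 1.42

def dist2(p, s, box):
    """Squared minimum-image distance in a periodic box (units of d_def)."""
    d2 = 0.0
    for a, b in zip(p, s):
        d = abs(a - b)
        d = min(d, box - d)
        d2 += d * d
    return d2

def neighbour_counts(points, box, r0=R0):
    """Number of other points within r0 of each point (plain O(N^2) loop)."""
    counts = [0] * len(points)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist2(points[i], points[j], box) <= r0 * r0:
                counts[i] += 1
                counts[j] += 1
    return counts

def puddle_fraction(counts, n_total, n0=4):
    """p_p-style fraction: points with at least n0 neighbours within r0."""
    return sum(1 for c in counts if c >= n0) / n_total

random.seed(0)
box, n_pts = 8.0, 512             # unit density: 512 points in an 8^3 box
pts = [tuple(random.uniform(0.0, box) for _ in range(3)) for _ in range(n_pts)]
counts = neighbour_counts(pts, box)
mean_neighbours = sum(counts) / n_pts   # close to 12 by construction of r0
```

In the real analysis only *neutral* dopants of the same type enter the count, and puddles show up as an excess of sites with many neutral neighbours relative to this uncorrelated baseline.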
The reason is that the thermally activated charges screen the Coulomb potential, which leads to a pronounced reduction of the fluctuations of the Coulomb potential, see Fig. \[fig:puddles\], and therefore to the destruction of puddles in a highly non-linear process. As we will show, this mechanism of puddle destruction is remarkably robust against changes of $\Delta/E_c$ and deviations from perfect compensation, $K=1$. The identification of this mechanism and of the corresponding energy scales is the main result of our theoretical analysis. To quantify this effect, we count the fraction $p_p=\sum_{n\ge n_0} p_0(n)$ of neutral dopants located well within a puddle. For the discussion, we choose $n_{0}$=4, i.e., we count those neutral dopants with at least four neutral neighbours, as indicated by the vertical dashed lines in Fig. \[fig:puddle\]. We have also used other values of $n_0$ (not shown) to confirm that the results do not depend qualitatively on this choice. A similar approach to identify clusters in random systems is, for example, used in Ref. [@clusters15]. We first consider the limit $T \to 0$. A scaling collapse of all results is obtained when $p_p \cdot \Delta/E_c$ is plotted as a function of $(1-K)\, \Delta/E_c$, see Fig. \[fig:scaling1\]. For $K$=1, the density of neutral dopants contributing to puddles is of the order $N_{\rm def}\, E_c/\Delta$ with a small prefactor. This density does, however, increase rapidly when small deviations from perfect compensation are taken into account. This occurs roughly when $|N_D-N_A|$, the uncompensated part of the doping, becomes larger than the density of neutral dopants organized in puddles at $K$=1. The linear growth of $p_p$ with $|1-K|$ corresponds to the effect that adding, e.g., a small amount of extra donors to the perfectly compensated system does *not* give rise to a uniform doping but instead increases the number of neutral donors organized in puddles.
This non-uniform doping originates again in the large-scale inhomogeneities of the Coulomb potential and is important also for the behavior at finite temperature. Upon increasing $T$, the fraction $p_p$ of dopants in puddles drops sharply, as shown in Fig. \[fig:scaling2\]. For perfect compensation, plots of $p_p(T) \cdot \Delta/E_c$ versus $k_B T/E_c$ are independent of $\Delta/E_c$ for $T, E_c \ll { \Delta}$ (left panel of Fig. \[fig:scaling2\]). This allows us to predict that the Coulomb interaction $E_c$ between neighbouring dopants sets the temperature scale for the destruction of puddles even in the experimentally relevant regime $\Delta/E_c\sim 100$ (see below). The right panel of Fig. \[fig:scaling2\], where $p_p/(1-K)$ is investigated for $K$=0.95, shows that this physics is not affected by small deviations from perfect compensation. In the investigated parameter regime ($|1-K|\ll 1$ and $\Delta \gg E_c$), we find that the temperature scale for the destruction of puddles is independent of doping effects and always given by $E_c$. This has to be contrasted with the strong effects of doping on the absolute value of $p_p$, which changes by an order of magnitude, see Fig. \[fig:scaling1\]. While the destruction of puddles with increasing temperature has, to our knowledge, not been investigated before, our results are fully consistent with known properties [@Skinner12; @Shklovskii72] of such Coulomb systems. For $T$=0 and $K$=1, it was observed [@Skinner12; @Shklovskii72] that the density of states of effective single-particle levels is roughly constant (up to the famous Efros-Shklovskii Coulomb gap at low energies) in an energy window set by $\pm \left(\frac{{ \Delta}}{2} + E_c\right)$ and therefore given by $1/{ \Delta}$ for ${ \Delta}\gg E_c$. Furthermore, the energy scale of the carriers within a puddle is set by $E_c$.
In combination, this implies that the fraction of charge carriers in puddles is of the order of $E_c/{ \Delta}$ in the case of perfect compensation, as observed in our simulations. This value is, nevertheless, surprisingly large when compared to the much smaller fraction of charges, $ \sim (E_c/{ \Delta})^3$, which should be sufficient to compensate charge fluctuations of order $\sqrt{N_{\rm def} R_g^3}$ within the non-linear screening radius $R_g$ discussed in the introduction.

Discussion {#sec:disc}
==========

Both of our main experimental observations, the presence of a sizable optical weight at low $T$ and its rapid drop on a small temperature scale of the order of 30–40K, agree qualitatively with our numerical simulations based on a model of shallow donors and acceptors interacting by long-ranged Coulomb interactions. The remaining task is to compare the parameters of theory and experiment quantitatively. For the Coulomb energy between neighbouring dopants, theory predicts $$\begin{aligned} E_c = \frac{e^2}{4 \pi \varepsilon_0 \varepsilon} N_{\rm def}^{1/3} \approx k_B \cdot 20 - 40\,{\rm K} \, , \label{ec}\end{aligned}$$ where we used $\varepsilon\approx 200$ (see Fig. S4 in *Supplemental Material* [@Suppl]) and assumed that the density of shallow donors and acceptors is in the range of $N_{\rm def} \sim 10^{19} - 10^{20}\,$cm$^{-3}$, as estimated from the carrier density in uncompensated samples, see Tab. I. This is fully consistent with the experimentally observed temperature scale of 30–40K, which translates to a ratio of $\Delta/E_c \approx 75-100$. This agreement not only corroborates the assertion that puddles exist in BiSbTeSe$_2$ but also confirms the scenario that they arise from strong fluctuations of the Coulomb potential. Moreover, this agreement indicates that the optical determination of $E_c$ may turn out to be a useful tool to estimate the defect density $N_{\rm def}$, a quantity which is difficult to assess in a compensated semiconductor.
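Inverting Eq. (\[ec\]) gives a one-line estimator for the defect density from an optically determined $E_c$. The short sketch below, with $\varepsilon$=200 and the $E_c/k_B$ range of 20–40K as inputs, reproduces the assumed $N_{\rm def} \sim 10^{19}-10^{20}\,$cm$^{-3}$ range:

```python
# Inversion of E_c = e^2 N_def^{1/3} / (4 pi eps0 eps):
#   N_def = (4 pi eps0 eps k_B T_c / e^2)^3,  with E_c = k_B T_c.
# A sketch using the values discussed in the text (eps ~ 200, E_c/k_B ~ 20-40 K).
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
EPS0 = 8.8541878128e-12      # vacuum permittivity (F/m)
K_B = 1.380649e-23           # Boltzmann constant (J/K)

def defect_density_cm3(ec_kelvin, eps):
    """Defect density (cm^-3) inferred from the Coulomb scale E_c/k_B (K)."""
    n_m3 = (4.0 * math.pi * EPS0 * eps * K_B * ec_kelvin / E_CHARGE**2) ** 3
    return n_m3 * 1e-6       # m^-3 -> cm^-3

lo = defect_density_cm3(20.0, 200.0)
hi = defect_density_cm3(40.0, 200.0)
print(f"N_def ~ {lo:.1e} to {hi:.1e} cm^-3")
```

Because $N_{\rm def} \propto E_c^3$, a factor of 2 in the measured temperature scale spans almost an order of magnitude in the inferred defect density, which is why the 20–40K window maps onto the full $10^{19}$–$10^{20}$cm$^{-3}$ range.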
This interesting result will have to be tested in future experiments. A quantitative prediction for the effective carrier density $N_{\rm eff}$ is more subtle, as for large $\Delta/E_c$ this quantity depends sensitively on the precise amount of compensation, which is not known experimentally. For perfect compensation, $K$=1, our results for $p_p$ show that $N_p/N_{\rm def}$ should take values of the order of $0.1~E_c/\Delta\sim 10^{-3}$. The defect density $N_{\rm def}$ can be estimated from the experimental value for $E_c$ and Eq. (\[ec\]), which gives $N_{\rm def}$=$(5 - 10) \cdot 10^{19}$cm$^{-3}$. Combining this with the experimental estimate $N_p\,\approx\,2.4\cdot 10^{17}$cm$^{-3}$ at 5K (see above), we obtain $N_p/N_{\rm def}\approx 0.002 - 0.005$, a factor 2$-$10 larger than the theoretical estimate for $K$=1. Most likely, this just means that compensation is not perfect in our sample. As can be seen from Fig. \[fig:scaling1\], tiny deviations from perfect compensation at a level of $1-K \lesssim 1.5 E_c/\Delta$ can easily explain the observed spectral weight. With $\Delta/E_c \approx 75-100$, a deviation from perfect compensation of just $1$ or $2\%$ is sufficient to obtain a consistent description of the experiment. Taking both the extremely simplified nature of the theoretical description and the uncertainties in parameters like the effective mass into account, the quantitative determination of the parameters should perhaps not be taken too literally. They are, however, highly plausible, suggesting that at low temperatures we achieve agreement between theory and experiment at least on a semi-quantitative level.

Conclusion
==========

Our optical conductivity data of the almost perfectly compensated topological insulator BiSbTeSe$_2$ reveal the existence of puddles at low temperatures as well as their destruction on a temperature scale of 30–40K.
Both the spectral weight and the temperature scale agree semi-quantitatively with our numerical simulations based on a model of shallow donors and acceptors interacting by long-ranged Coulomb interactions. We have shown that puddles are suppressed by thermally activated charges which screen the Coulomb potential. The temperature scale of puddle destruction is set by the Coulomb interaction $E_c$ between neighbouring dopants. This mechanism works both for near-perfect and perfect compensation. Puddle formation driven by long-ranged Coulomb interactions is of importance not only in compensated semiconductors but also in other materials with a vanishing density of electronic states, including Dirac matter in two or three dimensions, like graphene [@Martin08] or Weyl semimetals. For the physics of topological insulators, puddle formation in compensated samples has both positive and negative effects. While the strong fluctuations of the Coulomb potential imply that it is more difficult to reach high bulk resistivities despite perfect compensation, they can also help to localize electrons or holes in puddles in situations where the compensation of donors and acceptors is not perfect. The surface states of topological insulators can provide extra screening channels, thus suppressing puddle formation close to the surface or in thin samples. It will therefore be interesting to study, both experimentally and theoretically, how puddle formation depends on sample thickness and other parameters and how it interacts with the charge density of the topological surface states. Controlling puddle formation may turn out to be a key step for further reduction of bulk transport in topological insulators.
Methods {#sec:exp}
=======

Samples {#subsec:samples}
-------

The compound BiSbTeSe$_2$ belongs to the family of $A_2B_3$ tetradymites ($A$=Bi,Sb; $B$=Te,Se) showing rhombohedral structure (space group $R\bar{3}m$) [@Nakajima63; @Ren11] with three quintuple layers per unit cell stacked along the \[111\] direction. Single crystals of BiSbTeSe$_2$ were grown starting from high-purity elements as described in Ref. [@Ren11]. The crystals were cut into platelets with typical dimensions of $3\times3$mm$^2$ within the (111) plane. Due to the weak van der Waals bonding between quintuple layers, the samples can be cleaved easily along the (111) plane using adhesive tape. This yields shiny plane-parallel surfaces. The defect density can be reduced by chalcogen order within the quintuple layers [@Ren10; @Xiong12; @Taskin11; @Ren11]. In Bi$_2$Te$_2$Se with Te-Bi-Se-Bi-Te order [@Ren10; @Xiong12], the $B^{II}$ sites are exclusively occupied by Se ions. In the solid solutions Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ with $y \geq 1$, the composition has been optimized with the aim to achieve full compensation [@Ren11; @Pan14]. Chalcogen order is preserved to some extent, as shown by x-ray diffraction [@Ren11]. Among these solid solutions, BiSbTeSe$_2$ was reported to show the highest DC resistivity at 2K [@Ren11]. It reaches 3$\Omega$cm but varies by about a factor of 3 for samples with the same nominal composition. This can partially be attributed to a thickness dependence [@Taskin11; @Xu14; @Pan14] related to a finite conductance contribution of the surface but also reflects different defect concentrations [@Ren11].

Optical measurements {#subsec:opticalmeas}
--------------------

Infrared reflectance and transmittance measurements were performed with unpolarized light in the frequency range of 50–7500cm$^{-1}$ (6meV–0.93eV) using a Bruker IFS 66v/S Fourier-transform spectrometer equipped with a continuous-flow He cryostat.
The transmittance $T(\omega)$ was recorded at normal incidence with the electric field parallel to the cleavage plane, while the reflectivity $R(\omega)$ was measured under near-normal incidence. Additionally, ellipsometric data were obtained using a rotating analyzer ellipsometer (Woollam VASE) equipped with a retarder between polarizer and sample. The ellipsometric data were collected at room temperature in the photon energy range of 0.75–5.5eV (6050–44360cm$^{-1}$) for three different angles of incidence (60$^\circ$, 70$^\circ$, and 80$^\circ$). Reflectance data and ellipsometric data were measured on a sample with a thickness of $d \! \approx \! 1.1$mm. For the transmittance measurements, we started with a sample with a thickness of $d$=(183$\pm$5)$\mu$m (see Fig. S1 in *Supplemental Material* [@Suppl]). The value of $d$ was determined mechanically using a micrometer screw. For this rather thick sample, the accuracy of 5$\mu$m corresponds to an error of 2.7%. Subsequently, this sample was cleaved several times using adhesive tape, and the transmittance was measured successively on the same sample for a series of different thicknesses, $d$=183, 130, and 102$\mu$m. The latter two values were determined by comparing the Fabry-Perot interference fringes which arise in a transparent frequency range due to multiple reflections within the sample. Due to the shiny and plane-parallel surfaces obtained by cleaving, the interference fringes are particularly pronounced, see Fig. S1 in *Supplemental Material* [@Suppl]. For two samples $a$ and $b$ with different thicknesses $d_a$ and $d_b$, the thickness ratio can be determined from the fringe periods, $\Delta \nu_a/ \Delta \nu_b$=$d_b/d_a$, with an accuracy of better than 0.5%. This is important for the comparison of results obtained for different thicknesses, see Fig. S3 in *Supplemental Material* [@Suppl], and thus for the question of whether there is a finite contribution of surface states.
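For normal incidence the fringe period is $\Delta\nu = 1/(2nd)$, so the refractive index cancels in the ratio and $d_b = d_a \cdot \Delta\nu_a/\Delta\nu_b$. A minimal Python sketch (the fringe periods below are illustrative placeholders, not the measured values):

```python
def thickness_from_fringes(d_ref_um, dnu_ref, dnu_new):
    """New thickness from a reference thickness and the two Fabry-Perot
    fringe periods, using dnu_ref / dnu_new = d_new / d_ref
    (the refractive index cancels in the ratio)."""
    return d_ref_um * dnu_ref / dnu_new

# Illustrative numbers only: a reference piece of 183 um showing a fringe
# period of 2.00 cm^-1; after cleaving, the period widens to 3.59 cm^-1.
d_new = thickness_from_fringes(183.0, 2.00, 3.59)
print(round(d_new))  # -> 102
```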
In the transparent frequency range, the complex optical conductivity $\tilde{\sigma}(\omega)$=$\sigma_1(\omega) + i \sigma_2(\omega)$ was determined from $T(\omega)$ and $R(\omega)$ [@Grueninger2002]. In the opaque range, $\tilde{\sigma}(\omega)$ was obtained via a Kramers-Kronig analysis of $R(\omega)$, which at high frequencies was extrapolated using the ellipsometric results.

Monte Carlo simulations {#subsec:simulations}
-----------------------

The model defined in Eq. (\[model\]) is simulated at finite $T$ with a standard Monte Carlo algorithm (Metropolis). Periodic boundary conditions for the Coulomb potential are imposed by always using the shortest distance on the $3$-torus for its computation. We start the simulations from a configuration where all $N_A$ acceptors and $(1-K)N_D$ donors are occupied, such that the total system is charge neutral. We only consider configurations which keep charge neutrality by using a pairwise exchange of charge in each Metropolis step. For $T \to 0$ we average over $100$ disorder realizations. At high temperatures, averages over only $10$ different realizations turn out to be sufficient. We simulate up to $N_A+N_D$=$2N_{\rm def}$=$2 \times 38^3 \approx 110{,}000$ dopants. For the parameters $K$=1, $T/E_c$=0, and $\Delta/E_c$=15 finite-size effects are largest, see *Supplemental Material* [@Suppl] for details. Therefore, we used $2 \times 60^3 = 432{,}000$ dopants for this particular set of parameters (a single triangle in Figs. \[fig:scaling1\] and \[fig:scaling2\]). For $T = 0$ the algorithm is identical to the one used in Ref. [@Skinner12]. It yields only a local and not a global minimum of the (free) energy but such pseudo ground states are known to describe the properties of real ground states with high accuracy [@Skinner12].
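For illustration only, a heavily scaled-down $T$=0 version of such a quench might look as follows in Python (our sketch, not the production code: a tiny system in arbitrary units, and the on-site energies $\pm\Delta/2$ of the full model are omitted, keeping only the minimum-image Coulomb part and the neutrality-preserving electron moves):

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 1.0, 12                       # box size (arb. units), donors = acceptors = N
pos = rng.random((2 * N, 3)) * L     # sites 0..N-1: donors, N..2N-1: acceptors
is_donor = np.arange(2 * N) < N
n = np.where(is_donor, 0, 1)         # K=1 start: every acceptor holds an electron

def charges(n):
    # ionized donor: +1, electron-occupied acceptor: -1, neutral otherwise
    return np.where(is_donor, 1 - n, -n).astype(float)

def coulomb_energy(n):
    q = charges(n)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                       # minimum image on the 3-torus
    r = np.sqrt((d ** 2).sum(-1))
    np.fill_diagonal(r, np.inf)                    # exclude self-interaction
    return 0.5 * (q[:, None] * q[None, :] / r).sum()

E0 = E = coulomb_energy(n)
for _ in range(2000):
    i = rng.choice(np.flatnonzero(n == 1))         # electron to move
    j = rng.choice(np.flatnonzero(n == 0))         # empty target site
    n[i], n[j] = 0, 1                              # propose the pairwise exchange
    E_new = coulomb_energy(n)
    if E_new <= E:                                 # T=0 Metropolis: downhill only
        E = E_new
    else:
        n[i], n[j] = 1, 0                          # reject and restore
```

Since each move transfers a single electron between an occupied and an empty site, charge neutrality is conserved exactly and the energy is non-increasing at $T$=0; at finite $T$ one would instead accept uphill moves with probability $e^{-\Delta E/k_B T}$.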
In contrast to simulations with local interactions, the numerical costs for the long-ranged Coulomb interactions increase strongly with increasing temperature and scale with $(N_A+N_D)^2$: each update of the charge configuration implies that the local energies of all other dopants have to be recomputed (see Ref. [@Skinner12]). As puddle formation occurs at length scales of $( { \Delta}/E_c)^2 /N_{\rm def}^{1/3}$, the number of dopants needed in a simulation grows with $( { \Delta}/E_c)^6$ and a worst-case estimate for the computational costs is $( { \Delta}/E_c)^{12}$ for finite temperatures.

Financial support through the German Excellence Initiative via the key profile area “quantum matter and materials” of the University of Cologne is gratefully acknowledged. The work was also supported by JSPS (KAKENHI 25220708) and MEXT (Innovative Area “Topological Materials Science” KAKENHI). The numerical simulations were performed on the CHEOPS cluster at RRZK Cologne.

[99]{} M.Z. Hasan and C.L. Kane, *Colloquium: Topological Insulators,* Rev. Mod. Phys. **82**, 3045 (2010). X.-L. Qi and S.-C. Zhang, *Topological insulators and superconductors,* Rev. Mod. Phys. **83**, 1057 (2011). M.Z. Hasan and J.E. Moore, *Three-Dimensional Topological Insulators,* Annu. Rev. Condens. Matter Phys. **2**, 55 (2011). Y. Ando, *Topological Insulator Materials,* J. Phys. Soc. Japan **82**, 102001 (2013). Y. Xia, D. Qian, D. Hsieh, L. Wray, A. Pal, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, *Observation of a large-gap topological-insulator class with a single Dirac cone on the surface,* Nat. Phys. **5**, 398 (2009). Y.L. Chen, J.G. Analytis, J.-H. Chu, Z.K. Liu, S.-K. Mo, X.L. Qi, H.J. Zhang, D.H. Lu, X. Dai, Z. Fang, S.-C. Zhang, I.R. Fisher, Z. Hussain, and Z.-X. Shen, *Experimental Realization of a Three-Dimensional Topological Insulator, Bi$_2$Te$_3$,* Science **325**, 178 (2009). D. Hsieh, Y. Xia, D. Qian, L. Wray, J.H. Dil, F. Meier, J. Osterwalder, L.
Patthey, J.G. Checkelsky, N.P. Ong, A.V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y.S. Hor, R.J. Cava, and M.Z. Hasan, *A tunable topological insulator in the spin helical Dirac transport regime,* Nature **460**, 1101 (2009). J. S[á]{}nchez-Barriga, A. Varykhalov, J. Braun, S.-Y. Xu, N. Alidoust, O. Kornilov, J. Min[á]{}r, K. Hummer, G. Springholz, G. Bauer, R. Schumann, L.V. Yashina, H. Ebert, M.Z. Hasan, and O. Rader, *Photoemission of Bi$_2$Se$_3$ with Circularly Polarized Light: Probe of Spin Polarization or Means for Spin Manipulation?,* Phys. Rev. X **4**, 011046 (2014). Z. Alpichshev, J.G. Analytis, J.H. Chu, I.R. Fisher, Y.L. Chen, Z.X. Shen, A. Fang, and A. Kapitulnik, *STM Imaging of Electronic Waves on the Surface of Bi$_2$Te$_3$: Topologically Protected Surface States and Hexagonal Warping Effects,* Phys. Rev. Lett. **104**, 016401 (2010). H. Beidenkopf, P. Roushan, J. Seo, L. Gorman, I. Drozdov, Y.S. Hor, R.J. Cava, and A. Yazdani, *Spatial fluctuations of helical Dirac fermions on the surface of topological insulators,* Nature Phys. **7**, 939 (2011). X.-L. Qi, T.L. Hughes, and S.-C. Zhang, *Topological field theory of time-reversal invariant insulators,* Phys. Rev. B **78**, 195424 (2008). W.-K. Tse and A.H. MacDonald, *Giant Magneto-Optical Kerr Effect and Universal Faraday Effect in Thin-Film Topological Insulators,* Phys. Rev. Lett. **105**, 057401 (2010). D. Schmeltzer and K. Ziegler, *Optical conductivity for the surface of a Topological Insulator,* arXiv:1302.4145v1. Zhou Li and J.P. Carbotte, *Hexagonal warping on optical conductivity of surface states in topological insulator Bi$_2$Te$_3$,* Phys. Rev. B **87**, 155416 (2013). M. Stordeur, K.K. Ketavong, A. Priemuth, H. Sobotta, and V. Riede, *Optical and Electrical Investigations of n-Type Bi$_2$Se$_3$ Single Crystals,* phys. stat. sol. (b) **169**, 505 (1992). J.G. Analytis, R.D. McDonald, S.C. Riggs, Jiun-Haw Chu, G.S. Boebinger, and I.R.
Fisher, *Two-dimensional surface state in the quantum limit of a topological insulator,* Nat. Phys. **6**, 960 (2010). Z. Ren, A.A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, *Large bulk resistivity and surface quantum oscillations in the topological insulator Bi$_2$Te$_2$Se,* Phys. Rev. B **82**, 241306(R) (2010). A. D. LaForge, A. Frenzel, B. C. Pursley, Tao Lin, Xinfei Liu, Jing Shi, and D. N. Basov, *Optical characterization of Bi$_2$Se$_3$ in a magnetic field: Infrared evidence for magnetoelectric coupling in a topological insulator material,* Phys. Rev. B **81**, 125120 (2010). N.P. Butch, K. Kirshenbaum, P. Syers, A.B. Sushkov, G.S. Jenkins, H.D. Drew, and J. Paglione, *Strong surface scattering in ultrahigh-mobility Bi$_2$Se$_3$ topological insulator crystals,* Phys. Rev. B **81**, 241301(R) (2010). K. Eto, Z. Ren, A.A. Taskin, K. Segawa, and Y. Ando, *Angular-dependent oscillations of the magnetoresistance in Bi$_2$Se$_3$ due to the three-dimensional bulk Fermi surface,* Phys. Rev. B **81**, 195309 (2010). K. W. Post, B. C. Chapler, L. He, X. Kou, K. L. Wang, and D. N. Basov, *Thickness-dependent bulk electronic properties in Bi$_2$Se$_3$ thin films revealed by infrared spectroscopy,* Phys. Rev. B **88**, 075121 (2013). R.J. Cava, H. Ji, M.K. Fuccillo, Q.D. Gibson, and Y.S. Hor, *Crystal structure and chemistry of topological insulators,* J. Mater. Chem. C **1**, 3176 (2013). A.A. Taskin, Z. Ren, S. Sasaki, K. Segawa, and Y. Ando, *Observation of Dirac Holes and Electrons in a Topological Insulator,* Phys. Rev. Lett. **107**, 016801 (2011). Z. Ren, A.A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, *Optimizing Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ solid solutions to approach the intrinsic topological insulator regime,* Phys. Rev. B **84**, 165311 (2011). T. Arakane, T. Sato, S. Souma, K. Kosaka, K. Nakayama, M. Komatsu, T. Takahashi, Z. Ren, K. Segawa, and Y. Ando, *Tunable Dirac cone in the topological insulator Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$,* Nat. Commun. 
3:636 doi: 10.1038/ncomms1639 (2012). M. Neupane, S.-Y. Xu, L.A. Wray, A. Petersen, R. Shankar, N. Alidoust, C. Liu, A. Fedorov, H. Ji, J.M. Allred, Y.S. Hor, T.-R. Chang, H.-T. Jeng, H. Lin, A. Bansil, R.J. Cava, and M.Z. Hasan, *Topological surface states and Dirac point tuning in ternary topological insulators,* Phys. Rev. B **85**, 235406 (2012). Y. Xu, I. Miotkowski, C. Liu, J. Tian, H. Nam, N. Alidoust, J. Hu, C.-K. Shih, M. Zahid Hasan, and Y.P. Chen, *Observation of topological surface state quantum Hall effect in an intrinsic three-dimensional topological insulator,* Nature Phys. **10**, 956 (2014). Y. Pan, D. Wu, J.R. Angevaare, H. Luigjes, E. Frantzeskakis, N. de Jong, E. van Heumen, T.V. Bay, B. Zwartsenberg, Y.K. Huang, M. Snelder, A. Brinkman, M.S. Golden, and A. de Visser, *Low carrier concentration crystals of the topological insulator Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$: a magnetotransport study,* New J. Phys. **16**, 123035 (2014). J. Xiong, A.C. Petersen, D. Qu, Y.S. Hor, R.J. Cava, and N.P. Ong, *Quantum oscillations in a topological insulator Bi$_2$Te$_2$Se with large bulk resistivity (6 $\Omega$ cm),* Physica E **44**, 917 (2012). J. Xiong, Y. Luo, Y.H. Khoo, S. Jia, R.J. Cava, and N.P. Ong, *High-field Shubnikov–de Haas oscillations in the topological insulator Bi$_2$Te$_2$Se,* Phys. Rev. B **86**, 045314 (2012). Z. Ren, A.A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, *Fermi level tuning and a large activation gap achieved in the topological insulator Bi$_2$Te$_2$Se by Sn doping,* Phys. Rev. B **85**, 155301 (2012). S. Jia, H. Beidenkopf, I. Drozdov, M.K. Fuccillo, J. Seo, J. Xiong, N.P. Ong, A. Yazdani, and R.J. Cava, *Defects and high bulk resistivities in the Bi-rich tetradymite topological insulator Bi$_{2+x}$Te$_{2-x}$Se,* Phys. Rev. B **86**, 165119 (2012). C. Shekhar, C.E. ViolBarbosa, B. Yan, S. Ouardi, W. Schnelle, G.H. Fecher, and C. 
Felser, *Evidence of surface transport and weak antilocalization in a single crystal of the Bi$_2$Te$_2$Se topological insulator,* Phys. Rev. B **90**, 165140 (2014). A. Akrap, A. Ubaldini, E. Giannini, and L. Forro, *Bi$_2$Te$_{3-x}$Se$_x$ series studied by resistivity and thermopower,* Europhys. Lett. **107**, 57008 (2014). S.K. Kushwaha, Q.D. Gibson, J. Xiong, I. Pletikosic, A.P. Weber, A.V. Fedorov, N.P. Ong, T. Valla, and R.J. Cava, *Comparison of Sn-doped and nonstoichiometric vertical-Bridgman-grown crystals of the topological insulator Bi$_2$Te$_2$Se,* J. Appl. Phys. **115**, 143708 (2014). B. Skinner, T. Chen, and B.I. Shklovskii, *Why Is the Bulk Resistivity of Topological Insulators So Small?,* Phys. Rev. Lett. **109**, 176801 (2012). B. Skinner, T. Chen, and B.I. Shklovskii, *Effects of Bulk Charged Impurities on the Bulk and Surface Transport in Three-Dimensional Topological Insulators,* J. Exp. Theo. Phys. **117**, 579 (2013). T. Chen and B.I. Shklovskii, *Anomalously small resistivity and thermopower of strongly compensated semiconductors and topological insulators,* Phys. Rev. B **87**, 165119 (2013). B. I. Shklovskii and A. L. Efros, *Completely Compensated Crystalline Semiconductor as a Model of an Amorphous Semiconductor,* Sov. Phys. JETP **35**, 610 (1972). A. L. Efros and B. I. Shklovskii, *Coulomb gap and low temperature conductivity of disordered systems,* J. Phys. C **8**, L49 (1975). J. Black, E.M. Conwell, L. Seigle, and C.W. Spencer, *Electrical and optical properties of some M$_2^{V-B}$N$_3^{VI-B}$ semiconductors,* J. Phys. Chem. Solids **2**, 240 (1957). I.G. Austin, *The Optical Properties of Bismuth Telluride,* Proc. Phys. Soc. London **72**, 545 (1958). D.L. Greenaway and G. Harbeke, *Band structure of bismuth telluride, bismuth selenide and their respective alloys,* J. Phys. Chem. Solids **26**, 1585 (1965). H. Gobrecht, S. Seeck, and T. 
Klose, *Der Einfluß der freien Ladungstr[ä]{}ger auf die optischen Konstanten des Bi$_2$Se$_3$ im Wellenl[ä]{}ngengebiet von 2 bis 23 $\mu$m,* Z. Physik **190**, 427 (1966). H. K[ö]{}hler and C.R. Becker, *Optically Active Lattice Vibrations in Bi$_2$Se$_3$,* phys. stat. sol. (b) **61**, 533 (1974). B. Poudel, Q. Hao, Y. Ma, Y. Lan, A. Minnich, B. Yu, X. Yan, D. Wang, A. Muto, D. Vashaee, X. Chen, J. Liu, M.S. Dresselhaus, G. Chen, and Z. Ren, *High-Thermoelectric Performance of Nanostructured Bismuth Antimony Telluride Bulk Alloys,* Science **320**, 634 (2008). *Thermoelectric Bi$_2$Te$_3$ Nanomaterials*, edited by O. Eibl, K. Nielsch, N. Peranio, and F. V[ö]{}lklein (Wiley, 2015). A. Akrap, M. Tran, A. Ubaldini, J. Teyssier, E. Giannini, D. van der Marel, P. Lerch, and C. C. Homes, *Optical properties of Bi$_2$Te$_2$Se at ambient and high pressures,* Phys. Rev. B **86**, 235207 (2012). P. Di Pietro, F. M. Vitucci, D. Nicoletti, L. Baldassarre, P. Calvani, R. Cava, Y. S. Hor, U. Schade, and S. Lupi, *Optical conductivity of bismuth-based topological insulators,* Phys. Rev. B **86**, 045439 (2012). A.A. Reijnders, Y. Tian, L.J. Sandilands, G. Pohl, I.D. Kivlichan, S.Y. Frank Zhao, S. Jia, M.E. Charles, R.J. Cava, Nasser Alidoust, Suyang Xu, Madhab Neupane, M. Zahid Hasan, X.Wang, S.W. Cheong, and K.S. Burch, *Optical evidence of surface state suppression in Bi-based topological insulators,* Phys. Rev. B **89**, 075138 (2014). Yu. A. Aleshchenko, A.V. Muratov, V.V. Pavlova, Yu.G. Selivanov, and E. G. Chizhevskii, *Infrared Spectroscopy of Bi$_2$Te$_2$Se,* JETP Letters **99**, 187 (2014). K.W. Post, Y.S. Lee, B.C. Chapler, A.A. Schafgans, M. Novak, A.A. Taskin, K. Segawa, M.D. Goldflam, H.T. Stinson, Y. Ando, and D.N. Basov, *Infrared probe of the bulk insulating response in Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ topological insulator alloys,* Phys. Rev. B **91**, 165202 (2015). 
See Supplemental Material for transmittance spectra, the temperature dependence of the gap, the optical conductivity of samples with different thicknesses, a brief discussion of the effective mass, the reflectivity and the dielectric function in the far-infrared range, and a brief discussion of finite-size effects of the numerical data. G.A. Thomas, D.H. Rapkine, R.B. Van Dover, L.F. Mattheiss, W.A. Sunder, L.F. Schneemeyer, and J.V. Waszczak, *Large electronic-density increase on cooling a layered metal: Doped Bi$_2$Te$_3$,* Phys. Rev. B **46**, 1553 (1992). A. Segura, V. Panchal, J.F. Sánchez-Royo, V. Marín-Borrás, V. Muñoz-Sanjosé, P. Rodríguez-Hernández, A. Muñoz, E. Pérez-González, F. J. Manjón, and J. González, *Trapping of three-dimensional electrons and transition to two-dimensional transport in the three-dimensional topological insulator Bi$_2$Se$_3$ under high pressure,* Phys. Rev. B **85**, 195139 (2012). S.V. Dordevic, M.S. Wolf, N. Stojilovic, Hechang Lei, and C. Petrovic, *Signatures of charge inhomogeneities in the infrared spectra of topological insulators Bi$_2$Se$_3$, Bi$_2$Te$_3$ and Sb$_2$Te$_3$,* J. Phys.: Condens. Matter **25**, 075501 (2013). B.C. Chapler, K.W. Post, A.R. Richardella, J.S. Lee, J. Tao, N. Samarth, and D. N. Basov, *Infrared electrodynamics and ferromagnetism in the topological semiconductors Bi$_2$Te$_3$ and Mn-doped Bi$_2$Te$_3$,* Phys. Rev. B **89**, 235308 (2014). R. Vald[é]{}s Aguilar, A. V. Stier, W. Liu, L. S. Bilbro, D. K. George, N. Bansal, L. Wu, J. Cerne, A. G. Markelz, S. Oh, and N. P. Armitage, *Terahertz Response and Colossal Kerr Rotation from the Surface States of the Topological Insulator Bi$_2$Se$_3$,* Phys. Rev. Lett. **108**, 087403 (2012). C.S. Tang, B. Xia, X. Zou, S. Chen, H.-W. Ou, L. Wang, A. Rusydi, J.-X. Zhu, and E.E.M. Chia, *Terahertz conductivity of topological surface states in Bi$_{1.5}$Sb$_{0.5}$Te$_{1.8}$Se$_{1.2}$,* Sci. Rep. 3, 3513; DOI: 10.1038/srep03513 (2013). M. Sarvestani, M. 
Schreiber, and T. Vojta, *Coulomb gap at finite temperatures,* Phys. Rev. B **52**, 3820(R) (1995). E. Allahyarov, K. Sandomirski, S. U. Egelhaaf, and H. Löwen, *Crystallization seeds favour crystallization only during initial growth,* Nature Commun. **6**, 7110 (2015). J. Martin, N. Akerman, G. Ulbricht, T. Lohmann, J.H. Smet, K. von Klitzing, and A. Yacoby, *Observation of electron-hole puddles in graphene using a scanning single-electron transistor,* Nature Phys. **4**, 144 (2008). S. Nakajima, *The crystal structure of Bi$_2$Te$_{3-x}$Se$_x$,* J. Phys. Chem. Solids **24**, 479 (1963). M. Gr[ü]{}ninger, M. Windt, T. Nunner, C. Knetter, K.P. Schmidt, G.S. Uhrig, T. Kopp, A. Freimuth, U. Ammerahl, B. B[ü]{}chner, and A. Revcolevschi, *Magnetic excitations in two-leg spin 1/2 ladders: experiment and theory,* J. Phys. Chem. Sol. **63**, 2167 (2002).

**Supplemental Material:\
Revealing puddles of electrons and holes in compensated topological insulators**

Transmittance spectra
=====================

Figure \[fig:Trans\] shows the transmittance $T(\omega)$ measured on a sample with a thickness of $d$=183$\mu$m. At 300K, $T(\omega)$ exceeds the noise level only between 830 and 1335cm$^{-1}$ and stays below 0.5%. However, $T(\omega)$ strongly increases upon cooling down. At 50K, the sample is transparent between about 180cm$^{-1}$ and 2060cm$^{-1}$. Most remarkably, $T(\omega)$ [*decreases*]{} upon further cooling below about 50K. At 5K, $T(\omega)$ is much lower than at 50K for frequencies not too close to the gap. In the highly transparent range, $T(\omega)$ exhibits Fabry-Perot interference fringes (see right panel of Fig. \[fig:Trans\]) caused by multiple reflections between front and back surface. The fringes are particularly pronounced due to the shiny, plane-parallel surfaces of cleaved samples.

![Infrared transmittance spectra of BiSbTeSe$_2$. Left: $T(\omega)$ measured on a sample with a thickness of $d$=183$\mu$m.
Right: same data on an enlarged scale, highlighting the pronounced interference fringes. []{data-label="fig:Trans"}](FigS1.pdf){width="1.0\columnwidth"}

Temperature dependence of the gap
=================================

Figure \[fig:gap\] depicts $\Delta(T)$ as determined from the onset of transmittance for a sample thickness of $d$=102$\mu$m. The gap shifts by almost 40% between 5K and 300K, from about 2112cm$^{-1}$ (262meV) to 1292cm$^{-1}$ (160meV). A fit based on the empirical Varshni equation $\Delta(T)$=$\Delta(0) - \beta \cdot T^2/(T+T_0)$ (red line) yields $\Delta(0)$=2125cm$^{-1}$, $T_0$=86K, and $\beta$=3.6cm$^{-1}$/K or 5.2k$_{\rm B}$. Our result for $\beta$ is bracketed by values reported for binary compounds, e.g., 0.77cm$^{-1}$/K for Bi$_2$Te$_3$ [@sAustin58], 1.6cm$^{-1}$/K [@sBlack57] and 2.0cm$^{-1}$/K [@sLaForge10] for Bi$_2$Se$_3$, and 5.6cm$^{-1}$/K for Sb$_2$Se$_3$ [@sBlack57]. Band-structure calculations for binary compounds indicate that $\Delta$ is very sensitive to temperature [@sNechaev13; @sMichiardi14]. To the best of our knowledge, band-structure calculations for quaternary compounds have not been reported thus far.

![Temperature dependence of the gap. Symbols depict $\Delta(T)$ as determined from the onset of transmittance for a sample thickness of $d$=102$\mu$m. The red line shows a fit based on the empirical Varshni equation. []{data-label="fig:gap"}](FigS2.pdf){width="0.9\columnwidth"}

Optical conductivity for different sample thicknesses
=====================================================

The bulk conductivity $\sigma_1(\omega)$ and a possible surface conductance $G(\omega)$ can be disentangled by comparing results for different thicknesses $d$. If the data analysis is based on the assumption of a bulk-only character, a finite surface conductance gives rise to an apparent thickness dependence of $\sigma_1(\omega)$, i.e., the calculated $\sigma_1(\omega)$ increases upon decreasing $d$.
In contrast, our results for $\sigma_1(\omega)$ for different $d$ agree very well with each other, see Fig. \[fig:compareD\]. At 5K, the ratio $\sigma_1^{\rm 102}/\sigma_1^{\rm 183}$ lies within the range 0.99–1.05 below 1500cm$^{-1}$. This strongly supports a bulk-only character of the investigated excitations. At 1000cm$^{-1}$, the difference between data for $d$=183$\mu$m and 102$\mu$m lies in the range $-0.1$$\Omega$cm$^{-1}$ to $+0.2$$\Omega$cm$^{-1}$ for different temperatures. This can be considered as the experimental uncertainty, which can be attributed to small experimental errors concerning the absolute value of $T(\omega)$ or the thickness $d$. In our data, $\sigma_1(\omega)$ typically takes the smallest value for $d$=130$\mu$m. These values are in particular *smaller* than the results for $d$=183$\mu$m, in contrast to the expectations. This points towards a small systematic error of the absolute value. ![Results for the optical conductivity $\sigma_1(\omega)$ for different sample thicknesses $d$ agree very well with each other, strongly supporting a bulk-only character. []{data-label="fig:compareD"}](FigS3.pdf){width="0.95\columnwidth"} Effective mass ============== The effective mass $m^*$ can be estimated by comparing our results with Hall-effect data [@sRen11] which yield a Hall constant of $R_H$=5cm$^3$/C at 300K. In a perfectly compensated semiconductor, both electrons and holes contribute to transport. In this case, $R_H$ gives an upper estimate of the carrier density, $N$=$1.25\cdot 10^{18}$cm$^{-3}$ at 300K. However, our analysis (see *Discussion* in main text) suggests that the sample of BiSbTeSe$_2$ shows small deviations of 1–2% from perfect compensation, i.e., one carrier type predominates. We thus expect that the result for $N$ derived from $R_H$ is reliable. From the optical data, we find an effective carrier density $N_{\rm eff}\! \approx \! 6\cdot 10^{18}$cm$^{-3}$ at 300K, from which we obtain $m^*/m_e\! \approx \! 0.2$. 
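The last step is simple arithmetic; assuming the standard convention in which the optical $N_{\rm eff}$ is defined with respect to the free-electron mass, so that $N_{\rm eff} = N \, m_e/m^*$ (our assumption, chosen to match the quoted numbers), the mass ratio follows as $N/N_{\rm eff}$:

```python
E_CHARGE = 1.602176634e-19          # elementary charge in C

# Carrier density from the Hall constant R_H = 5 cm^3/C: N = 1/(R_H e)
N_hall = 1.0 / (5.0 * E_CHARGE)     # ~1.25e18 cm^-3
N_eff = 6e18                        # cm^-3, optical value at 300 K

# Assuming N_eff = N * m_e/m*, the mass ratio is simply N / N_eff
m_ratio = N_hall / N_eff
print(round(m_ratio, 2))  # -> 0.21
```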
In BiSbTeSe$_2$, the cyclotron mass of the bulk bands is unknown since the measured quantum oscillations arise only from the surface states [@sRen11]. However, the cyclotron mass of the bulk conduction band has been measured in Bi$_2$Se$_3$, showing $m^*/m_e$ between 0.14 and 0.24 depending on the orientation of the cyclotron orbit [@sEto10], in agreement with our result.

Far-infrared reflectivity
=========================

The reflectivity for a sample thickness of $d$=102$\mu$m is shown in the left panel of Fig. \[fig:RFIR\]. Infrared-active phonon modes dominate the shape of $R(\omega)$ below about 150cm$^{-1}$ (see also Refs. [@sLaForge10; @sPost15; @sReijnders14; @sAkrap12; @sAleshchenko14; @sDiPietro12; @sPost13; @sDordevic13; @sChapler14; @sKoehler74ph; @sRichter77]). In the transparent frequency range above the phonons, the measured reflectivity shows interference fringes. For these measurements with polarization parallel to the cleavage plane, we expect two phonon modes with $E_u$ symmetry [@sRichter77]. These modes were observed at $48$cm$^{-1}$ and $98$cm$^{-1}$ in Bi$_2$Te$_3$, at $61$cm$^{-1}$ and $134$cm$^{-1}$ in Bi$_2$Se$_3$ [@sRichter77], and at $62$cm$^{-1}$ and $120$cm$^{-1}$ in BiSbTeSe$_2$ [@sPost15]. Since the spectral weight of the lower mode is much larger, the reflectivity data typically show one strong Reststrahlenband above about 50cm$^{-1}$ with a weaker feature on its high-frequency side. In BiSbTeSe$_2$, we find a strong Reststrahlenband with a weak shoulder at about 124cm$^{-1}$, in reasonable agreement with the data reported by Reijnders *et al.* [@sReijnders14] and by Post *et al.* [@sPost15]. The strong Reststrahlenband is located close to the lower limit of our frequency range, which hinders the determination of the precise eigenfrequency. Compared to Bi$_2$Te$_2$Se, the features are much broader in BiSbTeSe$_2$, which can be attributed to the Bi/Sb and Te/Se disorder.
![Reflectivity and dielectric function $\varepsilon(\omega)$ in the far-infrared range. Left: Reflectivity for a sample thickness of $d$=102$\mu$m (solid lines) compared to Drude-Lorentz fits (dashed lines). Right: Dashed lines depict the dielectric function $\varepsilon(\omega)$ determined from the Drude-Lorentz fits of the reflectivity shown in the left panel. The Reststrahlenband carries a huge oscillator strength. Solid lines show $\varepsilon(\omega)$ as obtained from the interference fringes in the transparent range. []{data-label="fig:RFIR"}](FigS4.pdf){width="0.95\columnwidth"}

Dielectric function $\varepsilon(\omega)$
=========================================

For $\omega\to 0$, $\varepsilon(\omega)$ is dominated by the huge oscillator strength of the strong phonon mode, see right panel of Fig. \[fig:RFIR\]. A Drude-Lorentz fit yields $\varepsilon(\omega\to 0)$ of the order of 200. A more precise determination of this value is difficult because the relevant phonon is close to the lower frequency limit of our data, see left panel of Fig. \[fig:RFIR\]. Similar values can be derived from results reported in the literature. In Bi$_2$Te$_2$Se, an analysis of the infrared-active phonon modes yields one strong phonon mode with a contribution of about 230, six more modes with a total contribution of about 10, and a value of $\varepsilon \! \approx \! 35$–40 above the phonons [@sAkrap12]. In total, this yields $\varepsilon(\omega\to 0) \! \approx \! 275$–280 in Bi$_2$Te$_2$Se. In Bi$_2$Se$_3$, a Drude-Lorentz fit yields a phonon contribution of about 170 and a value of $\varepsilon \! \approx \! 30$ above the phonons, adding up to $\varepsilon(\omega\to 0) \! \approx \! 200$ [@sDordevic13].

Finite-size effects of the numerical data
=========================================

At $T$=0 and for perfect compensation, $K$=1, screening is weakest and one expects the largest finite-size effects in the numerical simulations.
In this regime the typical distance of puddles scales with $(\Delta/E_c)^2$. In Fig. \[finiteSize\] we show $p_p \, \Delta/E_c$, as a function of the inverse linear system size, where $p_p$ denotes the fraction of dopants contributing to electron-hole puddles (see Fig. 5 of main text). While small system-size effects are clearly visible, all finite-size errors are well below $5\%$. Note that we expect a universal value of $p_p \,\Delta/E_c$ for $\Delta/E_c \to \infty$. For the shown values of $\Delta/E_c\ge 9$ this limit is reached with only small corrections: $p_p \, \Delta/E_c$ slightly grows for smaller values of $\Delta/E_c$. ![To illustrate finite-size effects of our numerical results, $p_p \Delta/E_c$ (see Fig. 7 of the main text) is plotted as a function of the inverse of the linear size $L$ of the simulated box for perfect compensation $K$=1 and $T$=0 for $\Delta/E_c$=9, 12, and 15. The number of dopants is $2 \times L^3$, reaching more than 400,000 for $L$=60. Error bars represent a single standard deviation of the mean value obtained by averaging over 200–500 impurity configurations. All finite-size effects are well below $5\%$. \[finiteSize\] ](FigS5.pdf){width="0.95\columnwidth"} [99]{} I.G. Austin, *The Optical Properties of Bismuth Telluride,* Proc. Phys. Soc. London **72**, 545 (1958). J. Black, E.M. Conwell, L. Seigle, and C.W. Spencer, *Electrical and optical properties of some M$_2^{V-B}$N$_3^{VI-B}$ semiconductors,* J. Phys. Chem. Solids **2**, 240 (1957). A. D. LaForge, A. Frenzel, B. C. Pursley, Tao Lin, Xinfei Liu, Jing Shi, and D. N. Basov, *Optical characterization of Bi$_2$Se$_3$ in a magnetic field: Infrared evidence for magnetoelectric coupling in a topological insulator material,* Phys. Rev. B **81**, 125120 (2010). I.A. Nechaev and E.V. Chulkov, *Quasiparticle band gap in the topological insulator Bi$_2$Te$_3$,* Phys. Rev. B **88**, 165135 (2013). M. Michiardi, I. Aguilera, M. Bianchi, V. Eustaquio de Carvalho, L.O. 
Ladeira, N.G. Teixeira, E.A. Soares, C. Friedrich, S. Bl[ü]{}gel, and P. Hofmann, *Bulk band structure of Bi$_2$Te$_3$,* Phys. Rev. B **90**, 075105 (2014). Z. Ren, A.A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, *Optimizing Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ solid solutions to approach the intrinsic topological insulator regime,* Phys. Rev. B **84**, 165311 (2011). K. Eto, Z. Ren, A.A. Taskin, K. Segawa, and Y. Ando, *Angular-dependent oscillations of the magnetoresistance in Bi$_2$Se$_3$ due to the three-dimensional bulk Fermi surface,* Phys. Rev. B **81**, 195309 (2010). K.W. Post, Y.S. Lee, B.C. Chapler, A.A. Schafgans, M. Novak, A.A. Taskin, K. Segawa, M.D. Goldflam, H.T. Stinson, Y. Ando, and D.N. Basov, *Infrared probe of the bulk insulating response in Bi$_{2-x}$Sb$_x$Te$_{3-y}$Se$_y$ topological insulator alloys,* Phys. Rev. B **91**, 165202 (2015). A.A. Reijnders, Y. Tian, L.J. Sandilands, G. Pohl, I.D. Kivlichan, S.Y. Frank Zhao, S. Jia, M.E. Charles, R.J. Cava, Nasser Alidoust, Suyang Xu, Madhab Neupane, M. Zahid Hasan, X. Wang, S.W. Cheong, and K.S. Burch, *Optical evidence of surface state suppression in Bi-based topological insulators,* Phys. Rev. B **89**, 075138 (2014). A. Akrap, M. Tran, A. Ubaldini, J. Teyssier, E. Giannini, D. van der Marel, P. Lerch, and C. C. Homes, *Optical properties of Bi$_2$Te$_2$Se at ambient and high pressures,* Phys. Rev. B **86**, 235207 (2012). Yu. A. Aleshchenko, A.V. Muratov, V.V. Pavlova, Yu.G. Selivanov, and E. G. Chizhevskii, *Infrared Spectroscopy of Bi$_2$Te$_2$Se,* JETP Letters **99**, 187 (2014). P. Di Pietro, F. M. Vitucci, D. Nicoletti, L. Baldassarre, P. Calvani, R. Cava, Y. S. Hor, U. Schade, and S. Lupi, *Optical conductivity of bismuth-based topological insulators,* Phys. Rev. B **86**, 045439 (2012). K. W. Post, B. C. Chapler, L. He, X. Kou, K. L. Wang, and D. N. Basov, *Thickness-dependent bulk electronic properties in Bi$_2$Se$_3$ thin films revealed by infrared spectroscopy,* Phys. Rev. 
B **88**, 075121 (2013). S.V. Dordevic, M.S. Wolf, N. Stojilovic, Hechang Lei, and C. Petrovic, *Signatures of charge inhomogeneities in the infrared spectra of topological insulators Bi$_2$Se$_3$, Bi$_2$Te$_3$ and Sb$_2$Te$_3$,* J. Phys.: Condens. Matter **25**, 075501 (2013). B.C. Chapler, K.W. Post, A.R. Richardella, J.S. Lee, J. Tao, N. Samarth, and D. N. Basov, *Infrared electrodynamics and ferromagnetism in the topological semiconductors Bi$_2$Te$_3$ and Mn-doped Bi$_2$Te$_3$,* Phys. Rev. B **89**, 235308 (2014). H. K[ö]{}hler and C.R. Becker, *Optically Active Lattice Vibrations in Bi$_2$Se$_3$,* phys. stat. sol. (b) **61**, 533 (1974). W. Richter, H. K[ö]{}hler, and C.R. Becker, *A Raman and Far-Infrared Investigation of Phonons in the Rhombohedral V$_2$-VI$_3$ Compounds Bi$_2$Te$_3$, Bi$_2$Se$_3$, Sb$_2$Te$_3$ and Bi$_2$(Te$_{1-x}$Se$_x$)$_3$ ($0\!<\!x\!<\!1$), (Bi$_{1-y}$Sb$_y$)$_2$Te$_3$ ($0\!<\!y\!<\!1$),* phys. stat. sol. (b) **84**, 619 (1977). [^1]: grueninger@ph2.uni-koeln.de
--- abstract: 'In this paper, near optimal tracking of a class of nonlinear systems is addressed. An adaptive (approximate) dynamic programming approach is used to calculate the optimal control in closed form. ADP[^1] has been widely used to solve optimal regulation and tracking problems of nonlinear control systems. Despite advances in the so-called supervised and unsupervised ADP techniques for optimal tracking, they share a main drawback: the optimal controller needs to be recalculated for every particular reference trajectory. The main goal of this work is to address this issue for a class of nonlinear systems. Finally, this approach is applied to a Delta robot and the performance of the method is analyzed experimentally.' author: - 'Farshid Asadi and Ali Heydari, [^2][^3]' title: Near optimal tracking control of a class of nonlinear systems and an experimental comparison --- Nonlinear, Optimal, Tracking, Control, ADP, Delta. Introduction ============ Trajectory tracking of nonlinear systems is a classic problem in control theory. Optimal control, as one of the approaches to solving this problem, has attracted considerable effort throughout the past several decades. The interested reader can refer to [@bryson1975applied; @werbos2004adp; @lewis2009reinforcement; @saridis1979approximation; @ccimen2008state] for a short introduction to the more common nonlinear optimal control techniques. Since analytic solutions of nonlinear optimal control problems are not available, except for simple cases, seeking approximate solutions is a common practice. The ADP technique [@werbos2004adp], that is, approximating the cost function with a neural network and learning the optimal cost function in a backward manner (dynamic programming), is one of the most widely used techniques among researchers. 
The optimal tracking problem has been studied for both continuous-time and discrete-time systems, but regardless of this, the pursued solutions can be categorized into two general frameworks: LQR[^4] extensions and ADP-based approaches. In the first approach, the nonlinear plant is modeled as a linear system with time-varying matrices, and then techniques from linear optimal control theory are used. For instance, in [@lahdhiri1999design], a feedback linearization is done on the nonlinear plant and then a linear optimal problem is defined for the resultant feedback-linearized system. In this approach, the objective function is not directly related to the physical system and may not have a physical realization. In [@ccimen2004nonlinear], the SDRE[^5] approach is used for input-affine nonlinear systems. The main drawback of this method is that the proper choice of the state-dependent quasilinear form plays an important role in the algorithm [@ccimen2008state]. Also in [@chou2004line], a general nonlinear system is considered and the error dynamics is estimated adaptively as a linear system. The optimal control is then calculated based on the linear estimation. In the ADP-based approaches, the total cost is approximated with a function approximator of appropriate form and then this approximation is used to calculate the optimal cost and optimal control. ADP-based approaches can be categorized into two branches based on the objective function that they use. In the first one, the objective function for tracking is defined based on the error and the total control input of the system. Optimizing this cost function leads to optimality, but the resulting controller is not locally asymptotically stable in general; this will be discussed later. For instance, in [@tang2007optimal], the general nonlinear system is decomposed based on its linearization and residual terms, and then the optimal control is calculated as a combination of the linear and residual parts. 
In [@mclain1999synthesis], a finite-horizon continuous-time optimal tracking problem is considered and the optimal control is calculated by direct implementation of ADP. A discrete-time version of finite-horizon approximate optimal tracking can also be found in [@heydari2014fixed]; in this work the controller can accept different initial conditions of the same reference trajectory dynamics. Also in [@modares2018adaptive; @kiumarsi2014actor; @modares2014optimal], the states and the reference trajectory are augmented into a new variable and the optimal problem is solved as a regulation problem. In these three works, reinforcement learning is used to calculate the optimal control online. Moreover, in [@khiabani2019design], a discrete-time optimal tracking problem is considered for a switching system and is solved by direct implementation of ADP. Another ADP-based reinforcement learning method is used in [@yang2011reinforcement] for tracking control of a class of discrete-time nonlinear systems with unknown dynamics; however, this approach assumes that the input transition matrix is positive definite. For problems with bounded input, [@lyshevski1999optimal] proposed adding a non-quadratic functional to the total cost. This approach has not been combined with ADP for tracking problems; however, in [@abu2005nearly] it is used along with ADP for a regulation problem. In the second category of ADP-based approaches, the control input is decomposed into a steady-state part (which makes the error dynamics stationary at the origin) and a transient part. Then, the control objective is defined based on the error and the transient control. This approach can be found in [@park1996optimal; @zhang2008novel]. In [@zhang2008novel], the reference trajectory and the error are augmented into new states and then ADP is used to solve the resulting augmented optimal regulation problem. 
In [@dierks2009optimal], an optimal control problem is defined for the transient control and then the total cost is calculated in an online manner by ADP. In this method, knowledge of the system dynamics is not necessary. In [@kamalapurkar2015approximate], reinforcement learning is used to solve the optimal tracking control of a nonlinear system with unknown dynamics online. In these two works ([@dierks2009optimal; @kamalapurkar2015approximate]) the optimal problem is solved for an augmented system, as in [@zhang2008novel]. Despite advances in the mentioned works, ADP-based methods all share a common drawback: the controller needs to be recalculated for each particular reference trajectory. This issue also exists in LQR-based techniques. This work is dedicated to solving the mentioned problem of ADP-based approaches for a class of nonlinear systems. The proposed method uses the idea of control decomposition to eliminate the trajectory dynamics from the error dynamics, without eliminating the system's dynamic matrices. The optimization is based on using the transient control in the objective function, which is here called the *modified total cost*. The effects of this decomposition on optimality and asymptotic stability of the closed-loop system will be discussed, which, to the best of our knowledge, has not been done in the related literature yet. Furthermore, it will be shown that, by optimizing the expectation of the modified total cost instead of its exact value, there is no need to know the reference trajectory in the training stage. This change leads to the main contribution of this paper, that is, a near optimal asymptotically stabilizing tracking controller that performs general tracking for a class of nonlinear systems. Finally, it will be shown that using the optimal control based on the expected value of the modified total cost (instead of its exact value) does not compromise asymptotic stability of the closed-loop system. The proposed controller is near optimal in three aspects. 
First, because of the form of the steady-state control that is used. Second, because it optimizes the expected value of the modified total cost, instead of the exact modified total cost of a reference trajectory. Third, because it approximates this objective function, which is the core of ADP. In what follows, first the problem is defined and the proposed method is explained. Then, theoretical support for optimality, convergence, and asymptotic stability of the approach is presented. Finally, the method is implemented experimentally on a Delta parallel manipulator and its performance is shown in comparison to some standard nonlinear control techniques. Problem statement and resolution ================================ Let us define a tracking problem for a nonlinear system of the following form $$x^{(p)} = f(X) + g(X)u, \label{eq:csys}$$ where $x(t) \in \mathbb{R}^{m\times 1}$ is the $m\mbox{-}$vector[^6] of outputs of interest, $X = [x^\intercal, \cdots, {x^{(p-1)}}^\intercal]^\intercal \in \mathbb{R}^{n\times 1}$ is the $n\mbox{-}$vector of states, and $u(t) \in \mathbb{R}^{m\times 1}$ is the $m\mbox{-}$vector of control inputs. Also note that $n = m\times p$. Furthermore, $f(.): \mathbb{R}^{n\times1}\rightarrow \mathbb{R}^{m\times1}$ and $g(.): \mathbb{R}^{n\times1}\rightarrow \mathbb{R}^{m\times m}$ are functions representing the dynamics of the system. Moreover, $f(.)$ and $g(.)$ and their Jacobians are assumed to be continuous. Also, $x^{(i)}$ denotes the $i$-th time derivative of $x$. The system is supposed to follow a particular reference trajectory (not known a priori), that is $X_d(t) = [x_d^\intercal, \cdots, {x_d^{(p-1)}}^\intercal]^\intercal \in D\subset \mathbb{R}^{n\times 1}$, with zero tracking error. Furthermore, assume that $\dot{X}_d$ exists and is continuous. The tracking error is defined as $E = [e^\intercal, \cdots, {e^{(p-1)}}^\intercal]^\intercal$ where $e = x-x_d$. 
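As a concrete (purely illustrative) instance of \[eq:csys\] with $p=2$ and $m=1$, consider a pendulum-like plant $\ddot{x} = -a\sin x - b\dot{x} + u$; the sketch below assembles $f$, $g$, the state vector $X$, and the tracking error $E$ for it. The plant and its parameters are assumptions made for the example, not part of the proposed method.

```python
import math

# Hypothetical scalar plant of the form x'' = f(X) + g(X)u  (p = 2, m = 1, n = 2).
a, b = 9.81, 0.5  # illustrative parameters

def f(X):
    """Drift term f(X) with X = [x, x_dot]."""
    x, x_dot = X
    return -a * math.sin(x) - b * x_dot

def g(X):
    """Input gain g(X); constant and invertible here."""
    return 1.0

def tracking_error(X, X_d):
    """E = [e, e_dot] with e = x - x_d, as defined in the text."""
    return [X[i] - X_d[i] for i in range(2)]
```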
As mentioned in the introduction, in some of the related literature the optimal controller is designed to minimize the following cost function $$J = \int_{0}^{\infty}exp(-\rho \tau)(E^\intercal QE+{u}^\intercal R{u})d\tau, \label{eq:ctotcost}$$ where $Q \in \mathbb{R}^{n\times n}$, $R \in \mathbb{R}^{m\times m}$, and $\rho \in \mathbb R_{\ge 0}$ are the positive semi-definite error-penalizing matrix, the positive definite control-penalizing matrix, and the discount factor, respectively. The error dynamics of this tracking problem can be written in the following form $$\label{eqn:cedyn} e^{(p)} = f(E+X_d)+g(E+X_d)u-x_d^{(p)}.$$ This error dynamics can also be stated in state-space form as $$\label{eqn:csedyn} \dot{E} = F(E+X_d)+G(E+X_d)u-\dot{X}_d,$$ where one has $$%\label{eqn:csedyn} F(E+X_d) = \begin{bmatrix} \dot{e} + \dot{x}_d\\ \vdots\\ e^{(p-1)} + x_d^{(p-1)}\\ f(E+X_d) \end{bmatrix},$$ $$%\label{eqn:csedyn} G(E+X_d) = \begin{bmatrix} 0_{(n-m) \times m}\\ g(E+X_d) \end{bmatrix}.$$ Note that the above dynamics is non-autonomous[^7] and non-stationary[^8] at the origin with respect to its states $E$. Even though the error dynamics and objective function in the form of \[eqn:cedyn,eq:ctotcost\] are commonly used, there are three disadvantages to formulating the problem in this way. First, the optimal tracking controller is not locally asymptotically stabilizing in general (see \[appendixexample\]). The reason is that the reference trajectory generally is not an invariant set of the system dynamics, which is needed for the optimal control to be asymptotically stabilizing. This shows itself as a steady-state error[^9]. Second, the presence of the discount factor $\rho$ means that only a limited part of the horizon is important to the controller. This leads to a higher steady-state error and worsens the effects of the first problem. Third, the resulting optimal control can only follow[^10] the reference trajectory that it is solved for. 
This means that for new reference trajectories, the problem should be re-solved. These issues have motivated some authors to use a modified objective function and to revisit components of the error dynamics by decomposing the control input into *steady* and *transient* parts. However, the third issue is not solved in any of the ADP-related literature yet, to the best of the authors’ knowledge. Furthermore, an interpretation of such a control decomposition with respect to optimality is not given in the referenced works, to the best of our knowledge. In a tracking problem, the evolution of the system can be categorized into two phases: the transient and the steady state. This suggests decomposing the control into a steady-state control plus a correction term, when possible. For a system in the form of \[eq:csys\], the steady-state control $u_s$ can be defined to satisfy the following equation $$\label{eqn:cpresscont} x_d^{(p)} = f(X_d)+g(E+X_d)u_s,$$ this form of steady-state control is used in [@dierks2009optimal] for a discrete-time system, and its main advantage over other forms in the literature is that it eliminates the trajectory dynamics from the error dynamics of \[eqn:cedyn\]. A controllable plant is assumed; therefore, $g(E+X_d) = g(X)$ is invertible. Then $u_s$ can be calculated as $$\label{eqn:csscont} u_s = g^{-1}(E+X_d)(x_d^{(p)}-f(X_d)).$$ At any instant, if the error equals zero, that is $E=0$, then applying $u_s$ leads to perfect tracking. The total control is the sum of the steady-state control $u_s$ and a corrective term $\Delta u$, so it is defined as $$u = u_s + \Delta u. 
\label{eq:ctotcont}$$ By substituting \[eq:ctotcont,eqn:csscont\] in \[eqn:cedyn\], the error dynamics can be rewritten as $$\label{eqn:cedynred} e^{(p)} = f(E+X_d)-f(X_d)+g(E+X_d)\Delta u,$$ furthermore, this equation can be written in state-space form as $$\label{eqn:csedynred} \dot{E} = F(E+X_d)-F(X_d)+G(E+X_d)\Delta u.$$ The above error dynamics is stationary at the origin. Now, one can define the modified total cost based on the corrective term $\Delta u$ (instead of the total control $u$) as $$V = \int_{0}^{\infty}(E^\intercal QE+{\Delta u}^\intercal R{\Delta u})d\tau, \label{eq:ctotcostred}$$ where the optimal transient control will be calculated by optimizing the above total cost. This cost function is commonly used in this category of solutions to the optimal tracking problem. Decomposing the control and redefining the total cost resolves the first mentioned problem. Assuming that the system is controllable, the above modified total cost is bounded. The reason is that $\Delta u$ vanishes as the transient phase finishes. This cost function minimizes the error and the corrective control term, so it brings the system to the steady-state tracking phase asymptotically. The reason is that, by imposing the steady-state control and optimizing the modified total cost, the optimal tracking problem is converted to an optimal regulation problem, which is asymptotically stable (see, for example, [@lyashevskiy1995control] for asymptotic stability of the optimal regulation problem). Furthermore, since the boundedness of the modified total cost \[eq:ctotcostred\] is achieved without introducing a discount factor, there is no risk of the associated steady-state error. Therefore, the second issue is also resolved. For any particular reference trajectory in time, the modified total cost \[eq:ctotcostred\] depends only on the initial error $E_0$. 
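As a sanity check on the decomposition, the sketch below computes the steady-state control $u_s = g^{-1}(X)(x_d^{(p)}-f(X_d))$ of \[eqn:csscont\] for a hypothetical scalar plant $\ddot{x} = f(X)+g(X)u$ and verifies that, when $E=0$ and $\Delta u = 0$, the error acceleration of \[eqn:cedyn\] vanishes, i.e., the tracking is perfect. The plant and all numerical values are illustrative assumptions.

```python
import math

a, b = 9.81, 0.5  # illustrative plant: x'' = -a*sin(x) - b*x_dot + u

def f(X):
    x, x_dot = X
    return -a * math.sin(x) - b * x_dot

def g(X):
    return 1.0  # invertible, by the controllability assumption

def u_s(X, X_d, xd_pp):
    """Steady-state control of [eqn:csscont]: u_s = g(X)^{-1} (x_d^{(p)} - f(X_d))."""
    return (xd_pp - f(X_d)) / g(X)

# With zero tracking error (X = X_d), imposing u_s makes e^{(p)} vanish.
X_d = [0.3, 1.0]   # current point on the reference trajectory
xd_pp = -2.0       # desired acceleration x_d^{(p)} at that point
u = u_s(X_d, X_d, xd_pp)
e_pp = f(X_d) + g(X_d) * u - xd_pp   # error dynamics [eqn:cedyn] at E = 0
```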
This dependence only on the initial error is a key point in this analysis; it also reduces the dimensionality of the value function and, therefore, further mitigates the curse of dimensionality. However, the issue is that the trajectories are not known ahead of time. If one writes the HJB[^11] equation for \[eq:ctotcostred\], it can be seen (see \[Theoretical background\]) that knowledge of the trajectory is needed for calculating the modified optimal cost function. One solution to this issue is using the expectation of the total cost instead of its exact value for a specific trajectory in time. The reason is that one can consider the desired trajectory $X_d(t)$ as a parameter with uniform distribution in the ROI[^12]. This leads to the main contribution of this paper. The expected value of the modified total cost \[eq:ctotcostred\] can be written in the following form, $$\overline{V}(E_0) = \underset{X_d\in ROI}{\mathbb{E}} \left\{\int_{0}^{\infty}(E^\intercal QE+{\Delta u}^\intercal R{\Delta u})d\tau\right\}, \label{eq:cstochtotcostred}$$ where $\mathbb{E}$ denotes the mathematical expected value. For every specific trajectory and the defined problem, the optimal modified total cost exists uniquely [@athans2013optimal pp. 284–291] and is twice differentiable [@strulovici2015smoothness]. Therefore, its expected value (which is simply an average over all possible trajectories in the present case) also exists, is unique, and is twice differentiable. By following the procedure of [@bryson1975applied pp. 131–136], taking the time derivative of \[eq:cstochtotcostred\], and using the error dynamics from \[eqn:csedynred\], the non-optimal HJB equation can be derived as $$\underset{X_d\in ROI}{\mathbb{E}}\left\{\overline{V}^\intercal_E(F-F_d+G\Delta u)+E^\intercal QE+\Delta u^{\intercal}R \Delta u \right\}= 0, \label{eq:ctotcostrecredhjb}$$ where $\overline{V}_E = \frac{d\overline{V}(E)}{dE}$, $F = F(E+X_d) = F(X)$, $F_d = F(X_d)$, and $G = G(E+X_d) = G(X)$. 
The optimal transient control $\Delta u^*$ is the minimizer of the LHS[^13] of \[eq:ctotcostrecredhjb\] and can be calculated as $${\Delta u}^* = -\frac{1}{2}R^{-1} G^\intercal \overline{V}_E^*, \label{eq:coptcontred}$$ where $\overline{V}_E^*(E)$ is the gradient of the expectation of the optimal modified total cost. There are several ways in the literature to solve the resulting HJB equation, including the PI[^14] algorithm [@leake1967construction; @saridis1979approximation], integral PI [@beard1997galerkin], integral VI[^15] [@bian2016value], the projection technique [@kompas2010comparison], the perturbation method [@kompas2010comparison], and the parametric linear programming technique [@kompas2010comparison]. Among these methods, integral VI is chosen here. The reason is that it gives a good understanding of the underlying theory and does not need an initial admissible policy, as required by other iterative methods based on PI. One can rewrite \[eq:cstochtotcostred\] in the following form and apply the Bellman principle of optimality [@Bellman:1957] as $$\begin{gathered} \overline{V}^*(E(t)) =\underset{X_d\in ROI}{\mathbb{E}} \left\{\int_{t}^{t+\Delta T}(E^\intercal QE+{\Delta u^*}^\intercal R{\Delta u^*})d\tau \right.\\ \left.+ \overline{V}^*(E(t+\Delta T))\right\}, \label{eq:cstochtotcostredrec}\end{gathered}$$ where $\Delta T \rightarrow 0$. This equation can be used in the so-called integral value iteration to learn the expectation of the modified optimal cost (in other words, the value function) from the following iterative procedure, $$\begin{gathered} \overline{V}_{i+1}(E(t)) =\\ \underset{X_d\in ROI}{\mathbb{E}} \left\{\int_{t}^{t+\Delta T}(E^\intercal QE+{\Delta u_{i+1}^*}^\intercal R{\Delta u_{i+1}^*})d\tau \right.\\\left.+ \overline{V}_i(E(t+\Delta T))\right\}, \label{eq:cstochvfredrec}\end{gathered}$$ where $\Delta u_{i+1}^*$ is defined as $${\Delta u}_{i+1}^* = -\frac{1}{2}R^{-1} G^\intercal \overline{V}_{i_E}. 
\label{eq:coptcontredrec}$$ Note that, because the expectation of the total cost is used, there is no need to know the trajectory in the training stage. This means that once the expectation of the modified value function is calculated, it can be used to track every trajectory in the ROI. Therefore, the third issue with existing ADP-based methods is also solved for nonlinear systems with the dynamics given by \[eq:csys\]. To calculate the expected modified value function, and consequently the optimal transient control, in closed form, ADP [@werbos2004adp] is used here. To do this, the value functions in \[eq:cstochtotcostredrec,eq:cstochvfredrec\] are approximated with a linear-in-weight NN[^16] as $$\overline{V}(E) = W^\intercal \varphi (E), \label{eq:cvfapp}$$ $$\overline{V}_i(E) = W_i^\intercal \varphi (E),$$ where $\varphi (E)$ is the vector of basis functions. The procedure for training the neural network with integral value iteration can be summarized as: 1. Initialize some random values for $X$ and $X_d$ in the ROI. 2. Calculate the errors $E$, based on the values generated in step 1. 3. Initialize \[eq:cstochvfredrec\] with $\overline{V}_{0}(E) = 0$. 4. For every $E$ and $X_d$ pair and some small constant value of $\Delta T$, repeat the following stages: 1. Calculate ${\Delta u}_{i+1}^*(E,X_d)$ from \[eq:coptcontredrec\]. 2. Calculate $\overline{V}_{i+1}(E)$ from \[eq:cstochvfredrec\]. 5. Use the values from step 4 to update the weights via the least-squares method [@boyd2004convex pp. 302–305]. 6. Calculate $\delta = \max(\left|W_{i+1}-W_i\right|)$ and do one of the following: 1. If $\delta< threshold$, terminate the iterative procedure, set $W = W_{i+1}$, and go to step 7. 2. If $\delta \ge threshold$, go to step 4. 7. The optimal corrective term can be calculated as $${\Delta u}^* = -\frac{1}{2}R^{-1} G^\intercal(E+X_d) W^\intercal \Delta \varphi(E),$$ where $\Delta \varphi(E)$ is the gradient of $\varphi(E)$. 
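To make the training loop concrete, here is a minimal sketch of integral value iteration for a scalar linear error system $\dot e = a e + b\,\Delta u$ with the single quadratic basis $\varphi(e) = e^2$, so that $\overline V_i(e) = P_i e^2$ and $\Delta u_{i+1} = -(bP_i/R)\,e$. For this linear sketch the error dynamics do not depend on $X_d$, so the expectation over trajectories is trivial and the least-squares weight update reduces to a scalar recursion on $P_i$; all numerical values are illustrative.

```python
import math

# Scalar sketch: e_dot = a*e + b*du, running cost Q*e^2 + R*du^2,
# value function V_i(e) = P_i * e^2 with the single basis phi(e) = e^2.
a, b, Q, R = 1.0, 1.0, 1.0, 1.0
dT = 1e-3   # small integration interval Delta T
P = 0.0     # V_0 = 0, as in step 3 of the procedure

for _ in range(20000):
    k = b * P / R   # du = -(1/2) R^{-1} b dV_i/de = -k*e
    # One sweep of the recursion: V_{i+1}(e0) = dT*(Q + R*k^2)*e0^2 + P_i*e1^2,
    # with e1 = e0*(1 + dT*(a - b*k)) (explicit Euler over [t, t + dT]).
    # Everything is quadratic in e0, so the least-squares fit is exact.
    P = dT * (Q + R * k**2) + P * (1.0 + dT * (a - b * k))**2

# For comparison: continuous-time Riccati solution of 2aP - b^2 P^2/R + Q = 0.
P_star = R * (a + math.sqrt(a**2 + Q * b**2 / R)) / b**2  # = 1 + sqrt(2) here
```

Starting from $P_0 = 0$, the iterates increase monotonically toward the Riccati value (up to an $O(\Delta T)$ discretization bias), consistent with the convergence argument of the next section.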
Theoretical background {#Theoretical background} ====================== Optimality {#sec:optimality} ---------- In the presented approach, the total control is calculated as a combination of the steady-state control and the optimal transient control. Considering this decomposition, one may ask about the optimality of the resulting total cost with respect to the original cost function \[eq:ctotcost\]. To make the analysis easier to grasp, we will investigate the case of a specific reference trajectory, to avoid getting involved in the expected values of the total costs. Let us define a new steady-state control $u_r$ and a discounted version of \[eq:ctotcostred\] as $$\label{eqn:crrcont} u_r = g^{-1}(X_d)(x_d^{(p)}-f(X_d)),$$ $$W(E) = \int_{0}^{\infty}exp(-\rho \tau)(E^\intercal QE+{\Delta u}^\intercal R{\Delta u})d\tau. \label{eq:cstochtotcostreddisc}$$ By substituting the total control based on $u_r$, that is $u = u_r + \Delta u$, in \[eq:ctotcost\], the total cost can be rewritten as $$\begin{gathered} J(E) = \int_{0}^{\infty}exp(-\rho \tau)(E^\intercal QE + \Delta u^\intercal R \Delta u \\+ 2{\Delta u}^\intercal R{u_r} + u_r^\intercal R u_r)d\tau, \label{eq:ctotcostn1}\end{gathered}$$ or $$J(E) = W(E) + \int_{0}^{\infty}exp(-\rho \tau)(2{\Delta u}^\intercal R{u_r} + u_r^\intercal R u_r)d\tau. \label{eq:ctotcostn2}$$ By minimizing \[eq:ctotcostn1,eq:ctotcostn2\] over $\Delta u$ and eliminating $J^*(E)$ among them, one can write $$\begin{gathered} \min_{\Delta u}\left\{W(E) + \int_{0}^{\infty}exp(-\rho \tau)(2{\Delta u}^\intercal R{u_r} + u_r^\intercal R u_r)d\tau\right\}\\= \min_{\Delta u}\biggl\{\int_{0}^{\infty}exp(-\rho \tau)(E^\intercal QE + \Delta u^\intercal R \Delta u \\+ 2{\Delta u}^\intercal R{u_r} + u_r^\intercal R u_r)d\tau\biggr\}. 
\label{eq:ctotcostn3}\end{gathered}$$ The minimization is on the same variable on both sides of \[eq:ctotcostn3\], so it can be simplified as $$\begin{split} \min_{\Delta u}\left\{\right.&\left.W(E)\right\} = W^*(E) =\\& \min_{\Delta u}\left\{\int_{0}^{\infty}exp(-\rho \tau)(E^\intercal QE + \Delta u^\intercal R \Delta u)d\tau\right\}. \end{split} \label{eq:ctotcostn4}$$ The process above means that optimizing $W(E)$ is equivalent to optimizing $J(E)$, while imposing $u_r$ on the system. Furthermore, as $\rho \rightarrow 0$, $W(E)\rightarrow V(E)$. This means that optimizing $V(E)$ is equivalent to optimizing $W(E)$ while imposing $u_r$ on the system as $\rho \rightarrow 0$. Since imposing $u_r$ on the error dynamics and finding the optimal transient control, for a specific reference trajectory, transforms the optimal tracking problem into an equivalent optimal regulation problem, it is asymptotically stable (see [@lyashevskiy1995control] for asymptotic stability of optimal regulation problems). Also, since $u_r$ is the exact steady-state control, the resulting total control is optimal among asymptotically stabilizing controllers; however, it is not the absolute optimal control (which, as discussed in \[appendixexample\], is not generally asymptotically stabilizing). Furthermore, because $u_s$ (which is used in the proposed method) differs slightly from $u_r$, the proposed method has a degree of sub-optimality. Therefore, the proposed method, for a specific reference trajectory, is a near optimal control among asymptotically stabilizing tracking controllers. The other point that should be investigated is the optimality of the optimal transient control \[eq:coptcontred\] with respect to the expected value of the modified total cost \[eq:cstochtotcostred\]. In the presented approach, the optimization problem is defined based on the expectation of the modified total cost. 
Having introduced the expectation into the equations, a legitimate question is the optimality of the selected control \[eq:coptcontred\]. To answer this, one needs to take the derivative of the LHS of \[eq:ctotcostrecredhjb\] with respect to $\Delta u$ and solve the following equation for $\Delta u$ $$\begin{gathered} \frac{\partial}{\partial \Delta u}\biggl\{\underset{X_d\in ROI}{\mathbb{E}}\Bigl\{\overline{V}^\intercal_E(F-F_d+G\Delta u)\\+E^\intercal QE+\Delta u^{\intercal} R\Delta u \Bigr\}\biggr\} = 0. \label{eq:ctotcostrechjbdif}\end{gathered}$$ In the above equation, since the expectation is over $X_d$, the derivative can be interchanged with the expectation, and one has $$\begin{gathered} \underset{X_d\in ROI}{\mathbb{E}} \biggl\{\frac{\partial}{\partial \Delta u}\Bigl\{\overline{V}^\intercal_E(F-F_d+G\Delta u)\\+E^\intercal QE+\Delta u^{\intercal} R\Delta u \Bigr\}\biggr\} = 0, \label{eq:ctotcostrechjbdifin}\end{gathered}$$ note that if the following holds $$\begin{split} \frac{\partial}{\partial \Delta u}\left\{\overline{V}^\intercal_E(F-F_d+G\Delta u)+E^\intercal QE+\Delta u^{\intercal} R\Delta u \right\} = 0, \end{split} \label{eq:ctotcostrechjbdifinred}$$ then \[eq:ctotcostrechjbdifin\] also holds; therefore, the solution to the above equation, which is well known to be of the form \[eq:coptcontred\], is a solution to \[eq:ctotcostrechjbdif\]. Consequently, \[eq:coptcontred\] is a minimizer of \[eq:cstochtotcostred,eq:ctotcostrecredhjb\]. Therefore, optimality of \[eq:coptcontred\] with respect to the expectation of the modified total cost is proved. Convergence ----------- Since the integral value iteration of \[eq:cstochvfredrec,eq:coptcontredrec\] is an iterative procedure, convergence of the iterations is a concern. This concern is investigated in this subsection while neglecting the approximation error of \[eq:cvfapp\]. The procedure is adopted from [@bian2016value] with some changes. 
Let us define the following transformations, for some positive $V(E)$ with $V(0) = 0$, as $$\begin{gathered} T^{\Delta u}(V,X_d) = \int_{t}^{t+\Delta T}(E^\intercal QE+{\Delta u}^\intercal R{\Delta u})d\tau \\+ V(E(t+\Delta T)), \label{eq:transdef}\end{gathered}$$ $$\overline{T}^{\Delta u}(V) = \underset{X_d\in ROI}{\mathbb{E}}\left\{T^{\Delta u}(V,X_d)\right\}. \label{eq:etransdef}$$ Also, let $T(V,X_d)$ be defined as $T^{\Delta u^{\circ}}(V,X_d)$, that is, calculated with ${\Delta u}^{\circ} = -\frac{1}{2}R^{-1}G^\intercal(E+X_d) V_E$, which is the minimizer of the right-hand side of \[eq:transdef\] (and also of \[eq:etransdef\], in the same way explained in \[sec:optimality\]). Furthermore, let $\overline{T}(V)$ be the expected value of $T(V,X_d)$. In this way, one can write \[eq:cstochtotcostredrec,eq:cstochvfredrec\] in the forms $\overline{V}^* = \overline{T}(\overline{V}^*)$ and $\overline{V}_{i+1} = \overline{T}(\overline{V}_i)$, respectively. If for some $V_l$ and $V_k$ the inequality $V_l< V_k$ holds, then $T^{\Delta u}(V_l,X_d)< T^{\Delta u}(V_k,X_d)$ also holds for all $E, X_d \in ROI \text{ and } E\neq 0$. Therefore, one can conclude that $\overline{T}^{\Delta u}(V_l)< \overline{T}^{\Delta u}(V_k)$. Consequently, by this assumption, one has $\overline{T}(V_l)\leq \overline{T}^{{\Delta u}^{\circ}_k}(V_l)< \overline{T}(V_k)$. If $\overline{V}_0 = 0$ holds, by investigating \[eq:cstochvfredrec\] one can see that $\overline{V}_1> \overline{V}_0$. Also, if one assumes $\overline{V}_{i+1}>\overline{V}_i$, then $\overline{V}_{i+2} = \overline{T}(\overline{V}_{i+1})>\overline{V}_{i+1} = \overline{T}(\overline{V}_i)$. Thus, if the integral value iteration starts with $\overline{V}_0 = 0$, then by induction one has $\overline{V}_{i+1}>\overline{V}_i$ for all $E, X_d \in ROI \text{ and } E\neq 0$. 
Since a controllable plant is assumed, there exists a non-optimal (in the sense of the expectation of the modified total cost) stabilizing control policy $\Delta h$, whose expected modified total cost $\overline{Z}_0(E)$ is greater than $\overline{V}_0$ and $\overline{V}^*$. Also note that one can write $\overline{Z}_0 = \overline{T}^{\Delta h}(\overline{Z}_0)$. Therefore $\overline{Z}_1 = \overline{T}(\overline{Z}_0)< \overline{Z}_0 = \overline{T}^{\Delta h}(\overline{Z}_0)$. Consequently, one can conclude $\overline{Z}_{i+1}< \overline{Z}_i$, in the same manner as in the previous paragraph. Furthermore, since $\overline{V}^*<\overline{Z}_0$, then $\overline{V}^* = \overline{T}(\overline{V}^*)< \overline{Z}_1 = \overline{T}(\overline{Z}_0)$. By repeating this argument $i$ times, one can conclude that $\overline{V}^* < \overline{Z}_i$. Since $\overline{Z}_i$ is a decreasing positive sequence[^17] that is lower bounded by $\overline{V}^*$, it converges to this lower bound. Moreover, since $\overline{V}_0<\overline{Z}_0$, by the same reasoning $\overline{V}_i<\overline{Z}_i$. Finally, assuming that $\overline{V}_0 = 0$, one can combine the strictly monotonic convergence of $\overline{Z}_i$ to $\overline{V}^*$, the strictly monotonic increase of $\overline{V}_i$, the inequality $\overline{V}_i<\overline{Z}_i$, and the positivity of these functions to conclude that $\overline{V}_i$ also converges to $\overline{V}^*$ as $i \rightarrow \infty$ for all $E, X_d \in ROI \text{ and } E\neq 0$. Furthermore, one has $\overline{V}_i(0) = \overline{Z}_i(0) = \overline{V}^*(0) = 0$ by construction. As a result, convergence of the integral value iteration for the proposed method is guaranteed for all $E, X_d \in ROI$. Stability --------- The most important aspect of a controller is its stability. We show asymptotic stability of the proposed method through an appropriate Lyapunov function.
Let us define the *Hamiltonian*, for any differentiable function $V_l(E): \mathbb{R}^{n\times 1}\rightarrow \mathbb{R}$, as $$H_{\Delta u}(V_l) := V_{l_E}^\intercal(F-F_r + G\Delta u) + E^\intercal QE + {\Delta u}^\intercal R\Delta u. \label{eq:hamiltonian}$$ Furthermore, let $H(V_l)$ be the optimal value of $H_{\Delta u}(V_l)$ with respect to $\Delta u$. Moreover, let us define $Z(E)$ as the modified total cost of an admissible control[^18], calculated from \[eq:ctotcostred\] for a specific reference trajectory. Based on Lemma 1 of [@leake1967construction], if the following inequality holds for any $V_l$, $$H(Z) \leq H(V_l), \label{eq:hamil1}$$ then one has $$V_l(E) \leq Z(E). \label{eq:hamil2}$$ One can write the optimal value of the modified total cost of a specific reference trajectory, calculated from \[eq:ctotcostred\] (here called $S^*(E)$ for clarity of notation), as $$S^*(E) = \overline{V}^*(E) + D^*(E). \label{eq:totdec}$$ Furthermore, since $S^*$ is the solution of the HJB equation, one has $$H(S^*) = {S^*_E}^\intercal(F-F_r + G\Delta u^*_{s}) + E^\intercal QE + {\Delta u^*_{s}}^\intercal R\Delta u^*_{s} = 0, \label{eq:hamilopt}$$ where $\Delta u^*_{s} = \Delta u^* + \Delta u^*_{d} = -\frac{1}{2}R^{-1}G^\intercal (\bar{V}_E^* + D^*_E)$. This equation can be rewritten as $$\begin{gathered} {D^*_E}^\intercal(F-F_r + G(\Delta u^* + \Delta u^*_{d}))\\ + E^\intercal QE + (\Delta u^* + \Delta u^*_{d})^\intercal R(\Delta u^* + \Delta u^*_{d})\\ + {\overline{V}^*_E}^\intercal(F-F_r + G(\Delta u^* + \Delta u^*_{d}))= 0. \label{eq:rehamilopt}\end{gathered}$$ For any specific reference trajectory, there are three cases, based on the sign of $D^*(E)$. First, assume that $D^* = 0$. In this case $\overline{V}^*(E) = V^*(E)$, so, by substituting \[eq:totdec\] into \[eq:hamilopt\], one has $$\dot{\overline{V}}^* = {\overline{V}_E^*}^\intercal(F-F_r + G\Delta u^*) = - E^\intercal QE - {\Delta u^*}^\intercal R\Delta u^*. 
\label{eq:dl1}$$ The RHS of the above equation is negative definite (from the definition of the problem). Therefore, when $D^* = 0$ and the system is controlled by the proposed controller, that is $\Delta u^*$ from \[eq:coptcontred\], $\dot{\overline{V}}^*$ is negative definite. Second, assume that $D^*(E)<0$ for $E\neq0$. In this case, negative definiteness of $\dot{\overline{V}}^*$ under the proposed controller can be proved by contradiction. Assume that $\dot{\overline{V}}^*$ is negative except for some $E, X_d \in ROI \textit{ and } E\neq0$. Then one can write $\dot{\overline{V}}^* = {\overline{V}_E^*}^\intercal(F-F_r + G\Delta u^*)\geq0$ for some $E, X_d \in ROI \textit{ and } E\neq0$, which leads to $H(\overline{V}^*)\geq H(V^*) = 0$. In this way one can conclude that $\overline{V}^*\leq V^*$ (from Lemma 1 of [@leake1967construction]) for some $E, X_d \in ROI \textit{ and } E\neq0$. This contradicts the assumption $D^*(E)<0$, so one cannot have $\dot{\overline{V}}^*\geq 0$ for some $E, X_d \in ROI \textit{ and } E\neq0$. By the same reasoning, one also cannot have $\dot{\overline{V}}^*\geq 0$ for all $E, X_d \in ROI \textit{ and } E\neq0$. Therefore, $\dot{\overline{V}}^*$ is negative definite under the proposed controller for $D^*(E)<0$, taking into account that $\dot{\overline{V}}^* = 0$ for $E = 0$ by construction. Third, assume that $D^*(E)>0$ for $E\neq0$. In this case, one has $H(D^*)> H(S^*) = 0$ for all $E, X_d \in ROI , E\neq0$. The reason is that assuming $H(D^*)\leq H(S^*)$ for some $E, X_d \in ROI \textit{ and } E\neq0$ leads to $D^*\geq S^*$, which contradicts $S^* = \overline{V}^* + D^*$ in the current case. Also, one has $H(D^*) = 0$ for all $X_d \in ROI , E=0$, by its definition. Note that one can write $${D^*_E}^\intercal G\Delta u^* = {\overline{V}^*_E}^\intercal G\Delta u^*_d = -2{\Delta u^*}^\intercal R\Delta u^*_d. 
\label{eq:aux1}$$ By expanding \[eq:rehamilopt\], one can write $$\begin{gathered} {D^*_E}^\intercal(F-F_r + G\Delta u^*_{d})+ {D^*_E}^\intercal G\Delta u^* + E^\intercal QE \\+ {\Delta u^*}^\intercal R\Delta u^* + 2{\Delta u^*}^\intercal R\Delta u^*_d +{\Delta u^*_{d}}^\intercal R\Delta u^*_{d}\\ + {\overline{V}^*_E}^\intercal G\Delta u^*_d + {\overline{V}^*_E}^\intercal(F-F_r + G\Delta u^*)= 0. \label{eq:rehamiloptexpand}\end{gathered}$$ By substituting \[eq:aux1\] into \[eq:rehamiloptexpand\] and combining the terms, one has $$\begin{gathered} {D^*_E}^\intercal(F-F_r + G\Delta u^*_{d}) + E^\intercal QE \\+ (\Delta u^* - \Delta u^*_{d})^\intercal R(\Delta u^* - \Delta u^*_{d})\\+ {\overline{V}^*_E}^\intercal(F-F_r + G\Delta u^*)= 0. \label{eq:aux2}\end{gathered}$$ Furthermore, by minimizing the first three terms in \[eq:aux2\] over the control, it can be written as $$\begin{gathered} {D^*_E}^\intercal(F-F_r + G\Delta u^*_{d}) + E^\intercal QE \\+{\Delta u^*_{d}}^\intercal R\Delta u^*_{d}+ {\overline{V}^*_E}^\intercal(F-F_r + G\Delta u^*)\leq 0. \label{eq:aux3}\end{gathered}$$ The first three terms of \[eq:aux3\] are equal to $H(D^*)$, by definition. By using $H(D^*)>0$ for $E\neq0$ (as per the current case) in \[eq:aux3\], one concludes that, for all $X_d,E\in ROI, E\neq0$, $${\overline{V}^*_E}^\intercal(F-F_r + G\Delta u^*)< 0. \label{eq:aux4}$$ Also, one has ${\overline{V}^*_E}^\intercal(F-F_r + G\Delta u^*) = 0$ for $E=0$, by definition. As a result, $\dot{\overline{V}}^*$ is negative definite under the control $\Delta u^*$ for $D^*(E)>0$. In summary, $\overline{V}^*(E)$ is positive definite by construction and, as proved above, $\dot{\overline{V}}^* = {\overline{V}_E^*}^\intercal(F-F_r + G\Delta u^*)$ is negative definite in the ROI. Therefore $\overline{V}^*(E)$ is a Lyapunov function for the system under the control $\Delta u^*$ from \[eq:coptcontred\], and the closed-loop system resulting from the proposed method is locally asymptotically stable.
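As a toy numerical illustration of this Lyapunov argument (a scalar system of our own choosing, not the setting of the theorem), take $\dot e = ae + b\Delta u$ with $V(e) = pe^2$, $p$ the positive Riccati root, and $\Delta u = -\frac{1}{2}R^{-1}b\,V_e = -pbe/r$; the value $V$ then decreases strictly along the simulated trajectory:

```python
# Scalar check that V(e) = p e^2 decreases under du = -(1/2) r^{-1} b dV/de
a, b, q, r = 1.0, 1.0, 1.0, 1.0
p = r / b**2 * (a + (a**2 + q * b**2 / r) ** 0.5)  # positive Riccati root

dt, e = 1e-3, 1.0
V_prev = p * e**2
for _ in range(3000):
    du = -p * b * e / r       # -(1/2) R^{-1} G^T V_E specialized to this scalar case
    e += dt * (a * e + b * du)
    V = p * e**2
    assert V < V_prev         # the Lyapunov function strictly decreases
    V_prev = V
```

Here the closed-loop dynamics are $\dot e = (a - pb^2/r)e$ with $a - pb^2/r = -\sqrt{a^2 + qb^2/r} < 0$, so the decrease of $V$ is guaranteed analytically as well.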
An experimental case study ========================== To show and compare the performance of the presented approach, an experimental study is conducted on an in-house developed Delta parallel robot, as depicted in \[fig:delta\]. ![Delta robot used in the experiments[]{data-label="fig:delta"}](ADP.jpg){width=".48\textwidth"} The Delta robot is a parallel manipulator with three translational DOFs[^19], designed by Clavel [@clavelphdthesis]. The dynamic model of this robot can be presented in the following form [@tsai1999robot] $$M(w,q)\ddot{w} + C(w,q,\dot{w},\dot{q})\dot{w} + G(w,q) = J(w,q)^\intercal \tau, \label{eq:deldyn}$$ where $M,C,G,w,q$, and $\tau$ are the mass matrix, Coriolis matrix, gravitational vector, workspace coordinate vector, joint space coordinate vector, and motor torques, respectively. Different methods have been used to control the Delta robot [@castaneda2014robust; @paccot2009review; @taghirad2013parallel; @codourey1998dynamic]. In the present work, the computed torque method, which is usually used as a benchmark in the related literature, and sliding mode control, which is a suitable benchmark for robustness, are considered for comparison. The control law based on CT[^20] can be written as [@taghirad2013parallel] $$\tau = J^{-\intercal}\left(C\dot{w} + G + M(\ddot{w}_d-K_d\dot{\tilde{w}}-K_p\tilde{w} )\right), \label{eq:cct}$$ where $w_d,K_d$, and $K_p$ are the desired position, derivative gain, and proportional gain, respectively, and $\tilde{w} = w-w_d$ is the position error. The control law for SMC[^21] can also be written as [@slotine1991applied] $$\tau = J^{-\intercal}\left(C\dot{w} + G + M(\ddot{w}_d-\lambda \dot{\tilde{w}}-Ksat(\frac{S}{\varphi}) )\right), \label{eq:csmc}$$ where $S = \dot{\tilde{w}} + \lambda \tilde{w}$ is the sliding surface, and $\lambda$, $K$, and $\varphi$ are the sliding surface parameter, sliding mode controller gain, and boundary layer, respectively. Also, $sat$ denotes the saturation function.
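For concreteness, the two benchmark laws \[eq:cct\] and \[eq:csmc\] can be sketched in a few lines (a schematic transcription with placeholder arguments; in the actual experiments $M$, $C$, $G$, and $J$ are evaluated from the robot model at every sample):

```python
import numpy as np

def computed_torque(M, C, Gv, J, w_dot, e, e_dot, wdd_des, Kp, Kd):
    """CT law: tau = J^{-T} (C w_dot + G + M (wdd_d - Kd e_dot - Kp e))."""
    v = wdd_des - Kd @ e_dot - Kp @ e
    return np.linalg.solve(J.T, C @ w_dot + Gv + M @ v)

def sliding_mode(M, C, Gv, J, w_dot, e, e_dot, wdd_des, lam, K, phi):
    """SMC law with boundary layer: sat(S/phi) replaces sign(S) to limit chattering."""
    S = e_dot + lam * e                      # sliding surface
    sat = np.clip(S / phi, -1.0, 1.0)
    v = wdd_des - lam * e_dot - K * sat
    return np.linalg.solve(J.T, C @ w_dot + Gv + M @ v)
```

With zero tracking error, both laws reduce to the same inverse-dynamics feedforward $\tau = J^{-\intercal}(C\dot{w} + G + M\ddot{w}_d)$.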
To train the ADP-based controller, $500$ sets of randomly generated data are used, and the training is done $10$ times independently with the least squares method. The weights of these $10$ trainings are then averaged and used for the experiments. Based on our experience, the averaged weights show good repeatability, whereas each individual training may yield different weights (even if a larger dataset is used for an individual training). To make the three controllers comparable, they are tuned so that they have similar rise times. Experiments are done with a $500\,Hz$ sampling frequency, and no friction compensation is applied. For all experiments, actuators are saturated at $5 \, N.m$. Other parameters used in the tests can be found in \[appendixparams\]. Results ------- Two scenarios are considered to compare the performance of the methods. First, the robot is supposed to draw a circle in the $z$-plane, with $x = 250\, \cos(\pi t)\, (mm)$ and $y = 250\, \sin(\pi t)\, (mm)$. Second, the robot is supposed to go to two different locations sequentially, i.e., $\begin{bmatrix}100& 100& 450\end{bmatrix}^\intercal \,(mm)$ and $\begin{bmatrix}-100& -100& 600\end{bmatrix}^\intercal \,(mm)$. Moreover, to compare the robustness of the controllers, both scenarios are repeated with a $1\,kg$ mass added to the end effector as an uncertainty. All experiments start from the robot’s home position at $\begin{bmatrix}0& 0& 500\end{bmatrix}^\intercal \,(mm)$. Video of the tests can be found in [@testvideo]. Results related to the first scenario without uncertainty are summarized in \[fig:circtraj,fig:excirc,fig:eycirc,fig:ezcirc,table:ms\].
![First scenario desired trajectory[]{data-label="fig:circtraj"}](desiredcirc.pdf){width=".48\textwidth"}

![First scenario tracking error $e_x$: without uncertainty[]{data-label="fig:excirc"}](errorxcirc.pdf){width=".48\textwidth"}

![First scenario tracking error $e_y$: without uncertainty[]{data-label="fig:eycirc"}](errorycirc.pdf){width=".48\textwidth"}

![First scenario tracking error $e_z$: without uncertainty[]{data-label="fig:ezcirc"}](errorzcirc.pdf){width=".48\textwidth"}

                                            x          y          z
  ------------------------------------- ---------- ---------- ----------
  $\mu_{\left|e\right|}\,(mm)$ CT        $2.2119$   $2.1916$   $0.4873$
  ADP                                    $1.6669$   $1.7063$   $0.2387$
  SMC                                    $1.4699$   $1.4020$   $0.2214$
  $\sigma_{\left|e\right|}\,(mm)$ CT     $1.1095$   $1.1505$   $0.2358$
  ADP                                    $0.8907$   $0.8686$   $0.1751$
  SMC                                    $0.7393$   $0.7247$   $0.0861$

  : Mean and standard deviation of steady state $\left|e\right|$, first scenario without uncertainty[]{data-label="table:ms"}

Results related to the second scenario without uncertainty are presented in \[fig:steptraj,fig:exstep,fig:eystep,fig:ezstep\].

![Second scenario desired trajectory[]{data-label="fig:steptraj"}](desiredstep.pdf){width=".48\textwidth"}

![Second scenario tracking error $e_x$: without uncertainty[]{data-label="fig:exstep"}](errorxstep.pdf){width=".48\textwidth"}

![Second scenario tracking error $e_y$: without uncertainty[]{data-label="fig:eystep"}](errorystep.pdf){width=".48\textwidth"}

![Second scenario tracking error $e_z$: without uncertainty[]{data-label="fig:ezstep"}](errorzstep.pdf){width=".48\textwidth"}

The difference between the performance of the three methods (without the uncertain mass), as observed through \[fig:excirc,fig:eycirc,fig:ezcirc,fig:exstep,fig:eystep,fig:ezstep,table:ms\], is small. However, even these small differences are considerable in the context of robotic applications, given tight tolerances and high accuracy requirements. CT performs the worst, both in the step response and in following the circle.
The performance of the proposed method is very close to that of SMC. However, except for the $y$-coordinate error of the step test, the SMC controller does a slightly better job. Results of the experiments with a mass added as uncertainty are shown in \[fig:excircu,fig:eycircu,fig:ezcircu,table:msu,fig:exstepu,fig:eystepu,fig:ezstepu\].

![First scenario tracking error $e_x$: with uncertainty[]{data-label="fig:excircu"}](errorxcircu.pdf){width=".48\textwidth"}

![First scenario tracking error $e_y$: with uncertainty[]{data-label="fig:eycircu"}](errorycircu.pdf){width=".48\textwidth"}

![First scenario tracking error $e_z$: with uncertainty[]{data-label="fig:ezcircu"}](errorzcircu.pdf){width=".48\textwidth"}

                                            x          y          z
  ------------------------------------- ---------- ---------- ----------
  $\mu_{\left|e\right|}\,(mm)$ CT        $2.6833$   $2.8572$   $2.2906$
  ADP                                    $1.7528$   $1.8637$   $1.2390$
  SMC                                    $1.6172$   $1.7284$   $0.8945$
  $\sigma_{\left|e\right|}\,(mm)$ CT     $1.4124$   $1.5423$   $0.55518$
  ADP                                    $0.9451$   $0.9336$   $0.3820$
  SMC                                    $0.8496$   $0.8882$   $0.2250$

  : Mean and standard deviation of steady state $\left|e\right|$, first scenario with uncertainty[]{data-label="table:msu"}

![Second scenario tracking error $e_x$: with uncertainty[]{data-label="fig:exstepu"}](errorxstepu.pdf){width=".48\textwidth"}

![Second scenario tracking error $e_y$: with uncertainty[]{data-label="fig:eystepu"}](errorystepu.pdf){width=".48\textwidth"}

![Second scenario tracking error $e_z$: with uncertainty[]{data-label="fig:ezstepu"}](errorzstepu.pdf){width=".48\textwidth"}

As can be seen in \[fig:excircu,fig:eycircu,fig:ezcircu,fig:exstepu,fig:eystepu,fig:ezstepu,table:msu\], adding the uncertainty to the system increases the steady state errors for all three methods. While one can say that the robustness of SMC is higher than that of the other two methods, the performance of the proposed method still remains close to that of SMC. Moreover, the computed torque controller falls far behind the other two methods, as expected.
Despite the advantages of the proposed optimal controller observed in the experiments, like every other method it has its disadvantages too. ADP, which is used to solve the nonlinear optimal control problem in closed form, is a numerical procedure, and it can have convergence problems in practice. To be more precise, depending on the approximation error of the neural network used, the ADP-based algorithm may not converge for all values of $Q$, $R$, and sampling time. Therefore, the designer should take this into account at the controller design stage. Another disadvantage of ADP concerns the choice of basis functions. Even though some work has been done on this, there is no conclusive result yet. Generally, as the ROI grows in dimension, finding basis functions that accurately interpolate the value function becomes much harder, and the convergence of the algorithm is affected; normalizing the data might be helpful, however. A further issue is the so-called curse of dimensionality [@Bellman:1957] in dynamic programming. Despite the mitigation of this problem by implementing ADP, and also in the proposed method through the reduction of the value function parameters by introducing the expectation of the value function (which depends only on $E$ instead of $E$ and $X_d$), the problem still exists. The last point to be mentioned concerns chattering. This is usually considered a problem specific to SMC. However, this phenomenon can also happen for the other two methods, because of the control discontinuity resulting from the digital implementation. In tuning all of the controllers, it was observed that there is an upper bound on the aggressiveness of each controller that can be achieved without chattering. Conclusion ========== In this paper, a new framework is introduced for the optimal tracking problem of a class of nonlinear systems.
In contrast to previous works on optimal control, the presented approach can track any trajectory (within the ROI, of course) after a single training. Also, by using the expectation of the total cost, the number of parameters is decreased, which mitigates the curse of dimensionality. The presented method is then applied to a relatively complex nonlinear system and its performance is demonstrated experimentally. An illustrative example {#appendixexample} ======================= Consider the following optimal tracking problem, defined based on the exact total cost $$\dot{x} = x + u,$$ $$J(x_0) = \int_{0}^{\infty} \exp(-\rho t)(q(x-r)^2 + u^2) dt,$$ where $\rho>0$ and $q = 1$. Let us assume that the desired trajectory is $r(t) = 2$ and the initial condition is $x_0 = 1$. Then the optimal control intuitively becomes $u^*(t) = -1$, which can be easily verified from the standard LQT solution. Applying the optimal control, the state time history becomes $x(t) = 1$. Therefore, the optimal tracking control based on the exact total cost is not asymptotically stabilizing. This happens because the reference trajectory does not lie within the invariant sets of the system (which makes the error dynamics non-stationary at the origin), a condition needed for asymptotic stability of the closed-loop system under optimal control. To be more precise, the optimal tracking controller based on the total cost can only asymptotically track reference trajectories that are among the invariant sets of the system. Such a solution is acceptable in an economic optimization problem; in a control context, however, asymptotic stability is favored. To achieve asymptotic stability in general from the optimal control resulting from optimizing the exact total cost, one typically needs $q \rightarrow \infty$. Experimental parameters {#appendixparams} ======================= Here, the control parameters used in the experiments are given. For the geometrical and inertial parameters of the robot, please contact the corresponding author.
Computed torque parameters: $$\begin{array}{lr} K_p = 1600 & \text{proportional gain}\\ K_d = 100 & \text{derivative gain} \end{array}$$ Proposed controller parameters: $$\begin{array}{ll} Q = D^{\intercal} D & \text{state penalizing matrix} \end{array}$$ $$\begin{array}{ll} \text{where} & D = \begin{bmatrix} 20 & 0 & 0 & 1 & 0 & 0\\ 0 & 20 & 0 & 0 & 1 & 0\\ 0 & 0 & 20 & 0 & 0 & 1 \end{bmatrix} \end{array}$$ $$\begin{array}{ll} R = 0.001I_{3 \times 3} & \text{control penalizing matrix}\\ \end{array}$$ $$\begin{array}{ll} \begin{split} \varphi(X) = \left[x_1x_2 \quad x_1x_3 \quad x_1x_4 \quad \dotso \right.\\ \qquad x_2x_3 \quad x_1x_5 \quad x_2x_4 \\ \qquad x_1x_6 \quad x_2x_5 \quad x_3x_4 \\ \qquad x_2x_6 \quad x_3x_5 \quad x_3x_6 \\ \qquad x_4x_5 \quad x_4x_6 \quad x_5x_6 \\ \qquad {x_1}^2 \quad {x_2}^2 \quad {x_3}^2 \\ \left. \qquad {x_4}^2 \quad {x_5}^2 \quad {x_6}^2 \right]^{\intercal}\end{split} & \text{basis function} \end{array}$$ where $$\begin{gathered} \begin{bmatrix} x_1 \quad x_2 \quad x_3 \end{bmatrix} = \begin{bmatrix} e_x \quad e_y \quad e_z \end{bmatrix} \text{\qquad and}\\ \begin{bmatrix} x_4 \quad x_5 \quad x_6 \end{bmatrix} = \begin{bmatrix} \dot{e}_x \quad \dot{e}_y \quad \dot{e}_z \end{bmatrix}\end{gathered}$$ $$\begin{gathered} \begin{split} W = \left[0.0025 \quad -0.1939 \qquad 0.0330 \qquad \dotso \right.\\ \quad -0.2257 \qquad 0.0026 \quad -0.0009 \\ \qquad 0.0008 \qquad 0.0317 \quad -0.0026 \\ \qquad 0.0002 \quad -0.0055 \qquad 0.0507 \\ \quad -0.0001 \quad -0.0002 \quad -0.0002 \\ \qquad 1.8550 \qquad 1.8911 \qquad 1.9928 \\ \left. \qquad 0.0012 \qquad 0.0012 \qquad 0.0016 \right]^{\intercal}\end{split} \\ \text{optimal weights}\end{gathered}$$ Sliding mode parameters: $$\begin{array}{ll} K = 70 & \text{sliding mode gain}\\ \lambda = 20 & \text{sliding surface parameter}\\ \varphi = 0.35 & \text{boundary layer} \end{array}$$ A. E.
Bryson, *Applied optimal control: optimization, estimation and control*. Routledge, 1975. P. Werbos, “ADP: Goals, opportunities and principles,” *Handbook of learning and approximate dynamic programming*, vol. 2, p. 1, 2004. F. L. Lewis and D. Vrabie, “Reinforcement learning and adaptive dynamic programming for feedback control,” *IEEE Circuits and Systems Magazine*, vol. 9, no. 3, pp. 32–50. G. N. Saridis and C.-S. G. Lee, “An approximation theory of optimal control for trainable manipulators,” *IEEE Transactions on Systems, Man, and Cybernetics*, vol. 9, no. 3, pp. 152–159, 1979. T. Çimen, “State-dependent Riccati equation (SDRE) control: A survey,” *IFAC Proceedings Volumes*, vol. 41, no. 2, pp. 3761–3775, 2008. T. Lahdhiri and H. A. Elmaraghy, “Design of an optimal feedback linearizing-based controller for an experimental flexible-joint robot manipulator,” *Optimal Control Applications and Methods*, vol. 20, no. 4, pp. 165–182, 1999. T. Çimen and S. P. Banks, “Nonlinear optimal tracking control with application to super-tankers for autopilot design,” *Automatica*, vol. 40, no. 11, pp. 1845–1863, 2004. J.-H. Chou, C.-H. Hsieh, and J.-H. Sun, “On-line optimal tracking control of continuous-time systems,” *Mechatronics*, vol. 14, no. 5, pp. 587–597, 2004. G.-Y. Tang, Y.-D. Zhao, and B.-L. Zhang, “Optimal output tracking control for nonlinear systems via successive approximation approach,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 66, no. 6, pp. 1365–1377, 2007. T. W. McLain, C. A. Bailey, and R. W. Beard, “Synthesis and experimental testing of a nonlinear optimal tracking controller,” in *Proceedings of the 1999 American Control Conference (Cat. No. 99CH36251)*, vol. 4. IEEE, 1999, pp. 2847–2851. A. Heydari and S. N. Balakrishnan, “Fixed-final-time optimal tracking control of input-affine nonlinear systems,” *Neurocomputing*, vol. 129, pp. 528–539, 2014. H. Modares, B. Kiumarsi, K. G.
Vamvoudakis, and F. L. Lewis, “Adaptive $H_{\infty}$ tracking control of nonlinear systems using reinforcement learning,” in *Adaptive Learning Methods for Nonlinear System Modeling*. Elsevier, 2018, pp. 313–333. B. Kiumarsi and F. L. Lewis, “Actor–critic-based optimal tracking for partially unknown nonlinear discrete-time systems,” *IEEE Transactions on Neural Networks and Learning Systems*, vol. 26, no. 1, pp. 140–151, 2014. H. Modares and F. L. Lewis, “Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning,” *Automatica*, vol. 50, no. 7, pp. 1780–1792, 2014. A. G. Khiabani and A. Heydari, “Design and implementation of an optimal switching controller for uninterruptible power supply inverters using adaptive dynamic programming,” *IET Power Electronics*, 2019. Q. Yang and S. Jagannathan, “Reinforcement learning controller design for affine nonlinear discrete-time systems using online approximators,” *IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)*, vol. 42, no. 2, pp. 377–390, 2011. S. E. Lyshevski, “Optimal tracking control of nonlinear dynamic systems with control bounds,” in *Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No. 99CH36304)*, vol. 5. IEEE, 1999, pp. 4810–4815. M. Abu-Khalaf and F. L. Lewis, “Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach,” *Automatica*, vol. 41, no. 5, pp. 779–791, 2005. Y.-M. Park, M.-S. Choi, and K. Y. Lee, “An optimal tracking neuro-controller for nonlinear dynamic systems,” *IEEE Transactions on Neural Networks*, vol. 7, no. 5, pp. 1099–1110, 1996. H. Zhang, Q. Wei, and Y. Luo, “A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm,” *IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)*, vol.
38, no. 4, pp. 937–942, 2008. T. Dierks and S. Jagannathan, “Optimal tracking control of affine nonlinear discrete-time systems with unknown internal dynamics,” in *Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference*. IEEE, 2009, pp. 6750–6755. R. Kamalapurkar, H. Dinh, S. Bhasin, and W. E. Dixon, “Approximate optimal trajectory tracking for continuous-time nonlinear systems,” *Automatica*, vol. 51, pp. 40–48, 2015. S. Lyashevskiy and A. U. Meyer, “Control system analysis and design upon the Lyapunov method,” in *Proceedings of 1995 American Control Conference-ACC’95*, vol. 5. IEEE, 1995, pp. 3219–3223. M. Athans and P. L. Falb, *Optimal control: an introduction to the theory and its applications*. Courier Corporation, 2013. B. Strulovici and M. Szydlowski, “On the smoothness of value functions and the existence of optimal strategies in diffusion models,” *Journal of Economic Theory*, vol. 159, pp. 1016–1055, 2015. R. Leake and R.-W. Liu, “Construction of suboptimal control sequences,” *SIAM Journal on Control*, vol. 5, no. 1, pp. 54–63, 1967. R. W. Beard, G. N. Saridis, and J. T. Wen, “Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation,” *Automatica*, vol. 33, no. 12, pp. 2159–2177, 1997. T. Bian and Z.-P. Jiang, “Value iteration, adaptive dynamic programming, and optimal control of nonlinear systems,” in *2016 IEEE 55th Conference on Decision and Control (CDC)*. IEEE, 2016, pp. 3375–3380. T. Kompas and L. Chu, “A comparison of parametric approximation techniques to continuous-time stochastic dynamic programming problems,” Tech. Rep., 2010. R. Bellman, *Dynamic Programming*, 1st ed. Princeton, NJ, USA: Princeton University Press, 1957. S. Boyd and L. Vandenberghe, *Convex optimization*. Cambridge University Press, 2004.
R. Clavel, “Conception d’un robot parallèle rapide à 4 degrés de liberté,” Ph.D. dissertation, EPFL, Lausanne, Switzerland, 1991. L.-W. Tsai, *Robot analysis: the mechanics of serial and parallel manipulators*. John Wiley & Sons, 1999. L. A. Castañeda, A. Luviano-Juárez, and I. Chairez, “Robust trajectory tracking of a delta robot through adaptive active disturbance rejection control,” *IEEE Transactions on Control Systems Technology*, vol. 23, no. 4, pp. 1387–1398, 2014. F. Paccot, N. Andreff, and P. Martinet, “A review on the dynamic control of parallel kinematic machines: Theory and experiments,” *The International Journal of Robotics Research*, vol. 28, no. 3, pp. 395–416, 2009. H. D. Taghirad, *Parallel robots: mechanics and control*. CRC Press, 2013. A. Codourey, “Dynamic modeling of parallel robots for computed-torque control implementation,” *The International Journal of Robotics Research*, vol. 17, no. 12, pp. 1325–1336, 1998. J.-J. E. Slotine and W. Li, *Applied nonlinear control*. Prentice Hall, 1991. F. Asadi and A. Heydari. (2019) Video: experimental evaluation of ADP based sub-optimal tracking controller. <https://youtu.be/KqMM5zBDRDw>. [^1]: Adaptive (approximate) dynamic programming [^2]: The authors are with the Department of Mechanical Engineering, Southern Methodist University, Dallas, TX, 75205 USA.\ E-mail: farshidasadi47@yahoo.com, aheydari@smu.edu.\ Corresponding author: Farshid Asadi. [^3]: This research was partially supported by the United States National Science Foundation through Grant 1745212. [^4]: Linear quadratic regulator [^5]: State dependent Riccati equation [^6]: All vectors are column vectors. [^7]: The dynamic system has a direct dependence on time through $X_d$ and its time derivative. [^8]: The equilibrium point of the dynamic system does not lie at the origin.
[^9]: This steady state error is due to the analytical construction of the controller, not to disturbances and/or uncertainties. [^10]: Assuming that the application tolerates the first and the second mentioned problems. [^11]: Hamilton-Jacobi-Bellman [^12]: Region of interest [^13]: Left hand side [^14]: Policy iteration [^15]: Value iteration [^16]: Neural network [^17]: Elements of the sequence are positive. [^18]: In this work, admissible controls are limited to asymptotically stabilizing controllers. [^19]: Degrees of freedom [^20]: Computed torque [^21]: Sliding mode control
--- abstract: 'Understanding epidemic dynamics, and finding efficient techniques to control them, is a challenging issue. A lot of research has been done on targeted immunization strategies exploiting various global network topological properties. However, in practice, information about the global structure of the contact network may not be available. Therefore, immunization strategies that can deal with a limited knowledge of the network structure are required. In this paper, we propose targeted immunization strategies that require information only at the community level. Results of our investigations on the SIR epidemiological model, using a realistic synthetic benchmark with controlled community structure, show that the community structure plays an important role in the epidemic dynamics. An extensive comparative evaluation demonstrates that the proposed strategies are as efficient as the most influential global centrality-based immunization strategies, despite the fact that they use a limited amount of information. Furthermore, they outperform alternative local strategies, which are agnostic about the network structure and make decisions based on random walks.' author: - bibliography: - 'Ref1.bib' title: ' Community-based Immunization Strategies for Epidemic Control' --- Introduction ============ Outbreaks of infectious diseases are a serious threat to people’s lives, and they also bring serious economic losses to the affected countries. It is therefore very important to understand how diseases propagate in social groups, in order to prevent epidemics or at least to control their spreading. Vaccination protects people and prevents them from transmitting the disease to their contacts. As mass vaccination is not always feasible, due to limited vaccination resources, targeted immunization strategies are of prime interest for public health. The impact of the contact network topology on disease transmission is a hot topic in the complex network literature.
A lot of work has been done in this direction [@EpdInt; @pastorepidemic; @gong2013efficient; @halloran2008modeling; @barthelemyvelocity; @singh2012rumour; @Anuappb; @anuncc]. Top-ranked influential spreaders need to be identified for targeted immunization. In social networks, nodes with high centrality are considered to be the influential nodes. Since the most central nodes can diffuse their influence to the whole network faster than the rest of the nodes, they are the ones to be targeted. Unfortunately, there is no general consensus on the definition of centrality, and many measures have been proposed. Degree centrality and betweenness centrality are the most popular ones [@degcent1; @degcent2; @Betcent1; @Betcent2]. Targeting high-centrality individuals in a network is a global strategy, because it requires knowledge of the whole network. The main drawback of these strategies is that very often no information is available about the global structure of real-world networks. Hence, efficient immunization strategies based on locally available network information are of prime interest. Such strategies rely only on local information around selected nodes. The most basic strategy is random immunization, where target nodes are picked at random regardless of the network topology. In acquaintance immunization, the selected node is chosen randomly among the neighbors of a randomly picked node. As randomly selected acquaintances of nodes are likely to have more connections than randomly selected nodes, this method targets highly connected nodes. It can be viewed as a local approximation of a global degree-based strategy. It is well recognized that numerous real-world networks, ranging from biological to social systems, share some topological properties. They are scale-free and exhibit the small-world property. Furthermore, they are characterized by a high clustering coefficient and a well-defined community structure.
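The random and acquaintance strategies described above admit a compact sketch (illustrative only; `adj` is a hypothetical adjacency-list dictionary mapping each node to the list of its neighbors):

```python
import random

def random_immunization(adj, k, rng=None):
    """Immunize k nodes chosen uniformly at random."""
    rng = rng or random.Random(0)
    return rng.sample(sorted(adj), k)

def acquaintance_immunization(adj, k, rng=None):
    """Immunize random neighbors of randomly picked nodes.
    Neighbors of random nodes are biased toward high degree,
    so hubs tend to be reached without any global knowledge."""
    rng = rng or random.Random(0)
    nodes = sorted(adj)
    chosen = set()
    while len(chosen) < k:
        node = rng.choice(nodes)
        if adj[node]:                 # skip isolated nodes
            chosen.add(rng.choice(adj[node]))
    return sorted(chosen)
```

On a star graph with ten leaves, for instance, the acquaintance strategy selects the hub with probability $10/11$ per draw, while random immunization selects it with probability $1/11$.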
While a lot of research has been done to study and understand the effect of the degree distribution on the dynamics of epidemic spreading, the impact of the community structure has received far less attention. More recently, this property of networks has been exploited to control the spreading of disease [@salathe; @hebert]. Unfortunately, these studies feature two significant shortcomings. Firstly, the community structure of the empirical data is questionable. Indeed, in [@salathe] the ground-truth community structure is based on a functional definition that is not necessarily encoded in the network topology, while in [@hebert] the authors rely on the results of a community detection method to uncover the community structure. As there is no consensual community detection method, the results are very sensitive to this choice. Secondly, simulated data are based on simple models that do not properly reproduce the community structure observed in real-world networks. The main contributions of this paper are twofold. To overcome these drawbacks, we analyze the impact of the community structure using a more realistic community-structured benchmark with controlled topological properties. Furthermore, we introduce new local influence measures based on the community structure. An extensive experimental evaluation is conducted in order to investigate their efficiency as compared to global centrality measures and the local Community Bridge Finder algorithm (CBF) introduced in [@salathe], which has been designed for community-structured networks. The remainder of this paper is organized as follows. In Section 2, the benchmark model is introduced and its main properties are recalled. In Section 3, the immunization techniques are presented. Section 4 is devoted to the experimental results. We conclude with a discussion of our observations and findings in Section 5.
Synthetic Benchmark Data ======================== Generative models allow producing large collections of synthetic networks easily and quickly. They also allow control over some topological properties of the generated networks, so as to get them close to the targeted system features. The only point of concern is how closely the generated networks represent real-world networks, which is necessary for obtaining relevant test results. To date, the LFR (Lancichinetti, Fortunato and Radicchi) model [@LFR] is the most efficient solution for generating synthetic networks with community structure. Consequently, it is used here to generate networks with a non-overlapping community structure. It is based on the configuration model (CM) [@CM], which generates networks with power-law degree distributions. The generative process of the LFR algorithm proceeds in three steps. First, a network with a power-law degree distribution with exponent $\gamma$ is generated using the configuration model. Second, virtual communities are defined so that their sizes follow a power-law distribution with exponent $\beta$. Each node is randomly assigned to a community, with the constraint that the community size must be greater than or equal to the node’s internal degree. Third, an iterative process rewires certain links, so as to bring the proportion of intra-community and inter-community links close to the mixing coefficient value $\mu$, while preserving the degree distribution. This model guarantees realistic features (power-law distributed degrees and community sizes) for the generated networks. It also includes a rich set of parameters that can be tuned to obtain the desired network topology. These parameters are the mixing parameter $\mu$, the average degree $k$, the maximum degree $k_{max}$, the maximum community size $c_{max}$, and the minimum community size $c_{min}$.
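The first LFR step, drawing a power-law degree sequence and wiring it with the configuration model, can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions: the function names, the continuous inverse-transform sampling, and the handling of self-loops are our own simplifications, not the reference LFR implementation.

```python
import random

def powerlaw_degree_sequence(n, gamma, k_min, k_max, seed=0):
    """Draw n degrees from a truncated power law P(k) ~ k^-gamma,
    using a continuous inverse-transform approximation, rounded down."""
    rng = random.Random(seed)
    a, b = k_min ** (1 - gamma), k_max ** (1 - gamma)
    degrees = [int((a + rng.random() * (b - a)) ** (1 / (1 - gamma)))
               for _ in range(n)]
    if sum(degrees) % 2:          # stub matching needs an even degree total
        degrees[0] += 1
    return degrees

def configuration_model(degrees, seed=0):
    """Configuration model via random stub matching; self-loops and
    duplicate edges are simply discarded in this sketch."""
    rng = random.Random(seed)
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    return {(min(u, v), max(u, v))
            for u, v in zip(stubs[::2], stubs[1::2]) if u != v}
```

The LFR algorithm then adds the community assignment and rewiring steps on top of such a network; ready-made implementations exist and were used for the experiments reported here.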
For small $\mu$ values, the communities are distinctly separated, and thus easily identifiable, because of the small number of inter-community links. When $\mu$ increases, the proportion of inter-community links becomes higher, making community distinction and identification a difficult task. The network has no community structure when the mixing coefficient exceeds a limit value given by: $$\mu_{lim} = (n - n_c^{max})/n,$$ where $n$ and $n_c^{max}$ are the number of nodes in the network and in the biggest community, respectively [@LF2009b]. Immunization Strategies ======================= Targeted immunization strategies can be divided into two categories based on their requirements about the knowledge of the network topology. Global strategies exploit the knowledge of the full network structure in order to find the influential nodes, while local strategies are able to work with a limited amount of information. **Global strategies:** ---------------------- These immunization strategies are based on an ordering of the nodes of the whole network according to an influence measure. Nodes are then targeted (removed) in decreasing order of their rank. The influence of a node is computed according to some centrality measure. In this study, the most influential centrality measures (degree and betweenness) are considered for comparison with the proposed strategies. We have not considered k-core centrality, as it has been shown to be not very effective at finding the influential nodes for targeted immunization [@hebert]. **1. *global\_deg*:** Degree centrality denotes the number of immediate neighbors of a node, i.e., those that are only one edge away from the node. It is simple but very coarse. It can be interpreted as the number of walks of length 1 starting at the considered node. It measures the local influence of a node. It produces many ties and fails to take into account the influence weight of even the immediate neighbors.
Even though it is a local measure, the immunization strategy is global, because it needs to rank all the nodes of the network according to their degree. **2. *global\_bet\_cent*:** Betweenness centrality defines the influence of a node as the number of shortest paths, between every pair of nodes, that it is a part of. It basically tries to identify the influence of a node in terms of information flow through the network. In this strategy, the nodes are targeted based on their overall betweenness centrality. The computation of betweenness has a high time complexity. Note that removing a node modifies the network topology, and hence the centralities of the remaining nodes may change. So, the centralities of the remaining nodes need to be recalculated after removing the node with the highest centrality. In the global strategies, we have recalculated the centralities of the remaining nodes after each removal of the node with the highest respective centrality measure. **Local strategies:** ---------------------- Various methods based on community characteristics are proposed to immunize or remove nodes from the communities in order to control the epidemic spreading. These strategies are local because they only require information at the community level. In the case where the community structure is unknown, it can be uncovered with local community detection algorithms. Therefore, these strategies do not require any information about the global structure of the network. In a network with community structure, the degree of a node can be split into two contributions: the intra-community links connecting it to nodes in its own community and the inter-community links connecting it to nodes outside its community. The strength of the community structure of a network depends upon the inter- and intra-community links. A network is said to have a strong and well-defined community structure if a small fraction of the total links in the network lies between the communities.
On the contrary, if a large fraction of the total links lies between the communities, then the network does not contain well-defined communities and is said to have a weak community structure. The topology of a network can be fully specified by its adjacency matrix $A$. In the case of an undirected, unweighted network, $A(i, j)$ is equal to 1 if $i$ and $j$ are directly connected to each other, and zero otherwise. Considering a community $C$ of a network, the total degree of a node $i$ can be split into two parts: $$k_i(C) = k_i^{in}(C) + k_i^{out}(C).$$ The degree $k_i$ of a node $i$ is equal to the total number of its connections, $k_i = \sum_j A(i ,j)$. The indegree of a node is equal to the number of edges connecting it to other nodes of the same community, $k_i^{in}(C) = \sum_{j\in C } A(i, j)$. The outdegree of a node $i$ is equal to the number of connections to nodes lying outside the community, $k_i^{out}(C) = \sum_{j \notin C} A(i, j)$. **1. *inout\_diff\_nodes*:** In this strategy, the nodes targeted for immunization are ranked according to the difference between their indegree and outdegree. For a node $i$, this difference is given by: $$k_i^{iod} = k_i^{in} - k_i^{out}.$$ A fraction of the nodes with the highest $k_i^{iod}$ in each community is removed from the network. The reason for selecting these nodes is that they are more committed to their community and have very few connections outside it. It is speculated that removing these nodes will weaken the community structure of the network. **2. *outin\_diff\_nodes*:** In this case, nodes are targeted for immunization based on the difference between their outdegree and indegree. For a node $i$, it is given by: $$k_i^{oid} = k_i^{out} - k_i^{in}.$$ These nodes share more connections with nodes outside their community than inside.
So, they act as bridges between the communities and are responsible for spreading the information outside their community. It is speculated that removing such nodes will stop the information flow between different communities. **3. *indeg\_nodes*:** In this strategy, a given fraction of the nodes in each community with the highest indegree centrality is selected for immunization. These nodes can be considered the core of their community, responsible for the maximum information flow inside the community. If these nodes are removed, the community structure of the network may weaken, preventing the epidemic from spreading inside the communities. **4. *outdeg\_nodes*:** In this strategy, nodes are targeted based on their outdegree centrality. A fraction of the nodes with the highest outdegree centrality in each community is removed from the network, with the intention of isolating the different communities. The idea is to break the bridges between the communities and thus to prevent the epidemic from spreading across the communities. **5. *community bridge finder (CBF)*:** Along with the degree and betweenness centrality based strategies, the proposed strategies are also compared with a stochastic strategy, *CBF*, proposed by Salathe *et al.* [@salathe]. The CBF algorithm is a random walk based algorithm aimed at identifying nodes connected to multiple communities. The algorithm begins by selecting a random node as the starting node. Then a random path is followed until a node is found that is not connected to more than one of the previously visited nodes on the random walk. The assumption is that such a node is more likely to belong to a different community. This strategy is completely agnostic about the network structure. In the local strategies, the number of nodes to be removed from a community is proportional to the community size. Hence, more nodes are removed from larger communities than from smaller ones.
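The four community-level strategies above reduce to computing the indegree/outdegree split and ranking nodes inside each community, with the removal quota proportional to community size. A minimal sketch follows; the function names, the adjacency-set representation, and the `fraction` parameter are our own illustrative choices, not the authors' implementation.

```python
def split_degree(adj, community):
    """Split each node's degree into intra-community (k_in) and
    inter-community (k_out) contributions."""
    k_in, k_out = {}, {}
    for v, nbrs in adj.items():
        k_in[v] = sum(1 for u in nbrs if community[u] == community[v])
        k_out[v] = len(nbrs) - k_in[v]
    return k_in, k_out

def local_targets(adj, community, criterion, fraction):
    """Pick, inside every community, the top-ranked nodes according to one
    of the four local criteria; the quota is proportional to community
    size (at least one node per community)."""
    k_in, k_out = split_degree(adj, community)
    score = {
        "inout_diff_nodes": lambda v: k_in[v] - k_out[v],
        "outin_diff_nodes": lambda v: k_out[v] - k_in[v],
        "indeg_nodes":      lambda v: k_in[v],
        "outdeg_nodes":     lambda v: k_out[v],
    }[criterion]
    targets = []
    for c in sorted(set(community.values())):
        members = sorted((v for v in adj if community[v] == c),
                         key=score, reverse=True)
        n_remove = max(1, int(fraction * len(members)))
        targets.extend(members[:n_remove])
    return targets
```

For instance, on a network of two triangles joined by a single edge, `outdeg_nodes` selects the two endpoints of the joining edge, since they are the only nodes with a nonzero outdegree.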
Experimental Results ==================== To investigate the efficiency of the proposed strategies, synthetic networks are generated using the LFR algorithm. Several studies of real-world networks were considered in order to select appropriate values for the network parameters. For the power-law exponents, we have used $\gamma = 3$ and $\beta = 2$, which seem to be close to most real-world networks. Concerning the number of nodes and links, no typical values emerge. Studies show that the size of real-world networks varies greatly, ranging from tens to millions of nodes. It is also difficult to characterize the average and maximal degrees of the network because of their variable nature. As a result, we selected some consensual values for these parameters, while also keeping in mind the computational cost of the simulations [@Hocine]. The parameter values used for the LFR network generation in this study are given in Table \[t1\].

  Number of nodes, $n$                  7500
  ------------------------------------- ---------------
  Average degree, $\langle k \rangle$   10
  Maximum degree, $k_{max}$             180
  Mixing parameter, $\mu$               0.3, 0.5, 0.7
  Degree exponent, $\gamma$             3
  Community size exponent, $\beta$      2
  Minimum community size, $C_{min}$     5
  Maximum community size, $C_{max}$     180

  : Parameters for the LFR network generation[]{data-label="t1"}

In order to better understand the influence of the community structure, networks with various $\mu$ values (ranging from 0.3 to 0.7) are generated. For each $\mu$ value, 10 sample networks are generated. To study the spread of an infectious disease in a contact network, the popular SIR model of epidemic spreading is used. Each node in the network is in one of three possible states: (S)usceptible, (I)nfected, or (R)esistant/immune. Susceptible nodes represent the individuals which are not yet infected with the disease. Infected nodes are the ones which have been infected with the disease and can spread it to susceptible nodes.
Resistant nodes represent the individuals which have been infected and have since been immunized or have died. These nodes cannot be infected again, nor can they transmit the infection to others. Initially, all the nodes are treated as susceptible. After this initial set-up, a fraction of the nodes is chosen at random to be infected, and the remaining nodes are considered susceptible. The infection then spreads through the contact network over a number of time steps: each infected neighbor independently transmits the infection with probability $\lambda$ per time step, the transmission rate, so a susceptible node with $I$ infected neighbors becomes infected with probability $1-(1-\lambda)^{I}$. Infected nodes recover with rate $\sigma$ at each time step. When a node recovers, its state changes from infected to resistant. Simulations are halted when there are no infected nodes left in the network, and the total number of infected nodes is analyzed. Each simulation is run on the 10 networks, and the mean values of the results, together with their standard deviations, are reported in the figures. ![Effect of the mixing parameter $\mu$ of the LFR network on the total number of infected nodes. Here, $\lambda$ = 1[]{data-label="f1"}](muR.eps){width="0.9\linewidth" height="2.3in"} Fig. \[f1\] illustrates the impact of the mixing parameter $\mu$ on the total number of infected nodes during the epidemic spreading. It can be observed that as $\mu$ increases, the total number of infected nodes also increases. Indeed, the inter-community links, i.e. the links between communities, act as carriers of the infection across different communities, and consequently they play a major role in the epidemic spreading. An infection starting inside a community spreads easily to other communities through these inter-community links. The total number of infected nodes increases as the community structure gets weaker.
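The discrete-time SIR dynamics described above can be sketched as follows. This is a minimal illustration, not the simulator used for the experiments; the function name and the single-seed default are our own choices, and $\sigma > 0$ is assumed so the loop terminates.

```python
import random

def sir_outbreak_size(adj, lam, sigma, n_seeds=1, seed=0):
    """Discrete-time SIR: each infected node independently infects each
    susceptible neighbour with probability lam per step, then recovers
    with probability sigma; returns the final number of resistant nodes,
    i.e. the total number of nodes ever infected."""
    rng = random.Random(seed)
    nodes = sorted(adj)
    state = {v: "S" for v in nodes}
    for v in rng.sample(nodes, n_seeds):
        state[v] = "I"
    while any(s == "I" for s in state.values()):
        active = [v for v in nodes if state[v] == "I"]
        for v in active:
            for u in adj[v]:
                if state[u] == "S" and rng.random() < lam:
                    state[u] = "I"      # becomes infectious next step
            if rng.random() < sigma:
                state[v] = "R"
    return sum(1 for s in state.values() if s == "R")
```

Immunizing a node under a given strategy then simply amounts to deleting it (and its links) from `adj` before running the simulation.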
On the contrary, when the communities are very cohesive, there are few inter-community links and the epidemic remains trapped in the community where it started. The simulations are performed with different values of the recovery rate $\sigma$, as there may be different rates of recovery in real-world scenarios. It can be observed that for small values of $\sigma$, the epidemic spreads over a larger population than for large values of $\sigma$. Indeed, infected nodes recover at a much lower rate when $\sigma$ is low, and hence keep on spreading the infection. It takes more time for all the infected nodes in the network to reach the recovered state. When the recovery rate is high, on the other hand, the infected nodes recover before the epidemic spreads to a larger population. $\begin{array}{c} \includegraphics[width=0.9\linewidth, height=2.4 in]{0.3-3-0.1.eps}\\ \mbox{(a)}\\ \includegraphics[width=0.9\linewidth, height=2.4 in]{0.5-3-0.1.eps}\\ \mbox{(b)}\\ \includegraphics[width=0.9\linewidth, height=2.4 in]{0.7-3-0.1.eps} \\ \mbox{(c)} \\ \end{array}$ The results of the simulations using the various existing and proposed immunization strategies are shown in Figs. \[f2\] and \[f3\]. Fig. \[f2\] illustrates the results for low values of the transmission and recovery rates ($\sigma$ and $\lambda$ both equal to 0.1). Note that the global centrality based methods (degree, betweenness) are very efficient. After removing or immunizing only 30% of the nodes, the epidemic spreading has died out. This is expected, as both methods exploit information about the overall network topology. When the *CBF* strategy is used, 50% of the nodes need to be removed in order to stop the epidemic spreading. This is the price to pay for not knowing the global structure of the network.
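The CBF random walk against which the proposed strategies are compared can be sketched as follows. This is our own simplified reading of the idea in [@salathe], not the reference implementation: the stopping rule keeps only never-visited candidates, and the verification step of the original algorithm (probing the candidate's neighbors) is omitted.

```python
import random

def community_bridge_finder(adj, seed=0, max_steps=10_000):
    """Random-walk from a random start node and return the first unvisited
    node that is connected to at most one of the previously visited nodes;
    such a node is assumed to lie outside the walk's starting community."""
    rng = random.Random(seed)
    walk = [rng.choice(sorted(adj))]
    for _ in range(max_steps):
        nxt = rng.choice(sorted(adj[walk[-1]]))
        back_links = sum(1 for u in walk if u in adj[nxt])
        if nxt not in walk and len(walk) >= 2 and back_links <= 1:
            return nxt            # candidate bridge node
        walk.append(nxt)
    return walk[-1]               # fallback: no candidate found
```

Note that the procedure uses no information beyond the neighborhoods visited by the walk, which is why it is completely agnostic about the network structure.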
More interestingly, the proposed indegree and outdegree centrality based methods, *indeg\_nodes* and *outdeg\_nodes* respectively, are almost as effective as the global centrality methods, despite the fact that they are agnostic about the full network structure. Furthermore, they perform a lot better than the *CBF* strategy, with just the information about the community structure of the network. The reason why the proposed methods are so effective can be explained by the specific position of the targeted nodes within the network. The nodes with the highest indegree can be considered the core of their community. In a network of people, these individuals are the leaders, representatives or agents of information flow in their communities. These high-indegree nodes are connected to many other nodes in their community. Most of the regular nodes in a community are not directly connected to each other; they are connected through paths which most likely contain these high-indegree nodes. When these high-indegree nodes are removed from the communities, the communities break from inside. In other words, the paths connecting the regular nodes to each other are broken. The remaining nodes are not able to contact each other, and thus the epidemic is not able to affect a significant part of the community, so it dies out soon. The nodes with the highest outdegree, on the other hand, are the ones which have a lot of connections to other communities. In a group of people, these are the individuals which pass the information contained in their group to other groups. These nodes can be considered to be the bridges between the communities. When these nodes are removed, most of the paths or bridges between different communities are lost. Communities are isolated, and thus the epidemic is not able to spread across the communities.
The underlying idea in selecting these centrality measures is that they intuitively represent the global degree and betweenness centralities at the community level. A node with a high indegree or outdegree generally has a high overall degree. Indeed, the total degree of a node is the sum of its indegree and outdegree. A node with a high indegree is probably a node with a high betweenness measure in its community, as it will be contained in most of the paths connecting the regular nodes of the community to each other. Similarly, a node with a high outdegree is probably a node with a high betweenness measure in the overall network. Indeed, these high-outdegree nodes are part of most of the paths connecting nodes lying in different communities. $\begin{array}{c} \includegraphics[width=0.9\linewidth, height=2.4 in]{0.3-3-0.9.eps}\\ \mbox{(a)}\\ \includegraphics[width=0.9\linewidth, height=2.4 in]{0.5-3-0.9.eps}\\ \mbox{(b)}\\ \includegraphics[width=0.9\linewidth, height=2.4 in]{0.7-3-0.9.eps}\\ \mbox{(c)} \end{array}$ Fig. \[f2\](a) shows the results of the immunization strategies for a strong community structure with $\mu$ = 0.3. In other words, the communities are densely connected and there is a low proportion of inter-community links. Here, we observe that even the proposed *inout\_diff\_nodes* method gives better results than the *CBF* strategy. In the case of a strong community structure, most of the nodes have all their connections inside their community. The *inout\_diff\_nodes* method selects the nodes to be immunized on the basis of the highest difference between indegree and outdegree. It targets the nodes which have more connections inside their community than outside. These nodes are more committed to their community and have very few connections outside of it. They are the ones which, once infected from outside, have a greater chance of infecting the whole community.
So, targeting these nodes helps to prevent the epidemic spreading in a network with a strong community structure. In Fig. \[f2\](c), where $\mu$ = 0.7, the network has a weak community structure, i.e. around 70% of the links lie between the communities. In this case, the *inout\_diff\_nodes* method is not that effective, but the *outin\_diff\_nodes* method works better than both *CBF* and *inout\_diff\_nodes*. The *outin\_diff\_nodes* method selects the nodes on the basis of the highest difference between their outdegree and indegree. These are the nodes which have more connections to other communities than to their own. After catching the infection from inside their community, these nodes quickly spread it to other communities because of their numerous outside connections. In the case of a weak community structure, the *outin\_diff\_nodes* method works better, as it targets the nodes which act as bridges between the communities and are responsible for spreading the epidemic across different communities. In the case of $\mu$ = 0.5 (see Fig. \[f2\](b)), when inter- and intra-community links are almost equally numerous, *inout\_diff\_nodes* and *outin\_diff\_nodes* are not very effective. Indeed, the difference between the indegrees and outdegrees of the nodes is around zero in this case, and these methods are not able to identify the influential nodes. Fig. \[f3\] reports the SIR simulation results for $\lambda$ = 0.9 and $\sigma$ = 0.1. The results are very similar, except that in this case a greater number of nodes needs to be immunized in order to stop the epidemic spreading. This is true for all the strategies. For example, when $\lambda$ is equal to 0.1, the degree and betweenness centrality based methods required only 30% of the nodes to be removed to mitigate the epidemic spreading, whereas now they require 50% of the nodes to be immunized or removed. Indeed, when $\lambda$ increases, the probability of an infected node infecting its neighbors gets higher.
So, the epidemic spreads at a higher rate, and thus more nodes need to be immunized to prevent the epidemic spreading. To summarize, according to the experimental results, the proposed indegree and outdegree centrality based strategies, *indeg\_nodes* and *outdeg\_nodes* respectively, are effective in identifying the influential nodes to be selected for immunization to prevent or mitigate the epidemic spreading. These methods are as effective as the global degree and betweenness centrality based methods, but they do not require any information about the global structure of the network. The proposed strategies perform better than the *CBF* algorithm, with only the information about the local (community-wise) structure of the network. This suggests that local information is sufficient in order to design an efficient immunization strategy. Conclusions =========== There may not be enough information available about the global structure of the underlying contact network in order to control the epidemic spreading. Therefore, efficient immunization strategies are required that can work with the information available at the community level. Results of our investigation, on a realistic synthetic benchmark, show that the community structure plays a major role in the epidemic dynamics. It is observed that the proposed indegree and outdegree centrality based immunization strategies are efficient methods to control the epidemic spreading. These strategies work as well as the global centrality measures (degree and betweenness), without any knowledge of the global network structure. Therefore, these centrality measures defined at the community level are good approximations of the global centrality measures. The indegree of a node represents its degree and betweenness centralities relative to its own community, whereas the outdegree centrality of a node represents these global centralities relative to the other communities of the network.
However, unlike the global centrality measures, the proposed measures do not require any knowledge of the global network topology and thus can be easily and quickly computed. Furthermore, they perform better than *CBF*. The performance of the two other proposed local strategies (*inout\_diff\_nodes* and *outin\_diff\_nodes*) depends on the strength of the community structure. In a network with a strong community structure, the *inout\_diff\_nodes* method performs better, whereas the *outin\_diff\_nodes* method is more efficient in the case of a weak community structure. Finally, the main lesson of this work is that exploiting local information on the network topology can be very effective in designing efficient immunization strategies that can be used in large-scale networks. These preliminary results pave the way for further investigations of alternative community topological measures. Acknowledgement {#acknowledgement .unnumbered} =============== The authors thank Mr. Upendra Singh (B.Tech., MANIT Bhopal, India) for implementing the SIR model of epidemics used in this paper.